
Machine translation (Qwen-MT)


Qwen-MT is a machine translation model fine-tuned from Qwen3, supporting 92 languages. It provides term intervention, domain prompting, and translation memory to enhance translation quality.

How it works

  1. Provide the text to translate: The messages array must contain a single message with its role set to user. The content of this message is the text to be translated.
  2. Set languages: Set the source language (source_lang) and target language (target_lang) in the translation_options parameter. For a list of supported languages, see Supported languages. To automatically detect the source language, set source_lang to auto.
Specifying the source language improves translation accuracy. You can also set languages using custom prompts.
  • OpenAI compatible
  • DashScope
import os
from openai import OpenAI

client = OpenAI(
  # If the environment variable is not configured, replace the following line with your API key: api_key="sk-xxx",
  api_key=os.getenv("DASHSCOPE_API_KEY"),
  base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
completion = client.chat.completions.create(
  model="qwen-mt-flash",  # Select the model
  # The messages array must contain exactly one message with role "user";
  # its content is the text to be translated.
  messages=[{"role": "user", "content": "No me reí después de ver este video"}],
  # translation_options is not a standard OpenAI parameter, so pass it in extra_body.
  extra_body={"translation_options": {"source_lang": "auto", "target_lang": "English"}},
)
Limitations
  • Single-turn translation only: The model is designed for translation tasks and does not support multi-turn conversations.
  • System messages not supported: You cannot set global behavior using a message with the system role. Instead, define translation configurations in the translation_options parameter.
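These constraints are easy to violate when reusing general-purpose chat code, so a request-side guard can catch them before the API call. A minimal sketch; the helper name is illustrative, not part of any SDK:

```python
def check_translation_messages(messages):
    """Enforce Qwen-MT's request shape: exactly one message with role 'user'."""
    if len(messages) != 1:
        raise ValueError("Qwen-MT is single-turn only: pass exactly one message")
    if messages[0].get("role") != "user":
        raise ValueError("System/assistant roles are not supported: use role 'user'")
    return messages

# Valid single-turn request body passes through unchanged.
check_translation_messages(
    [{"role": "user", "content": "No me reí después de ver este video"}]
)
```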

Model selection

  • For general use, choose qwen-mt-flash: it balances quality, speed, and cost, and supports incremental streaming output.
  • For the highest translation quality in professional fields, choose qwen-mt-plus.
  • For the lowest latency in simple, real-time scenarios, choose qwen-mt-lite.
| Model | Scenario | Quality | Speed | Cost | Supported languages | Incremental streaming |
| --- | --- | --- | --- | --- | --- | --- |
| qwen-mt-plus | Scenarios that require high translation quality, such as professional fields, formal documents, academic papers, and technical reports | Best | Standard | High | 92 | Unsupported |
| qwen-mt-flash | Top choice for general use. Suitable for website/app content, product descriptions, daily communication, and blog posts | Good | Fast | Low | 92 | Supported |
| qwen-mt-turbo | This model will not be updated in the future. Use qwen-mt-flash instead. | Fair | Fast | Low | 92 | Unsupported |
| qwen-mt-lite | Simple, latency-sensitive scenarios such as real-time chat and live comment translation | Basic | Fastest | Lowest | 31 | Supported |
For model details, pricing, and rate limits, see Model Gallery.

Getting started

Get an API key and set it as an environment variable. If you use the SDK, install it first.
  • OpenAI compatible
  • DashScope
Sample request
import os
from openai import OpenAI

client = OpenAI(
  # If you have not configured the environment variable, replace the following line with your API key: api_key="sk-xxx",
  api_key=os.getenv("DASHSCOPE_API_KEY"),
  base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
  {
    "role": "user",
    "content": "No me reí después de ver este video"
  }
]
translation_options = {
  "source_lang": "auto",
  "target_lang": "English"
}

completion = client.chat.completions.create(
  model="qwen-mt-plus",
  messages=messages,
  extra_body={
    "translation_options": translation_options
  }
)
print(completion.choices[0].message.content)
Sample response
I didn't laugh after watching this video.

Streaming output

For general streaming concepts (SSE protocol, how to enable streaming, billing, and token usage), see Streaming output. This section covers only the streaming behavior specific to machine translation.
To enable streaming, add stream: true to your translation call. The only difference from standard streaming is including translation_options:
completion = client.chat.completions.create(
  model="qwen-mt-flash",
  messages=[{"role": "user", "content": "No me reí después de ver este video"}],
  stream=True,
  stream_options={"include_usage": True},
  extra_body={"translation_options": {"source_lang": "auto", "target_lang": "English"}},
)
for chunk in completion:
  if chunk.choices:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
Model differences:
| Model | Incremental streaming |
| --- | --- |
| qwen-mt-flash, qwen-mt-lite | Supported: each chunk contains only new content |
| qwen-mt-plus, qwen-mt-turbo | Not supported: each chunk contains all content generated so far |
For DashScope, set incremental_output=True to enable incremental streaming on supported models.
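For the models without incremental streaming, each chunk repeats all text generated so far, so a client that needs deltas can compute them itself. A minimal sketch over simulated chunk contents (not live API responses):

```python
def to_increments(full_chunks):
    """Turn full-content chunks (each repeating all text so far) into deltas."""
    seen = ""
    for text in full_chunks:
        delta = text[len(seen):]  # keep only the newly added suffix
        seen = text
        yield delta

# Simulated full-content chunks, as a non-incremental model would stream them.
chunks = ["I didn't", "I didn't laugh", "I didn't laugh after watching this video."]
print(list(to_increments(chunks)))
# → ["I didn't", ' laugh', ' after watching this video.']
```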

Improve translation quality

For professional translation tasks, you may encounter these issues:
  • Inconsistent terminology: Product names or industry terms are translated incorrectly.
  • Mismatched style: The style of the translated text does not meet the standards of specific domains, such as legal or marketing.
You can use term intervention, translation memory, and domain prompting to resolve these issues.

Term intervention

To ensure translation accuracy and consistency when text contains brand names, product names, or technical terms, you can provide a glossary in the terms field. This instructs the model to use your specified translations. Define and pass terms as follows:
1. Define terms

Create a JSON array and assign it to the terms field. Each object in the array represents a term in the following format:
{
  "source": "term",
  "target": "pre-translated term"
}
2. Pass the terms

Use the translation_options parameter to pass the defined terms array.
  • OpenAI compatible
  • DashScope
Sample request
import os
from openai import OpenAI

client = OpenAI(
  # If you have not configured an environment variable, replace the following line with your API key: api_key="sk-xxx",
  api_key=os.getenv("DASHSCOPE_API_KEY"),
  base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
  {
    "role": "user",
    "content": "Este conjunto de biosensores utiliza grafeno, un material novedoso. Su objetivo son los elementos químicos. Su agudo «sentido del olfato» le permite reflejar el estado de salud del cuerpo de forma más profunda y precisa."
  }
]

# --- First request: without the terms parameter ---
print("--- [Translation result without terms] ---")
translation_options_without_terms = {
  "source_lang": "auto",
  "target_lang": "English"
}

completion_without_terms = client.chat.completions.create(
  model="qwen-mt-turbo",
  messages=messages,
  extra_body={
    "translation_options": translation_options_without_terms
  }
)
print(completion_without_terms.choices[0].message.content)

print("\n" + "="*50 + "\n") # Separator for comparison

# --- Second request: with the terms parameter ---
print("--- [Translation result with terms] ---")
translation_options_with_terms = {
  "source_lang": "auto",
  "target_lang": "English",
  "terms": [
    {
      "source": "biosensor",
      "target": "biological sensor"
    },
    {
      "source": "estado de salud del cuerpo",
      "target": "health status of the body"
    }
  ]
}

completion_with_terms = client.chat.completions.create(
  model="qwen-mt-turbo",
  messages=messages,
  extra_body={
    "translation_options": translation_options_with_terms
  }
)
print(completion_with_terms.choices[0].message.content)
Sample response
After you add the terms, the translation result is consistent with the terms you passed: "biological sensor" and "health status of the body".
--- [Translation result without terms] ---
This set of biosensors uses graphene, a new material, whose target substance is chemical elements. Its sensitive "sense of smell" allows it to more deeply and accurately reflect one's health condition.

==================================================
--- [Translation result with terms] ---
This biological sensor uses a new material called graphene. Its target is chemical elements, and its sensitive "sense of smell" enables it to reflect the health status of the body more deeply and accurately.
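If your glossary is maintained as a plain mapping in application code, a small helper can produce the terms array in the required shape. A minimal sketch; build_terms is an illustrative name, not an SDK function:

```python
def build_terms(glossary):
    """Convert a {source: target} mapping into the terms array format."""
    return [{"source": src, "target": tgt} for src, tgt in glossary.items()]

terms = build_terms({
    "biosensor": "biological sensor",
    "estado de salud del cuerpo": "health status of the body",
})
# Pass as: extra_body={"translation_options": {..., "terms": terms}}
print(terms)
```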

Translation memory

To instruct the model to use a specific translation style or sentence pattern, you can provide source-target sentence pairs as examples in the tm_list field. The model then imitates the style of these examples for the current translation task.
1. Define the translation memory

Create a JSON array named tm_list. Each JSON object in the array contains a source sentence and its corresponding translated sentence in the following format:
{
  "source": "source statement",
  "target": "translated statement"
}
2. Pass the translation memory

Use the translation_options parameter to pass the translation memory array.
  • OpenAI compatible
  • DashScope
Sample request
import os
from openai import OpenAI

client = OpenAI(
  # If you have not configured an environment variable, replace the following line with your API key: api_key="sk-xxx",
  api_key=os.getenv("DASHSCOPE_API_KEY"),
  base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
  {
    "role": "user",
    "content": "El siguiente comando muestra la información de la versión de Thrift instalada."
  }
]
translation_options = {
  "source_lang": "auto",
  "target_lang": "English",
  "tm_list": [
    {
      "source": "Puede utilizar uno de los siguientes métodos para consultar la versión del motor de un clúster:",
      "target": "You can use one of the following methods to query the engine version of a cluster:"
    },
    {
      "source": "La versión de Thrift utilizada por nuestro HBase en la nube es la 0.9.0. Por lo tanto, recomendamos que la versión del cliente también sea la 0.9.0. Puede descargar Thrift 0.9.0 desde aquí. El paquete de código fuente descargado se utilizará posteriormente. Primero debe instalar el entorno de compilación de Thrift. Para la instalación desde el código fuente, puede consultar el sitio web oficial de Thrift.",
      "target": "The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website."
    },
    {
      "source": "Puede instalar el SDK a través de PyPI. El comando de instalación es el siguiente:",
      "target": "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"
    }
  ]
}

completion = client.chat.completions.create(
  model="qwen-mt-plus",
  messages=messages,
  extra_body={
    "translation_options": translation_options
  }
)
print(completion.choices[0].message.content)
Sample response
You can run the following command to view the version of Thrift that is installed:
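Translation memories are often exported as source/target sentence pairs, for example in CSV. A sketch of loading such pairs into a tm_list; the load_tm helper and the two-column CSV layout are assumptions for illustration, not a documented format:

```python
import csv
import io

def load_tm(csv_text):
    """Build a tm_list from CSV rows of source,target sentence pairs."""
    reader = csv.reader(io.StringIO(csv_text))
    return [{"source": src, "target": tgt} for src, tgt in reader]

csv_export = '"Hola, mundo.","Hello, world."\n'
tm_list = load_tm(csv_export)
# Pass as: extra_body={"translation_options": {..., "tm_list": tm_list}}
print(tm_list)
```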

Domain prompting

To adapt the translation style to a specific domain, you can use the translation_options parameter to pass a domain prompt. For example, translations for legal or government domains should be formal, while those for social media should be colloquial.
Domain prompts currently support only English.
  • OpenAI compatible
  • DashScope
Sample request
import os
from openai import OpenAI

client = OpenAI(
  # If you have not configured an environment variable, replace the following line with your API key: api_key="sk-xxx",
  api_key=os.getenv("DASHSCOPE_API_KEY"),
  base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
  {
    "role": "user",
    "content": "La segunda instrucción SELECT devuelve un número que indica la cantidad de filas que habría devuelto la primera instrucción SELECT si no se hubiera utilizado la cláusula LIMIT."
  }
]

# --- First request: without the domains parameter ---
print("--- [Translation result without domains] ---")
translation_options_without_domains = {
  "source_lang": "auto",
  "target_lang": "English",
}

completion_without_domains = client.chat.completions.create(
  model="qwen-mt-plus",
  messages=messages,
  extra_body={
    "translation_options": translation_options_without_domains
  }
)
print(completion_without_domains.choices[0].message.content)

print("\n" + "="*50 + "\n") # Separator for comparison

# --- Second request: with the domains parameter ---
print("--- [Translation result with domains] ---")
translation_options_with_domains = {
  "source_lang": "auto",
  "target_lang": "English",
  "domains": "The sentence is from Ali Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style."
}

completion_with_domains = client.chat.completions.create(
  model="qwen-mt-plus",
  messages=messages,
  extra_body={
    "translation_options": translation_options_with_domains
  }
)
print(completion_with_domains.choices[0].message.content)
Sample response
--- [Translation result without domains] ---
The second SELECT statement returns a number indicating how many rows the first SELECT statement would return without the LIMIT clause.

==================================================

--- [Translation result with domains] ---
The second SELECT statement returns a number that indicates how many rows the first SELECT statement would have returned if it had not included a LIMIT clause.

Custom prompts

Use custom prompts in Qwen-MT to specify details such as the language or style. This method is mutually exclusive with the translation_options parameter. If you use both, translation_options may not take effect.
For the best translation results, use translation_options to configure translation settings instead.
Example: Spanish-to-English translation in the legal domain:
  • OpenAI compatible
  • DashScope
Sample request
import os
from openai import OpenAI

client = OpenAI(
  # If the environment variable is not configured, replace the following line with your Qwen Cloud API Key: api_key="sk-xxx",
  api_key=os.getenv("DASHSCOPE_API_KEY"),
  base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
prompt_template = """
# Role
You are a professional legal translation expert, proficient in both Spanish and English, and you are especially skilled at handling commercial contracts and legal documents.

# Task
I need you to translate the following Spanish legal text into professional, accurate, and formal English.

# Translation Requirements
1.  **Fidelity to the Original**: Strictly translate according to the meaning and legal intent of the original text. Do not add or omit information.
2.  **Precise Terminology**: Use standard legal terms common in the Common Law system. For example, "Parte A" should be translated as "Party A", "Parte B" as "Party B", and "fuerza mayor" as "Force Majeure".
3.  **Formal Tone**: Maintain the rigorous, objective, and formal style inherent in legal documents.
4.  **Clarity of Language**: The translation must be clear, unambiguous, and conform to the expressive conventions of English legal writing.
5.  **Format Preservation**: Retain the paragraphs, numbering, and basic format of the original text.

# Text to be Translated
{text_to_translate}
"""

# --- 2. Prepare the legal text to be translated ---
spanish_legal_text = "Este contrato entrará en vigor a partir de la fecha en que ambas partes lo firmen y sellen, y tendrá una vigencia de un año."
final_prompt = prompt_template.format(text_to_translate=spanish_legal_text)

# --- 3. Construct the messages ---
messages = [{"role": "user", "content": final_prompt}]

# --- 4. Initiate the API request ---
completion = client.chat.completions.create(model="qwen-mt-plus", messages=messages)

# --- 5. Print the model's translation result ---
translation_result = completion.choices[0].message.content
print(translation_result)
Sample response
This Contract shall become effective from the date on which both parties sign and affix their seals, and its term of validity shall be one year.

Going live

  • Control the input token count: Qwen-MT models have a maximum input limit of 8,192 tokens. For long content, consider the following strategies to control the number of input tokens:
    • Translate in segments: When you translate long text, process it in segments. Split the text based on semantic units, such as paragraphs or complete sentences, instead of by character count. This approach preserves contextual integrity and improves translation quality.
    • Provide the most relevant reference content: Terms, translation memory, and domain prompts are added to the input prompt as tokens. To optimize token usage, provide only the reference content that is most relevant to the current task. Avoid using large, generic lists.
  • Set source_lang based on the scenario
    • When the source language is uncertain, such as in social chat scenarios with multilingual text, set source_lang to auto. The model automatically identifies the source language.
    • In scenarios with a fixed language and high accuracy requirements, such as for technical documents or operation manuals, always specify source_lang. Explicitly defining the source language improves translation accuracy.
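The segmentation strategy above can be sketched as a paragraph-boundary splitter. Character count here is only a rough stand-in for tokens (the ratio varies by language and tokenizer), so choose the budget conservatively:

```python
def segment_text(text, max_chars=2000):
    """Split text into segments at paragraph boundaries, keeping each segment
    under max_chars (a rough proxy for the 8,192-token input limit)."""
    segments, current = [], ""
    for para in text.split("\n\n"):
        # Start a new segment if appending this paragraph would exceed the budget.
        if current and len(current) + len(para) + 2 > max_chars:
            segments.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        segments.append(current)
    return segments

text = "\n\n".join(["First paragraph."] * 5)
print(len(segment_text(text, max_chars=40)))  # → 3
```

Each segment can then be sent as its own translation request; keeping paragraphs intact preserves the context the model needs.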

Supported languages

Use the English name or Code from the table below when you send a request.
If you are unsure of the source language, you can set the source_lang parameter to auto for automatic detection.
  • qwen-mt-plus/flash/turbo (92 languages)
  • qwen-mt-lite (31 languages)
| Language | English name | Code |
| --- | --- | --- |
| English | English | en |
| Simplified Chinese | Chinese | zh |
| Traditional Chinese | Traditional Chinese | zh_tw |
| Russian | Russian | ru |
| Japanese | Japanese | ja |
| Korean | Korean | ko |
| Spanish | Spanish | es |
| French | French | fr |
| Portuguese | Portuguese | pt |
| German | German | de |
| Italian | Italian | it |
| Thai | Thai | th |
| Vietnamese | Vietnamese | vi |
| Indonesian | Indonesian | id |
| Malay | Malay | ms |
| Arabic | Arabic | ar |
| Hindi | Hindi | hi |
| Hebrew | Hebrew | he |
| Burmese | Burmese | my |
| Tamil | Tamil | ta |
| Urdu | Urdu | ur |
| Bengali | Bengali | bn |
| Polish | Polish | pl |
| Dutch | Dutch | nl |
| Romanian | Romanian | ro |
| Turkish | Turkish | tr |
| Khmer | Khmer | km |
| Lao | Lao | lo |
| Cantonese | Cantonese | yue |
| Czech | Czech | cs |
| Greek | Greek | el |
| Swedish | Swedish | sv |
| Hungarian | Hungarian | hu |
| Danish | Danish | da |
| Finnish | Finnish | fi |
| Ukrainian | Ukrainian | uk |
| Bulgarian | Bulgarian | bg |
| Serbian | Serbian | sr |
| Telugu | Telugu | te |
| Afrikaans | Afrikaans | af |
| Armenian | Armenian | hy |
| Assamese | Assamese | as |
| Asturian | Asturian | ast |
| Basque | Basque | eu |
| Belarusian | Belarusian | be |
| Bosnian | Bosnian | bs |
| Catalan | Catalan | ca |
| Cebuano | Cebuano | ceb |
| Croatian | Croatian | hr |
| Egyptian Arabic | Egyptian Arabic | arz |
| Estonian | Estonian | et |
| Galician | Galician | gl |
| Georgian | Georgian | ka |
| Gujarati | Gujarati | gu |
| Icelandic | Icelandic | is |
| Javanese | Javanese | jv |
| Kannada | Kannada | kn |
| Kazakh | Kazakh | kk |
| Latvian | Latvian | lv |
| Lithuanian | Lithuanian | lt |
| Luxembourgish | Luxembourgish | lb |
| Macedonian | Macedonian | mk |
| Maithili | Maithili | mai |
| Maltese | Maltese | mt |
| Marathi | Marathi | mr |
| Mesopotamian Arabic | Mesopotamian Arabic | acm |
| Moroccan Arabic | Moroccan Arabic | ary |
| Najdi Arabic | Najdi Arabic | ars |
| Nepali | Nepali | ne |
| North Azerbaijani | North Azerbaijani | az |
| North Levantine Arabic | North Levantine Arabic | apc |
| Northern Uzbek | Northern Uzbek | uz |
| Norwegian Bokmal | Norwegian Bokmal | nb |
| Norwegian Nynorsk | Norwegian Nynorsk | nn |
| Occitan | Occitan | oc |
| Odia | Odia | or |
| Pangasinan | Pangasinan | pag |
| Sicilian | Sicilian | scn |
| Sindhi | Sindhi | sd |
| Sinhala | Sinhala | si |
| Slovak | Slovak | sk |
| Slovenian | Slovenian | sl |
| South Levantine Arabic | South Levantine Arabic | ajp |
| Swahili | Swahili | sw |
| Tagalog | Tagalog | tl |
| Ta'izzi-Adeni Arabic | Ta'izzi-Adeni Arabic | acq |
| Tosk Albanian | Tosk Albanian | sq |
| Tunisian Arabic | Tunisian Arabic | aeb |
| Venetian | Venetian | vec |
| Waray | Waray | war |
| Welsh | Welsh | cy |
| Western Persian | Western Persian | fa |

API reference

For detailed API parameters, see: