China AI Models List (Updated Weekly, English)

A builder-facing tracker of major China AI model families, labs, and the best English-accessible verification links

Thesis

The most useful way to track China AI models in English is not a giant benchmark sheet. It is a compact weekly list of the labs and model families that keep changing builder decisions, paired with the English-accessible places where you can verify what actually shipped.

Decision in 20 seconds

Use this page for the standing watchlist. Its core asset is the combination of who stays on the list, what should trigger action, and where to verify it first. If the question is broader than the watchlist, go back to the China AI overview. If the question is mainly about source selection, use the China AI English sites hub or Best Sites.

Who this is for

  • Builders and PMs who need a compact weekly watchlist rather than a giant market map.
  • English-first teams who want to know which China AI names still matter after the news cycle fades.
  • Researchers and evaluators who need a short list of families worth checking for benchmark, access, or license changes.

Who this is not for

  • Readers looking for a benchmark leaderboard with one universal ranking.
  • People who want every China AI lab listed regardless of builder relevance.
  • Readers whose main question is source selection or workflow rather than model families to keep in view.

Use this page when

Use this page when the question is specifically which names deserve a permanent slot in the weekly review. It is not trying to be the full topic overview, the full workflow, or the full source directory. Those routes still matter, but the main reason to come here is the watchlist itself: families, triggers, and verification paths in one place.

Permanent watchlist in one screen

| Family | Keep it on the list because | Verify first through |
| --- | --- | --- |
| DeepSeek | It repeatedly changes open-model benchmark, cost-performance, and evaluation conversations. | DeepSeek GitHub, Hugging Face, technical report |
| Qwen | It ships across sizes and modalities often enough to keep affecting OSS and builder comparisons. | QwenLM GitHub, Hugging Face, official docs |
| Kimi | It matters when product-facing reasoning and launch momentum change what builders pay attention to. | Official product pages, release notes, research posts |
| MiniMax | It matters when multimodal packaging or practical API access changes what is testable for your team. | Official docs, release pages, API notes |
| GLM / Hunyuan | They matter when commercial APIs, enterprise distribution, or platform reach enter the evaluation set. | Official docs, product pages, release notes |

April 2026 families to watch now

The permanent watchlist still matters, but late April 2026 raises a second question: which specific branches deserve extra builder attention right now? The short answer: Qwen3.6 as the clearest open-weight release wave, GLM-5.1 as a stronger API and coding watch, MiniMax-M2.7 as a compact or cost-sensitive multimodal watch, Kimi K2.6 as an agent-workflow watch, and DeepSeek V4 as a high-priority item that should not yet be treated as a settled public release path. That is why this page separates the permanent watchlist from the April-now watch.

| Family or branch | Why now | Default stance |
| --- | --- | --- |
| Qwen3.6 line | Official late-April releases keep extending the clearest English-accessible open-weight path in the China AI cluster. | Test this week if open coding models matter |
| GLM-5.1 | It strengthens the API-first and coding-oriented branch of the current watchlist. | Watch → Test if commercial API comparisons matter |
| MiniMax-M2.7 | It keeps showing up when the question is cost, packaging, or practical multimodal deployment. | Watch → Test if unit economics or multimodality matter |
| Kimi K2.6 | It raises Moonshot's importance for agentic coding and longer workflow tasks. | Watch → Test if agent workflows matter |
| DeepSeek V4 watch | Attention is high, but teams should still wait for a stable public release surface before treating it as the new default. | Watch only until the official public path changes |

What is the best way to use a China AI models list?

The best way to use a China AI models list is as a weekly tracking layer, not as a final ranking. Keep a short set of labs and model families in view, watch the English-accessible release channels for each one, and verify benchmark source, API access, and license before you act. This works better than following generic AI news because it keeps the China AI watchlist small, current, and connected to practical verification steps.

Which China AI models should builders track?

Builders should track the China AI model families that repeatedly change evaluation queues, cost comparisons, access options, or product packaging, not every model name that trends for one week. In most teams, that means keeping DeepSeek, Qwen, Kimi, MiniMax, GLM, and Hunyuan on the permanent watchlist, then checking whether ERNIE, Doubao, or a Tier 2 name has become newly decision-relevant. A good watchlist is small enough to review weekly and specific enough to tell you what changed, where to verify it, and whether it should trigger action. This page answers who stays on the watchlist and why; it does not replace the Best Sites page, which owns the source shortlist, or the workflow guide, which owns the weekly routine.

How should I track Chinese AI models in English?

To track Chinese AI models in English, keep the workflow simple: use a small permanent watchlist, not a giant market map. Start with the model families most likely to change builder decisions, such as DeepSeek, Qwen, Kimi, MiniMax, GLM, and Hunyuan. Check GitHub, Hugging Face, official docs, technical reports, and release pages for what actually shipped. Then review only the changes that affect benchmark confidence, API access, license terms, or product packaging. This page owns that structured tracker role. If you need translation-lag context, lab-specific channel explanations, or a broader verification workflow, switch to the supporting article or the workflow guide instead of turning the tracker into a second article.

How to read this page

  • Current watchlist tells you which families deserve a permanent slot in a weekly review.
  • Trigger action tells you what kind of change should move an item from "notice" to "review this week."
  • Verification links tell you where to confirm the release in English before you repeat or act on the claim.
  • Tier 2 names are worth adding only when your scope expands beyond the core builder watchlist.

Current watchlist

| Model or family | Lab or company | Why it stays on the list | What should trigger action | Best English-accessible verification links | RadarAI note |
| --- | --- | --- | --- | --- | --- |
| DeepSeek-V3 / DeepSeek-R1 | DeepSeek | Often resets open-model cost-performance conversations and benchmark comparisons. | New flagship release, benchmark jump, API pricing change, or license shift. | GitHub, Hugging Face, technical report, official docs | Usually one of the first China-origin model families that changes builder evaluation queues. |
| Qwen family | Alibaba Cloud | Frequent releases across sizes, modalities, and OSS-friendly distribution channels. | New family branch, stronger reasoning variant, OSS release, or access update. | QwenLM GitHub, Hugging Face, official docs, technical report | Useful when you want both strong open models and clear English-facing release materials. |
| Kimi family | Moonshot AI | Product-facing launches often shape how people talk about China AI reasoning and UX. | Major product release, reasoning improvement claim, or broader English-facing rollout. | Official product pages, release notes, research posts, English coverage | Worth tracking when the signal is product experience or launch momentum rather than a repo-first release. |
| MiniMax family | MiniMax | Strong candidate when multimodal packaging and practical product access matter more than one benchmark line. | Multimodal launch, API availability change, or pricing / packaging update. | Official docs, release pages, research posts, English summaries | Useful when the question is not just model quality but also product packaging and practical access. |
| ERNIE family | Baidu | Enterprise packaging, cloud distribution, and China-market context can matter more than raw model buzz. | Enterprise release, API or cloud packaging change, or region-access signal. | Official docs, Baidu AI Cloud updates, product pages, English reporting | Important when your decisions depend on enterprise packaging, cloud access, or China-market context. |
| Doubao family | ByteDance | Fast product iteration and ecosystem moves can matter even when the repo story is weaker. | Major product feature release, model refresh, or platform integration move. | Official product pages, research posts, GitHub when available, English summaries | Track when you care about fast product iteration and ecosystem-level movement, not just one repo. |
| GLM family | Zhipu AI | Relevant when you want another strong commercial and API-facing line beyond DeepSeek and Qwen. | New GLM generation, API availability expansion, or enterprise partnership signal. | Official docs, release notes, model pages, English reporting | Add early if your team compares commercial APIs, not just open weights. |
| Hunyuan family | Tencent | Matters when ecosystem distribution, cloud reach, and platform packaging are part of the evaluation. | Cloud release, enterprise access update, or notable multimodal / agent capability move. | Official docs, Tencent Cloud updates, product pages, English summaries | Most relevant when you care about platform leverage and enterprise distribution, not only benchmark chatter. |

Tier 2 names to add only if your scope expands

| Name or family | Why add later | Add it when |
| --- | --- | --- |
| THUDM research line | Strong academic and open research signal, but not always the first builder decision layer. | You care about frontier research repos, not only deployable product choices. |
| SenseNova | Useful for enterprise and multimodal tracking, but less central for a compact weekly builder watchlist. | You need broader enterprise AI vendor coverage inside China. |
| Step family | Worth watching for momentum and ecosystem chatter, but not always a top-8 must-track family. | You start seeing repeated product relevance or customer questions around it. |
| Yi family | Still useful historically and for selective OSS comparison, but less often the first family that changes this week's decision. | Your stack or benchmarks still compare against earlier open-model baselines. |

What should trigger action this week

| If this changes | Why it matters | What to do next |
| --- | --- | --- |
| New flagship model or major version | Could change benchmark comparisons, evaluation backlog, or product positioning. | Read the technical report or model card, then compare against your current default model. |
| API access opens or changes | A model moves from "interesting" to "testable" only when your team can actually use it. | Check docs, pricing, account requirements, and region access before adding it to testing. |
| License terms change | Commercial use assumptions break fast when the license changes across versions. | Read the LICENSE file and model card before sharing a recommendation internally. |
| Benchmark claim gets third-party confirmation | That is often the moment hype becomes evaluation-worthy. | Move the model from watchlist into a short benchmark or prompt test. |
| Distribution or cloud packaging changes | Enterprise and production relevance often depends more on packaging than on one raw score. | Re-check whether procurement, deployment, or regional access just became easier. |
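The trigger-to-action mapping above can be sketched as a small lookup, so a weekly review script can turn an observed change into a concrete next step. This is a hypothetical illustration: the trigger names and the `next_step` helper are invented for this sketch, not an official RadarAI tool.

```python
# Hypothetical sketch: the trigger-action table expressed as a lookup.
# Trigger keys and wording are illustrative only.
TRIGGER_ACTIONS = {
    "new_flagship": "Read the technical report or model card, then compare "
                    "against your current default model.",
    "api_access_change": "Check docs, pricing, account requirements, and "
                         "region access before adding it to testing.",
    "license_change": "Read the LICENSE file and model card before sharing "
                      "a recommendation internally.",
    "third_party_benchmark": "Move the model from watchlist into a short "
                             "benchmark or prompt test.",
    "packaging_change": "Re-check whether procurement, deployment, or "
                        "regional access just became easier.",
}

def next_step(trigger: str) -> str:
    """Return the recommended next step for an observed trigger."""
    return TRIGGER_ACTIONS.get(trigger, "No action: keep on watch.")

print(next_step("license_change"))
```

Anything not matching a known trigger falls back to "keep on watch", which mirrors the page's default stance of noticing without acting.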

DeepSeek vs Qwen vs Kimi for watchlists: what signal does each one usually give?

| Name | Most useful when your question is | What usually triggers review | Where to verify first |
| --- | --- | --- | --- |
| DeepSeek | Whether a new open model changes benchmark, cost, or evaluation queues | New flagship model, benchmark jump, pricing move, or license change | GitHub, Hugging Face, technical report, official docs |
| Qwen | Whether a broad OSS-friendly family added a new size, modality, or reasoning branch worth testing | New branch, reasoning update, multimodal release, or access expansion | QwenLM GitHub, Hugging Face, official docs, release posts |
| Kimi | Whether product-facing reasoning, UX, or launch momentum changed enough to affect builder attention | Major Kimi launch, reasoning claim, product release, or broader English-facing rollout | Official product pages, release notes, research posts, English summaries |

What to verify for every tracked model

| Field | Why it matters | Good source |
| --- | --- | --- |
| Benchmark source | Separates self-reported claims from reproducible evidence | Technical reports, model cards, third-party leaderboards |
| API access | Determines whether your team can actually test the model | Official docs, pricing page, onboarding or account requirements |
| License terms | Determines whether commercial use is allowed or restricted | LICENSE file, model card, official release page |
| Release channel | Shows whether the claim comes from a primary source or commentary only | GitHub repo, Hugging Face page, official docs, product page |
| Builder relevance | Keeps the watchlist tied to actual product decisions | Your own evaluation queue, cost comparison, deployment constraints |
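The five verification fields above work best as a completeness gate: do not act on a release claim while any field is still blank. A minimal sketch, with illustrative field names invented for this example:

```python
# Hypothetical sketch: the per-model verification fields as a simple
# completeness check before acting on a release claim.
REQUIRED_FIELDS = (
    "benchmark_source",   # technical report, model card, third-party leaderboard
    "api_access",         # official docs, pricing, onboarding requirements
    "license_terms",      # LICENSE file, model card, release page
    "release_channel",    # repo, Hugging Face page, docs, product page
    "builder_relevance",  # tie back to your own evaluation queue
)

def missing_checks(record: dict) -> list:
    """Return the verification fields still unfilled for a tracked model."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

record = {"benchmark_source": "technical report", "license_terms": "Apache-2.0"}
print(missing_checks(record))  # the fields still to verify before acting
```

A model only moves from "watch" to "test" when `missing_checks` comes back empty.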

Weekly update rhythm

  1. Keep the list small: do not track every China AI release. Track the labs and model families most likely to affect your stack.
  2. Check primary sources first: repos, model cards, docs, and technical reports beat commentary for first verification.
  3. Pull only the meaningful changes: new model, benchmark shift, API access change, or license change.
  4. Write one note: what changed, where it was verified, and whether it affects this month's decisions.

How to decide who stays on this list

  • Keep a family on the list if it repeatedly affects benchmark, cost, access, or product packaging decisions.
  • Move a family to Tier 2 if it is interesting but no longer changes what your team evaluates or deploys.
  • Add a new family only after it appears more than once in your weekly review or in customer / team decision discussions.
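The three membership rules above can be folded into one tiering decision. The thresholds and function below are illustrative assumptions, not RadarAI's actual criteria:

```python
# Hypothetical sketch of the list-membership rules as a tiering function.
# Thresholds (>= 2) are illustrative assumptions, not official criteria.
def tier(decision_hits: int, weekly_appearances: int) -> str:
    """Place a model family on the core list, in Tier 2, or off the list.

    decision_hits: recent times the family affected benchmark, cost,
    access, or packaging decisions.
    weekly_appearances: times it surfaced in weekly reviews or
    customer / team discussions.
    """
    if decision_hits >= 2:
        return "core watchlist"
    if weekly_appearances >= 2:
        return "tier 2"
    return "off list"
```

Read with the rules above: repeated decision impact keeps a family on the core list, repeated appearances without decision impact park it in Tier 2, and anything else stays off until it earns a slot.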

A 15-minute copyable weekly check

```
## China AI models check — [Date]
1. Families checked: [DeepSeek / Qwen / Kimi / ...]
2. Trigger seen: [new release / benchmark / API / license / packaging]
3. Verified through: [GitHub / Hugging Face / docs / report]
4. Action level: [watch / discuss / test this week]
5. Why it matters: [1 sentence tied to your stack or roadmap]
```
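If the weekly check lives in a script rather than a notes app, the template above maps directly onto a small note generator. This is a hypothetical sketch; the `WeeklyCheck` class and its field names are invented here to mirror the template:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: the copyable weekly check rendered from structured
# fields. Class and field names are illustrative, not an official tool.
@dataclass
class WeeklyCheck:
    families: list = field(default_factory=list)
    trigger: str = "none"
    verified_through: str = ""
    action_level: str = "watch"  # watch / discuss / test this week
    why: str = ""

    def render(self) -> str:
        return "\n".join([
            f"## China AI models check — {date.today().isoformat()}",
            f"1. Families checked: {' / '.join(self.families)}",
            f"2. Trigger seen: {self.trigger}",
            f"3. Verified through: {self.verified_through}",
            f"4. Action level: {self.action_level}",
            f"5. Why it matters: {self.why}",
        ])

note = WeeklyCheck(
    families=["DeepSeek", "Qwen", "Kimi"],
    trigger="new release",
    verified_through="GitHub / Hugging Face",
    action_level="test this week",
    why="New open-weight coding model overlaps with our evaluation queue.",
)
print(note.render())
```

Keeping the note as structured fields rather than free text makes it trivial to grep past weeks for a family name or an action level.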

How RadarAI uses this list

RadarAI uses this page as the model tracker layer inside the China AI cluster. The weekly report gives you the broader signal stream, the workflow guide tells you how to review it, the Best Sites page tells you where to look, and this page tells you which model families and labs deserve a permanent slot in the watchlist.

What this page is not

  • Not a benchmark leaderboard: it does not try to rank every model by one score.
  • Not a complete market map: it keeps the list small enough for weekly use.
  • Not a replacement for primary-source verification: every row still needs repo, doc, or report checks.

Common mistakes when using a models list

  • Using this page like a ranking: the point is to decide what deserves verification, not to crown one universal winner.
  • Confusing lab relevance with release relevance: a big lab name does not mean every weekly update matters.
  • Skipping access and license checks: many China AI models sound relevant before you discover they are not actually usable in your context.
  • Adding too many names too early: once the list turns into a directory, it stops working as a weekly decision tool.

Quotable summary

RadarAI's China AI models list is a weekly watchlist, not a giant leaderboard. Track the model families most likely to change builder decisions, define clear action triggers, verify releases through GitHub, Hugging Face, technical reports, and official docs, and keep benchmark source, API access, and license checks tied to every meaningful update.

FAQ

What is this page for?

This page is a builder-facing tracker for major China AI model families and labs. It helps you see who to watch, where to verify releases in English, what should trigger action this week, and which practical checks matter before you act.

What is the shortest answer to which China AI models builders should track?

Start with DeepSeek, Qwen, Kimi, MiniMax, GLM, and Hunyuan, then expand only when access, pricing, multimodal packaging, or enterprise distribution makes another lab newly decision-relevant. Use the short answer page when you need the citation-ready version first.

Is this a benchmark leaderboard?

No. This page is a monitoring and verification list, not a benchmark leaderboard. Use it to keep the major labs and model families in view, then verify benchmark claims through model cards, technical reports, and third-party evaluations.

How should I use this with the workflow guide?

Use this page to decide which labs and model families belong in your watchlist. Use the workflow guide for the weekly routine, and use the Best Sites page when you need the source stack behind that routine.

Why are some China AI labs missing from the top watchlist?

Because this page is not trying to map every lab. RadarAI keeps the top watchlist focused on the families most likely to change builder decisions through open-weight releases, API access, enterprise packaging, multimodal launches, or major ecosystem movement. That is why some names stay in Tier 2 until they become more decision-relevant.

Should I track by lab or by model family?

Track by model family when your question is adoption or evaluation, and by lab when your question is roadmap or ecosystem movement. In practice, most teams need both: family for what to test, lab for where the next release is likely to appear.
