Are Free ChatGPT-Level AIs Really Safe to Use? — The Beginner’s Guide to Chinese AIs
“ChatGPT and Gemini are paid, but you're saying I can use an AI with similar performance for free?”
As of 2026, “free” or “low-cost” AIs are emerging almost daily.
Today, let's explore the three most spotlighted models: Kimi K2.5, MiniMax M2.5, and GLM-5.
Contents
- 1. What is Open-Weight AI?
- Differences Between General AI (Commercial Models) and Open-Weight AI
- 2. Why Open-Weight AI is Getting Attention
- Why release them for free?
- 3. What Can You Do With Open-Weight AI?
- ① Install on Your Own Server = Enhanced Security
- ② Custom Training with Your Data (Fine-Tuning)
- ③ Compressing Large Models into Smaller Ones (Distillation)
- ④ Integrate AI into Your Services
- 4. Comparison of 3 Major Open-Weight AIs
- ① Kimi K2.5
- ② MiniMax M2.5
- ③ GLM-5
- ④ At a Glance Comparison
- 5. How to Actually Use Them
- Method 1: Directly on the Web (Recommended for Beginners ⭐)
- Method 2: Connect via API (Recommended for Developers & Operators)
- Method 3: Install Directly on Your Server (Advanced Users)
- 6. What Happens to My Personal Information and Data?
- 6-1. Where is my input saved?
- 6-2. What are the actual issues with Chinese laws?
- 6-3. So, how can I use them safely?
- 7. From Training Controversy to Actual Quality
- 7-1. The Controversy Over How These Models Were Trained
- 7-2. Other Precautions
- 8. Summary — Which AI is Right for Me?
- 9. References
- 9-1. Official Model Sites
- 9-2. Security and Policy
- 9-3. Useful Tools
1. What is Open-Weight AI?
AI is trained on massive amounts of text and data.
During this learning process, tens of billions of numbers called ‘Weights’ are formed inside the AI. These numbers act as the criteria for judgment, such as ‘what word is most likely to come next.’ In human terms, it’s akin to the intuition or instincts developed through learning and experience.
Differences Between General AI (Commercial Models) and Open-Weight AI
| Category | General AI (Commercial Models) | Open-Weight AI |
|---|---|---|
| Representative AIs | ChatGPT, Claude, Gemini | Kimi K2.5, MiniMax M2.5, GLM-5 |
| Weights (Learning Output) | Private | Public (Anyone can download) |
| How to Use | Access only via website or app | Install directly on a server or connect via API |
| Cost | Monthly subscription or usage-based pricing | Free if self-hosted (only server costs apply) |
| Customization | Impossible | Possible (e.g., fine-tuning with own data) |
In short, an open-weight AI is one where the weights learned by the AI are made public.
While “open-weight” does not strictly mean “free,” the three models introduced today are released under licenses that permit free commercial use, making them practically free.
Thanks to this, developers can install these AIs directly on their own servers or customize them to fit their services.
2. Why Open-Weight AI is Getting Attention
In January 2025, China's DeepSeek built a GPT-4 level AI with a training cost of only $6 million and released it to the world. This event was so shocking that it was called a ‘bombshell in the AI industry.’ Until then, it was widely believed that building a single AI cost tens to hundreds of millions of dollars.
Following DeepSeek's success, several Chinese AI companies began aggressively releasing open-weight models. Today in 2026, free AIs that match or even surpass ChatGPT and Claude in certain areas are continuously emerging.
All three models introduced today were created by Chinese AI startups.
Since 2025, as they've raced to release high-performance AIs for free, users worldwide have been enjoying the benefits.
Why release them for free?
- Dominate the Developer Ecosystem: If developers worldwide start using their models, transitioning to enterprise services or paid APIs later becomes much easier.
- Promote Technical Prowess: Releasing open-weight models encourages global researchers to run benchmarks. Good results naturally attract investments and talent.
- B2B Revenue is the Real Goal: About 85% of Zhipu AI's (GLM) 2024 revenue came from on-premise services (installed directly on corporate servers) for governments and enterprises. Free open-weight models serve as a showroom for their enterprise sales.
- Bypass US Sanctions: With the latest GPUs hard to acquire due to US semiconductor export restrictions, open-sourcing allows them to maintain research speed through global community contributions.
In other words, open-weight releases are a strategic decision to secure a developer ecosystem + drive enterprise sales + expand global presence.
3. What Can You Do With Open-Weight AI?
The biggest advantage of open-weight models is that you can actually own the model files yourself.
① Install on Your Own Server = Enhanced Security
By running it on your own server or cloud (AWS, GCP, etc.), your input data never leaves your environment.
This is particularly useful for hospitals, law firms, and financial institutions where data leaks are strictly prohibited.
② Custom Training with Your Data (Fine-Tuning)
The base model possesses general knowledge but lacks understanding of your company's specific jargon, work styles, or internal regulations.
Open-weight models can be trained further on this data. You can build specialized AIs, like one that perfectly writes emails in your company's tone, a translation specialist, or a customer service bot.
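To make fine-tuning concrete, here is a minimal sketch of preparing training data. The chat-style `messages` JSONL format shown is a common convention across open-weight fine-tuning toolchains, but the exact schema varies by tool, and the Q&A pairs are invented examples — check the documentation of whichever fine-tuning framework you use.

```python
import json

def make_example(user_msg: str, assistant_msg: str,
                 system: str = "You are our support bot.") -> dict:
    # One training example in the widely used chat-messages JSONL format.
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": assistant_msg},
        ]
    }

# Hypothetical in-house Q&A pairs used to teach the model the company's tone.
pairs = [
    ("How do I reset my password?",
     "Hi! Head to Settings > Security and click 'Reset password'."),
    ("What are your support hours?",
     "We're available 9am-6pm on weekdays."),
]

with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for user_msg, assistant_msg in pairs:
        f.write(json.dumps(make_example(user_msg, assistant_msg),
                           ensure_ascii=False) + "\n")
```

A dataset like this (usually hundreds to thousands of examples, not two) is then fed to a fine-tuning script or service along with the base model weights.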
③ Compressing Large Models into Smaller Ones (Distillation)
This is a technique that uses the outputs of a massive model as the “answer key” to train a smaller, faster model.
For example, by transferring the knowledge of GLM-5 (744 billion parameters) to GLM-4.7-Flash (30 billion parameters), you can create an AI with similar performance but faster speeds and lower costs.
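The distillation idea described above can be sketched in a few lines. This is a toy illustration only: the "teacher" here is a stand-in function with canned answers, not a real model like GLM-5, and a real pipeline would collect millions of pairs and then run an actual training step on the student.

```python
# Toy illustration of distillation: the big "teacher" model's outputs
# become the training labels ("answer key") for a smaller student model.

def teacher_model(prompt: str) -> str:
    # Stands in for a large model answering a prompt.
    canned = {"capital of France?": "Paris", "2 + 2?": "4"}
    return canned.get(prompt, "I don't know.")

def build_distillation_set(prompts):
    # Collect (prompt, teacher answer) pairs; the student model is
    # then fine-tuned to reproduce the teacher's answers.
    return [(p, teacher_model(p)) for p in prompts]

dataset = build_distillation_set(["capital of France?", "2 + 2?"])
# In a real pipeline, `dataset` would feed the student's training step.
```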
④ Integrate AI into Your Services
Via API, you can directly connect AI features to the apps or websites you build.
While using the ChatGPT API requires paying OpenAI based on usage, open-weight models mostly just cost server fees.
| Use Case | Who is it for? | Difficulty |
|---|---|---|
| Use directly on Web/App | First-time AI users | ⭐ (Anyone) |
| Connect to services via API | Developers, Service Operators | ⭐⭐⭐ |
| Install directly on a server | Enterprises prioritizing data security | ⭐⭐⭐⭐ |
| Fine-tuning (Additional training) | Teams needing specialized task AIs | ⭐⭐⭐⭐⭐ |
4. Comparison of 3 Major Open-Weight AIs
① Kimi K2.5
Developer: Moonshot AI, China | Released: January 27, 2026 | License: MIT (Requires attribution for large-scale commercial use)
Kimi K2.5 is a ‘multimodal’ AI that understands not only text but also images and video. Its standout features include generating code directly from UI screenshots and an ‘Agent Swarm’ feature where 100 AIs work together simultaneously like a team.
What are its strengths?
- 📸 Image→Code: Show it a design mockup, and it generates the website code as-is.
- 🎥 Video Understanding: Can analyze videos and generate outputs.
- 🐝 Agent Swarm: Operates up to 100 sub-AIs simultaneously, completing tasks up to 4.5× faster than a single AI.
- 📄 Long Document Processing: Handles up to 256,000 tokens at once (capable of reading and working with an entire book in one go).
How is the performance?
It outperformed top-tier AIs like GPT and Gemini in coding skill evaluations (SWE-Bench Multilingual) and also surpassed GPT and Claude in the field of video understanding.
How can I try it?
- 🌐 Directly on the Web: Visit kimi.com
- 💻 Developer CLI: Kimi Code (useful as an AI coding assistant in the terminal)
- 🔌 API: Apply at moonshot.ai ($0.60 per 1M input tokens)
② MiniMax M2.5
Developer: MiniMax AI, China | Released: February 2026 | License: Apache 2.0 (Free for commercial use)
MiniMax is well-known for its AI video generation service ‘Hailuo,’ and its M-series is their text AI. M2.5 is specialized for actual workflow automation, characterized by its ability to directly create and manipulate Word, Excel, and PowerPoint files, alongside high coding performance.
What are its strengths?
- 📊 Office Tasks: AI directly generates Word, Excel, and PowerPoint files.
- 💻 Coding: SWE-Bench Verified (coding skill test) 80.2% — industry-leading level.
- 🔍 Web Research: BrowseComp (information retrieval test) 76.3% — strong in complex data gathering and analysis.
- ⚡ Cost Efficiency: API pricing is about 8% of equivalent paid models.
How is the performance?
MiniMax M1 (the previous generation) made headlines by surpassing DeepSeek R1 with a training cost of only $534,700. M2.5 is an advanced version of that, currently recording the highest level of coding performance among open-weight models.
How can I try it?
- 🌐 API: Available on the free tier via OpenRouter.
- 🔌 Paid API: minimax.io ($0.40 per 1M input tokens)
- 🖥️ Direct Installation: Download the model from Hugging Face and install it on your server.
③ GLM-5
Developer: Zhipu AI, China (Tsinghua University spin-off) | Released: Late 2025~2026 | License: MIT
The GLM series was created by Zhipu AI, a startup founded by a research team from China's prestigious Tsinghua University. GLM-5 boasts a massively expanded scale of 744 billion parameters compared to the previous version (GLM-4 series). However, it uses an efficient architecture that activates only 40 billion parameters during operation, showing particular strength in coding and automation tasks.
What are its strengths?
- 🏆 Coding Ability: SWE-bench Verified 77.8% → surpasses Gemini 3 Pro (76.2%).
- 🤖 Agent (Autonomous Execution) Tasks: Strong across frontend, backend, and long-term automation tasks.
- 🔧 Lightweight Version Available: GLM-4.7-Flash (30 billion parameter lightweight model) — a manageable size for individual use.
- 🧠 RL Training Tech Open-Sourced: Released its reinforcement learning tool Slime as open source.
How can I try it?
- 🌐 Directly on the Web: chatglm.cn
- 🔌 API: Apply at bigmodel.cn (approx. $0.72 per 1M input tokens)
- 🖥️ Direct Installation: Available for download on Hugging Face.
④ At a Glance Comparison
| Feature | Kimi K2.5 | MiniMax M2.5 | GLM-5 |
|---|---|---|---|
| Core Strengths | Multimodal + Agent Swarm | Office Tasks + Coding | Coding + Agents |
| Image/Video Understanding | ✅ Native Support | ❌ Text-centric | △ Requires the separate GLM-4.6V model |
| Coding Performance | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Beginner Accessibility | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ |
| API Cost | $0.60 / 1M Input | $0.40 / 1M Input | Approx. $0.72 / 1M Input |
| License | MIT | Apache 2.0 | MIT |
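To get a feel for what the API prices in the table mean in practice, here is a rough back-of-the-envelope cost estimator. It uses only the input-token prices listed above (output-token prices, which the table does not cover, would add to the total), and the traffic figures in the example are arbitrary assumptions.

```python
# Rough monthly input-token cost estimate, using the per-1M-token
# input prices from the comparison table above. Output-token prices
# are not included, so real bills will be higher.
INPUT_PRICE_PER_M = {
    "Kimi K2.5": 0.60,
    "MiniMax M2.5": 0.40,
    "GLM-5": 0.72,
}

def monthly_input_cost(model: str, requests_per_day: int,
                       tokens_per_request: int) -> float:
    tokens_per_month = requests_per_day * tokens_per_request * 30
    return tokens_per_month / 1_000_000 * INPUT_PRICE_PER_M[model]

# Example: 1,000 requests/day at ~2,000 input tokens each.
for model in INPUT_PRICE_PER_M:
    print(f"{model}: ${monthly_input_cost(model, 1000, 2000):.2f}/month")
```

Even at this volume (60M input tokens a month), all three models stay in the tens of dollars — a fraction of what equivalent commercial APIs typically cost.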
5. How to Actually Use Them
Method 1: Directly on the Web (Recommended for Beginners ⭐)
This is the easiest method. Just sign up and you can start using it immediately.
- Kimi K2.5: Visit kimi.com
- GLM: Visit chatglm.cn
- MiniMax: No dedicated app; use via API or OpenRouter.
Method 2: Connect via API (Recommended for Developers & Operators)
Use an API if you want to connect these AIs to your own apps or services.
While you can sign separate API contracts on each AI company's official site, the differing registration procedures and payment methods can be cumbersome. In this case, using OpenRouter is highly convenient.
OpenRouter is an intermediary service that lets you manage APIs from multiple AI companies in one place. Create just one account, and you can instantly connect various models. MiniMax M2.5 is available on OpenRouter's free tier, while Kimi K2.5 and GLM-5 are available at low prices.
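Because OpenRouter exposes an OpenAI-compatible chat endpoint, one request shape works for all three models. The sketch below builds such a request with only the Python standard library; the model ID shown is illustrative — check OpenRouter's current model list for the exact IDs of Kimi K2.5, MiniMax M2.5, and GLM-5.

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    # OpenRouter uses the OpenAI-compatible chat format, so the same
    # payload shape works across Kimi, MiniMax, and GLM models.
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

# Illustrative model ID — verify the current ID on OpenRouter.
req = build_request("minimax/minimax-m2.5",
                    "Summarize open-weight AI in one line.",
                    "sk-or-...")
# Sending it requires a real API key:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Switching models is then just a matter of changing the `model` string — no new account, SDK, or payment setup per vendor.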
Method 3: Install Directly on Your Server (Advanced Users)
The most powerful advantage of open-weight models is the ability to deploy and operate the AI on your own server.
You only pay for server costs, and your data never leaks externally.
If your company has its own servers, you can install it there. If not, you can rent GPU servers from cloud services like AWS, GCP, or Azure.
- Download the model files from Hugging Face.
- Run them with AI model execution tools like vLLM or SGLang.
- Ollama (limited to small models) — can run on personal Macs or PCs.
6. What Happens to My Personal Information and Data?
While the performance is excellent, there are important details you must know before using them. Especially for those with vague concerns like “Is it unsafe because it's a Chinese AI?”, we've summarized exactly how personal data is handled.
6-1. Where is my input saved?
The basic structure is the same for all three AIs. The key is understanding the difference between the Web/App services and the APIs.
① Kimi K2.5 (Web Service and API)
Moonshot AI's official privacy policy states the following:
“We collect all content inputted by the user—including prompts, audio, images, videos, and files—and utilize it for service improvement and model training.”
No opt-out method is provided, and the policy specifies that even if you delete your account, data may remain in an anonymized form. In short, the content you input on the web service is likely to be used for AI training.
② MiniMax M2.5 (API)
MiniMax offers a ‘Zero Retention Mode’ and an opt-out option for training data for enterprise users. They state that if privacy settings are enabled, user data is fundamentally not used for training. However, this is something you must ‘manually check’ and may not be the default setting, so verification is essential.
③ GLM-5 (API)
Zhipu AI, the creator of GLM, specifies the following in its API terms of service:
“We do not store the content (such as text) inputted or generated by customers or end-users while using the service. This information is processed in real-time to provide the service and is not stored on our servers.”
Therefore, using GLM via its API offers the clearest ‘no-retention’ policy among the three. However, note that this applies to API usage; different policies may apply to general web service consumers.
6-2. What are the actual issues with Chinese laws?
Saying “It's a Chinese AI, so it's inherently dangerous” might be an exaggeration, but it's important to understand exactly which laws could pose a problem.
① Article 7 of China's National Intelligence Law (Enacted 2017)
Chinese companies and citizens must cooperate with and support national intelligence efforts. This clause serves as a legal basis for the Chinese government to demand user data from Chinese companies if deemed necessary.
② Structural Issues with Moonshot AI
The operator of Kimi 2.5 is ‘MOONSHOT AI PTE. LTD.,’ incorporated in Singapore. However, its origin and core research team are in Beijing. Their privacy policy does not clearly state whether data is stored in China or Singapore, causing controversy. IAPS, a security policy organization, explicitly pointed out this jurisdictional ambiguity in a February 2026 report and recommended caution.
③ Additional Risks for Zhipu AI (GLM)
Zhipu AI completed its IPO on the Hong Kong stock market in January 2026 and is currently listed on the US Department of Commerce's Entity List. However, as explained earlier, Zhipu AI's API service explicitly guarantees a ‘no-retention’ policy, making its data exposure risk lower than others.
WARNING
This isn't an issue exclusive to Chinese AIs.
US AI services like OpenAI and Anthropic also have similar clauses for utilizing training data, and US laws also impose data provision obligations upon government requests.
6-3. So, how can I use them safely?
| Usage Method | Safety Level | Recommended Situation |
|---|---|---|
| Web Service | 🟡 Moderate | General info searches, non-sensitive tasks |
| GLM API | 🟢 Relatively Safe | “No retention” stated — When privacy is a concern |
| MiniMax API (Zero Retention Mode) | 🟢 Relatively Safe | Use after enabling opt-out |
| Via OpenRouter | 🟢 Relatively Safe | OpenRouter itself does not use data for training |
| Direct Server Install (Local PC / Cloud VM) | 🟢 Safest | Processing sensitive data or internal corporate files |
CAUTION
Things You Should NEVER Input (Regardless of the AI)
- Social Security Numbers, Passport Numbers, Credit Card Numbers
- Corporate trade secrets, unreleased contracts
- Patient personal information, medical records
- Passwords, API keys, authentication tokens
- Personal information without the other party's consent, etc.
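One practical safeguard is to screen prompts for obvious secrets before they ever reach an AI service. The sketch below checks text against a few illustrative regex patterns — these are examples only, nowhere near exhaustive, and a production filter would need far broader coverage (national ID formats, key formats per vendor, and so on).

```python
import re

# Illustrative patterns only — a real screen needs much wider coverage.
SENSITIVE_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key":     re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b",
                              re.IGNORECASE),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_sensitive(text: str) -> list[str]:
    # Return which categories of sensitive data appear in the text.
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(text)]

prompt = "My card is 4111 1111 1111 1111, please check it."
hits = find_sensitive(prompt)
if hits:
    print(f"Blocked: prompt contains {', '.join(hits)}")
```

A check like this is cheap to run locally and applies equally to Chinese, US, or self-hosted models — the safest secret is the one that never leaves your machine.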
7. From Training Controversy to Actual Quality
7-1. The Controversy Over How These Models Were Trained
Remember the ‘Distillation’ mentioned in Section 3? We described it as a legitimate technique to create lightweight models, but what happens when this method targets another company's AI without permission?
CAUTION
Official Statement by Anthropic (February 23, 2026)
Anthropic, the creator of Claude, released a shocking statement on their official blog on February 23, 2026.
DeepSeek, Kimi, and MiniMax — these three companies allegedly created around 24,000 fake accounts to generate over 16 million conversations with Claude, utilizing this data to train their own models.
This technique is known as ‘Model Distillation.’ It involves collecting massive amounts of output from a more powerful AI to train your own weaker model. It allows companies to elevate their capabilities with significantly less time and cost compared to building from scratch.
Each company had different objectives:
- DeepSeek: Extracted reasoning capabilities across various tasks and secured methods to bypass censorship on politically sensitive questions.
- Kimi: Whenever a new version of Claude was released, Kimi shifted nearly half of its traffic to the new model within 24 hours, systematically absorbing its capabilities.
- MiniMax: Concentrated on extracting coding and agent capabilities.
WARNING
Before taking this announcement at pure face value, consider the context pointed out by experts:
- Distillation itself is a standard technique: All AI companies, including Anthropic, use distillation to create lightweight models. The line between “illegal distillation” and “legal use” is not always technically clear.
- Anthropic's Political Intent: This announcement was also used as a policy argument to support US AI semiconductor export restrictions. It carries political context rather than being a purely technical statement.
- Legally, it's a Terms of Service violation: Currently, this is more of a Terms of Service violation than a strict copyright infringement. The legal standing of copyright for AI-generated outputs remains ambiguous.
In conclusion, while it appears true that these companies rapidly grew by leveraging Claude without authorization, labeling it simply as ‘theft’ is an area still under legal debate.
7-2. Other Precautions
- ⚠️ Service Stability: Service Level Agreements (SLAs) may be weaker than those of commercial services.
- ⚠️ Don't Blindly Trust Benchmarks: Published numbers reflect optimal conditions. Always test the models yourself before judging.
- ⚠️ Hallucinations: In external knowledge accuracy evaluations, Kimi K2.5 tends to have a higher error rate than other top-tier AI models. Always verify the results for tasks where factual accuracy is critical.
8. Summary — Which AI is Right for Me?
| Who is this for? | Recommended Model | How to Start |
|---|---|---|
| First-time AI users | Kimi K2.5 | Start immediately for free at kimi.com |
| Developers with heavy coding tasks | MiniMax M2.5 or GLM-5 | Connect via OpenRouter API |
| Image/Video-based tasks | Kimi K2.5 | kimi.com or Kimi Code CLI |
| Those wanting to minimize costs | MiniMax M2.5 | OpenRouter Free Tier |
| Those concerned about personal data | GLM-5 API or MiniMax API | Z.ai API (No-retention policy) or via OpenRouter |
| Those wanting to install on their own servers | MiniMax M2.5 | Download from Hugging Face (Apache 2.0) |
9. References
9-1. Official Model Sites
9-2. Security and Policy
9-3. Useful Tools
NOTE
The information in this article is current as of March 2026.
Since AI models and privacy policies change frequently, please be sure to check each service's latest terms of use yourself before making important decisions.