A few days ago, while casually browsing, I saw experts in a group chat mention that NVIDIA has officially opened up the GLM-4.7 and minimax-m2.1 models for free use. For someone like me, who even refuses to pay a 10 RMB repositioning fee for a shared bike, this immediately caught my attention! After a day of on-and-off hands-on testing on GIS documentation writing and GIS development, my impression is that while these models aren't yet on par with the top-tier paid ones, they are perfectly adequate for everyday GIS tasks. If you're on a tight budget and your day-to-day GIS development isn't too complex, they're worth a try. Given the current GIS market and economic climate, every bit of savings helps.

About GLM-4.7 and minimax-m2.1

GLM-4.7 is a domestic (Chinese) large language model touted as a rival to Claude Code, with a reputation for being cheap and effective. At release, GLM-4.7 claimed SOTA (state-of-the-art) results among open-source models in reasoning, coding, and agent capabilities. It integrates smoothly with the mainstream AI programming tools and offers a large usage quota at a low price. Zhipu AI's GLM Coding plan even ran a year-end promotion a while back.

MiniMax-M2.1 was released on December 23, 2025, focusing on complex real-world tasks with core enhancements in multilingual programming and office scenario capabilities. It systematically strengthened support for languages like Rust, Java, and Golang, covering development from low-level to application layers. It also enhanced native Android/iOS development, improved aesthetics for Web/App design, and supports complex interactions and 3D simulations. Some independent user evaluations suggest: Sonnet 4.0 < M2.1 ≤ Sonnet 4.5.

Currently, these two excellent models have been made freely available by NVIDIA (a truly generous move).

Registering with NVIDIA and Generating an API Key

To use these models, you first need to register for an NVIDIA account. Visit:

https://build.nvidia.com/explore/discover

The registration process requires email and phone number verification. I've tested it; it supports +86 (Chinese) phone numbers.

Note that both email and phone verification must be completed before you can generate an API Key. Once verified, click "Get API Key" and copy the key somewhere safe; you'll paste it into every tool below.

Once done, you can close the website.
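Before wiring the key into any tool, you can quickly confirm it works by calling the OpenAI-compatible endpoint directly from Python. This is only a minimal sketch: the address https://integrate.api.nvidia.com/v1 is the one we'll use in every configuration below, and the model ID "minimaxai/minimax-m2.1" is a placeholder of mine, so copy the exact ID shown on the model's page in the NVIDIA catalog.

import requests

API_KEY = "nvapi-..."  # the key you just generated
BASE_URL = "https://integrate.api.nvidia.com/v1"

# Minimal chat completion to confirm the key and the network path work.
# The model ID below is a placeholder; use the exact ID from the NVIDIA catalog.
resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "minimaxai/minimax-m2.1",
        "messages": [{"role": "user", "content": "Reply with OK if you can read this."}],
        "max_tokens": 32,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

If this prints a short reply, both the key and your connection to NVIDIA are fine.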

Using for General Chat

For general conversational use, you don't have to use the official UI. I recommend using tools like Cherry Studio or Alma. Taking Cherry Studio as an example, install it and then open the settings.

Select "OpenAI-Response" here.

After confirming, paste the API Key you copied earlier.

For the API Host, enter:

https://integrate.api.nvidia.com/v1

Remember this address, as you'll need to use it for all subsequent configurations. After setting it up, click "Manage" to add the GLM-4.7 and minimax-m2.1 models.
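If you're unsure what the models are called on NVIDIA's side, you can list them through the same endpoint before adding them in "Manage". A rough sketch, assuming the service exposes the standard OpenAI-style /models route:

import requests

API_KEY = "nvapi-..."
BASE_URL = "https://integrate.api.nvidia.com/v1"

# Ask the endpoint for its model catalog and print the GLM / MiniMax entries.
resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
for m in resp.json().get("data", []):
    if "glm" in m["id"].lower() or "minimax" in m["id"].lower():
        print(m["id"])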

Once added, you can start using them. Look how well it answers when I ask it to find some Chinese GIS blogs. It seems very professional (wink).
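For reference, what Cherry Studio does under the hood is a standard OpenAI-style chat completion. Here is a minimal sketch with the official openai Python package; the model ID is a placeholder and the prompt is only an example along the lines of what I asked:

from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="nvapi-...",  # your NVIDIA key
)

# The model ID is a placeholder; use the exact ID you added under "Manage".
resp = client.chat.completions.create(
    model="minimaxai/minimax-m2.1",
    messages=[
        {"role": "user", "content": "Recommend a few Chinese GIS blogs worth following."}
    ],
)
print(resp.choices[0].message.content)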

Using for GIS Development

With general chat out of the way, let's look at everyday GIS development. I recommend a recently popular editor: Zed. Of course, other editors such as VSCode, Cursor, and IDEA can be configured in much the same way, so feel free to explore.

Zed's official website: https://zed.dev/. It's an editor written in Rust and is incredibly fast. Interested GIS professionals can give it a try. After installation, open Zed and click the settings icon on the right.

Click "Add Provider" -> "OpenAI", and fill in the relevant information.

For the API URL, enter:

https://integrate.api.nvidia.com/v1

The API Key is the one you copied in the first step. After configuration, let's test the effect together.

The generated code worked without a single modification. The result is as shown:
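If you'd rather test the model's GIS coding ability from a script instead of inside Zed, you can call the same endpoint with streaming, which is roughly what the editor does behind the scenes. This is just a sketch with an illustrative GIS prompt, not my exact test case:

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="nvapi-...",
)

# Stream the answer token by token, the way a coding assistant would.
# Both the prompt and the model ID here are illustrative placeholders.
stream = client.chat.completions.create(
    model="minimaxai/minimax-m2.1",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that converts WGS84 lon/lat to Web Mercator using pyproj.",
        }
    ],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)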

Summary

Based on my tests, latency to GLM-4.7 from where I am in Wuhan is currently very high, to the point of being almost unusable, so all of the results mentioned in this post come from the minimax-m2.1 model. I'm not sure whether this is an isolated case; test it yourself in your own network environment.

Finally, I'd like to say that the quality of domestic models is gradually improving. Not all tasks require the latest models. Using domestic models avoids network-related issues and is more cost-effective. If you're also trying to use AI to boost efficiency, consider giving domestic large models a try, or at least keep one as a backup. If international models become unavailable for any reason, you can switch promptly without affecting your work.