GLM-4.7 LoRA SFT on GSM8K with ms-swift on Modal (Megatron backend)
What ms-swift is. ms-swift is ModelScope’s end-to-end fine-tuning toolkit. It supports PEFT methods (LoRA / QLoRA / DoRA) and full finetuning across HuggingFace and Megatron-LM backends. The Megatron backend is what makes it interesting at large scale — it gives you 4-D parallelism (TP / PP / EP / CP) for models that don’t fit on one GPU under HF’s data parallelism alone.
What this tutorial does. LoRA SFT of GLM-4.7 (a large MoE model)
on GSM8K, on 4 nodes × 8×H100 (32 GPUs). The interesting piece is
the parallelism split: TP=2, EP=4, PP=4, CP=1 — tensor parallel
across pairs of GPUs, 4-way expert parallel for the MoE layers,
4-stage pipeline parallel for the transformer blocks. Under the
hood, this launches megatron sft via torchrun on each node in the
cluster. For the shared primitives (DatasetConfig, Model, 3-stage
pipeline) see 001_quickstart.
What you’ll need.
- Access to Modal’s multi-node training preview (4 × 8×H100).
- A wandb Modal secret.
What to watch. W&B project glm-4-7-sft. Watch train/loss
and train/grad_norm; LoRA converges quickly on GSM8K so expect
loss to fall off within the first few hundred iters.
```python
import modal

from modal_training_gym.common.dataset import HuggingFaceDataset
from modal_training_gym.common.models import GLM_4_7
from modal_training_gym.common.wandb import WandbConfig
from modal_training_gym.frameworks.ms_swift import (
    MsSwiftConfig,
    MsSwiftFrameworkConfig,
)
from modal_training_gym.frameworks.ms_swift.config import HF_CACHE_PATH
```

Define the dataset
ms-swift reads a JSONL file where each line is a chat-format object:
{"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}.
prepare() converts GSM8K’s (question, answer) columns into that
shape and writes it under the HF cache volume so both download and
dataset prep share the same mount.
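The conversion can be sketched in a few lines. The helper below is illustrative only, not the actual prepare() implementation (the function name and signature are made up; the real logic lives in HuggingFaceDataset):

```python
import json

def to_chat_jsonl(rows, path):
    """Illustrative sketch: (question, answer) rows -> chat-format JSONL."""
    with open(path, "w") as f:
        for row in rows:
            record = {
                "messages": [
                    {"role": "user", "content": row["question"]},
                    {"role": "assistant", "content": row["answer"]},
                ]
            }
            f.write(json.dumps(record) + "\n")
```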
```python
class GSM8KDataset(HuggingFaceDataset):
    hf_repo = "openai/gsm8k"
    hf_config = "main"
    output_format = "jsonl"
    input_column = "question"
    output_column = "answer"
```

Define the experiment
MsSwiftFrameworkConfig holds ms-swift-specific knobs; the launcher
forwards them to megatron sft as --flag value args.
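The forwarding itself is mechanical; a hypothetical sketch of how config fields become CLI flags (the real launcher's code may differ):

```python
def to_cli_args(config: dict) -> list[str]:
    # Each config field becomes a "--key value" pair appended to the
    # megatron sft command line. Hypothetical helper, for illustration.
    args = []
    for key, value in config.items():
        args += [f"--{key}", str(value)]
    return args

to_cli_args({"global_batch_size": 8, "max_length": 2048})
# -> ["--global_batch_size", "8", "--max_length", "2048"]
```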
Parallelism, MoE, and LoRA — from ModelTrainingConfig
GLM-4.7’s parallelism, MoE, and LoRA settings are defined on the
model itself via its ModelTrainingConfig (see GLM_4_7 in
common/models/glm_4_7.py). The framework pulls them automatically
— no need to set them on MsSwiftFrameworkConfig. Here’s what the
model provides for 32 GPUs = 4 nodes × 8 H100:
| Axis | Setting | Why |
|---|---|---|
| Tensor (TP) | 2 | Shard individual weight matrices across 2 GPUs |
| Expert (EP) | 4 | Spread MoE experts across 4 GPUs |
| Pipeline (PP) | 4 | 4-stage pipeline over transformer blocks |
| Context (CP) | 1 | No sequence-dim parallelism at this context length |
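As a sanity check on the split, here is the rank accounting (a back-of-envelope sketch; in Megatron-style setups the data-parallel degree falls out of the other axes rather than being set directly):

```python
# Rank accounting for this run (sketch, not launcher code).
world_size = 4 * 8                 # 4 nodes x 8 H100 = 32 GPUs
tp, pp, cp, ep = 2, 4, 1, 4        # from the model's ModelTrainingConfig

# TP x PP x CP carve up each model replica; what's left is data parallelism.
dp = world_size // (tp * pp * cp)  # 32 // 8 = 4
assert dp == 4

# Expert parallelism regroups ranks only for the MoE layers:
# EP=4 spreads the experts across 4 GPUs per expert group.
assert world_size % ep == 0
```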
LoRA: lora_rank=128, lora_alpha=32 — higher rank than the usual
8–16; GLM-4.7 is large enough that a bigger rank pays for itself.
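To see why the higher rank stays cheap relative to the base model, a back-of-envelope count for a single weight matrix (the hidden size below is illustrative, not GLM-4.7's actual dimension):

```python
# LoRA replaces the update to W (d_out x d_in) with B @ A, where
# A is (r x d_in) and B is (d_out x r), scaled by alpha / r.
d_in = d_out = 5120      # illustrative hidden size, not GLM-4.7's real dims
r, alpha = 128, 32       # from the model's ModelTrainingConfig

adapter_params = r * (d_in + d_out)  # trainable params for this one matrix
scaling = alpha / r                  # effective update scale

assert adapter_params == 1_310_720   # ~1.3M vs 26.2M for W itself
assert scaling == 0.25
```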
Throughput
- global_batch_size=8, max_length=2048 — GSM8K is short so we don’t need long context; batch is small because GLM-4.7 is big.
- lr=1e-4 — standard LoRA LR (higher than a full-finetune LR because only the adapter params update).
- train_iters=1, num_train_epochs=1 — set for a quick smoke run; bump either for real training.
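For a rough sense of scale, these settings bound the tokens processed per optimizer step:

```python
global_batch_size = 8
max_length = 2048

# Upper bound; GSM8K samples rarely reach max_length, so the real
# per-step token count is lower.
tokens_per_step = global_batch_size * max_length
assert tokens_per_step == 16_384
```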
```python
swift_framework_config = MsSwiftFrameworkConfig(
    n_nodes=4,
    gpus_per_node=8,
    global_batch_size=8,
    max_length=2048,
    train_iters=1,
    num_train_epochs=1,
)
```
```python
my_training_run = MsSwiftConfig(
    name="glm-4-7-gsm8k-sft",
    dataset=GSM8KDataset(HF_CACHE_PATH),
    model=GLM_4_7(),
    wandb=WandbConfig(project="glm-4-7-sft"),
    framework_config=swift_framework_config,
)
```

Build and run
build_app() returns a Modal app with download_model,
prepare_dataset, and train. See
001_quickstart for the pattern.
```python
app = my_training_run.build_app()
```

Evaluate the trained checkpoint
After train completes, use TrainResult to find and serve the
checkpoint:
```python
from modal_training_gym.common.train_result import TrainResult

result = TrainResult.load("glm-4-7-gsm8k-sft")
print(result.latest_checkpoint_path())

# Serve via vLLM:
serve_app = result.build_serve_app()
```

See the TrainResult reference for the full API — listing runs, pinning specific checkpoints, and browsing the checkpoints volume.
Related API Reference
Source: tutorials/sft/001_ms_swift/001_ms_swift.py