InternManip Pitfall Notes

Preface

As mentioned in the previous post, I plan to build my model on top of InternManip: its codebase integrates training and evaluation in one framework, and as more features land over time it should allow much faster model iteration, which is exactly what I need.

To give some feedback to the InternManip team, and to make the setup easier to reproduce on other machines later, I am documenting the usage process and the pitfalls I ran into here.

Environment Setup

InternManip ships a one-click installation script, but a few basic dependencies still need to be installed beforehand.

# Install gcc, g++ and ninja
sudo apt update
sudo apt install build-essential
sudo apt install ninja-build

# Install git and git-lfs
sudo apt install git
sudo apt install git-lfs
git lfs install

# Install miniconda
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
source ~/miniconda3/bin/activate
conda init --all

Of course, if you do not have sudo privileges, these can also be installed via conda.

conda install -c conda-forge ninja
conda install -c conda-forge gcc=11 gxx=11
conda install -c conda-forge git git-lfs
git lfs install

After that, installing the base packages is enough. The full installation additionally requires InternUtopia and GenManip, both of which are built on Isaac and therefore only run on RTX-series GPUs; there is no point installing them on H100/A100-class cards.

# Git Clone
git clone https://github.com/InternRobotics/InternManip.git
cd InternManip
git submodule update --init --recursive

# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.bashrc

# Run the one-click installation script
chmod +x install.sh
./install.sh --beginner

InternManip itself manages dependencies with uv. During installation, the GR00T dependencies are installed into the model environment, and this step may fail with:

uv pip install "git+https://github.com/NVIDIA/Isaac-GR00T.git#egg=isaac-gr00t[base]"
Using Python 3.10.12 environment at: .venv/model
Resolved 162 packages in 1.11s
  × Failed to download and build gr00t @ git+https://github.com/NVIDIA/Isaac-GR00T.git@ae7d46f02cdca5d9c80efc446fe41fe2b58e94c7
  ├─▶ Git operation failed
  ╰─▶ process didn't exit successfully: /usr/bin/git reset --hard ae7d46f02cdca5d9c80efc446fe41fe2b58e94c7 (exit status: 128)
      --- stderr
      Downloading media/robot-demo.gif (6.6 MB)
      Error downloading object: media/robot-demo.gif (164861f): Smudge error: Error downloading media/robot-demo.gif (164861fc0eee1291db00364d449f4920e54f5132881f74526a636377f6bc11e5): error transferring "164861fc0eee1291db00364d449f4920e54f5132881f74526a636377f6bc11e5": [0] remote missing object

The root cause is that the file media/robot-demo.gif in the GR00T repository is never downloaded. It is stored via git-lfs, and even with git-lfs installed it apparently still cannot be fetched (the log reports the LFS object as missing on the remote), so I patched install.sh directly:

install.sh
##############################################################
# Install model dependencies
##############################################################
install_model() {
    ...
    uv pip install --upgrade setuptools
    uv pip install "git+https://github.com/NVIDIA/Isaac-GR00T.git#egg=isaac-gr00t[base]" || { echo -e "⚠️  \033[1;33mWarning: Failed to install GR00T dependencies due to network issues\033[0m" >&2; } 
    GIT_LFS_SKIP_SMUDGE=1 uv pip install "git+https://github.com/NVIDIA/Isaac-GR00T.git#egg=isaac-gr00t[base]" || { echo -e "⚠️  \033[1;33mWarning: Failed to install GR00T dependencies due to network issues\033[0m" >&2; } 
    echo -e "📦 Installing flash-attn module..."
    uv pip install --no-build-isolation flash-attn==2.7.1.post4 || { echo -e "⚠️  \033[1;33mWarning: Failed to install flash-attn due to network issues\033[0m" >&2; }
    uv pip install draccus
    uv pip install transforms3d

    install_base_requirements

    deactivate
    echo -e "✅ \033[1;32mModel dependencies installed\033[0m"
}

After that, the beginner installation proceeds normally.
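If you want to double-check that the patched install actually produced a working model environment, a quick import test like the one below should pass. This is just my own sanity check, assuming the virtual environment lives at .venv/model as shown in the uv log above (check_env.py is a made-up file name); it is not part of the official install flow.

# Sanity check (assumption: the model venv is at .venv/model, per the uv log above).
# Run it with that interpreter, e.g.:  .venv/model/bin/python check_env.py
import importlib

for pkg in ('gr00t', 'flash_attn'):
    try:
        importlib.import_module(pkg)
        print(f'{pkg}: OK')
    except Exception as exc:
        print(f'{pkg}: FAILED ({exc})')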

Of course, other issues can in principle show up during installation; the InternManip documentation currently has a Troubleshooting section that is worth consulting.

Training a Model

InternManip's dataloader and trainer are essentially based on GR00T: they use the Modality abstraction to manage the various keys of the LeRobot dataset format, which keeps everything clearer and more flexible.

The GR00T code itself has been repeatedly validated by a collaborator of mine and is known to be correct. However, to fold it into its own framework, InternManip necessarily had to decouple parts of the GR00T code, and I am not sure whether any incorrect changes slipped in during that process. So a diff check is needed, and it is a good opportunity for some code reading as well.

Model Loading

Let's start from the training entry point, scripts/train/train.py. At the beginning, it reads the model information from the training config (TrainCfg) and prepares the model:

scripts/train/train.py
def main(config: TrainCfg):
    """Main training function."""
    # ------------ load model ------------
    kwargs = config.model_dump()
    kwargs.pop('model_type')
    if config.base_model_path == '':
        config.base_model_path = POLICY_NAME_TO_ID[config.model_type]
    if config.base_model_path is None:
        model_cfg = AutoConfig.for_model(config.model_type, **kwargs)
        model = AutoModel.from_config(model_cfg, **kwargs)
    else:
        # must ensure that if the path is a huggingface model, it should be a repo that has only one model weight
        model = AutoModel.from_pretrained(config.base_model_path, **kwargs)
    model.compute_dtype = config.compute_dtype
    model.config.compute_dtype = config.compute_dtype
    data_collator = DataCollatorRegistry.get_collator(config.model_type)

If no pretrained path is available (config.base_model_path is None), the model configuration is built from config.model_type with AutoConfig.for_model and the model is then instantiated with AutoModel.from_config; otherwise the model is loaded with AutoModel.from_pretrained. (An empty base_model_path is first mapped to a default checkpoint ID via POLICY_NAME_TO_ID.)

A quick explanation for readers unfamiliar with the Transformers library: this is the standard Transformers model-loading pattern. AutoConfig.for_model and AutoModel.from_config create a model from a configuration, and the reason the framework can load a custom-defined model is that the model and its config are registered through the Auto* API when the code is written. AutoModel.from_pretrained, in turn, automatically downloads the model weights from Hugging Face and loads them locally.

internmanip/model/basemodel/gr00t/gr00t.py
# register
AutoConfig.register('gr00t_n1_5', GR00T_N1_5_Config)
AutoModel.register(GR00T_N1_5_Config, GR00T_N1_5)
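For readers who have not used this registration mechanism before, here is a minimal, self-contained sketch of how it works, using a toy config/model pair (ToyConfig, ToyModel, and the model_type 'toy_policy' are made up, not GR00T classes). Once registered, the Auto* entry points can resolve the custom model type exactly as in the snippet above.

# Toy illustration of the AutoConfig/AutoModel registration mechanism.
import torch.nn as nn
from transformers import AutoConfig, AutoModel, PretrainedConfig, PreTrainedModel

class ToyConfig(PretrainedConfig):
    model_type = 'toy_policy'

    def __init__(self, hidden_dim: int = 8, **kwargs):
        super().__init__(**kwargs)
        self.hidden_dim = hidden_dim

class ToyModel(PreTrainedModel):
    config_class = ToyConfig

    def __init__(self, config: ToyConfig):
        super().__init__(config)
        self.proj = nn.Linear(config.hidden_dim, config.hidden_dim)

AutoConfig.register('toy_policy', ToyConfig)
AutoModel.register(ToyConfig, ToyModel)

cfg = AutoConfig.for_model('toy_policy', hidden_dim=16)  # -> ToyConfig
model = AutoModel.from_config(cfg)                       # -> ToyModel instance
print(type(model).__name__, model.config.hidden_dim)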

Data Loading

For data loading, the code first builds the dataset's EmbodimentTag, then looks up the dataset configuration in DATA_CONFIG_MAP, and finally constructs the dataset with LeRobotSingleDataset, or with LeRobotMixtureDataset when several datasets are combined.

scripts/train/train.py
def main(config: TrainCfg):
    """Main training function."""
    ...
    # ------------ load dataset ------------
    embodiment_tag = EmbodimentTag(config.embodiment_tag)

    # modality configs and transforms
    data_config_cls = DATA_CONFIG_MAP[config.data_config]
    model_transform, observation_indices, action_indices = model.config.transform()
    modality_configs = data_config_cls.modality_config(observation_indices, action_indices)
    transforms = data_config_cls.transform()

    if config.pad_center_crop:
        transforms = [tr if not isinstance(tr, VideoCrop) else VideoPadSquareCrop(apply_to=tr.apply_to, scale=0.95) for tr in transforms]
    if model_transform is not None:
        transforms.append(model_transform)

    transforms = ComposedModalityTransform(transforms=transforms)
    # data_loader
    if isinstance(config.dataset_path, str):
        train_dataset = LeRobotSingleDataset(
            dataset_path=config.dataset_path,
            modality_configs=modality_configs,
            transforms=transforms,
            embodiment_tag=embodiment_tag,  # This will override the dataset's embodiment tag to "new_embodiment"
            video_backend=config.video_backend,
            cache_dir=config.HF_cache_dir,
            skip_unlabeled=config.skip_unlabeled
        )
    else:
        print('\n' + '='*30)
        print('⚠️  WARNING: MULTIPLE DATASETS DETECTED')
        print('='*30)
        print('You are about to train on multiple datasets simultaneously.')
        print('Please ensure that:')
        print('  1. All datasets have compatible and consistent modality configurations')
        print('  2. The datasets are from the same embodiment or compatible embodiments')
        print('  3. The datasets have similar data distributions and task objectives')
        print('='*30 + '\n')
        single_datasets = []
        for p in config.dataset_path:
            assert os.path.exists(p), f'Dataset path {p} does not exist'
            # We use the same transforms, modality configs, and embodiment tag for all datasets here
            dataset = LeRobotSingleDataset(
                dataset_path=p,
                modality_configs=modality_configs,
                transforms=transforms,
                embodiment_tag=embodiment_tag,
                video_backend=config.video_backend,
                cache_dir=config.HF_cache_dir,
                skip_unlabeled=config.skip_unlabeled
            )
            single_datasets.append(dataset)

        train_dataset = LeRobotMixtureDataset(
            data_mixture=[
                (dataset, 1.0)  # we will use equal weights for all datasets
                for dataset in single_datasets
            ],
            mode='train',
            balance_dataset_weights=config.balance_dataset_weights,
            balance_trajectory_weights=config.balance_trajectory_weights,
            seed=42,
            metadata_config={
                'percentile_mixing_method': 'weighted_average',
            },
        )
        print(f'Loaded {len(single_datasets)} datasets, with {config.dataset_path} ')

This part is not hard to understand: it reads the dataset configuration from the config and then builds the dataset via LeRobotSingleDataset or LeRobotMixtureDataset.

The core of this step is the data_config_cls obtained from DATA_CONFIG_MAP, which is also where the essence of GR00T lies: the data config, together with the dataset's modality.json, defines the keys and the corresponding slicing and transforms.

Taking a fairly complete data config as an example, AlohaDataConfig looks like this:

internmanip/configs/dataset/data_config.py
class AlohaDataConfig(BaseDataConfig):
    #==========================key list==========================#
    video_keys = ['video.left_view', 'video.right_view', 'video.top_view']
    state_keys = ['state.arm_qpos']
    action_keys = ['action.left_arm_delta_qpos', 'action.right_arm_delta_qpos', 'action.left_gripper_close', 'action.right_gripper_close']
    language_keys = ['annotation.human.action.task_description']

    #==========================modality config==========================#
    def modality_config(self, observation_indices, action_indices) -> dict[str, ModalityConfig]:
        self.action_indices = action_indices
        self.observation_indices = observation_indices
        video_modality = ModalityConfig(
            delta_indices=self.observation_indices,
            modality_keys=self.video_keys,
        )

        state_modality = ModalityConfig(
            delta_indices=self.observation_indices,
            modality_keys=self.state_keys,
        )

        action_modality = ModalityConfig(
            delta_indices=self.action_indices,
            modality_keys=self.action_keys,
        )

        language_modality = ModalityConfig(
            delta_indices=self.observation_indices,
            modality_keys=self.language_keys,
        )


        modality_configs = {
            'video': video_modality,
            'state': state_modality,
            'action': action_modality,
            'language': language_modality,
        }

        return modality_configs

    #==========================transform==========================#
    def transform(self):
        transforms = [
            VideoToTensor(apply_to=self.video_keys),
            VideoCrop(apply_to=self.video_keys, scale=0.95),
            VideoResize(apply_to=self.video_keys, height=224, width=224, interpolation='linear'),
            VideoColorJitter(
                apply_to=self.video_keys,
                brightness=0.3,
                contrast=0.4,
                saturation=0.5,
                hue=0.08,
            ),
            # state transforms
            StateActionToTensor(apply_to=self.state_keys),
            StateActionTransform(
                apply_to=self.state_keys,
                normalization_modes={
                    'state.arm_qpos': 'mean_std'
                },
            ),
            # action transforms
            StateActionToTensor(apply_to=self.action_keys),
            StateActionTransform(
                apply_to=self.action_keys,
                normalization_modes={
                    'action.left_arm_delta_qpos': 'mean_std',
                    'action.right_arm_delta_qpos': 'mean_std',
                    'action.left_gripper_close': 'binary',
                    'action.right_gripper_close': 'binary'
                }
            ),
            # concat transforms
            ConcatTransform(
                video_concat_order=self.video_keys,
                state_concat_order=self.state_keys,
                action_concat_order=self.action_keys,
            )
        ]
        return transforms

As the comments in the code indicate, every data config is divided into three parts; to bring in your own dataset, you only need to follow this format (a minimal skeleton for a custom config is sketched after part three below).

The first part is the list of keys. A dataset generally contains four kinds of content: images, states, actions, and language, i.e. video_keys, state_keys, action_keys, and language_keys. The video keys may come from different images of different cameras, and the action keys may correspond to different joints. In a LeRobot dataset these are stored as nested dicts: video.left_view is effectively the dataset's data["video"]["left_view"], and state.arm_qpos is data["state"]["arm_qpos"].
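To make the key convention concrete, here is a hypothetical snippet (get_by_flat_key is my own illustration, not InternManip code) showing how a flat key such as video.left_view addresses the nested structure described above:

# Hypothetical helper illustrating the 'modality.sub_key' naming convention.
def get_by_flat_key(sample: dict, flat_key: str):
    modality, name = flat_key.split('.', 1)
    return sample[modality][name]

sample = {
    'video': {'left_view': '<frame tensor>'},
    'state': {'arm_qpos': [0.0] * 14},
}
print(get_by_flat_key(sample, 'video.left_view'))   # -> the left camera frames
print(get_by_flat_key(sample, 'state.arm_qpos'))    # -> the arm joint positions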

Take the demo data shipped with the source code, internmanip/demo_data/robot_sim.PickNPlace/meta/modality.json, as an example. The modality file determines the dataset's keys and the corresponding slices; its content is as follows:

internmanip/demo_data/robot_sim.PickNPlace/meta/modality.json
{
    "state": {
        "left_arm": {
            "start": 0,
            "end": 7
        },
        "left_hand": {
            "start": 7,
            "end": 13
        },
        "left_leg": {
            "start": 13,
            "end": 19
        },
        "neck": {
            "start": 19,
            "end": 22
        },
        "right_arm": {
            "start": 22,
            "end": 29
        },
        "right_hand": {
            "start": 29,
            "end": 35
        },
        "right_leg": {
            "start": 35,
            "end": 41
        },
        "waist": {
            "start": 41,
            "end": 44
        }
    },
    "action": {
        "left_arm": {
            "start": 0,
            "end": 7
        },
        "left_hand": {
            "start": 7,
            "end": 13
        },
        "left_leg": {
            "start": 13,
            "end": 19
        },
        "neck": {
            "start": 19,
            "end": 22
        },
        "right_arm": {
            "start": 22,
            "end": 29
        },
        "right_hand": {
            "start": 29,
            "end": 35
        },
        "right_leg": {
            "start": 35,
            "end": 41
        },
        "waist": {
            "start": 41,
            "end": 44
        }
    },
    "video": {
        "ego_view": {
            "original_key": "observation.images.ego_view"
        }
    },
    "annotation": {
        "human.action.task_description": {},
        "human.validity": {}
    }
}
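The start/end fields are essentially slice boundaries into the flat state and action vectors stored in the dataset. A small sketch of how such ranges would be used (my own illustration with a subset of the ranges above, not the actual loader code):

# Illustration only: slicing a flat 44-dim state vector with ranges from modality.json.
import numpy as np

state_ranges = {          # subset of the 'state' section above
    'left_arm': (0, 7),
    'left_hand': (7, 13),
    'waist': (41, 44),
}
flat_state = np.arange(44, dtype=np.float32)   # one frame of the flat state vector

state = {name: flat_state[start:end] for name, (start, end) in state_ranges.items()}
print(state['left_arm'])   # first 7 dims  -> 'state.left_arm'
print(state['waist'])      # dims 41..43   -> 'state.waist'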

The second part is the modality configuration. Compared with the original GR00T config, it has been changed to take observation_indices and action_indices as arguments. Looking back at the main function, these two values come from model.config.transform(), whose implementation lives in internmanip/configs/model/gr00t_cfg.py:

internmanip/configs/model/gr00t_cfg.py
# config
@dataclass
class GR00T_N1_5_Config(PretrainedConfig):
    model_type = 'gr00t_n1_5'
    backbone_cfg: dict = field(init=False, metadata={'help': 'Backbone configuration.'})

    action_head_cfg: dict = field(init=False, metadata={'help': 'Action head configuration.'})

    action_horizon: int = field(init=False, metadata={'help': 'Action horizon.'})

    action_dim: int = field(init=False, metadata={'help': 'Action dimension.'})
    compute_dtype: str = field(default='float32', metadata={'help': 'Compute dtype.'})
    observation_indices = [0]

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        for key, value in kwargs.items():
            setattr(self, key, value)

    def transform(self):
        transforms = GR00TTransform_15(
                state_horizon=len(self.observation_indices),
                action_horizon=len(list(range(self.action_horizon))),
                max_state_dim=self.action_head_cfg['max_state_dim'],
                max_action_dim=self.action_head_cfg['max_action_dim'],
            )
        return transforms, self.observation_indices, list(range(self.action_horizon))

Note that these fields do not actually have default values. In practice, when launching GR00T training, for example with:

torchrun --nnodes 1 --nproc_per_node 1 scripts/train/train.py --config run_configs/train/gr00t_n1_5_genmanip_v1.yaml

the run config specifies model_type as gr00t_n1_5 and nvidia/GR00T-N1.5-3B as base_model_path. The model weights are then automatically downloaded from Hugging Face and loaded locally, and during this process the configuration on the Hub is loaded as well, i.e. the fields in config.json of nvidia/GR00T-N1.5-3B are read and set on GR00T_N1_5_Config.

Once observation_indices and action_horizon are known, the corresponding ModalityConfig can be specified. The order of the keys determines the order in which the dataloader returns them, while observation_indices and action_indices determine how much content is fetched. For example, if observation_indices is [-2, -1, 0], the dataloader returns the 3 frames from two steps in the past up to the current one, and if action_indices is list(range(16)), it returns the 16 actions from the current step into the future.
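A tiny sketch of that windowing behavior (illustrative only; this is not the dataset implementation):

# delta indices are offsets relative to the current step t of a trajectory.
observation_indices = [-2, -1, 0]     # two past frames plus the current one
action_indices = list(range(16))      # current step plus 15 future steps

t = 100                               # current frame index
obs_frames = [t + d for d in observation_indices]     # [98, 99, 100]
action_frames = [t + d for d in action_indices]       # [100, 101, ..., 115]
print(obs_frames)
print(action_frames[0], action_frames[-1])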

The third part is the transform configuration. There is not much to say here, it is simply a collection of transforms; the parameter worth noting is apply_to, which determines which keys each transform is applied to.

Once everything is configured, all transforms are combined with ComposedModalityTransform, and the dataset is then built via LeRobotSingleDataset or LeRobotMixtureDataset.
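To close the loop on the three-part structure, here is what a minimal config for a hypothetical single-arm, single-camera dataset could look like. All names here (MyRobotDataConfig and its keys) are made up; the class would sit next to AlohaDataConfig in data_config.py (so the same imports are in scope) and would still need an entry in DATA_CONFIG_MAP under whatever name you pass as data_config.

# Hypothetical skeleton, mirroring AlohaDataConfig; adapt the keys to your own modality.json.
class MyRobotDataConfig(BaseDataConfig):
    # part 1: key lists
    video_keys = ['video.front_view']
    state_keys = ['state.joint_qpos']
    action_keys = ['action.joint_qpos']
    language_keys = ['annotation.human.action.task_description']

    # part 2: modality config (window lengths come from the model config)
    def modality_config(self, observation_indices, action_indices):
        self.observation_indices = observation_indices
        self.action_indices = action_indices
        return {
            'video': ModalityConfig(delta_indices=observation_indices, modality_keys=self.video_keys),
            'state': ModalityConfig(delta_indices=observation_indices, modality_keys=self.state_keys),
            'action': ModalityConfig(delta_indices=action_indices, modality_keys=self.action_keys),
            'language': ModalityConfig(delta_indices=observation_indices, modality_keys=self.language_keys),
        }

    # part 3: per-key transforms, then concatenation
    def transform(self):
        return [
            VideoToTensor(apply_to=self.video_keys),
            VideoResize(apply_to=self.video_keys, height=224, width=224, interpolation='linear'),
            StateActionToTensor(apply_to=self.state_keys),
            StateActionTransform(apply_to=self.state_keys,
                                 normalization_modes={'state.joint_qpos': 'mean_std'}),
            StateActionToTensor(apply_to=self.action_keys),
            StateActionTransform(apply_to=self.action_keys,
                                 normalization_modes={'action.joint_qpos': 'mean_std'}),
            ConcatTransform(video_concat_order=self.video_keys,
                            state_concat_order=self.state_keys,
                            action_concat_order=self.action_keys),
        ]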

Training

With the dataset and the model in hand, training can begin; this is the last part of the script.

scripts/train/train.py
def main(config: TrainCfg):
    ...
    # ------------ load model ------------
    ...
    # ------------ load dataset ------------
    ...
    # ------------ load trainer ------------
    if config.lora_rank > 0:
        model = get_lora_model(
            model,
            rank=config.lora_rank,
            lora_alpha=config.lora_alpha,
            lora_dropout=config.lora_dropout,
        )
    # modify training args
    training_args = TrainingArguments(
        output_dir=config.output_dir,
        run_name=None,
        remove_unused_columns=False,
        deepspeed='',
        gradient_checkpointing=False,
        bf16=True,
        tf32=True,
        per_device_train_batch_size=config.batch_size,
        gradient_accumulation_steps=config.gradient_accumulation_steps,
        dataloader_num_workers=config.dataloader_num_workers,
        dataloader_pin_memory=False,
        dataloader_persistent_workers=True,
        optim='adamw_torch',
        adam_beta1=0.95,
        adam_beta2=0.999,
        adam_epsilon=1e-8,
        learning_rate=config.learning_rate,
        weight_decay=config.weight_decay,
        warmup_ratio=config.warmup_ratio,
        lr_scheduler_type='cosine',
        logging_steps=10.0,
        num_train_epochs=300,
        max_steps=config.max_steps,
        save_strategy='steps',
        save_steps=config.save_steps,
        eval_strategy='no',
        save_total_limit=8,
        report_to=config.report_to,
        seed=42,
        do_eval=False,
        ddp_find_unused_parameters=False,
        ddp_bucket_cap_mb=100,
        torch_compile_mode=None,
    )
    # Create the trainer
    trainer = BaseTrainerWrapper(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        data_collator=data_collator,
    )

    # Add checkpoint format callback to ensure experiment_cfg is copied to each checkpoint
    ckpt_format_callback = CheckpointFormatCallback(
        train_dataset=train_dataset, exp_cfg_dir=training_args.output_dir
    )
    trainer.add_callback(ckpt_format_callback)
    # ------------ print model info ------------
    ...
    # ------------ train ------------
    trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
    trainer.save_state()

The logic is simple: training hyperparameters are configured through TrainingArguments, a trainer is created with BaseTrainerWrapper, and the model is trained with trainer.train. This is essentially the HuggingFace-style Trainer workflow, but for readers unfamiliar with that style it is still worth diving into the Trainer to see the concrete implementation.
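Before diving in, for readers who have never used this style, the pattern boils down to: a Dataset that returns dicts, a model whose forward returns a loss, TrainingArguments for the hyperparameters, and Trainer to run the loop. A self-contained toy version (ToyDataset and ToyRegressor are made up and unrelated to InternManip, just the bare HuggingFace pattern) looks roughly like this:

# Bare-bones HuggingFace Trainer pattern with toy data and a toy model.
import torch
import torch.nn as nn
from torch.utils.data import Dataset
from transformers import Trainer, TrainingArguments

class ToyDataset(Dataset):
    def __len__(self):
        return 64

    def __getitem__(self, idx):
        x = torch.randn(8)
        return {'x': x, 'labels': x.sum(dim=-1, keepdim=True)}

class ToyRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 1)

    def forward(self, x, labels=None):
        pred = self.linear(x)
        loss = nn.functional.mse_loss(pred, labels) if labels is not None else None
        return {'loss': loss, 'pred': pred}

args = TrainingArguments(output_dir='toy_out', per_device_train_batch_size=8,
                         max_steps=20, logging_steps=5, report_to=[])
trainer = Trainer(model=ToyRegressor(), args=args, train_dataset=ToyDataset())
trainer.train()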

Huggingface Style Trainer

Let's dive into the Trainer and see what actually happens inside; following the link takes us to the implementation:

scripts/train/train.py
class BaseTrainerWrapper(BaseTrainer):
    import torch.nn as nn

    def prediction_step(self, model: nn.Module,
        inputs: Dict[str, Union[torch.Tensor, Any]]):

        actions = model.inference(inputs)['action_pred']
        valid_action = inputs['action_mask']
        gt_action = inputs['action'][valid_action]
        pred_action = actions[valid_action].cpu()

        diff = pred_action - gt_action
        import matplotlib.pyplot as plt
        import numpy as np

        mean_diff = diff.abs().mean(dim=0)

        x = np.arange(mean_diff.shape[0])


        num_channels = mean_diff.shape[1]
        fig, axes = plt.subplots(num_channels, 1, figsize=(10, 3 * num_channels), sharex=True)

        x = list(range(pred_action.shape[1]))

        for i in range(num_channels):
            ax = axes[i]
            ax.plot(x, pred_action[0, :, i].cpu().numpy(), label='Predicted', color='blue')
            ax.plot(x, gt_action[0, :, i].cpu().numpy(), label='Ground Truth', color='orange')
            ax.set_title(f'Channel {i}')
            ax.set_ylabel('Mean Value')
            ax.grid(True)
            if i == num_channels - 1:
                ax.set_xlabel('Index (0~15)')
            if i == 0:
                ax.legend()

        plt.tight_layout()
        plt.savefig('debug_value_{}.png'.format(0))
        plt.close()

        return mean_diff

It is not hard to see that this is essentially a thin wrapper: BaseTrainerWrapper overrides prediction_step to compare the predicted actions against the ground truth and dump a debug plot, while the rest of the training loop is inherited from the underlying HuggingFace-style Trainer.
