MMSegmentation

OpenMMLab Semantic Segmentation Toolbox and Benchmark.

Supported backbones

  • ResNet (CVPR'2016)
  • ResNeXt (CVPR'2017)
  • HRNet (CVPR'2019)
  • ResNeSt (ArXiv'2020)
  • MobileNetV2 (CVPR'2018)
  • MobileNetV3 (ICCV'2019)
  • Vision Transformer (ICLR'2021)
  • Swin Transformer (ICCV'2021)
  • Twins (NeurIPS'2021)
  • BEiT (ICLR'2022)
  • ConvNeXt (CVPR'2022)
  • MAE (CVPR'2022)
  • PoolFormer (CVPR'2022)
  • SegNeXt (NeurIPS'2022)

Supported methods

  • SAN (CVPR'2023)
  • VPD (ICCV'2023)
  • DDRNet (T-ITS'2022)
  • PIDNet (ArXiv'2022)
  • Mask2Former (CVPR'2022)
  • MaskFormer (NeurIPS'2021)
  • K-Net (NeurIPS'2021)
  • SegFormer (NeurIPS'2021)
  • Segmenter (ICCV'2021)
  • DPT (ArXiv'2021)
  • SETR (CVPR'2021)
  • STDC (CVPR'2021)
  • BiSeNetV2 (IJCV'2021)
  • CGNet (TIP'2020)
  • PointRend (CVPR'2020)
  • DNLNet (ECCV'2020)
  • OCRNet (ECCV'2020)
  • ISANet (ArXiv'2019/IJCV'2021)
  • Fast-SCNN (ArXiv'2019)
  • FastFCN (ArXiv'2019)
  • GCNet (ICCVW'2019/TPAMI'2020)
  • ANN (ICCV'2019)
  • EMANet (ICCV'2019)
  • CCNet (ICCV'2019)
  • DMNet (ICCV'2019)
  • Semantic FPN (CVPR'2019)
  • DANet (CVPR'2019)
  • APCNet (CVPR'2019)
  • NonLocal Net (CVPR'2018)
  • EncNet (CVPR'2018)
  • DeepLabV3+ (CVPR'2018)
  • UPerNet (ECCV'2018)
  • ICNet (ECCV'2018)
  • PSANet (ECCV'2018)
  • BiSeNetV1 (ECCV'2018)
  • DeepLabV3 (ArXiv'2017)
  • PSPNet (CVPR'2017)
  • ERFNet (T-ITS'2017)
  • UNet (MICCAI'2016/Nat. Methods'2019)
  • FCN (CVPR'2015/TPAMI'2017)

Supported heads

  • ANN_Head
  • APC_Head
  • ASPP_Head
  • CC_Head
  • DA_Head
  • DDR_Head
  • DM_Head
  • DNL_Head
  • DPT_Head
  • EMA_Head
  • ENC_Head
  • FCN_Head
  • FPN_Head
  • GC_Head
  • LightHam_Head
  • ISA_Head
  • Knet_Head
  • LRASPP_Head
  • mask2former_Head
  • maskformer_Head
  • NL_Head
  • OCR_Head
  • PID_Head
  • point_Head
  • PSA_Head
  • PSP_Head
  • SAN_Head
  • segformer_Head
  • segmenter_mask_Head
  • SepASPP_Head
  • SepFCN_Head
  • SETRMLAHead_Head
  • SETRUP_Head
  • STDC_Head
  • Uper_Head
  • VPDDepth_Head

Supported datasets

  • Cityscapes
  • PASCAL VOC
  • ADE20K
  • Pascal Context
  • COCO-Stuff 10k
  • COCO-Stuff 164k
  • CHASE_DB1
  • DRIVE
  • HRF
  • STARE
  • Dark Zurich
  • Nighttime Driving
  • LoveDA
  • Potsdam
  • Vaihingen
  • iSAID
  • Mapillary Vistas
  • LEVIR-CD
  • BDD100K
  • NYU
  • HSIDrive20

Supported losses

  • boundary_loss
  • cross_entropy_loss
  • dice_loss
  • focal_loss
  • huasdorff_distance_loss
  • kldiv_loss
  • lovasz_loss
  • ohem_cross_entropy_loss
  • silog_loss
  • tversky_loss
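
Each loss above is selected by name in a model config's decode head. A minimal sketch of the standard loss_decode pattern (the weights here are illustrative, not recommended values):

# Inside a model config: the decode head picks its loss via loss_decode.
# Passing a list combines multiple losses; type names map to the
# registered loss classes (e.g. cross_entropy_loss -> CrossEntropyLoss).
model = dict(
    decode_head=dict(
        loss_decode=[
            dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            dict(type='DiceLoss', loss_weight=0.5),
        ],
    ),
)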

Installation

Compatible versions of MMSegmentation, MMCV, and MMEngine are listed below. Install matching versions to avoid installation issues.

| MMSegmentation version | MMCV version | MMEngine version | MMClassification (optional) version | MMDetection (optional) version |
| --- | --- | --- | --- | --- |
| dev-1.x branch | mmcv >= 2.0.0 | MMEngine >= 0.7.4 | mmpretrain>=1.0.0rc7 | mmdet >= 3.0.0 |
| main branch | mmcv >= 2.0.0 | MMEngine >= 0.7.4 | mmpretrain>=1.0.0rc7 | mmdet >= 3.0.0 |
| 1.2.2 | mmcv >= 2.0.0 | MMEngine >= 0.7.4 | mmpretrain>=1.0.0rc7 | mmdet >= 3.0.0 |
| 1.2.1 | mmcv >= 2.0.0 | MMEngine >= 0.7.4 | mmpretrain>=1.0.0rc7 | mmdet >= 3.0.0 |
| 1.2.0 | mmcv >= 2.0.0 | MMEngine >= 0.7.4 | mmpretrain>=1.0.0rc7 | mmdet >= 3.0.0 |
| 1.1.2 | mmcv >= 2.0.0 | MMEngine >= 0.7.4 | mmpretrain>=1.0.0rc7 | mmdet >= 3.0.0 |
| 1.1.1 | mmcv >= 2.0.0 | MMEngine >= 0.7.4 | mmpretrain>=1.0.0rc7 | mmdet >= 3.0.0 |
| 1.1.0 | mmcv >= 2.0.0 | MMEngine >= 0.7.4 | mmpretrain>=1.0.0rc7 | mmdet >= 3.0.0 |
| 1.0.0 | mmcv >= 2.0.0rc4 | MMEngine >= 0.7.1 | | mmdet >= 3.0.0 |
| 1.0.0rc6 | mmcv >= 2.0.0rc4 | MMEngine >= 0.5.0 | mmcls>=1.0.0rc0 | mmdet >= 3.0.0rc6 |
| 1.0.0rc5 | mmcv >= 2.0.0rc4 | MMEngine >= 0.2.0 | mmcls>=1.0.0rc0 | mmdet>=3.0.0rc6 |
| 1.0.0rc4 | mmcv == 2.0.0rc3 | MMEngine >= 0.1.0 | mmcls>=1.0.0rc0 | mmdet>=3.0.0rc4, <=3.0.0rc5 |
| 1.0.0rc3 | mmcv == 2.0.0rc3 | MMEngine >= 0.1.0 | mmcls>=1.0.0rc0 | mmdet>=3.0.0rc4, <=3.0.0rc5 |
| 1.0.0rc2 | mmcv == 2.0.0rc3 | MMEngine >= 0.1.0 | mmcls>=1.0.0rc0 | mmdet>=3.0.0rc4, <=3.0.0rc5 |
| 1.0.0rc1 | mmcv >= 2.0.0rc1, <=2.0.0rc3 | MMEngine >= 0.1.0 | mmcls>=1.0.0rc0 | Not required |
| 1.0.0rc0 | mmcv >= 2.0.0rc1, <=2.0.0rc3 | MMEngine >= 0.1.0 | mmcls>=1.0.0rc0 | Not required |
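
To check the installed versions against a row of this table, a quick runtime check:

import mmcv
import mmengine
import mmseg

# Compare these against the compatibility table above
print('mmseg:   ', mmseg.__version__)
print('mmcv:    ', mmcv.__version__)
print('mmengine:', mmengine.__version__)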

Deploying to Triton Inference Server

This section walks through serving an MMSegmentation model with NVIDIA Triton Inference Server, step by step. MMSegmentation models are PyTorch-based, so to use them with Triton Inference Server they must first be converted to ONNX or TorchScript. The model can then be deployed to the Triton server and queried for inference.

Converting the Model to ONNX

First, convert the MMSegmentation model to ONNX (or TorchScript).

import torch
from mmseg.apis import init_model

# Paths to the model config and checkpoint
config_file = 'configs/fcn/fcn_r50-d8_512x512_40k_voc12aug.py'
checkpoint_file = 'checkpoints/fcn_r50-d8_512x512_40k_voc12aug_20200617_071005-41a67fa4.pth'

# Initialize the model (mmseg 1.x API; the 0.x equivalent was init_segmentor)
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Example input tensor
dummy_input = torch.randn(1, 3, 512, 512).cuda()

# Switch the model to evaluation mode
model.eval()

# The default forward() expects packed data samples, so export the plain
# tensor-in/tensor-out path instead, which returns the raw seg logits
model.forward = model._forward

# Convert the model
torch.onnx.export(model, dummy_input, "mmsegmentation_model.onnx",
                  export_params=True, opset_version=11,
                  input_names=['input'], output_names=['output'])
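
As a quick sanity check of the export (assuming onnxruntime is installed), load the model and run a random input through it:

import numpy as np
import onnxruntime as ort

# Run the exported graph once on CPU and inspect the output shapes
session = ort.InferenceSession("mmsegmentation_model.onnx",
                               providers=["CPUExecutionProvider"])
dummy = np.random.rand(1, 3, 512, 512).astype(np.float32)
outputs = session.run(None, {"input": dummy})
print([o.shape for o in outputs])  # e.g. (1, num_classes, 512, 512)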

Setting Up the Model Repository

Place the converted model in the Triton Inference Server model repository. The directory structure looks like this:

<model-repository>
└── mmsegmentation_model
    ├── 1
    │   └── model.onnx
    └── config.pbtxt

Edit config.pbtxt:

name: "mmsegmentation_model"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 512, 512 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    # Adjust dims to the actual exported output,
    # e.g. [ num_classes, 512, 512 ] for raw logits
    dims: [ 512, 512 ]
  }
]
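
After starting Triton with this repository, it is worth confirming that the model actually loaded before sending requests; a minimal check with tritonclient (assuming the default HTTP port 8000):

import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
print(client.is_server_live())                        # server process is up
print(client.is_model_ready("mmsegmentation_model"))  # model loaded and servable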

Sending Inference Requests from a Client

Then use tritonclient to send inference requests to the Triton Inference Server. Shared memory can be used to optimize performance when needed.

import tritonclient.http as httpclient
import tritonclient.utils.shared_memory as shm
import numpy as np

# Triton Inference Server URL
url = "localhost:8000"

# Initialize the client
client = httpclient.InferenceServerClient(url=url)

# Model information
model_name = "mmsegmentation_model"
input_name = "input"
output_name = "output"

# Prepare the input data
input_data = np.random.rand(1, 3, 512, 512).astype(np.float32)

# Shared memory sizes
input_byte_size = input_data.nbytes
output_byte_size = input_byte_size  # this example assumes input and output are the same size

# Create shared memory regions
# (POSIX shm keys must be single-component names like "/input")
input_shm_handle = shm.create_shared_memory_region("input_data", "/input", input_byte_size)
output_shm_handle = shm.create_shared_memory_region("output_data", "/output", output_byte_size)

# Write the input data into shared memory
shm.set_shared_memory_region(input_shm_handle, [input_data])

# Register the shared memory regions with the server
client.register_system_shared_memory("input_data", "/input", input_byte_size)
client.register_system_shared_memory("output_data", "/output", output_byte_size)

# Configure inputs and outputs
inputs = []
outputs = []

inputs.append(httpclient.InferInput(input_name, list(input_data.shape), "FP32"))
inputs[-1].set_shared_memory("input_data", input_byte_size)

outputs.append(httpclient.InferRequestedOutput(output_name, binary_data=True))
outputs[-1].set_shared_memory("output_data", output_byte_size)

# Run inference
results = client.infer(model_name, inputs, outputs=outputs)

# Read the result back from shared memory
# (the shape follows the same-size assumption above)
output_data = shm.get_contents_as_numpy(output_shm_handle, np.float32, input_data.shape)
print(output_data)

# Clean up shared memory
client.unregister_system_shared_memory("input_data")
client.unregister_system_shared_memory("output_data")
shm.destroy_shared_memory_region(input_shm_handle)
shm.destroy_shared_memory_region(output_shm_handle)
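
For comparison, the same request without shared memory is considerably shorter; a minimal sketch:

import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Send the tensor inline instead of through a shared memory region
input_data = np.random.rand(1, 3, 512, 512).astype(np.float32)
infer_input = httpclient.InferInput("input", list(input_data.shape), "FP32")
infer_input.set_data_from_numpy(input_data)

result = client.infer("mmsegmentation_model",
                      inputs=[infer_input],
                      outputs=[httpclient.InferRequestedOutput("output")])
print(result.as_numpy("output").shape)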

MMSegModel Example

This example reads the SegDataSample structure directly.

@DATASETS.register_module()

Registering the dataset:

# -*- coding: utf-8 -*-

from mmseg.datasets.coco import CocoDataset  # noqa
from mmseg.registry import DATASETS  # noqa


@DATASETS.register_module()
class PlateCocoDataset(CocoDataset):
    """Dataset for COCO"""

    METAINFO = {
        "classes": ("Plate",),
        # palette is a list of color tuples, which is used for visualization.
        "palette": [
            (106, 0, 228),
        ],
    }

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._metainfo["dataset_type"] = None
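
Once registered, the dataset is referenced by its type name in a config. A hypothetical sketch (data_root, ann_file, and the pipeline are placeholders):

train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    dataset=dict(
        type='PlateCocoDataset',
        data_root='data/plates/',           # placeholder path
        ann_file='annotations/train.json',  # placeholder path
        pipeline=[],                        # fill in the actual training pipeline
    ),
)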

Sample class

The Sample class structure:

# -*- coding: utf-8 -*-

# [IMPORTANT]
# Do not import the 'mmdet' packages !!

from typing import List, Optional, Tuple

from cvlayer.geometry.iou import calculate_iou
from cvlayer.typing import RectT
from numpy.typing import NDArray


class Sample:
    def __init__(
        self,
        index: int,
        label: str,
        score: float,
        x1: float,
        y1: float,
        x2: float,
        y2: float,
        color: Tuple[int, int, int],
        mask: Optional[NDArray] = None,
    ):
        self.index = index
        self.label = label
        self.score = score
        self.x1 = x1
        self.y1 = y1
        self.x2 = x2
        self.y2 = y2
        self.color = color
        self.mask = mask

    def __repr__(self):
        return f"<Sample index={self.index},label={self.label},score={self.score:.2f}>"

    def __str__(self):
        return f"{self.label}({self.score:.2f})"

    @property
    def shape(self):
        # mask is Optional; guard against box-only samples
        return self.mask.shape if self.mask is not None else None

    @property
    def p1(self):
        return self.x1, self.y1

    @property
    def p2(self):
        return self.x2, self.y2

    @property
    def width(self):
        return self.x2 - self.x1

    @property
    def height(self):
        return self.y2 - self.y1

    @property
    def center(self):
        return self.x1 + self.width / 2, self.y1 + self.height / 2

    @property
    def rect(self):
        return self.x1, self.y1, self.x2, self.y2

    def intersection(self, roi: RectT) -> bool:
        left = max(self.x1, roi[0])
        top = max(self.y1, roi[1])
        right = min(self.x2, roi[2])
        bottom = min(self.y2, roi[3])
        return right > left and bottom > top

    def find_nearest_sample(self, slots: List["Sample"]) -> Optional["Sample"]:
        ss = list(filter(lambda s: s.intersection(self.rect), slots))
        if not ss:
            return None
        ious = [calculate_iou(self.rect, s.rect) for s in ss]
        largest_iou = max(ious)
        return ss[ious.index(largest_iou)]
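
A short usage example of the geometry helpers (coordinates are made up):

# Two overlapping samples and one disjoint sample
a = Sample(0, "Plate", 0.9, 10, 10, 50, 50, color=(255, 0, 0))
b = Sample(1, "Plate", 0.8, 40, 40, 90, 90, color=(0, 255, 0))
c = Sample(2, "Plate", 0.7, 200, 200, 240, 240, color=(0, 0, 255))

print(a.intersection(b.rect))         # True: the boxes overlap
print(a.intersection(c.rect))         # False: disjoint
print(a.find_nearest_sample([b, c]))  # b, the intersecting slot with the largest IoU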

ModelInterface class

The model interface:

# -*- coding: utf-8 -*-

from abc import ABCMeta, abstractmethod
from typing import Any, List, Optional, Tuple

from numpy.typing import NDArray

from ddrm.models.sample import Sample


class ModelInterface(metaclass=ABCMeta):
    @abstractmethod
    def inference_raw(self, image: NDArray, **kwargs):
        raise NotImplementedError

    @abstractmethod
    def inference_samples(self, image: NDArray, **kwargs) -> List[Sample]:
        raise NotImplementedError

    @abstractmethod
    def get_extra(self) -> Any:
        raise NotImplementedError

    @abstractmethod
    def set_extra(self, extra: Any) -> None:
        raise NotImplementedError

    @abstractmethod
    def run(
        self,
        image: NDArray,
        preview: Optional[NDArray] = None,
        **kwargs,
    ) -> Tuple[List[Sample], Optional[NDArray]]:
        raise NotImplementedError

MMSegModel class

The model implementation:

# -*- coding: utf-8 -*-

from os import path
from typing import Any, List, Optional, Sequence, Tuple

import numpy as np
from cvlayer.cv.cvt_color import CvtColorCode, cvt_color
from mmseg.apis import inference_model, init_model  # noqa
from mmseg.registry import VISUALIZERS  # noqa
from mmseg.structures.seg_data_sample import SegDataSample  # noqa
from numpy import uint8, zeros
from numpy.typing import NDArray
from overrides import override

from ddrm.arguments import DEFAULT_DEVICE, DEFAULT_SCORE_THRESHOLD
from ddrm.models.interface import ModelInterface
from ddrm.models.sample import Sample


def get_bounding_box(mask):
    y_indices, x_indices = np.where(mask > 0)

    if len(x_indices) == 0 or len(y_indices) == 0:
        return None

    x1, x2 = np.min(x_indices), np.max(x_indices)
    y1, y2 = np.min(y_indices), np.max(y_indices)

    return int(x1), int(y1), int(x2), int(y2)


class MMSegModel(ModelInterface):
    def __init__(
        self,
        config: str,
        checkpoint: str,
        device=DEFAULT_DEVICE,
        threshold=DEFAULT_SCORE_THRESHOLD,
        ignore_labels: Optional[Sequence[str]] = None,
        extra: Optional[str] = None,
        name: Optional[str] = None,
    ):
        if not path.isfile(config):
            raise FileNotFoundError(f"Config file not found: '{config}'")
        if not path.isfile(checkpoint):
            raise FileNotFoundError(f"Checkpoint file not found: '{checkpoint}'")

        self._model = init_model(config, checkpoint, device=device)
        self._ignore_labels = ignore_labels
        self._extra = extra if extra else str()
        self._name = name if name else type(self).__name__
        self._visualizer = VISUALIZERS.build(self._model.cfg.visualizer)
        self._threshold = threshold

        # the dataset_meta is loaded from the checkpoint and
        # then passed to the model in init_model
        self._visualizer.dataset_meta = self._model.dataset_meta

    @property
    def name(self) -> str:
        return self._name

    @property
    def threshold(self) -> float:
        return self._threshold

    @property
    def classes(self) -> List[str]:
        return self._visualizer.dataset_meta["classes"]

    @property
    def palette(self) -> List[Tuple[int, int, int]]:
        return self._visualizer.dataset_meta["palette"]

    def convert_seg_to_samples(self, seg: SegDataSample) -> List[Sample]:
        result = list()
        num_classes = len(self.classes)
        sem_seg = seg.pred_sem_seg.cpu().data  # noqa
        ids = np.unique(sem_seg)[::-1]
        legal_indices = ids < num_classes
        ids = ids[legal_indices]
        labels = np.array(ids, dtype=np.int64)
        colors = [self.palette[label] for label in labels]
        height = sem_seg.shape[1]
        width = sem_seg.shape[2]
        shape = height, width
        for label, color in zip(labels, colors):
            label_index = int(label)
            label_name = self.classes[label_index]
            if label_index == 0:
                assert label_name == "background_bg"
                continue
            if self._ignore_labels and label_name in self._ignore_labels:
                continue
            score = 1.0
            # Build a fresh binary mask per label so samples do not share
            # (and accumulate into) the same array
            mask = zeros(shape, dtype=uint8)  # noqa
            mask[sem_seg[0] == label] = 255
            bbox = get_bounding_box(mask)
            if bbox is None:
                continue
            x1, y1, x2, y2 = bbox
            sample = Sample(label_index, label_name, score, x1, y1, x2, y2, color, mask)
            result.append(sample)
        return result

    def inference(self, image: NDArray) -> SegDataSample:
        return inference_model(self._model, image)

    def visualize(self, image: NDArray, seg: SegDataSample) -> NDArray:
        self._visualizer.add_datasample(
            name=self._name,
            image=cvt_color(image, CvtColorCode.BGR2RGB),
            data_sample=seg,
            draw_gt=False,
            pred_score_thr=self._threshold,
            show=False,
        )
        return cvt_color(self._visualizer.get_image(), CvtColorCode.RGB2BGR)

    @override
    def get_extra(self) -> Any:
        return self._extra

    @override
    def set_extra(self, extra: Any) -> None:
        self._extra = extra

    @override
    def inference_raw(self, image: NDArray, **kwargs):
        return self.inference(image)

    @override
    def inference_samples(self, image: NDArray, **kwargs) -> List[Sample]:
        return self.convert_seg_to_samples(self.inference_raw(image, **kwargs))

    @override
    def run(
        self,
        image: NDArray,
        preview: Optional[NDArray] = None,
        **kwargs,
    ) -> Tuple[List[Sample], Optional[NDArray]]:
        seg = self.inference(image)
        samples = self.convert_seg_to_samples(seg)
        visualized = self.visualize(preview if preview is not None else image, seg)
        return samples, visualized
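
Putting it together, a hypothetical end-to-end call (config, checkpoint, and image paths are placeholders):

import cv2

model = MMSegModel(
    config="configs/my_model.py",           # placeholder
    checkpoint="checkpoints/my_model.pth",  # placeholder
    device="cuda:0",
)

image = cv2.imread("demo.jpg")  # BGR, which visualize() converts internally
samples, visualized = model.run(image)
for sample in samples:
    print(sample, sample.rect)
cv2.imwrite("result.jpg", visualized)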

Troubleshooting

RuntimeError: _default_to_fused_or_foreach

Host PC environment:

  • Ubuntu 20.04.6 LTS
  • NVIDIA GeForce RTX 4090
  • CUDA 12.6

Guest PC environment:

  • Docker Image: pytorch/pytorch:2.4.0-cuda12.4-cudnn9-runtime
  • mmengine==0.10.4
  • mmcv==2.1.0
  • mmdet==3.3.0
  • mmsegmentation==1.2.2

With this combination, importing mmseg.apis fails while TorchScript compiles torch.distributed.optim:

_default_to_fused_or_foreach(Tensor[] params, bool differentiable, bool use_fused=False) -> ((bool, bool)):
Expected a value of type 'List[Tensor]' for argument 'params' but instead found type 'List[NoneType]'.
:
  File "/opt/conda/lib/python3.11/site-packages/torch/optim/optimizer.py", line 426
    # We still respect when the user inputs False for foreach.
    if foreach is None:
        _, foreach = _default_to_fused_or_foreach(
                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            params, differentiable, use_fused=False
        )
'adadelta' is being compiled since it was called from '_FunctionalAdadelta.step'
  File "/opt/conda/lib/python3.11/site-packages/torch/distributed/optim/functional_adadelta.py", line 93

        with torch.no_grad():
            F.adadelta(
            ~~~~~~~~~~~
                params_with_grad,
                ~~~~~~~~~~~~~~~~~
                grads,
                ~~~~~~
                square_avgs,
                ~~~~~~~~~~~~
                acc_deltas,
                ~~~~~~~~~~~
                state_steps,
                ~~~~~~~~~~~~
                lr=lr,
                ~~~~~~
                rho=rho,
                ~~~~~~~~
                eps=eps,
                ~~~~~~~~
                weight_decay=weight_decay,
                ~~~~~~~~~~~~~~~~~~~~~~~~~~
                foreach=self.foreach,
                ~~~~~~~~~~~~~~~~~~~~~
                maximize=self.maximize,
                ~~~~~~~~~~~~~~~~~~~~~~~
                has_complex=has_complex
                ~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            )
Traceback (most recent call last):
  File "/opt/conda/lib/python3.11/site-packages/ddrm/models/defaults.py", line 315, in load_model
    model = load_model_with_inits(init)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/ddrm/models/defaults.py", line 285, in load_model_with_inits
    return load_mmseg_model(init)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/ddrm/models/defaults.py", line 233, in load_mmseg_model
    from ddrm.models.mmseg.model import MMSegModel
  File "/opt/conda/lib/python3.11/site-packages/ddrm/models/mmseg/model.py", line 8, in <module>
    from mmseg.apis import inference_model, init_model  # noqa
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/mmseg/apis/__init__.py", line 2, in <module>
    from .inference import inference_model, init_model, show_result_pyplot
  File "/opt/conda/lib/python3.11/site-packages/mmseg/apis/inference.py", line 11, in <module>
    from mmengine.runner import load_checkpoint
  File "/opt/conda/lib/python3.11/site-packages/mmengine/runner/__init__.py", line 2, in <module>
    from ._flexible_runner import FlexibleRunner
  File "/opt/conda/lib/python3.11/site-packages/mmengine/runner/_flexible_runner.py", line 14, in <module>
    from mmengine._strategy import BaseStrategy
  File "/opt/conda/lib/python3.11/site-packages/mmengine/_strategy/__init__.py", line 4, in <module>
    from .base import BaseStrategy
  File "/opt/conda/lib/python3.11/site-packages/mmengine/_strategy/base.py", line 19, in <module>
    from mmengine.model.wrappers import is_model_wrapper
  File "/opt/conda/lib/python3.11/site-packages/mmengine/model/__init__.py", line 6, in <module>
    from .base_model import BaseDataPreprocessor, BaseModel, ImgDataPreprocessor
  File "/opt/conda/lib/python3.11/site-packages/mmengine/model/base_model/__init__.py", line 2, in <module>
    from .base_model import BaseModel
  File "/opt/conda/lib/python3.11/site-packages/mmengine/model/base_model/base_model.py", line 9, in <module>
    from mmengine.optim import OptimWrapper
  File "/opt/conda/lib/python3.11/site-packages/mmengine/optim/__init__.py", line 2, in <module>
    from .optimizer import (OPTIM_WRAPPER_CONSTRUCTORS, OPTIMIZERS,
  File "/opt/conda/lib/python3.11/site-packages/mmengine/optim/optimizer/__init__.py", line 10, in <module>
    from .zero_optimizer import ZeroRedundancyOptimizer
  File "/opt/conda/lib/python3.11/site-packages/mmengine/optim/optimizer/zero_optimizer.py", line 11, in <module>
    from torch.distributed.optim import \
  File "/opt/conda/lib/python3.11/site-packages/torch/distributed/optim/__init__.py", line 17, in <module>
    from .functional_adadelta import _FunctionalAdadelta
  File "/opt/conda/lib/python3.11/site-packages/torch/distributed/optim/functional_adadelta.py", line 20, in <module>
    @torch.jit.script
     ^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/torch/jit/_script.py", line 1432, in script
    return _script_impl(
           ^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/torch/jit/_script.py", line 1183, in _script_impl
    _compile_and_register_class(obj, _rcb, qualified_name)
  File "/opt/conda/lib/python3.11/site-packages/torch/jit/_recursive.py", line 62, in _compile_and_register_class
    script_class = torch._C._jit_script_class_compile(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/torch/jit/_recursive.py", line 1004, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/torch/jit/_script.py", line 1432, in script
    return _script_impl(
           ^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/torch/jit/_script.py", line 1204, in _script_impl
    fn = torch._C._jit_script_compile(
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Rather than fixing the root cause, here is a quick-and-dirty workaround.

Remove ZeroRedundancyOptimizer from /opt/conda/lib/python3.11/site-packages/mmengine/optim/optimizer/__init__.py, the file highlighted in the traceback above:

# Copyright (c) OpenMMLab. All rights reserved.
from .amp_optimizer_wrapper import AmpOptimWrapper
from .apex_optimizer_wrapper import ApexOptimWrapper
from .base import BaseOptimWrapper
from .builder import (OPTIM_WRAPPER_CONSTRUCTORS, OPTIMIZERS,
                      build_optim_wrapper)
from .default_constructor import DefaultOptimWrapperConstructor
from .optimizer_wrapper import OptimWrapper
from .optimizer_wrapper_dict import OptimWrapperDict
# Removed: importing zero_optimizer pulls in torch.distributed.optim, whose
# TorchScript compilation fails in this environment.
# from .zero_optimizer import ZeroRedundancyOptimizer

__all__ = [
    'OPTIM_WRAPPER_CONSTRUCTORS', 'OPTIMIZERS',
    'DefaultOptimWrapperConstructor', 'build_optim_wrapper', 'OptimWrapper',
    'AmpOptimWrapper', 'ApexOptimWrapper', 'OptimWrapperDict',
    'BaseOptimWrapper'
]

Likewise, remove ZeroRedundancyOptimizer from /opt/conda/lib/python3.11/site-packages/mmengine/optim/__init__.py:

# Copyright (c) OpenMMLab. All rights reserved.
# ZeroRedundancyOptimizer is removed from the import and __all__ below for
# the same reason as above.
from .optimizer import (OPTIM_WRAPPER_CONSTRUCTORS, OPTIMIZERS,
                        AmpOptimWrapper, ApexOptimWrapper, BaseOptimWrapper,
                        DefaultOptimWrapperConstructor, OptimWrapper,
                        OptimWrapperDict, build_optim_wrapper)
# yapf: disable
from .scheduler import (ConstantLR, ConstantMomentum, ConstantParamScheduler,
                        CosineAnnealingLR, CosineAnnealingMomentum,
                        CosineAnnealingParamScheduler, ExponentialLR,
                        ExponentialMomentum, ExponentialParamScheduler,
                        LinearLR, LinearMomentum, LinearParamScheduler,
                        MultiStepLR, MultiStepMomentum,
                        MultiStepParamScheduler, OneCycleLR,
                        OneCycleParamScheduler, PolyLR, PolyMomentum,
                        PolyParamScheduler, ReduceOnPlateauLR,
                        ReduceOnPlateauMomentum, ReduceOnPlateauParamScheduler,
                        StepLR, StepMomentum, StepParamScheduler,
                        _ParamScheduler)

# yapf: enable
__all__ = [
    'OPTIM_WRAPPER_CONSTRUCTORS', 'OPTIMIZERS', 'build_optim_wrapper',
    'DefaultOptimWrapperConstructor', 'ConstantLR', 'CosineAnnealingLR',
    'ExponentialLR', 'LinearLR', 'MultiStepLR', 'StepLR', 'ConstantMomentum',
    'CosineAnnealingMomentum', 'ExponentialMomentum', 'LinearMomentum',
    'MultiStepMomentum', 'StepMomentum', 'ConstantParamScheduler',
    'CosineAnnealingParamScheduler', 'ExponentialParamScheduler',
    'LinearParamScheduler', 'MultiStepParamScheduler', 'StepParamScheduler',
    '_ParamScheduler', 'OptimWrapper', 'AmpOptimWrapper', 'ApexOptimWrapper',
    'OptimWrapperDict', 'OneCycleParamScheduler', 'OneCycleLR', 'PolyLR',
    'PolyMomentum', 'PolyParamScheduler', 'ReduceOnPlateauLR',
    'ReduceOnPlateauMomentum', 'ReduceOnPlateauParamScheduler',
    'BaseOptimWrapper'
]
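
After patching both files, re-check that the import chain no longer triggers the TorchScript compilation:

# If the workaround took effect, this import completes without compiling
# torch.distributed.optim through torch.jit.script.
from mmseg.apis import inference_model, init_model  # noqa

print("mmseg.apis imported successfully")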

See also