# PartPacker on `trellis2` Conda Environment

## Goal

Verify whether the official `PartPacker` repository can run inside the existing `trellis2` conda environment without creating a new dedicated environment.

## Environment

- Conda env: `trellis2`
- Python: `/opt/miniconda3/envs/trellis2/bin/python`
- Torch: `2.6.0+cu124`
- GPU: `2 x NVIDIA H100 80GB`

## Minimal dependency additions

To keep the existing environment intact, only the missing packages were added:

```bash
/opt/miniconda3/envs/trellis2/bin/pip install -q kiui fpsample pymeshlab meshiki pypdf
```
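
A quick way to confirm the additions took effect is to probe each module spec without importing it. This is a minimal sketch; the package names are exactly the ones installed above:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of names whose module spec cannot be found."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Packages added on top of the existing trellis2 environment.
required = ["kiui", "fpsample", "pymeshlab", "meshiki", "pypdf"]
print(missing_packages(required))  # an empty list means all additions are importable
```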

## Pretrained weights

Official pretrained checkpoints were downloaded into:

- `pretrained/flow.pt`
- `pretrained/vae.pt`

Using:

```bash
mkdir -p pretrained
cd pretrained
wget -c https://hf-mirror.com/nvidia/PartPacker/resolve/main/flow.pt
wget -c https://hf-mirror.com/nvidia/PartPacker/resolve/main/vae.pt
```
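
Since `wget -c` resumes partial downloads, a size sanity check helps catch truncated files before a run. This is a hedged sketch: the 1 MiB lower bound is illustrative, not the official checkpoint size.

```python
from pathlib import Path

def check_checkpoints(root="pretrained", names=("flow.pt", "vae.pt"), min_bytes=1 << 20):
    """Return {name: bool}: does each file exist and exceed a rough size floor?"""
    return {
        name: (Path(root) / name).is_file()
        and (Path(root) / name).stat().st_size >= min_bytes
        for name in names
    }

print(check_checkpoints())
```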

The first flow inference also downloaded `facebook/dinov2-giant` automatically into the Hugging Face cache.


## Reproduced commands

### 1. Flow smoke test

```bash
cd /data/250010098/PartPacker
HF_ENDPOINT=https://hf-mirror.com PYTHONPATH=. /opt/miniconda3/envs/trellis2/bin/python flow/scripts/infer.py \
  --input assets/images/barrel.png \
  --output_dir /data/250010098/physxanything_stage1_experiments/runs/partpacker_smoke_20260419 \
  --num_steps 4 \
  --num_repeats 1 \
  --grid_res 256
```

Output:

- `/data/250010098/physxanything_stage1_experiments/runs/partpacker_smoke_20260419/flow_big_parts_strict_pvae_20260419_233505/`

Artifacts include:

- `barrel_0.glb`
- `barrel_0_part0.glb ... barrel_0_part15.glb`
- `barrel_0_vol0.glb`
- `barrel_0_vol1.glb`
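
Binary glTF files begin with a fixed 12-byte header, so a stdlib-only check can confirm the exported `.glb` artifacts are well-formed containers. This is a sketch; the usage path is hypothetical:

```python
import struct

GLB_MAGIC = 0x46546C67  # ASCII "glTF", per the binary glTF container format

def read_glb_header(path):
    """Return (container_version, total_byte_length) from a .glb file.

    Raises ValueError if the magic number does not match.
    """
    with open(path, "rb") as f:
        magic, version, length = struct.unpack("<III", f.read(12))
    if magic != GLB_MAGIC:
        raise ValueError(f"{path}: not a binary glTF file")
    return version, length

# Hypothetical usage against one of the exported artifacts:
# print(read_glb_header("barrel_0.glb"))
```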

### 2. Flow formal single-image run

```bash
cd /data/250010098/PartPacker
HF_ENDPOINT=https://hf-mirror.com PYTHONPATH=. /opt/miniconda3/envs/trellis2/bin/python flow/scripts/infer.py \
  --input assets/images/teapot.png \
  --output_dir /data/250010098/physxanything_stage1_experiments/runs/partpacker_formal_20260419 \
  --num_steps 50 \
  --num_repeats 1 \
  --grid_res 384
```

Output:

- `/data/250010098/physxanything_stage1_experiments/runs/partpacker_formal_20260419/flow_big_parts_strict_pvae_20260419_233623/`

Artifacts include:

- `teapot_0.glb`
- `teapot_0_part0.glb`
- `teapot_0_part1.glb`
- `teapot_0_part2.glb`
- `teapot_0_part3.glb`
- `teapot_0_vol0.glb`
- `teapot_0_vol1.glb`
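
Given the `<stem>_part<N>.glb` naming pattern above, a small helper (hypothetical, stdlib-only) can count how many parts a run exported:

```python
from pathlib import Path

def count_parts(output_dir, stem):
    """Count per-part exports named <stem>_part<N>.glb in output_dir."""
    return len(list(Path(output_dir).glob(f"{stem}_part*.glb")))

# e.g. count_parts(run_dir, "teapot_0") would return 4 for the run above
```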

Observed runtime signals:

- `Flow Sampling: 50/50`
- GPU memory usage around `9.3 GB`
- output mesh cleaned and exported successfully

### 3. VAE smoke test

Important note: `vae/scripts/infer.py` expects a directory of meshes as its `--input`, not a single file path.

Working command:

```bash
cd /data/250010098/PartPacker
PYTHONPATH=. /opt/miniconda3/envs/trellis2/bin/python vae/scripts/infer.py \
  --input assets/meshes/ \
  --output_dir /data/250010098/physxanything_stage1_experiments/runs/partpacker_vae_smoke_20260419 \
  --limit 1 \
  --grid_res 256
```

Output:

- `/data/250010098/physxanything_stage1_experiments/runs/partpacker_vae_smoke_20260419/vae_part_woenc_20260419_233743/balloon_whisk.glb`

## Compatibility verdict

### Confirmed working

- official `flow/scripts/infer.py`
- official `vae/scripts/infer.py`
- official checkpoint loading
- DINOv2 download and initialization
- part-level GLB export
- dual-volume GLB export

### Known caveats

1. `pymeshlab` prints OpenGL plugin warnings on this headless server:
   - `libOpenGL.so.0: cannot open shared object file`
   - Inference still completes successfully despite the warnings.

2. `app.py` was not validated end-to-end in this round.
   - The current `trellis2` env previously showed a `gradio/Pillow` compatibility issue in unrelated checks.
   - CLI inference is the reliable path confirmed here.

3. The first run has a high cold-start cost because it downloads:
   - `flow.pt`
   - `vae.pt`
   - `facebook/dinov2-giant`
   - `u2net.onnx` for `rembg`
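
To check whether subsequent runs will skip the DINOv2 download, one can look at the local Hugging Face cache directly. This is a heuristic sketch that assumes the default `hub/models--org--name` cache layout under `HF_HOME` (falling back to `~/.cache/huggingface`):

```python
import os
from pathlib import Path

def hf_model_cached(repo_id="facebook/dinov2-giant"):
    """Heuristic: does a hub cache folder exist for this repo?"""
    hf_home = Path(os.environ.get("HF_HOME", str(Path.home() / ".cache" / "huggingface")))
    return (hf_home / "hub" / ("models--" + repo_id.replace("/", "--"))).is_dir()

print(hf_model_cached())
```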

## Bottom line

`PartPacker`'s official CLI inference can be reproduced in the existing `trellis2` environment, after adding a small set of missing Python packages and downloading the official pretrained weights.
