
Commit 1dbaabf (parent: 43be411)

ci: update README, requirements, and test matrix for Python 3.10-3.11

12 files changed: 96 additions & 70 deletions

.github/workflows/ci.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -23,7 +23,7 @@ jobs:
     needs: lint
     strategy:
       matrix:
-        python-version: ['3.8', '3.9', '3.10']
+        python-version: ['3.10', '3.11']
 
     steps:
       - uses: actions/checkout@v4
```
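Dropping 3.8 and 3.9 from the CI matrix implies a new runtime floor of Python 3.10. A minimal sketch of a fail-fast interpreter guard a script could add; the `MIN_PYTHON` constant and `check_python_version` helper are illustrations, not part of this commit:

```python
import sys

# Minimum interpreter version implied by the new CI matrix.
MIN_PYTHON = (3, 10)


def check_python_version(version_info=None, minimum=MIN_PYTHON):
    """Return True if the given (or running) interpreter meets the minimum."""
    version_info = version_info or sys.version_info
    # Compare (major, minor) tuples lexicographically.
    return tuple(version_info[:2]) >= minimum


if __name__ == "__main__":
    if not check_python_version():
        sys.exit(f"Python {MIN_PYTHON[0]}.{MIN_PYTHON[1]}+ required")
```

Tuple comparison handles the 3.9-vs-3.10 case correctly, where naive string comparison ("3.9" > "3.10") would not.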

README.md

Lines changed: 71 additions & 44 deletions

````diff
@@ -381,9 +381,6 @@ python ev_run_render.py config/train_config.yml
 G-PCC is the industry-standard point cloud codec from MPEG. Compare your results:
 
 ```bash
-# Run G-PCC on the same data
-python mp_run.py config/train_config.yml --num_parallel 8
-
 # Generate a final comparison report
 python mp_report.py \
     results/metrics/evaluation_report.json \
@@ -552,69 +549,99 @@ print(f"Peak memory: {mem.peak_mb:.1f} MB")
 
 | Software | Version | Purpose |
 |----------|---------|---------|
-| Python | 3.8+ | Programming language |
-| TensorFlow | ≥ 2.11.0 | Neural network framework |
+| Python | 3.10+ | Programming language |
+| TensorFlow | ~=2.15 | Neural network framework |
+| TensorFlow Probability | ~=0.23 | Probability distributions for entropy modeling |
 | MPEG G-PCC | Latest | Industry-standard codec for comparison |
 | MPEG PCC Metrics | v0.12.3 | Standard evaluation metrics |
 
 ### Python Dependencies
 
 Install these with `pip install -r requirements.txt`:
 
-| Package | Purpose |
-|---------|---------|
-| tensorflow | Neural network operations |
-| tensorflow-probability | Probability distributions for entropy modeling |
-| numpy | Numerical computations |
-| matplotlib | Visualization |
-| pandas | Data analysis |
-| pyyaml | Configuration file parsing |
-| scipy | Scientific computing |
-| numba | JIT compilation for speed |
+| Package | Version | Purpose |
+|---------|---------|---------|
+| tensorflow | ~=2.15 | Neural network operations |
+| tensorflow-probability | ~=0.23 | Probability distributions for entropy modeling |
+| numpy | ~=1.26 | Numerical computations |
+| matplotlib | ~=3.8 | Visualization |
+| pandas | ~=2.1 | Data analysis |
+| pyyaml | ~=6.0 | Configuration file parsing |
+| scipy | ~=1.11 | Scientific computing |
+| tqdm | ~=4.66 | Progress bars |
+| numba | ~=0.58 | JIT compilation for speed |
+| keras-tuner | ~=1.4 | Hyperparameter tuning (for cli_train.py) |
+| pytest | ~=8.0 | Test framework |
+| ruff | >=0.4 | Linter (configured in pyproject.toml) |
 
 ---
 
 ## Project Structure
 
 ```
 deepcompress/
-├── src/                      # Source code
+├── src/                              # Source code
 │   ├── Model Components
-│   │   ├── model_transforms.py   # Main encoder/decoder architecture
-│   │   ├── entropy_model.py      # Entropy coding (converts to bits)
-│   │   ├── entropy_parameters.py # Hyperprior parameter prediction
-│   │   ├── context_model.py      # Spatial autoregressive context
-│   │   ├── channel_context.py    # Channel-wise context
-│   │   └── attention_context.py  # Attention-based context
+│   │   ├── model_transforms.py       # Main encoder/decoder (V1 + V2) architecture
+│   │   ├── entropy_model.py          # Gaussian conditional, hyperprior entropy models
+│   │   ├── entropy_parameters.py     # Hyperprior mean/scale prediction network
+│   │   ├── context_model.py          # MaskedConv3D, autoregressive spatial context
+│   │   ├── channel_context.py        # Channel-wise context model
+│   │   └── attention_context.py      # Windowed attention context model
 │   │
 │   ├── Performance
-│   │   ├── constants.py          # Pre-computed math constants
-│   │   ├── precision_config.py   # Mixed precision settings
-│   │   ├── benchmarks.py         # Performance measurement
-│   │   └── quick_benchmark.py    # Quick testing tool
+│   │   ├── constants.py              # Pre-computed math constants (LOG_2, EPSILON)
+│   │   ├── precision_config.py       # Mixed precision (float16) settings
+│   │   ├── benchmarks.py             # Performance measurement
+│   │   └── quick_benchmark.py        # Quick synthetic smoke test
 │   │
 │   ├── Data Processing
-│   │   ├── ds_mesh_to_pc.py      # Convert meshes to point clouds
-│   │   ├── ds_pc_octree_blocks.py # Split into octree blocks
-│   │   ├── compress_octree.py    # Compression pipeline
-│   │   └── decompress_octree.py  # Decompression pipeline
+│   │   ├── data_loader.py            # Unified data loader (ModelNet40 / 8iVFB)
+│   │   ├── ds_mesh_to_pc.py          # Convert .off meshes to point clouds
+│   │   ├── ds_pc_octree_blocks.py    # Split point clouds into octree blocks
+│   │   ├── ds_select_largest.py      # Select N largest blocks by point count
+│   │   ├── octree_coding.py          # Octree encode/decode for voxel grids
+│   │   ├── compress_octree.py        # Compression entry point
+│   │   └── map_color.py              # Transfer colors between point clouds
+│   │
+│   ├── Training & Evaluation
+│   │   ├── training_pipeline.py      # End-to-end training loop
+│   │   ├── evaluation_pipeline.py    # Model evaluation pipeline
+│   │   ├── cli_train.py              # Training CLI with hyperparameter tuning
+│   │   └── experiment.py             # Experiment runner
 │   │
-│   └── Training & Evaluation
-│       ├── training_pipeline.py  # End-to-end training
-│       ├── evaluation_pipeline.py # Model evaluation
-│       └── cli_train.py          # Command-line interface
+│   └── Evaluation & Comparison
+│       ├── ev_compare.py             # Point cloud quality metrics (PSNR, Chamfer)
+│       ├── ev_run_render.py          # Visualization / rendering
+│       ├── point_cloud_metrics.py    # D1/D2 point-to-point metrics
+│       ├── mp_report.py              # MPEG G-PCC comparison reports
+│       ├── colorbar.py               # Colorbar visualization utility
+│       └── parallel_process.py       # Parallel processing utility
 │
-├── tests/                        # Automated tests
-│   ├── test_entropy_model.py
-│   ├── test_context_model.py
-│   ├── test_performance.py       # Performance regression tests
-│   └── ...
+├── tests/                            # Automated tests (pytest + tf.test.TestCase)
+│   ├── conftest.py                   # Session-scoped fixtures (tf_config, file factories)
+│   ├── test_utils.py                 # Shared test utilities (mock grids, configs)
+│   ├── test_model_transforms.py      # V1 + V2 model tests
+│   ├── test_entropy_model.py         # Entropy model tests
+│   ├── test_context_model.py         # Context model tests
+│   ├── test_channel_context.py       # Channel context tests
+│   ├── test_attention_context.py     # Attention context tests
+│   ├── test_performance.py           # Performance regression + optimization tests
+│   ├── test_training_pipeline.py     # Training loop tests
+│   ├── test_evaluation_pipeline.py   # Evaluation pipeline tests
+│   ├── test_data_loader.py           # Data loading tests
+│   ├── test_compress_octree.py       # Compression pipeline tests
+│   ├── test_octree_coding.py         # Octree codec tests
+│   └── ...                           # + 10 more module-level test files
 │
-├── config/                       # Configuration files
-├── data/                         # Datasets (not in git)
-├── results/                      # Output files (not in git)
-├── README.md                     # This file
-└── requirements.txt              # Python dependencies
+├── data/                             # Datasets (not in git)
+├── results/                          # Output files (not in git)
+├── CLAUDE.md                         # AI agent coding standards
+├── pyproject.toml                    # Ruff linter configuration
+├── pytest.ini                        # Pytest configuration and markers
+├── setup.py                          # Package setup
+├── requirements.txt                  # Python dependencies
+└── README.md                         # This file
 ```
 
 ---
````

requirements.txt

Lines changed: 10 additions & 10 deletions

```diff
@@ -1,12 +1,12 @@
-matplotlib~=3.5.0
-pyntcloud~=0.1.2
-numpy~=1.23.0
-pandas~=1.4.0
-tqdm~=4.64.0
-tensorflow~=2.11.0
+tensorflow~=2.15.0
+tensorflow-probability~=0.23.0
+numpy~=1.26.0
+matplotlib~=3.8.0
+pandas~=2.1.0
+tqdm~=4.66.0
 pyyaml~=6.0
-pytest~=7.1.0
-scipy~=1.8.1
-numba~=0.56.0
-tensorflow-probability~=0.19.0
+pytest~=8.0.0
+scipy~=1.11.0
+numba~=0.58.0
+keras-tuner~=1.4.0
 ruff>=0.4.0
```
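The new pins all use `~=` (PEP 440 compatible-release) or `>=` specifiers. A hedged sketch of a checker that parses such lines and reports installed versions using only the standard library; the `parse_pin` and `check_installed` helpers are hypothetical, written for illustration, not part of the repository:

```python
from importlib import metadata


def parse_pin(line):
    """Split a requirements line like 'tensorflow~=2.15.0' into
    (name, operator, version). Returns None for blanks and comments."""
    line = line.strip()
    if not line or line.startswith('#'):
        return None
    # Check '~=' first so it is not mistaken for '=='.
    for op in ('~=', '>=', '=='):
        if op in line:
            name, version = line.split(op, 1)
            return name.strip(), op, version.strip()
    return None


def check_installed(requirements_text):
    """Yield (name, pinned_spec, installed_version_or_None) per package."""
    for line in requirements_text.splitlines():
        parsed = parse_pin(line)
        if parsed is None:
            continue
        name, op, version = parsed
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        yield name, f"{op}{version}", installed
```

A full resolver would use `packaging.specifiers`; this sketch only surfaces what is installed next to what is pinned.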

src/evaluation_pipeline.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 import logging
 from dataclasses import asdict, dataclass
 from pathlib import Path
-from typing import Any, Dict, List
+from typing import Any, Dict
 
 import tensorflow as tf
 
```
src/model_transforms.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@
 
 import tensorflow as tf
 
-from constants import LOG_2_RECIPROCAL, EPSILON
+from constants import EPSILON, LOG_2_RECIPROCAL
 
 
 @dataclass
```

src/training_pipeline.py

Lines changed: 4 additions & 2 deletions

```diff
@@ -9,9 +9,10 @@
 class TrainingPipeline:
     def __init__(self, config_path: str):
         import yaml
+
         from data_loader import DataLoader
-        from model_transforms import DeepCompressModel, TransformConfig
         from entropy_model import EntropyModel
+        from model_transforms import DeepCompressModel, TransformConfig
 
         self.config_path = config_path
         with open(config_path, 'r') as f:
@@ -50,7 +51,8 @@ def __init__(self, config_path: str):
     def _train_step(self, batch: tf.Tensor, training: bool = True) -> Dict[str, tf.Tensor]:
         """Run a single training step."""
         with tf.GradientTape(persistent=True) as tape:
-            x_hat, y, y_hat, z = self.model(batch[..., tf.newaxis] if len(batch.shape) == 4 else batch, training=training)
+            inputs = batch[..., tf.newaxis] if len(batch.shape) == 4 else batch
+            x_hat, y, y_hat, z = self.model(inputs, training=training)
 
             # Compute focal loss on reconstruction
             focal_loss = self.compute_focal_loss(
```
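The `_train_step` change only splits a one-liner for readability; the behavior (adding a trailing channel axis to rank-4 batches) is unchanged. A sketch of that rank normalization, using NumPy in place of TensorFlow since `np.newaxis` and `tf.newaxis` behave identically here:

```python
import numpy as np


def normalize_batch_rank(batch):
    """Add a trailing channel axis to a rank-4 (B, D, H, W) voxel batch,
    leaving rank-5 (B, D, H, W, C) batches untouched."""
    return batch[..., np.newaxis] if batch.ndim == 4 else batch


# Rank-4 input gains a channel dimension; rank-5 passes through unchanged.
dense = np.zeros((2, 16, 16, 16))
assert normalize_batch_rank(dense).shape == (2, 16, 16, 16, 1)

with_channel = np.zeros((2, 16, 16, 16, 1))
assert normalize_batch_rank(with_channel).shape == (2, 16, 16, 16, 1)
```

Hoisting the conditional into a named `inputs` variable also keeps the model call short enough to read at a glance inside the gradient tape.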

tests/conftest.py

Lines changed: 0 additions & 1 deletion

```diff
@@ -5,7 +5,6 @@
 import tensorflow as tf
 
 
-
 def pytest_collection_modifyitems(items):
     """Filter out tf.test.TestCase.test_session, which is a deprecated
     context manager that pytest mistakenly collects as a test."""
```
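For context, the hook body is not shown in this hunk; the sketch below is an assumed implementation consistent with the docstring, dropping the deprecated `tf.test.TestCase.test_session` from collection. The in-place `items[:] = ...` idiom matters because pytest passes the collected list by reference:

```python
def pytest_collection_modifyitems(items):
    """Filter out tf.test.TestCase.test_session, which is a deprecated
    context manager that pytest mistakenly collects as a test."""
    # Mutate the list in place so pytest sees the filtered collection;
    # rebinding the name `items` would have no effect.
    items[:] = [item for item in items if item.name != "test_session"]
```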

tests/test_colorbar.py

Lines changed: 1 addition & 2 deletions

```diff
@@ -3,13 +3,12 @@
 
 sys.path.insert(0, str(Path(__file__).parent.parent / 'src'))
 
-import json
 
 import matplotlib.pyplot as plt
 import numpy as np
 import pytest
 
-from colorbar import ColorbarConfig, get_colorbar
+from colorbar import get_colorbar
 
 
 class TestColorbar:
```

tests/test_entropy_model.py

Lines changed: 0 additions & 1 deletion

```diff
@@ -8,7 +8,6 @@
 from entropy_model import EntropyModel, PatchedGaussianConditional
 
 
-
 class TestEntropyModel(tf.test.TestCase):
     def setUp(self):
         tf.random.set_seed(42)
```

tests/test_evaluation_pipeline.py

Lines changed: 0 additions & 1 deletion

```diff
@@ -2,7 +2,6 @@
 import sys
 from pathlib import Path
 
-import numpy as np
 import pytest
 import tensorflow as tf
 import yaml
```
