# Python-SLAM Test Suite

This directory contains a comprehensive test suite for the Python-SLAM project.

## Test Categories
- **Comprehensive Tests** (`test_comprehensive.py`)
  - Core SLAM functionality
  - System integration tests
  - Basic smoke tests

- **GPU Acceleration Tests** (`test_gpu_acceleration.py`)
  - GPU detection and management
  - CUDA, ROCm, and Metal backend testing
  - Accelerated operations validation
  - Performance benchmarking

- **GUI Component Tests** (`test_gui_components.py`)
  - PyQt6/PySide6 interface testing
  - Material Design styling
  - 3D visualization components
  - Control panels and metrics dashboard

- **Benchmarking Tests** (`test_benchmarking.py`)
  - Trajectory evaluation metrics (ATE, RPE)
  - Processing performance metrics
  - Dataset loading and validation
  - Benchmark report generation

- **Integration Tests** (`test_integration.py`)
  - Component interaction testing
  - Data pipeline validation
  - Performance monitoring
  - Error handling and recovery
  - Scalability testing
## Running Tests

### Quick Start

```bash
# Run all tests
python tests/run_tests.py

# Run specific test categories
python tests/run_tests.py --categories gpu benchmarking

# Run with coverage analysis
python tests/run_tests.py --coverage

# Check dependencies
python tests/run_tests.py --check-deps
```

### Usage

```bash
python tests/run_tests.py [OPTIONS]
```
Options:

```
--categories CATEGORIES  Test categories to run (comprehensive, gpu, gui, benchmarking, integration, all)
--verbosity VERBOSITY    Test output verbosity (0, 1, 2)
--coverage               Run with coverage analysis
--performance            Run performance benchmarks
--output OUTPUT          Output file for test report
--check-deps             Only check dependencies and exit
```

### Examples

```bash
# Run only GPU and benchmarking tests with high verbosity
python tests/run_tests.py --categories gpu benchmarking --verbosity 2

# Run all tests with coverage and save report
python tests/run_tests.py --coverage --output test_results.json

# Run performance benchmarks
python tests/run_tests.py --performance

# Check if all dependencies are available
python tests/run_tests.py --check-deps
```

## Requirements

- Python 3.8+
- NumPy
- PyTorch (for GPU acceleration tests)
- PyQt6 or PySide6 (for GUI tests)
- Matplotlib (for visualization tests)
- OpenCV (for computer vision tests)
- psutil (for system monitoring tests)
- coverage.py (for coverage analysis)

### Optional Hardware

- NVIDIA GPU + CUDA (for CUDA tests)
- AMD GPU + ROCm (for ROCm tests)
- Apple Silicon (for Metal tests)

**Note:** GPU tests will automatically skip if the corresponding hardware/software is not available.
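In practice, this skip logic amounts to probing for the backend before the test class is collected. A minimal sketch, assuming PyTorch as the GPU library (the `cuda_available` helper and `TestCudaOps` class are illustrative, not part of the suite):

```python
import unittest


def cuda_available() -> bool:
    """Return True only if PyTorch is installed and a CUDA device is visible."""
    try:
        import torch
        return torch.cuda.is_available()
    except ImportError:
        return False


@unittest.skipUnless(cuda_available(), "CUDA not available")
class TestCudaOps(unittest.TestCase):
    def test_tensor_on_gpu(self):
        import torch
        t = torch.ones(4, device="cuda")
        self.assertEqual(t.sum().item(), 4.0)
```

Because the probe catches `ImportError` and falls back to `False`, the same file imports cleanly on machines without PyTorch; the class simply reports as skipped.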
## Test Output

The test runner provides detailed console output, including:
- Dependency checking results
- Test execution progress
- Individual test results
- Summary statistics
- Performance metrics

Generated files:
- JSON report with detailed results (`test_report.json`)
- Coverage reports (if `--coverage` is used)
- Performance benchmark results
### Example Output

```
==================== COMPREHENSIVE TESTS ====================
test_basic_slam_pipeline (__main__.TestPythonSLAMCore) ... ok
test_feature_extraction (__main__.TestPythonSLAMCore) ... ok
...

============================================================
TEST SUITE SUMMARY
============================================================
Total execution time: 45.23s
Total tests run: 89
Passed: 85
Failed: 2
Errors: 0
Skipped: 2
Success rate: 95.5%

Category breakdown:
  ✓ comprehensive   25 tests,  96.0% success,  8.45s
  ✓ gpu             18 tests, 100.0% success, 12.34s
  ✗ gui             15 tests,  86.7% success,  5.67s
  ✓ benchmarking    21 tests, 100.0% success, 11.23s
  ✓ integration     10 tests, 100.0% success,  7.54s
```
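The summary percentage is a plain ratio of passed to total tests; for the run above, 85 of 89 tests passed:

```python
total, passed = 89, 85
success_rate = 100 * passed / total
print(f"Success rate: {success_rate:.1f}%")  # → Success rate: 95.5%
```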
## Continuous Integration

The test suite is designed to work with GitHub Actions. Example workflow:

```yaml
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.8', '3.9', '3.10', '3.11']
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
      - name: Run tests
        run: python tests/run_tests.py --coverage
```

For local development, you can run tests automatically on file changes using tools such as pytest-watch or watchdog.
## Writing New Tests

Follow the existing test structure:

```python
import unittest
import sys
import os

# Add src directory to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))


class TestNewComponent(unittest.TestCase):
    """Test new component functionality."""

    def setUp(self):
        """Set up test environment."""
        pass

    def tearDown(self):
        """Clean up test environment."""
        pass

    def test_component_functionality(self):
        """Test specific functionality."""
        try:
            from python_slam.new_component import NewComponent
            component = NewComponent()
            self.assertIsNotNone(component)
        except ImportError:
            self.skipTest("New component not available")


if __name__ == "__main__":
    unittest.main(verbosity=2)
```

### Best Practices

- Use `skipTest()` for optional dependencies
- Clean up resources in `tearDown()`
- Use meaningful test names
- Test both success and failure cases
- Mock external dependencies when possible
- Include performance tests for critical paths
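For the mocking guideline, the standard library's `unittest.mock` is usually sufficient. A minimal sketch with a stubbed camera dependency (the camera interface here is hypothetical, chosen only to illustrate the pattern):

```python
import unittest
from unittest.mock import MagicMock


class TestWithMockedCamera(unittest.TestCase):
    """Stub an external dependency instead of touching real hardware."""

    def test_frame_read(self):
        camera = MagicMock()
        camera.read.return_value = (True, "fake-frame")  # canned response

        ok, frame = camera.read()

        self.assertTrue(ok)
        self.assertEqual(frame, "fake-frame")
        camera.read.assert_called_once()


# Run the case programmatically (avoids unittest.main() taking over the process).
suite = unittest.TestLoader().loadTestsFromTestCase(TestWithMockedCamera)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

`MagicMock` records every call, so the test can verify both the returned value and that the dependency was exercised exactly once.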
## Adding New Test Categories

To add a new test category, update the `test_categories` dictionary in `run_tests.py`:

```python
self.test_categories = {
    "new_category": "test_new_category.py",
    # ... existing categories
}
```

## Troubleshooting
1. **Import Errors**
   - Ensure all dependencies are installed
   - Check the Python path configuration
   - Verify the src directory structure

2. **GPU Test Failures**
   - Check GPU drivers and libraries
   - Verify the CUDA/ROCm installation
   - Tests should skip gracefully if no GPU is available

3. **GUI Test Failures**
   - May require a display server (use Xvfb on headless systems)
   - Check the PyQt6/PySide6 installation
   - Some tests may need to be skipped in CI environments

4. **Memory Issues**
   - Large datasets may cause memory pressure
   - Reduce test data size if needed
   - Ensure proper cleanup in `tearDown()` methods
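For the GUI case, a test file can guard itself with a display probe so it skips cleanly instead of crashing on a headless machine. A minimal sketch (the `display_available` helper and test class are illustrative, not part of the suite):

```python
import os
import sys
import unittest


def display_available() -> bool:
    """Heuristic: GUI toolkits on Linux need X11/Wayland; Windows/macOS always have a display."""
    if sys.platform.startswith(("win", "darwin")):
        return True
    return bool(os.environ.get("DISPLAY") or os.environ.get("WAYLAND_DISPLAY"))


@unittest.skipUnless(display_available(), "no display server; run under xvfb-run")
class TestGuiWidget(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)
```

On a headless CI machine, the same tests can typically be run under a virtual display, e.g. `xvfb-run -a python tests/run_tests.py --categories gui`.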
### Debugging

For debugging test failures, increase verbosity and run specific test files:

```bash
python -m unittest tests.test_gpu_acceleration.TestGPUDetector.test_cuda_detection -v
```

## Contributing

When contributing new features:
- Write corresponding tests
- Ensure all existing tests pass
- Add integration tests for component interactions
- Update documentation if test structure changes
- Consider performance implications
## Performance Monitoring

The test suite includes performance monitoring:
- Execution time tracking
- Memory usage monitoring
- GPU utilization metrics
- Benchmark comparisons
Use the `--performance` flag to run additional performance tests that measure system capabilities and identify potential bottlenecks.
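Execution-time tracking of this kind needs nothing beyond the standard library. A minimal sketch, where the `measure` helper is illustrative rather than an API exposed by the suite:

```python
import time


def measure(fn, *args, **kwargs):
    """Call fn once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start


result, elapsed = measure(sum, range(1_000))
print(f"sum took {elapsed * 1e6:.1f} µs, result={result}")
```

`time.perf_counter()` is preferred over `time.time()` here because it is monotonic and has the highest available resolution, which matters for short-running test phases.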