Thank you for your interest in contributing to VersionTracker! This document provides comprehensive guidelines for contributing to the project.

## Table of Contents
- Code of Conduct
- Getting Started
- Development Setup
- Project Architecture
- Coding Standards
- Testing Guidelines
- Performance Considerations
- Pull Request Process
- Issue Reporting
- Release Process
## Code of Conduct

This project adheres to a code of conduct. By participating, you are expected to:
- Be respectful and inclusive in all interactions
- Provide constructive feedback and criticism
- Focus on what is best for the community
- Show empathy towards other community members
## Getting Started

Prerequisites:

- macOS (primary development platform)
- Python 3.10 or later
- Homebrew package manager
- Git for version control
## Development Setup

```bash
# Fork and clone the repository
git clone https://github.com/your-username/versiontracker.git
cd versiontracker

# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install development dependencies
pip install -c constraints.txt -r requirements.txt
pip install -c constraints.txt -r requirements-dev.txt
pip install -e .

# Install pre-commit hooks
pre-commit install
```

Always use a virtual environment to isolate dependencies:

```bash
python3 -m venv .venv
source .venv/bin/activate   # Linux/macOS
# or
.venv\Scripts\activate      # Windows
```

The project uses several categories of dependencies:
- Core dependencies (`requirements.txt`): Runtime dependencies
- Development dependencies (`requirements-dev.txt`): Testing, linting, and formatting tools
- Constraints (`constraints.txt`): Version pinning for reproducible builds
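The pinning in `constraints.txt` can also be sanity-checked programmatically. The sketch below is illustrative, not part of the project's tooling; `parse_pins` and `check_pins` are hypothetical helper names:

```python
"""Compare installed package versions against constraints.txt pins (illustrative)."""
from importlib import metadata


def parse_pins(constraints_text: str) -> dict[str, str]:
    """Parse `name==version` lines, ignoring comments and blank lines."""
    pins = {}
    for line in constraints_text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip().lower()] = version.strip()
    return pins


def check_pins(pins: dict[str, str]) -> list[str]:
    """Return mismatch messages for installed packages that diverge from pins."""
    problems = []
    for name, pinned in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # not installed; nothing to compare
        if installed != pinned:
            problems.append(f"{name}: installed {installed}, pinned {pinned}")
    return problems
```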
Essential tools for development:
```bash
# Code formatting and linting
ruff check .                # Linting
ruff format .               # Code formatting
mypy versiontracker         # Type checking

# Testing
pytest                      # Run all tests
pytest --cov                # Run with coverage
pytest -m "not slow"        # Skip slow tests

# Security and quality
bandit -r versiontracker/   # Security analysis
safety check                # Dependency vulnerability scan
```

## Project Architecture

```
versiontracker/
├── __init__.py          # Package initialization
├── __main__.py          # CLI entry point
├── cli.py               # Command-line interface
├── apps.py              # Application discovery and analysis
├── homebrew.py          # Homebrew integration (sync)
├── async_homebrew.py    # Homebrew integration (async)
├── async_network.py     # Async network utilities
├── cache.py             # Basic caching functionality
├── advanced_cache.py    # Multi-tier caching system
├── profiling.py         # Performance monitoring
├── utils.py             # Utility functions
├── config.py            # Configuration management
├── exceptions.py        # Custom exceptions
└── version.py           # Version information
```
- CLI Layer (`cli.py`): Handles command-line parsing and user interaction
- Core Logic (`apps.py`, `homebrew.py`): Business logic for app discovery and version checking
- Network Layer (`async_network.py`, `async_homebrew.py`): Handles API communication
- Caching Layer (`cache.py`, `advanced_cache.py`): Performance optimization
- Utilities (`utils.py`, `config.py`, `profiling.py`): Supporting functionality
1. User Input → CLI parsing and validation
2. Configuration → Load user preferences and settings
3. App Discovery → Scan filesystem for applications
4. Homebrew Integration → Fetch package information (with caching)
5. Data Analysis → Compare versions and generate recommendations
6. Output → Format and display results
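The "compare versions" step can be sketched in a few lines. This is a deliberately simplified illustration using numeric tuples; the project's real comparison logic in `apps.py`/`homebrew.py` presumably handles more version formats:

```python
import re


def parse_version(version: str) -> tuple[int, ...]:
    """Split a dotted version string into a numeric tuple for comparison."""
    return tuple(int(part) for part in re.findall(r"\d+", version))


def is_outdated(installed: str, available: str) -> bool:
    """Return True when the available version is newer than the installed one."""
    # Tuple comparison gets "1.10.0" > "1.2.0" right, unlike string comparison.
    return parse_version(available) > parse_version(installed)
```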
## Coding Standards

- Formatter: Ruff for code formatting
- Linter: Ruff for linting with aggressive rules
- Type Checker: MyPy for static type analysis
- Line Length: 100 characters maximum (enforced by Ruff)
- Imports: Organized with `isort` integration
```python
from typing import Any, Dict


# Good: Clear, typed function with docstring
async def fetch_cask_info(
    cask_name: str,
    timeout: int = DEFAULT_TIMEOUT,
    use_cache: bool = True,
) -> Dict[str, Any]:
    """Fetch information about a Homebrew cask asynchronously.

    Args:
        cask_name: Name of the cask
        timeout: Request timeout in seconds
        use_cache: Whether to use cached results

    Returns:
        Dict containing cask information

    Raises:
        NetworkError: If there's a network issue
        TimeoutError: If the request times out
    """
    # Implementation...
```

Type hints:

- Required for all public functions and methods
- Encouraged for internal functions
- Use the `typing` module for complex types
- Follow PEP 484 guidelines
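A short sketch of these conventions; the `CaskInfo` shape and both helper names are hypothetical, not project APIs:

```python
from typing import Any, Optional, TypedDict


class CaskInfo(TypedDict):
    """Shape of the cask data used in this sketch (hypothetical fields)."""

    name: str
    version: str


def describe(info: CaskInfo) -> str:
    """TypedDict keys are checked statically, so key typos surface in MyPy."""
    return f"{info['name']} {info['version']}"


def pick_version(info: dict[str, Any], default: Optional[str] = None) -> Optional[str]:
    """Optional[...] documents that a value may legitimately be absent."""
    value = info.get("version", default)
    return str(value) if value is not None else None
```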
- Docstrings: Google-style docstrings for all public functions
- Inline comments: For complex logic or non-obvious code
- README updates: For user-facing changes
- Architecture docs: For significant design changes
```python
# Use custom exceptions for domain-specific errors
import aiohttp

from versiontracker.exceptions import HomebrewError, NetworkError

try:
    result = await fetch_data()
except aiohttp.ClientError as e:
    raise NetworkError(f"Failed to fetch data: {e}") from e
```

## Testing Guidelines

```
tests/
├── test_apps.py             # Application discovery tests
├── test_homebrew.py         # Homebrew integration tests
├── test_async_homebrew.py   # Async homebrew tests
├── test_cache.py            # Caching functionality tests
├── test_profiling.py        # Performance profiling tests
├── test_integration.py      # End-to-end integration tests
├── mock_homebrew_server.py  # Mock server for testing
└── fixtures/                # Test data and fixtures
```
- Unit Tests: Test individual functions and classes
- Integration Tests: Test component interactions
- Performance Tests: Ensure performance requirements
- Regression Tests: Prevent reintroduction of bugs
```python
import pytest
from unittest.mock import AsyncMock, patch


class TestHomebrewIntegration:
    """Test cases for Homebrew integration."""

    @pytest.mark.asyncio
    async def test_fetch_cask_info_success(self):
        """Test successful cask information retrieval."""
        # Arrange
        expected_data = {"name": "test-app", "version": "1.0.0"}

        # Act
        with patch("aiohttp.ClientSession.get") as mock_get:
            # json() is a coroutine, so it must be mocked with AsyncMock
            mock_get.return_value.__aenter__.return_value.json = AsyncMock(
                return_value=expected_data
            )
            result = await fetch_cask_info("test-app")

        # Assert
        assert result == expected_data
        mock_get.assert_called_once()

    @pytest.mark.network
    def test_network_integration(self):
        """Test real network integration."""
        # Only runs when network tests are enabled
        pass
```

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=versiontracker --cov-report=html

# Run specific test categories
pytest -m "not slow"     # Skip slow tests
pytest -m integration    # Only integration tests
pytest -m network        # Only network tests

# Run with verbose output
pytest -v --tb=short
```

Coverage requirements:

- Minimum: 70% overall coverage
- Target: 85% overall coverage
- Critical paths: 95%+ coverage required
- New code: Must maintain or improve coverage
## Performance Considerations

- Use `async`/`await` for I/O operations
- Implement proper timeout handling
- Use `aiohttp` for HTTP requests
- Batch operations when possible
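Timeout handling from the list above can be sketched with stdlib `asyncio` alone; `fetch_with_timeout` and `slow` are illustrative names, not project APIs:

```python
import asyncio


async def fetch_with_timeout(coro, timeout: float = 10.0):
    """Wrap any awaitable with a hard timeout instead of waiting forever."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except asyncio.TimeoutError:
        # Re-raise with a clearer message; wait_for has already cancelled the task
        raise TimeoutError(f"operation exceeded {timeout}s") from None


async def slow() -> str:
    """Stand-in for a slow network call."""
    await asyncio.sleep(5)
    return "done"
```

For example, `asyncio.run(fetch_with_timeout(slow(), timeout=0.01))` raises `TimeoutError` instead of blocking for five seconds.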
```python
import asyncio
from typing import Any, Dict, List


# Good: Asynchronous batch processing
async def process_casks_batch(cask_names: List[str]) -> List[Dict[str, Any]]:
    """Process multiple casks concurrently."""
    tasks = [fetch_cask_info(name) for name in cask_names]
    # return_exceptions=True keeps a single failure from cancelling the whole batch
    return await asyncio.gather(*tasks, return_exceptions=True)
```

Caching strategy:

- Use appropriate cache levels (memory, disk, compressed)
- Implement cache invalidation
- Monitor cache hit rates
- Consider cache size limits
Memory management:

- Use generators for large datasets
- Implement proper resource cleanup
- Monitor memory usage in tests
- Profile memory-intensive operations
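Using generators for large datasets, as recommended above, can look like this hypothetical batching helper, which yields work in fixed-size chunks instead of materializing the whole list:

```python
from typing import Iterable, Iterator, List


def batched(items: Iterable[str], size: int) -> Iterator[List[str]]:
    """Yield fixed-size batches lazily; memory use stays bounded by `size`."""
    batch: List[str] = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch
```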
## Pull Request Process

- Create feature branch from `master`
- Make atomic commits with clear messages
- Add/update tests for your changes
- Run full test suite and ensure it passes
- Update documentation if needed
- Check performance impact for significant changes
- Title: Clear, descriptive title following conventional commits
- Description:
  - What changes were made and why
  - Any breaking changes
  - Testing instructions
  - Related issues
- Size: Keep PRs focused and reasonably sized
- Commits: Squash if needed for clean history
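A title "following conventional commits" can be checked mechanically. The type list below is the common conventional-commits set, assumed rather than taken from this project's configuration:

```python
import re

# Common conventional-commit types; adjust to the project's actual conventions.
COMMIT_TITLE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore)"
    r"(\([\w./-]+\))?(!)?: .+"
)


def is_conventional(title: str) -> bool:
    """Check a PR/commit title against the conventional-commits shape."""
    return bool(COMMIT_TITLE.match(title))
```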
PR description template:

```markdown
## Summary
Brief description of changes made.

## Changes
- [ ] Feature: Add async batch processing
- [ ] Tests: Add integration tests for batch operations
- [ ] Docs: Update README with new async examples

## Testing
- All existing tests pass
- Added 5 new test cases
- Performance testing shows 40% improvement

## Breaking Changes
None.

## Related Issues
Closes #123
```

Review requirements:

- Automated checks must pass (CI/CD pipeline)
- Code review by at least one maintainer
- Performance review for performance-critical changes
- Documentation review for user-facing changes
## Issue Reporting

Bug report template:

**Bug Description**
A clear description of the bug.
**Reproduction Steps**
1. Step one
2. Step two
3. Step three
**Expected Behavior**
What should happen.
**Actual Behavior**
What actually happens.
**Environment**
- macOS version:
- Python version:
- VersionTracker version:
- Homebrew version:
**Additional Context**
Logs, screenshots, or other helpful information.

Feature request template:

**Feature Description**
Clear description of the proposed feature.
**Use Case**
Why is this feature needed? What problem does it solve?
**Proposed Solution**
How should this feature work?
**Alternatives Considered**
Other approaches you've considered.
**Additional Context**
Any other context or examples.

For performance issues, include profiling data, performance benchmarks, and specific scenarios where performance is inadequate.
## Release Process

- Semantic Versioning: MAJOR.MINOR.PATCH
- Version file: Update `versiontracker/version.py`
- Changelog: Maintain `CHANGELOG.md`
- Git tags: Tag releases in git
Release steps:

1. Update version number in `version.py`
2. Update `CHANGELOG.md` with release notes
3. Run full test suite on multiple Python versions
4. Performance testing to ensure no regressions
5. Create release PR for final review
6. Tag release after merge
7. GitHub Actions handles PyPI deployment
Version bump guidelines:

- Patch (0.0.X): Bug fixes, no new features
- Minor (0.X.0): New features, backward compatible
- Major (X.0.0): Breaking changes, major new features
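The three bump types map to a simple helper; `bump` is illustrative, not part of the project's release tooling:

```python
def bump(version: str, part: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version string by the given part."""
    major, minor, patch = (int(p) for p in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"   # breaking changes reset minor and patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # new features reset patch
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")
```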
- Issues: GitHub issues for bugs and feature requests
- Discussions: GitHub discussions for questions and ideas
- Email: Maintainer email for security issues
When asking for help:

- Be respectful and constructive
- Search existing issues before creating new ones
- Provide clear, detailed information
- Be patient with response times
Additional resources:

- Project README
- Architecture Documentation (coming soon)
- API Documentation (coming soon)
- Performance Guide (coming soon)
Thank you for contributing to VersionTracker! 🚀