# Contributing to GitHub Analyzer

Thank you for your interest in contributing to GitHub Analyzer! This document provides guidelines and instructions for contributing.

## Table of Contents

- Code of Conduct
- Getting Started
- Development Setup
- Development Workflow
- Code Style
- Testing
- Commit Messages
- Pull Requests
- Reporting Issues
## Code of Conduct

By participating in this project, you agree to maintain a respectful and inclusive environment. Please:
- Be respectful and constructive in discussions
- Welcome newcomers and help them get started
- Focus on the issue, not the person
- Accept constructive criticism gracefully
## Getting Started

### Prerequisites

- Python 3.9 or higher
- Git
- A GitHub account
### Fork and Clone

- Fork the repository on GitHub by clicking the "Fork" button
- Clone your fork locally:

  ```bash
  git clone https://github.com/YOUR_USERNAME/github_analyzer.git
  cd github_analyzer
  ```

- Add the upstream remote:

  ```bash
  git remote add upstream https://github.com/Oltrematica/github_analyzer.git
  ```
## Development Setup

```bash
# Create a virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install development dependencies
pip install -r requirements-dev.txt

# Install optional dependencies
pip install -r requirements.txt
```

Verify the setup:

```bash
# Run tests
pytest tests/ -v

# Run linter
ruff check src/github_analyzer/

# Run type checker
mypy src/github_analyzer/
```

## Development Workflow

Always create a new branch for your work:
```bash
# Sync with upstream first
git fetch upstream
git checkout main
git merge upstream/main

# Create feature branch
git checkout -b feat/your-feature-name
```

Branch naming conventions:

- `feat/description` - New features
- `fix/description` - Bug fixes
- `docs/description` - Documentation changes
- `refactor/description` - Code refactoring
- `test/description` - Test additions or fixes
Make your changes:

- Write clean, readable code
- Follow the code style guidelines
- Add tests for new functionality
- Update documentation as needed
Before committing, run the checks:

```bash
# Run all tests
pytest tests/ -v

# Run with coverage
pytest --cov=src/github_analyzer --cov-report=term-missing

# Check coverage meets threshold (90%)
pytest --cov=src/github_analyzer --cov-fail-under=90

# Run linter
ruff check src/github_analyzer/

# Run type checker
mypy src/github_analyzer/
```

Then commit and push:

```bash
git add .
git commit -m "feat(scope): description"
git push origin feat/your-feature-name
```

Go to GitHub and open a Pull Request from your branch to `main`.
## Code Style

### Python Version

- Python 3.9+ compatibility is required
- Avoid syntax introduced after Python 3.9 (for example, structural pattern matching and runtime use of the `X | Y` union syntax, both of which require 3.10)
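As a small sketch (the function below is hypothetical), the `from __future__ import annotations` import lets new-style union annotations parse on Python 3.9, since annotations are then stored as strings rather than evaluated:

```python
from __future__ import annotations

# With the future import above, annotations are stored as strings,
# so `int | None` in an annotation parses on Python 3.9 even though
# evaluating `int | None` at runtime requires Python 3.10.
def parse_count(raw: str) -> int | None:
    try:
        return int(raw)
    except ValueError:
        return None

print(parse_count("42"))   # 42
print(parse_count("n/a"))  # None
```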
### Type Hints

All functions must have type hints:

```python
def calculate_metrics(commits: list[Commit], days: int) -> dict[str, float]:
    """Calculate quality metrics from commits.

    Args:
        commits: List of commit objects to analyze.
        days: Number of days in the analysis period.

    Returns:
        Dictionary mapping metric names to values.
    """
    ...
```

### Linting and Formatting

We use ruff for linting and formatting:
```bash
# Check for issues
ruff check src/github_analyzer/

# Auto-fix issues
ruff check --fix src/github_analyzer/
```

Key style rules:

- Line length: 100 characters max
- Use double quotes for strings
- Use 4 spaces for indentation
- Follow PEP 8 conventions
### Docstrings

Use Google-style docstrings for public APIs:

```python
def fetch_commits(repo: Repository, since: datetime) -> list[Commit]:
    """Fetch commits from a repository.

    Args:
        repo: Repository to fetch from.
        since: Only fetch commits after this date.

    Returns:
        List of Commit objects.

    Raises:
        APIError: If the API request fails.
        RateLimitError: If rate limit is exceeded.
    """
```

### Import Order

Organize imports in this order:

1. Standard library
2. Third-party packages
3. Local imports
```python
from __future__ import annotations

import json
from datetime import datetime
from typing import TYPE_CHECKING

from src.github_analyzer.api.client import GitHubClient
from src.github_analyzer.core.exceptions import APIError

if TYPE_CHECKING:
    from src.github_analyzer.config.settings import AnalyzerConfig
```

## Testing

### Test Structure

Tests mirror the source structure:
```
tests/
├── unit/
│   ├── api/
│   │   ├── test_client.py
│   │   └── test_models.py
│   ├── analyzers/
│   │   ├── test_commits.py
│   │   └── test_quality.py
│   └── config/
│       └── test_settings.py
└── integration/
    └── test_full_workflow.py
```
### Writing Tests

```python
import pytest
from unittest.mock import Mock, patch

from src.github_analyzer.analyzers.commits import CommitAnalyzer


class TestCommitAnalyzer:
    """Tests for CommitAnalyzer class."""

    @pytest.fixture
    def mock_client(self):
        """Create a mock GitHub client."""
        return Mock()

    def test_fetches_commits_from_api(self, mock_client):
        """Test that analyzer fetches commits from the API."""
        mock_client.get_paginated.return_value = [{"sha": "abc123"}]
        analyzer = CommitAnalyzer(mock_client)
        # ... test implementation
```

### Testing Guidelines

- Coverage: Minimum 90% code coverage
- Unit tests: All new code must have tests
- Mocking: Mock external dependencies (GitHub API, file system)
- Fixtures: Use pytest fixtures for reusable test data
- Naming: Test names should describe the behavior being tested
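As an illustrative sketch of the mocking guideline (the `fetch_commit_shas` helper and its endpoint path are hypothetical, not part of the codebase), a `Mock` can stand in for the real client so no live API call is made:

```python
from unittest.mock import Mock

# Hypothetical helper used only for this example.
def fetch_commit_shas(client, repo):
    """Return the SHA of each commit reported by the client."""
    return [c["sha"] for c in client.get_paginated(f"/repos/{repo}/commits")]

# In a test, a Mock replaces the real GitHub client:
mock_client = Mock()
mock_client.get_paginated.return_value = [{"sha": "abc123"}, {"sha": "def456"}]

shas = fetch_commit_shas(mock_client, "octocat/hello-world")

# Verify both the result and the call the helper made.
mock_client.get_paginated.assert_called_once_with("/repos/octocat/hello-world/commits")
print(shas)  # ['abc123', 'def456']
```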
```bash
# Run single test file
pytest tests/unit/api/test_client.py -v

# Run tests matching pattern
pytest -k "test_fetch" -v

# Run with verbose output
pytest tests/ -v --tb=long
```

## Commit Messages

We use the Conventional Commits format:

```
<type>(<scope>): <description>

[optional body]

[optional footer]
```
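A header in this format can be checked mechanically; a simplified sketch (illustrative only, not part of the project's tooling):

```python
import re

# Simplified check for a Conventional Commits header line:
# type, optional (scope), optional "!" for breaking changes,
# then ": description".
HEADER = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(\([a-z0-9-]+\))?!?: .+$"
)

print(bool(HEADER.match("feat(api)!: change response format")))  # True
print(bool(HEADER.match("update stuff")))                        # False
```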
| Type | Description |
|---|---|
| `feat` | New feature |
| `fix` | Bug fix |
| `docs` | Documentation only |
| `style` | Code style (formatting, no logic change) |
| `refactor` | Code refactoring |
| `perf` | Performance improvement |
| `test` | Adding or updating tests |
| `build` | Build system or dependencies |
| `ci` | CI/CD configuration |
| `chore` | Maintenance tasks |
| `revert` | Revert a previous commit |
### Scope

Optional, but helpful. Common scopes:

- `api` - API client
- `cli` - Command-line interface
- `config` - Configuration
- `analyzers` - Analysis modules
- `exporters` - CSV export
### Examples

```
# Feature
feat(api): add retry logic for rate-limited requests

# Bug fix
fix(cli): handle empty repository list gracefully

# Documentation
docs(readme): add troubleshooting section

# Test
test(analyzers): add unit tests for quality metrics

# Breaking change (note the !)
feat(api)!: change response format for commits endpoint

BREAKING CHANGE: The commits endpoint now returns a different structure.
```

## Pull Requests

### Before Opening a PR

- Tests pass locally (`pytest tests/ -v`)
- Linter passes (`ruff check src/github_analyzer/`)
- Type checker passes (`mypy src/github_analyzer/`)
- Coverage is ≥90%
- Documentation is updated (if applicable)
- Commit messages follow conventions
### PR Description

When opening a PR, include:

```markdown
## Summary
Brief description of the changes.

## Changes
- Change 1
- Change 2

## Test Plan
- [ ] Unit tests added/updated
- [ ] Manual testing performed

## Related Issues
Fixes #123
```

### Review Process

1. Automated checks must pass (CI, linting, tests)
2. Code review by at least one maintainer
3. Discussion - respond to feedback constructively
4. Approval - once approved, a maintainer will merge

After merge:

- Delete your feature branch
- Pull the latest `main` to stay updated
## Reporting Issues

### Bug Reports

Include:

- Python version
- Operating system
- Steps to reproduce
- Expected vs actual behavior
- Error messages (if any)
### Feature Requests

Include:

- Use case description
- Proposed solution
- Alternatives considered

### Security Issues

For security vulnerabilities, please do not open a public issue. Instead, contact the maintainers directly.
### General Tips

- Open an issue for discussion before starting major changes
- Check existing issues and PRs to avoid duplicates
- Be patient - maintainers review PRs as time allows
Thank you for contributing!