This directory contains the CI/CD pipeline configuration for the concurrent library.
Documentation can be found here.
The CI pipeline includes the following jobs:

**Test**
- Runs on multiple Go versions (1.23, 1.24.3)
- Executes all tests with race detection
- Generates coverage reports
- Uploads coverage to Codecov

**Benchmark**
- Runs comprehensive performance benchmarks
- Parses and analyzes benchmark results
- Checks performance thresholds
- Uploads benchmark results as artifacts

**Memory Profiling**
- Runs memory profiling benchmarks
- Analyzes memory usage patterns
- Checks memory thresholds
- Uploads memory analysis results

**Security**
- Runs gosec security scanner
- Checks for vulnerabilities with govulncheck
- Ensures code security standards

**Build & Examples**
- Builds the project for multiple platforms
- Tests all example programs
- Ensures cross-platform compatibility

**Regression Detection**
- Compares current performance with previous runs
- Detects performance regressions
- Generates performance reports
- Only runs on push/schedule events

**Summary**
- Final validation of all pipeline stages
- Ensures all quality metrics are met
- Provides comprehensive status report
The pipeline is configured via the following environment variables:

- `GO_VERSION`: Go version to use (default: 1.24.3)
- `BENCHMARK_THRESHOLD_NS`: Performance threshold in nanoseconds (default: 100000)
- `MEMORY_THRESHOLD_MB`: Memory threshold in MB (default: 50)
- `COVERAGE_THRESHOLD`: Coverage threshold percentage (default: 80)

The pipeline is triggered by the following events:
- Push: Runs on pushes to main/develop branches
- Pull Request: Runs on PRs to main/develop branches
- Schedule: Daily at 2 AM UTC for regression detection
The pipeline uses several custom scripts in the scripts/ directory:
- `parse_benchmarks.go`: Parses benchmark output into JSON
- `check_thresholds.go`: Validates performance thresholds
- `check_memory.go`: Validates memory usage thresholds
- `compare_benchmarks.go`: Compares performance between runs
- `generate_report.go`: Generates performance reports
- `test_ci.sh`: Local CI testing script
To test the CI pipeline locally:
```bash
# Run the full CI pipeline locally
./scripts/test_ci.sh

# Or run individual components
make test-all
make coverage
go test -bench=. -benchmem
```

The pipeline generates several artifacts:
- benchmark-results: Raw benchmark data and summaries
- memory-results: Memory profiling data and analysis
- performance-report: Generated performance reports
The pipeline enforces the following quality gates:
- ✅ All tests must pass
- ✅ No race conditions detected
- ✅ Coverage above threshold (80%)
- ✅ Performance within thresholds
- ✅ Memory usage within limits
- ✅ No security vulnerabilities
- ✅ All examples build and run
- ✅ No performance regressions
The pipeline provides detailed monitoring:
- Real-time test results
- Performance trend analysis
- Memory usage tracking
- Security vulnerability reports
- Coverage trend analysis
- Performance Thresholds: If benchmarks fail, check whether the threshold is too strict
- Memory Limits: If memory checks fail, analyze the memory profile
- Race Conditions: Use the `-race` flag to detect concurrency issues
- Coverage: Ensure all new code is properly tested
Useful debugging commands:

```bash
# Run specific tests
go test -v -run TestSpecific

# Run benchmarks with detailed output
go test -bench=. -benchmem -v

# Analyze memory usage
go tool pprof mem.prof

# Check race conditions
go test -race -v
```

Current performance baselines (as of latest run):
- Pool: ~50,000 ns/op
- MapConcurrent: ~49,000 ns/op
- FanOut: ~43,000 ns/op
- FanIn: ~37,000 ns/op
- Pipeline: ~90,000 ns/op
- RateLimiter: ~8 ns/op
- CircuitBreaker: ~93 ns/op
These baselines are used for regression detection and threshold validation.