A Formula 1 analytics dashboard web application with race predictions and detailed analysis of past results.
- Overview Dashboard: Next race countdown, predicted podium, season points progression, team performance, and recent races
- Race Weekend View: Detailed analysis with circuit info, weather, qualifying results, race analysis, and predictions
- Drivers Page: Comprehensive driver statistics, performance charts, and individual driver detail views
- Constructors Page: Team standings, driver contributions, and championship outlook
- Historical Analytics: Deep dive into past results with customizable metrics and time ranges
- Predictions: Season title odds, race predictions, and "what-if" scenario modeling
- Settings: Theme customization, ML model configuration, and data management
- Data Export: Export data to CSV or JSON format
- Data Refresh: Manual cache clearing and data refresh
- Error Handling: Global error boundary with graceful error recovery
- Real-Time Telemetry: WebSocket client for live race weekend data (ready for integration)
- React 18 + TypeScript - Modern UI framework with type safety
- Vite - Fast build tool and dev server
- Tailwind CSS - Utility-first CSS framework
- Recharts - React charting library
- React Router - Client-side routing
- date-fns - Date utility library
- Lucide React - Icon library
- Clone the repository (or navigate to the project directory)
- Install dependencies:
  npm install
- Start the development server:
  npm run dev
- Build for production:
  npm run build
- Preview production build:
  npm run preview
src/
├── components/ # Reusable UI components
│ ├── layout/ # Layout components (Sidebar, TopBar, MainLayout)
│ └── ui/ # Base UI components (Button, Input, Select, StatCard, ErrorBoundary, etc.)
├── hooks/ # Custom React hooks
│ ├── useTheme.ts # Theme management hook
│ └── useTelemetry.ts # Real-time telemetry hook
├── lib/ # Core business logic
│ ├── api/ # API clients and data service
│ │ ├── openF1Client.ts # OpenF1 API client
│ │ ├── openF1Transformers.ts # OpenF1 data transformers
│ │ ├── rateLimiter.ts # Rate limiting utility
│ │ └── f1DataService.ts # Main data service with caching
│ ├── data/ # Data layer
│ │ ├── mockData.ts # Legacy file (not used - all data from API)
│ │ ├── dataUtils.ts # Data query utilities
│ │ └── telemetry.ts # WebSocket telemetry client
│ ├── predictions/ # Prediction engine
│ │ ├── predictionEngine.ts # Heuristic-based prediction algorithms
│ │ └── mlService.ts # ML prediction service interface
│ └── utils/ # Utility functions
│ ├── export.ts # Data export (CSV/JSON)
│ ├── refresh.ts # Data refresh utilities
│ ├── retry.ts # Retry logic with backoff
│ ├── analytics.ts # Advanced analytics calculations
│ └── performance.ts # Performance optimization utilities
├── pages/ # Page components
│ ├── Overview.tsx
│ ├── RaceWeekend.tsx
│ ├── Drivers.tsx
│ ├── Constructors.tsx
│ ├── Historical.tsx
│ ├── Predictions.tsx
│ └── Settings.tsx
├── types/ # TypeScript type definitions
│ └── index.ts # Domain models (Driver, Team, Race, Circuit, etc.)
├── App.tsx # Main app component with routing
├── main.tsx # Application entry point
└── index.css # Global styles and Tailwind configuration
The application uses the OpenF1 API exclusively for real-time F1 data:
- ✅ OpenF1 API (https://api.openf1.org) - Modern, free, no CORS issues
- ✅ Rate limiting - Automatic throttling (400ms minimum interval) with exponential backoff retry logic
- ✅ Response caching (5 minutes) for improved performance
- ✅ Loading states and comprehensive error handling
- ✅ Custom React hooks for data fetching
- ✅ Detailed logging for debugging and monitoring
- ✅ Data refresh utilities - Manual cache clearing and refresh
- ✅ Data export - CSV and JSON export functionality
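The throttling-plus-retry behaviour described above (400ms minimum interval, exponential backoff) can be sketched roughly as follows. The names and constants here are illustrative, not the actual rateLimiter.ts API:

```typescript
// Illustrative sketch of a minimum-interval throttle with
// exponential-backoff retries; not the actual rateLimiter.ts code.
const MIN_INTERVAL_MS = 400;
let lastRequestAt = 0;

// Ensure at least MIN_INTERVAL_MS between consecutive requests.
async function throttled<T>(fn: () => Promise<T>): Promise<T> {
  const wait = Math.max(0, lastRequestAt + MIN_INTERVAL_MS - Date.now());
  if (wait > 0) await new Promise((r) => setTimeout(r, wait));
  lastRequestAt = Date.now();
  return fn();
}

// Retry a throttled request with exponential backoff: 500ms, 1s, 2s, ...
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await throttled(fn);
    } catch (err) {
      if (i >= attempts - 1) throw err;
      await new Promise((r) => setTimeout(r, 500 * 2 ** i));
    }
  }
}
```

A transient failure is absorbed by the backoff loop, so callers only see errors once all attempts are exhausted.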
Why OpenF1 instead of FastF1?
- FastF1 is a Python library, not a REST API
- OpenF1 provides similar data as a REST API (no backend needed)
- See FASTF1_BACKEND.md if you want to use FastF1 via a Python backend
Files:
- src/lib/api/openF1Client.ts - OpenF1 API client
- src/lib/api/openF1Transformers.ts - Transform OpenF1 responses
- src/lib/api/rateLimiter.ts - Rate limiting and retry logic
- src/lib/api/f1DataService.ts - Main data service with caching
- src/lib/utils/refresh.ts - Data refresh utilities
- src/lib/utils/export.ts - Data export utilities (CSV/JSON)
- URL: https://api.openf1.org
- Real-time data (paid access required for live data)
- Update ergastClient.ts to use OpenF1 endpoints
FastF1 provides live timing and telemetry data.
Integration Steps:
- Create a Python backend service that uses FastF1
- Expose REST API endpoints
- Update ergastClient.ts to use your backend
- Add authentication if needed
Build your own data aggregation service that combines multiple sources.
See API_INTEGRATION.md for detailed API integration guide.
See TENSORFLOW_MODEL_GUIDE.md for TensorFlow.js model training guide.
See FASTF1_BACKEND.md for FastF1 Python backend integration.
The prediction engine in src/lib/predictions/predictionEngine.ts uses weighted heuristics:
- Recent Form (last 5 races): 35%
- Team Performance: 30%
- Track-Specific History: 25%
- Qualifying vs Race Pace Delta: 10%
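The weighting above can be expressed as a simple score combination. The metric names below are hypothetical normalized inputs (0 to 1, higher is better), not predictionEngine.ts's actual field names:

```typescript
// Hypothetical per-driver metrics, each normalized to 0..1.
// Field names are illustrative, not the engine's real features.
interface DriverForm {
  recentForm: number;      // form over the last 5 races
  teamPerformance: number; // team's current pace
  trackHistory: number;    // driver's record at this circuit
  qualiRaceDelta: number;  // qualifying vs race pace delta
}

// Weighted heuristic score matching the percentages above.
function heuristicScore(d: DriverForm): number {
  return (
    0.35 * d.recentForm +
    0.30 * d.teamPerformance +
    0.25 * d.trackHistory +
    0.10 * d.qualiRaceDelta
  );
}
```

Ranking drivers by this score yields the predicted finishing order; the weights sum to 1, so a driver who is best on every metric scores 1.0.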
A production-ready ML service abstraction is implemented in src/lib/predictions/mlService.ts:
- ✅ Service interface - Clean abstraction for different ML backends
- ✅ TensorFlow.js integration - Fully implemented and ready to use!
- ✅ Heuristic fallback - Automatic fallback if ML models fail
- ✅ Configuration - Configurable via Settings page
- ✅ Multiple backend support - TensorFlow.js (✅), PyTorch/TensorFlow API, Cloud ML
Usage:
import { mlPredictionService } from '@/lib/predictions/mlService';
// Configure TensorFlow.js (or use Settings page)
mlPredictionService.configure({
modelType: 'tensorflow',
modelUrl: '/models/f1-predictor/model.json'
});
// Get predictions - automatically uses TensorFlow.js model
const predictions = await mlPredictionService.predictRace(2024, 5);
TensorFlow.js is now fully integrated and ready to use:
Features:
- ✅ Model loading with automatic caching
- ✅ Feature preparation (24 features per driver)
- ✅ Race, qualifying, and season predictions
- ✅ Automatic error handling with heuristic fallback
Quick Start:
- Train a model using TENSORFLOW_MODEL_GUIDE.md
- Place model in public/models/f1-predictor/
- Configure in Settings page (select "TensorFlow.js", enter model URL)
- Predictions automatically use your model!
Model Requirements:
- Input: 24 features per driver (see TENSORFLOW_MODEL_GUIDE.md)
- Output: Position predictions or probabilities
- Format: TensorFlow.js (converted from Keras/TensorFlow)
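Before inference, per-driver metrics have to be packed into the fixed-size input the model expects. A minimal sketch of that preparation step, assuming the 24-feature layout above (the actual feature list lives in TENSORFLOW_MODEL_GUIDE.md; this helper is illustrative):

```typescript
// Illustrative: pack per-driver metrics into the model's fixed-size
// input vector. Not the actual mlService.ts feature-preparation code.
const FEATURE_COUNT = 24;

function toFeatureVector(metrics: number[]): Float32Array {
  if (metrics.length > FEATURE_COUNT) {
    throw new Error(`expected at most ${FEATURE_COUNT} features`);
  }
  const vec = new Float32Array(FEATURE_COUNT); // missing features stay 0
  vec.set(metrics);
  return vec;
}
```

Keeping the vector length fixed (zero-padding any missing metrics) matters because a TensorFlow.js model's input shape is baked in at training time.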
Serve ML models via API:
- Train model using PyTorch/TensorFlow
- Deploy model as REST API (Flask/FastAPI)
- Configure endpoint and API key in Settings page
- Service automatically handles API calls
Use managed ML services:
- AWS SageMaker
- Google Cloud ML
- Azure Machine Learning
Current Status:
- ✅ Heuristic-based predictions implemented and working
- ✅ ML service interface with TensorFlow.js model loading and inference
- ✅ Configuration UI in Settings page
- 🔮 Future: Train and deploy a production-grade prediction model
A production-ready WebSocket telemetry client is implemented:
Files:
- src/lib/data/telemetry.ts - WebSocket client with reconnection logic
- src/hooks/useTelemetry.ts - React hook for telemetry data
Features:
- ✅ Automatic reconnection with exponential backoff
- ✅ Event-based message handling (position, lap, sector, flag, safety_car, weather)
- ✅ Type-safe message interfaces
- ✅ Connection state management
- ✅ React hook for easy component integration
Usage:
// Using the React hook (recommended)
import { useTelemetry } from '@/hooks/useTelemetry';
const { isConnected, data, connect, disconnect } = useTelemetry('wss://api.example.com/telemetry');
// Or using the client directly
import { createTelemetryClient } from '@/lib/data/telemetry';
const client = createTelemetryClient('wss://your-telemetry-service.com');
client.on('position', (data) => {
console.log('Position update:', data);
});
client.connect();
Data Sources:
- FastF1 WebSocket: Use FastF1's live timing data
- F1 Official API: Requires API access
- Custom WebSocket Service: Build your own real-time data service
The application uses Tailwind CSS with a custom design system:
- Dark theme by default with light theme support
- Team colors used throughout for visual consistency
- Responsive design for mobile, tablet, and desktop
- Consistent spacing and typography
Edit tailwind.config.js to customize:
- Colors
- Spacing
- Typography
- Border radius
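A customization of those options might look like the following tailwind.config.ts sketch. The token names and values here are hypothetical examples, not the project's actual design tokens:

```typescript
// Illustrative tailwind.config.ts extension; token names and values
// are hypothetical, not the project's real design system.
import type { Config } from 'tailwindcss';

const config: Config = {
  content: ['./index.html', './src/**/*.{ts,tsx}'],
  theme: {
    extend: {
      colors: {
        'team-accent': '#e10600', // hypothetical accent color token
      },
      borderRadius: {
        card: '0.75rem', // hypothetical card radius token
      },
    },
  },
};

export default config;
```

Values placed under theme.extend add to Tailwind's defaults rather than replacing them.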
Edit src/index.css for:
- CSS variables (theme colors)
- Global styles
The project uses strict TypeScript. All types are defined in src/types/index.ts.
- Components: Reusable UI components in components/
- Pages: Full page views in pages/
- Hooks: Custom React hooks in hooks/
- Lib: Business logic and utilities in lib/
- Define types in src/types/index.ts
- Add data utilities in src/lib/data/
- Create components in src/components/
- Build pages in src/pages/
- Add routes in src/App.tsx
- Data Export: Export standings, results, and analytics to CSV or JSON
  - src/lib/utils/export.ts - Export utilities
  - Available on Drivers, Constructors, Historical, and Race Weekend pages
- Data Refresh: Manual cache clearing and data refresh
  - src/lib/utils/refresh.ts - Refresh utilities
  - Available in Settings page
- Error Handling: Global error boundary for graceful error recovery
  - src/components/ui/ErrorBoundary.tsx - Error boundary component
  - Integrated in App.tsx for app-wide error catching
- Advanced Analytics: Additional statistical calculations
  - src/lib/utils/analytics.ts - Analytics utilities
  - Includes consistency, reliability, momentum, and performance breakdowns
- Performance Utilities: Optimization helpers
  - src/lib/utils/performance.ts - Debounce, throttle, batch requests
  - src/lib/utils/retry.ts - Retry logic with exponential backoff
- ML Prediction Service: Production-ready ML service interface
  - src/lib/predictions/mlService.ts - ML service abstraction
  - Configurable via Settings page
  - Ready for TensorFlow.js, PyTorch/TensorFlow API, or Cloud ML integration
- Real-Time Telemetry: WebSocket client for live data
  - src/lib/data/telemetry.ts - WebSocket client
  - src/hooks/useTelemetry.ts - React hook for telemetry
  - Automatic reconnection with exponential backoff
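The CSV side of the export feature above can be sketched as a small serializer. This is an illustrative version with quote-escaping per the common CSV convention, not the actual export.ts implementation:

```typescript
// Minimal CSV serializer sketch (illustrative; not the real export.ts).
// Fields containing commas, quotes, or newlines are quoted, and
// embedded quotes are doubled, per common CSV convention (RFC 4180).
function toCsv(rows: Record<string, string | number>[]): string {
  if (rows.length === 0) return '';
  const headers = Object.keys(rows[0]);
  const escape = (v: string | number): string => {
    const s = String(v);
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  return [
    headers.join(','),
    ...rows.map((r) => headers.map((h) => escape(r[h])).join(',')),
  ].join('\n');
}
```

JSON export is the trivial case by comparison (JSON.stringify on the same rows), which is why CSV escaping is where the care goes.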
- Code splitting: Routes are automatically code-split by Vite
- Memoization: Use useMemo and useCallback for expensive computations
- Lazy loading: Consider lazy loading heavy components
- Data caching: API responses are cached for 5 minutes
- Rate limiting: Automatic rate limiting for API requests (400ms minimum interval)
- Batch requests: Utility for batching API requests
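The 5-minute response cache in the list above can be sketched as a simple TTL map. Names and structure are illustrative, not the actual f1DataService.ts internals:

```typescript
// Illustrative 5-minute TTL cache sketch; not the real f1DataService.ts.
const TTL_MS = 5 * 60 * 1000; // 5-minute cache window

interface Entry<T> {
  value: T;
  expiresAt: number;
}

const cache = new Map<string, Entry<unknown>>();

// Returns the cached value, or undefined if absent or expired.
// `now` is injectable to make expiry testable.
function getCached<T>(key: string, now = Date.now()): T | undefined {
  const entry = cache.get(key);
  if (!entry || entry.expiresAt <= now) {
    cache.delete(key); // evict stale entries lazily on read
    return undefined;
  }
  return entry.value as T;
}

function setCached<T>(key: string, value: T, now = Date.now()): void {
  cache.set(key, { value, expiresAt: now + TTL_MS });
}
```

A manual refresh (as exposed in Settings) then amounts to clearing this map so the next read falls through to the API.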
- Chrome/Edge (latest)
- Firefox (latest)
- Safari (latest)
MIT
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
- Formula 1 data structure inspired by Ergast API
- Team colors based on 2024 F1 liveries
- Prediction methodology inspired by F1 analytics community
Note: This application uses the OpenF1 API for real-time F1 data. TensorFlow.js model support is integrated; training and deploying a production prediction model is a future enhancement, as described above.