AI-Powered Food Recognition & Carbon Footprint Calculator
EcoVision AI is an advanced computer vision application that uses YOLO models to detect food items in images and calculate their carbon footprint. The application supports both bounding-box detection and segmentation modes, with sophisticated weight estimation for accurate CO₂ emissions calculation.
- Bounding Box Detection: Fast food recognition using average weight database
- Segmentation Mode: Advanced pixel-level detection with AI-powered weight estimation
- Standard Mode: Uses pre-calculated average weights from comprehensive food database
- Segmentation Mode: Combines depth estimation, segmentation-area calculation, and density-based volume estimation for precise weight calculation
- Image Upload: Analyze food images from your device
- Live Detection: Real-time food detection using WebRTC for detailed analysis
- Calculate CO₂ emissions for 45+ food categories
- Comprehensive emissions database with per-kg CO₂ factors
- Export results to CSV or Excel with detection mode indicators
- WebRTC-based camera access (no server-side camera required)
- Compatible with Streamlit Community Cloud, Heroku, AWS, GCP, Azure
- Optimized for production deployment
- Python 3.8+
- CUDA-compatible GPU (optional, for faster inference)
- Clone the repository

```bash
git clone https://github.com/kuennethgroup/EcoVision.git
cd EcoVision
```

- Install dependencies

```bash
pip install -r requirements.txt
```

- Run the application

```bash
streamlit run app.py
```

- Access the app

Open your browser and navigate to http://localhost:8501
```
EcoVision/
├── .streamlit/
│   └── config.toml                  # Streamlit configuration
├── data/
│   └── data_all.xlsx                # Food CO₂ database (45+ items)
├── src/
│   ├── components/
│   │   ├── camera_controls.py       # Camera management
│   │   ├── input_live.py            # Live input handling
│   │   └── sidebar.py               # UI sidebar components
│   ├── logic/
│   │   ├── emissions_calculator.py  # CO₂ calculations
│   │   ├── image_processing.py      # YOLO inference
│   │   ├── model_loader.py          # Model management
│   │   ├── data_loader.py           # Data loading
│   │   └── weight_estimation.py     # Advanced weight estimation
│   └── config.py                    # Configuration constants
├── pages/
│   ├── image_analysis.py            # Image upload analysis
│   └── live_detection.py            # Live camera detection
├── training_pipeline/
│   ├── data_processing/             # Dataset generation and processing
│   └── model_training/              # Training and evaluation of YOLO models
├── app.py                           # Main application entry
├── requirements.txt                 # Python dependencies
└── README.md                        # This file
```
- Model Type: YOLO detection models (standard .pt files)
- Weight Estimation: Database average weights
- Speed: ⚡ Fast processing
- Use Case: Quick analysis, real-time detection
- Model Type: YOLO segmentation models (-seg.pt files)
- Weight Estimation: Advanced AI-powered calculation
- Depth Estimation: Depth-Anything-V2-Small-hf model
- Area Calculation: Pixel-level segmentation area
- Volume Calculation: Area × estimated thickness
- Weight Calculation: Volume × food density
- Speed: 🐢 Slower but more accurate
- Use Case: Precise analysis, research applications
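The segmentation-mode pipeline above reduces to a chain of estimates: area → volume → weight. The sketch below illustrates that chain; the function name and the density values are illustrative assumptions, not the actual `weight_estimation.py` implementation or the figures from the database:

```python
# Sketch of segmentation-mode weight estimation (illustrative densities).
# weight (g) = segmented area (cm^2) x estimated thickness (cm) x density (g/cm^3)
FOOD_DENSITY_G_PER_CM3 = {"apple": 0.85, "banana": 0.94}  # assumed values

def estimate_weight_g(area_cm2: float, thickness_cm: float, food: str) -> float:
    """Convert a segmented area into a weight via volume x density."""
    volume_cm3 = area_cm2 * thickness_cm       # area x thickness -> volume
    density = FOOD_DENSITY_G_PER_CM3[food]     # density lookup for this class
    return volume_cm3 * density                # volume x density -> weight

# Example: a ~50 cm^2 apple mask with ~6 cm estimated thickness
weight = estimate_weight_g(50.0, 6.0, "apple")  # -> 255.0 g
```

In the real pipeline, the thickness would come from the Depth-Anything-V2 depth map rather than a fixed constant.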
- Object Detection: Ultralytics YOLO models from Hugging Face Hub
- Depth Estimation: Depth-Anything-V2-Small-hf (Hugging Face Transformers)
- Segmentation: Custom trained YOLO segmentation models
- Food Database: 45 food categories with CO₂ factors, average weights, and densities
- CO₂ Factors: Scientific literature-based emission factors (kg CO₂ eq/kg food)
- Densities: Physical density values for volume-to-weight conversion
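Given the per-kg factors above, the emissions arithmetic is a weight-times-factor product. A minimal sketch; the factor values here are placeholders, not the figures from `data/data_all.xlsx`:

```python
# emissions (kg CO2 eq) = weight (kg) x factor (kg CO2 eq per kg of food)
CO2_FACTOR_KG_PER_KG = {"apple": 0.4, "beef": 27.0}  # placeholder factors

def co2_emissions_kg(weight_g: float, food: str) -> float:
    """Look up the per-kg factor and scale it by the detected weight."""
    return (weight_g / 1000.0) * CO2_FACTOR_KG_PER_KG[food]

# A 150 g apple -> 0.15 kg x 0.4 = 0.06 kg CO2 eq
```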
Choose between "Bounding boxes" and "Segmentation" in the sidebar
- Model: Automatically filtered based on detection mode
- Confidence: Adjust detection threshold (0.0-1.0)
- Class Filter: Select specific food categories (optional)
- Upload image (JPG, PNG, JPEG)
- View detection results
- Review CO₂ emissions report
- Export results
- Start camera feed
- Capture frame when ready
- Process with selected detection mode
- Download results and processed image
- Formats: CSV or Excel
- Filenames: Automatic mode suffix (`_bb` for bounding boxes, `_seg` for segmentation)
- Data: Complete emissions analysis with methodology tracking
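The mode suffix in exported filenames can be derived mechanically; a minimal sketch, with the helper name assumed rather than taken from the codebase:

```python
def export_filename(base: str, segmentation: bool, fmt: str = "csv") -> str:
    """Append _bb or _seg to mark which detection mode produced the results."""
    suffix = "_seg" if segmentation else "_bb"
    return f"{base}{suffix}.{fmt}"

# export_filename("results", False)        -> "results_bb.csv"
# export_filename("results", True, "xlsx") -> "results_seg.xlsx"
```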
- Fork this repository
- Connect to Streamlit Community Cloud
- Deploy directly (WebRTC works out of the box)
The application is compatible with:
- Heroku: Add `setup.sh` and a `Procfile`
- AWS/GCP/Azure: Use container deployment
- Local Network: Run with `--server.address 0.0.0.0`
```bash
# Optional: Hugging Face token for private models
HUGGING_FACE_TOKEN=your_token_here
```
The application recognizes 45+ food categories including:
🍎 Fruits: Apple, Avocado, Banana, Grapes, Orange, etc.
🥕 Vegetables: Carrot, Broccoli, Tomato, Cucumber, etc.
🫘 Legumes: Beans, Peas, etc.
🍄 Others: Mushroom, Garlic, Ginger, etc.
Complete list available in data/data_all.xlsx
Models are automatically downloaded from Hugging Face Hub:
- Repository: `nagasaiteja999/EcoVision`
- Detection Models: Standard YOLO .pt files
- Segmentation Models: Files containing "-seg" in the filename
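Selecting a model by filename, as described above, can be sketched like this. `list_repo_files` and `hf_hub_download` are real `huggingface_hub` calls and `YOLO` is the Ultralytics loader, but the selection helper is an assumption about how `model_loader.py` might work, not its actual code:

```python
def pick_models(filenames, segmentation: bool):
    """Keep .pt files; segmentation models contain '-seg' in the name."""
    pts = [f for f in filenames if f.endswith(".pt")]
    return [f for f in pts if ("-seg" in f) == segmentation]

def load_model(repo_id: str, segmentation: bool):
    # Imported lazily so the filename logic above stays dependency-free.
    from huggingface_hub import hf_hub_download, list_repo_files
    from ultralytics import YOLO
    name = pick_models(list_repo_files(repo_id), segmentation)[0]
    return YOLO(hf_hub_download(repo_id=repo_id, filename=name))
```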
Edit `.streamlit/config.toml` to customize the UI theme:

```toml
[theme]
primaryColor = "#2E86AB"
backgroundColor = "#0E1117"
secondaryBackgroundColor = "#262730"
textColor = "#FAFAFA"
font = "sans serif"
```
```bash
# Install development dependencies
pip install -r requirements.txt

# Run the application locally
streamlit run app.py
```
If you use EcoVision AI in your research, please cite:

```bibtex
@software{ecovision_ai_2025,
  title        = {EcoVision AI: AI-Powered Food Recognition and Carbon Footprint Calculator},
  author       = {Kolakaleti, Naga Sai Teja},
  year         = {2025},
  organization = {Kuenneth Research Group, University of Bayreuth},
  url          = {https://github.com/kuennethgroup/EcoVision}
}
```
Created by: Naga Sai Teja Kolakaleti
Organization: Kuenneth Research Group, University of Bayreuth
Copyright: © 2025 Kuenneth Research Group, University of Bayreuth. All rights reserved.
⭐ Star this repository if you find it useful!
Made with ❤️ for a sustainable future 🌍
