Self-hosted file storage. Your personal Google Drive alternative.
Simple. Fast. Extensible.
- 📤 Upload, organize, and manage files with drag-and-drop
- 📦 Streaming uploads for large files (multi-GB support)
- 💾 Pluggable storage backends (local disk, S3, in-memory)
- 🔄 Content-addressed deduplication (saves storage space)
- 👥 Multi-user support with authentication and per-user quotas
- 📁 Virtual folder hierarchy with file organization
- 🗑️ Deleted items with configurable retention (per-user settings)
- 🎨 Tailwind CSS with responsive dark mode (system preference aware)
- 🔒 Secure by default (CSRF protection, bcrypt, rate limiting)
- 🐳 Easy Docker deployment with multi-arch support
- 🗄️ PostgreSQL or SQLite database options
- 📊 Health checks and Prometheus metrics
Prerequisites: Docker and Docker Compose
```bash
git clone https://github.com/agjmills/trove.git
cd trove
cp .env.example .env
make setup
```

Trove is now running at http://localhost:8080.
Using pre-built images:

```bash
docker pull ghcr.io/agjmills/trove:latest
```

Multi-arch images are available for linux/amd64 and linux/arm64.
Trove supports multiple storage backends, configured via the STORAGE_BACKEND environment variable.
Stores files on the local filesystem with path traversal protection using Go 1.23+ os.Root.

```bash
STORAGE_BACKEND=disk
STORAGE_PATH=./data/files
```

Stores files in S3 or any S3-compatible service (MinIO, Cloudflare R2, Backblaze B2, rustfs).
Uses native AWS SDK environment variables and credential chain:
| Variable | Description |
|---|---|
| `S3_BUCKET` | Bucket name (required) |
| `S3_USE_PATH_STYLE` | Set to `true` for MinIO/rustfs |
| `AWS_REGION` | AWS region |
| `AWS_ACCESS_KEY_ID` | Access key |
| `AWS_SECRET_ACCESS_KEY` | Secret key |
| `AWS_ENDPOINT_URL` | Custom endpoint for S3-compatible services |
The SDK also supports ~/.aws/credentials, ~/.aws/config, and IAM roles.
```bash
# AWS S3
STORAGE_BACKEND=s3
S3_BUCKET=my-trove-bucket
AWS_REGION=us-east-1
```

```bash
# S3-compatible (MinIO, rustfs)
STORAGE_BACKEND=s3
S3_BUCKET=my-trove-bucket
S3_USE_PATH_STYLE=true
AWS_ENDPOINT_URL=http://localhost:9000
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
```

Local development with rustfs:
```bash
# Start rustfs (S3-compatible storage)
docker compose --profile s3 up rustfs -d

# Create the bucket
AWS_ACCESS_KEY_ID=rustfsadmin AWS_SECRET_ACCESS_KEY=rustfsadmin \
  aws --endpoint-url http://localhost:9000 s3 mb s3://trove

# Run Trove with S3 backend
STORAGE_BACKEND=s3 \
S3_BUCKET=trove \
S3_USE_PATH_STYLE=true \
AWS_ENDPOINT_URL=http://localhost:9000 \
AWS_ACCESS_KEY_ID=rustfsadmin \
AWS_SECRET_ACCESS_KEY=rustfsadmin \
go run ./cmd/server
```

Stores files in memory. Useful for integration tests; data is lost on restart. Best used together with SQLite's in-memory mode for metadata.

```bash
STORAGE_BACKEND=memory
```

Trove separates physical storage from logical organization:
| Field | Purpose | Example |
|---|---|---|
| `StoragePath` | Physical location (UUID-based) | `a48f0152-cbcb-4483.bin` |
| `LogicalPath` | UI folder hierarchy | `/photos/2024` |
| `Filename` | Display name (editable) | `vacation.jpg` |
| `OriginalFilename` | Original upload name (immutable) | `IMG_1234.jpg` |
This design enables:
- Backend portability: Move between disk/S3 without changing file references
- Safe storage: UUID paths prevent path traversal attacks
- Flexible organization: Rename and move files without touching physical storage
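The physical/logical split can be sketched as a plain struct. Field names follow the table above; the struct itself and its methods are illustrative, not Trove's actual schema:

```go
// Illustrative sketch of separating physical storage from logical organization.
package main

import "fmt"

type FileRecord struct {
	StoragePath      string // physical location, UUID-based, never changes
	LogicalPath      string // virtual folder shown in the UI
	Filename         string // display name, editable
	OriginalFilename string // name at upload time, immutable
}

// Rename only touches metadata; the blob on disk/S3 stays where it is.
func (f *FileRecord) Rename(name string) { f.Filename = name }

// Move changes the virtual folder without copying any bytes.
func (f *FileRecord) Move(folder string) { f.LogicalPath = folder }

func main() {
	f := FileRecord{
		StoragePath:      "a48f0152-cbcb-4483.bin",
		LogicalPath:      "/photos/2024",
		Filename:         "vacation.jpg",
		OriginalFilename: "IMG_1234.jpg",
	}
	f.Move("/photos/archive")
	fmt.Println(f.StoragePath, f.LogicalPath)
}
```

Note that both operations leave `StoragePath` untouched, which is what makes backend migration safe.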
Files are content-addressed by SHA-256 hash. The upload flow ensures duplicates never touch the storage backend:

```
Client → Temp file (computing SHA-256) → Check DB → Storage (if new)
```

1. The upload streams to a local temp file while the hash is computed
2. The database is checked for an existing file with the same hash
3. If duplicate: the temp file is discarded and a new DB record points to the existing storage path
4. If new: the temp file is uploaded to the storage backend
5. Storage quota is only charged once per unique file
When deleting files, the physical file is only removed when all references are deleted.
Note: Uploads require a writable temp directory. Configure TEMP_DIR for containerized deployments (see Configuration).
```bash
make dev    # Start with hot-reload
make test   # Run tests
make shell  # Container shell
make psql   # Database console
```

Running locally without Docker:

```bash
# SQLite (simplest)
DB_TYPE=sqlite DB_PATH=./data/trove.db go run ./cmd/server

# In-memory database (ephemeral)
DB_TYPE=sqlite DB_PATH=:memory: go run ./cmd/server
```

Edit .env for your setup:
```bash
# Server
TROVE_PORT=8080
ENV=development # or production

# Database
DB_TYPE=postgres # or sqlite
DB_HOST=postgres
DB_NAME=trove
DB_USER=trove
DB_PASSWORD=secret

# Storage
STORAGE_BACKEND=disk # disk, s3, or memory
STORAGE_PATH=./data/files # for disk backend
TEMP_DIR=/tmp # temp directory for uploads

# S3 (if STORAGE_BACKEND=s3) - uses native AWS SDK variables
S3_BUCKET=trove # required
S3_USE_PATH_STYLE=false # true for MinIO/rustfs
# AWS_REGION=us-east-1
# AWS_ACCESS_KEY_ID=...
# AWS_SECRET_ACCESS_KEY=...
# AWS_ENDPOINT_URL=http://localhost:9000 # for S3-compatible services

# Limits
DEFAULT_USER_QUOTA=10G # Per-user storage limit
MAX_UPLOAD_SIZE=500M # Max file size per upload

# Security
SESSION_SECRET=change-in-production
CSRF_ENABLED=true
```

See .env.example for all options.
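Values like `DEFAULT_USER_QUOTA=10G` and `MAX_UPLOAD_SIZE=500M` imply a small size parser. The sketch below is illustrative (binary units assumed), not Trove's actual implementation:

```go
// Hypothetical parser for human-readable size limits like "10G" or "500M".
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize converts "10G", "500M", "1024" etc. into a byte count,
// assuming binary units (1K = 1024 bytes).
func parseSize(s string) (int64, error) {
	units := map[string]int64{"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}
	s = strings.TrimSpace(strings.ToUpper(s))
	for suffix, mult := range units {
		if strings.HasSuffix(s, suffix) {
			n, err := strconv.ParseInt(strings.TrimSuffix(s, suffix), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * mult, nil
		}
	}
	// No suffix: treat the value as plain bytes.
	return strconv.ParseInt(s, 10, 64)
}

func main() {
	n, _ := parseSize("10G")
	fmt.Println(n) // 10 * 2^30 bytes
}
```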
```bash
# 1. Setup
cp .env.example .env
cp docker-compose.example.yml docker-compose.prod.yml

# 2. Configure
# Edit .env with production values (strong SESSION_SECRET, etc.)

# 3. Deploy
docker compose -f docker-compose.prod.yml up -d

# 4. Monitor
docker compose -f docker-compose.prod.yml logs -f
```

Recommended: run behind a reverse proxy (Caddy/Nginx) for HTTPS.
For production Docker deployments, ensure writable volumes for:
```yaml
services:
  app:
    volumes:
      - trove-data:/app/data # Database and files (disk backend)
      - trove-temp:/tmp      # Temp directory for uploads
    environment:
      - TEMP_DIR=/tmp
```

- Use a strong `SESSION_SECRET` (generate with `openssl rand -base64 32`)
- Restrict access to the `/metrics` endpoint via firewall or reverse proxy auth
- Enable HTTPS in production: set `ENV=production` for strict CSRF validation
- Behind a reverse proxy with TLS termination, ensure the `X-Forwarded-Proto` header is forwarded
- Keep database credentials secure
- Regularly update to the latest version
See INSTALL.md for detailed deployment options.
`GET /health` - Returns server health with database and storage checks:

```json
{
  "status": "healthy",
  "version": "1.0.0 (commit: abc123)",
  "checks": {
    "database": {"status": "healthy", "latency": "2.1ms"},
    "storage": {"status": "healthy", "latency": "0.5ms"}
  },
  "uptime": "2h15m30s"
}
```

`GET /metrics` - Prometheus-compatible metrics endpoint
Available metrics:
- `trove_http_requests_total` - HTTP request counters by method, path, status
- `trove_http_request_duration_seconds` - Request latency histograms
- `trove_http_requests_in_flight` - Current concurrent requests
- `trove_storage_usage_bytes` - Per-user storage consumption
- `trove_files_total` - File upload counters
- `trove_login_attempts_total` - Authentication metrics
Security Note: The metrics endpoint is unauthenticated. Restrict access in production.
Production uses JSON format:

```json
{"time":"2025-11-24T10:30:00Z","level":"INFO","msg":"http request","method":"POST","path":"/upload","status":200,"duration_ms":145}
```

Development uses a human-readable text format.
| Method | Endpoint | Description |
|---|---|---|
| POST | `/upload` | Upload file (multipart/form-data) |
| GET | `/download/{id}` | Download file |
| POST | `/delete/{id}` | Delete file |

| Method | Endpoint | Description |
|---|---|---|
| POST | `/folder/create` | Create folder |
| POST | `/folder/delete/{name}` | Delete empty folder |

| Method | Endpoint | Description |
|---|---|---|
| GET | `/health` | Health check |
| GET | `/metrics` | Prometheus metrics |
For security-related documentation including CSRF protection details and migration notes, see SECURITY.md.
Contributions welcome! See CONTRIBUTING.md
Completed:
- ✅ Authentication & multi-user support
- ✅ File management with streaming uploads
- ✅ Storage quotas & deduplication
- ✅ Multiple storage backends (disk, S3, memory)
- ✅ Virtual folder hierarchy
- ✅ CSRF protection & rate limiting
- ✅ Health checks & Prometheus metrics
- ✅ Structured logging
- ✅ Tailwind CSS with responsive dark mode
- ✅ Production-ready Docker images (~18MB)
- ✅ Deleted items with configurable retention
Planned:
- File sharing links
- Version history
- Thumbnail generation
- Bulk operations
- REST API with authentication
Open source. See LICENSE file.