Adaptive intelligence for every device.
AetherAI is an open‑source universal AI runtime designed to work on any device, with any model, under any conditions. Whether you're offline on a low‑power machine or online with access to powerful cloud models, AetherAI automatically selects the best available backend to deliver a seamless AI experience.
AetherAI is not a model — it’s the intelligence layer that sits above models. It detects your hardware, storage, and connectivity, then routes requests to the most capable engine available. When resources improve, it upgrades itself. When conditions worsen, it gracefully falls back. One interface, many possibilities.
AetherAI automatically chooses the best model based on:
- CPU/GPU availability
- RAM and storage
- Internet connectivity
- User preferences
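To make the idea concrete, here is a minimal sketch of how such a selection could work. Everything here (the `SystemProfile` fields, the backend names, the thresholds) is invented for illustration; AetherAI's actual routing logic and priorities may differ.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical snapshot of the signals AetherAI inspects."""
    has_gpu: bool
    ram_gb: float
    free_storage_gb: float
    online: bool

def select_backend(profile: SystemProfile) -> str:
    """Pick the most capable backend the current environment supports.

    Illustrative priority order: cloud > local GPU > local CPU > tiny fallback.
    """
    if profile.online:
        return "cloud"          # best quality when connected
    if profile.has_gpu and profile.ram_gb >= 8:
        return "local-gpu"      # larger local model on the GPU
    if profile.ram_gb >= 4 and profile.free_storage_gb >= 4:
        return "local-cpu"      # mid-size quantized model on the CPU
    return "fallback"           # lightweight always-available model
```

For example, an offline 2 GB RAM machine would route to the fallback model, while the same machine with connectivity would route to a cloud backend.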
Use local models offline, cloud models online, or combine the two.
AetherAI switches seamlessly without breaking the user experience.
- Downloads models automatically
- Upgrades to better models when storage allows
- Keeps a lightweight fallback model for offline use
- Supports multiple quantization formats
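A rough sketch of the upgrade-with-fallback idea: keep the small fallback model installed at all times, and additionally install the best model that fits the remaining storage. The catalog, model names, and sizes below are invented for illustration, not part of AetherAI.

```python
# Hypothetical model catalog: (name, size in GB), ordered best-first.
CATALOG = [
    ("large-q4", 8.0),
    ("medium-q4", 4.0),
    ("small-q8", 1.5),
    ("tiny-fallback", 0.3),   # always kept for offline use
]

FALLBACK = "tiny-fallback"
FALLBACK_SIZE_GB = 0.3

def plan_install(free_storage_gb: float) -> list[str]:
    """Return the models to keep: the best one that fits, plus the fallback."""
    plan = [FALLBACK]
    for name, size_gb in CATALOG:
        # Reserve space for the fallback before picking an upgrade.
        if name != FALLBACK and size_gb <= free_storage_gb - FALLBACK_SIZE_GB:
            plan.insert(0, name)   # upgrade to the best model that fits
            break
    return plan
```

With 10 GB free this plan installs the largest model alongside the fallback; with almost no storage, only the fallback remains, so offline use still works.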
Developers interact with one simple interface.
AetherAI handles the routing, loading, and execution behind the scenes.
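What that single interface could look like, sketched with stand-in names (the real API is still being designed, so treat this as a shape, not a contract):

```python
class Aether:
    """Minimal facade sketch: one entry point hides routing and loading.

    The backends here are string stand-ins; a real runtime would dispatch
    to actual model engines behind the same method.
    """

    def __init__(self, online: bool = True) -> None:
        # In the real runtime, detection would happen automatically here.
        self._backend = "cloud" if online else "local"

    def generate(self, prompt: str) -> str:
        # The caller never names a backend; routing happens internally.
        return f"[{self._backend}] response to: {prompt}"

ai = Aether(online=False)
print(ai.generate("hello"))   # one call, backend chosen for you
```

The design point is that application code calls `generate()` the same way whether the answer comes from a cloud model or a local fallback.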
Designed to run on:
- Windows
- macOS
- Linux
- Low‑power devices (where supported models allow)
Extend AetherAI by adding:
- New model backends
- New cloud providers
- New hardware detection modules
- New routing strategies
AetherAI is built to grow with the community.
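One plausible shape for that extensibility is a registry that plugins add themselves to. This is a sketch with invented names, not the actual plugin API:

```python
from typing import Callable

# Hypothetical registry mapping backend names to generation functions.
BACKENDS: dict[str, Callable[[str], str]] = {}

def register_backend(name: str):
    """Decorator that registers a new model backend under a name."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        BACKENDS[name] = fn
        return fn
    return wrap

@register_backend("echo")
def echo_backend(prompt: str) -> str:
    # A trivial community-contributed backend: just echoes the prompt.
    return prompt
```

A community package would only need to import the registry and apply the decorator; the routing engine could then treat the new backend like any built-in one.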
⚠️ AetherAI is in early development.
Installation instructions will be added as the project evolves.
```
git clone https://github.com/yourusername/AetherAI.git
cd AetherAI
```

```
AetherAI/
│
├── aetherai/        # Core source code
│   ├── runtime/     # Routing engine, hardware detection
│   ├── models/      # Local model loaders, cloud adapters
│   ├── storage/     # Model manager, downloads, upgrades
│   ├── net/         # Connectivity detection
│   ├── api/         # Unified interface for apps
│   └── utils/       # Shared helpers
│
├── examples/        # Example scripts and integrations
├── docs/            # Documentation site
├── tests/           # Automated tests
│
├── README.md
├── CONTRIBUTING.md
├── LICENSE
└── ROADMAP.md
```
AetherAI is built around four core components:

- **Hardware detection** identifies CPU/GPU, RAM, and storage to determine which models can run.
- **Connectivity detection** detects online/offline status and network quality.
- **Model manager** downloads, installs, updates, and removes local models, and keeps a fallback model for offline use.
- **Routing engine** chooses the best model (local or cloud) for each request; developers never need to think about backends, AetherAI handles it.
AetherAI is open to contributions of all kinds:
- Code
- Documentation
- Model adapters
- Cloud provider integrations
- Bug reports
- Feature suggestions
See CONTRIBUTING.md for guidelines.
- Basic hardware detection
- Local model loading
- Cloud model adapters
- Routing engine MVP
- Automatic model upgrades
- Unified API
- Plugin system
- Cross‑platform packaging
- v1.0 release
AetherAI is released under the MIT License.
AetherAI aims to become the standard open‑source AI runtime — a flexible, intelligent layer that lets developers build once and run anywhere, without worrying about models, hardware, or connectivity.
If you believe AI should be accessible, adaptable, and open, you’re in the right place.
If you like the vision, consider starring the repository. It helps the project grow and reach more contributors.