This is the server-side component for ChatFS.
It acts as a lightweight relay between:
- a web-based AI (Grok, Qwen, etc.)
- your local machine running the ChatFS client
You probably want the client repo:
➡️ https://github.com/deexor64/chatfs
That’s the part you run locally to share your workspace.
This repo is just the server that sits in between. It:

- exposes simple URL-based endpoints for LLMs
- maintains WebSocket connections with clients
- forwards requests → client → response → back to the LLM
It intentionally stays very thin. All filesystem logic and safety checks happen in the client.
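To make the relay idea concrete, here is a minimal sketch of that pattern. It is not the actual implementation: it assumes a FastAPI app, it borrows the `/client/` and `/{client_id}/...` route shapes described further down, and the real message protocol is defined by the client.

```python
# Illustrative sketch of the relay pattern only; NOT the real chatfs-server code.
# Assumes FastAPI + uvicorn. Route shapes mirror the ones described further down.
import asyncio
import uuid

from fastapi import FastAPI, HTTPException, WebSocket, WebSocketDisconnect

app = FastAPI()

clients: dict[str, WebSocket] = {}        # client_id -> open WebSocket
responses: dict[str, asyncio.Queue] = {}  # client_id -> queued responses


@app.websocket("/client/")
async def client_session(ws: WebSocket):
    """A local ChatFS client connects here and keeps the socket open."""
    await ws.accept()
    client_id = uuid.uuid4().hex[:8]
    clients[client_id] = ws
    responses[client_id] = asyncio.Queue()
    await ws.send_text(client_id)  # hypothetical: how the id is issued may differ
    try:
        while True:
            # Anything the client sends back is treated as a response.
            await responses[client_id].put(await ws.receive_text())
    except WebSocketDisconnect:
        clients.pop(client_id, None)
        responses.pop(client_id, None)


@app.get("/{client_id}/{path:path}")
async def forward(client_id: str, path: str):
    """The LLM hits a plain URL; the server relays it to the matching client."""
    ws = clients.get(client_id)
    if ws is None:
        raise HTTPException(status_code=404, detail="unknown client")
    await ws.send_text(path)                 # request -> client
    return await responses[client_id].get()  # client -> response -> back to LLM
```

The point of the sketch is just that the server holds open sockets and shuttles strings back and forth, nothing more.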
You can run your own server instead of using a public one.
This is recommended if you care about:
- privacy
- control over sessions
- not sending requests through external infrastructure
To run your own instance you need:

- Python 3.12
- pip
```bash
git clone https://github.com/deexor64/chatfs-server.git
cd chatfs-server
```

If you don’t have PDM:

```bash
pip install pdm
```

Then:

```bash
pdm install
pdm run start
```

Server will start on:

```
http://0.0.0.0:8000
```
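To check the server is actually reachable, you can probe it from Python. Any HTTP response (even a 404 on the root path, which depends on the implementation) means the relay is listening:

```python
# Quick reachability check; any HTTP response means the server is listening.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

try:
    with urlopen("http://localhost:8000/") as resp:
        print("server is up, status", resp.status)
except HTTPError as err:
    print("server is up, status", err.code)   # e.g. 404 on an unknown route
except URLError as err:
    print("server not reachable:", err.reason)
```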
If your platform doesn’t have PDM preinstalled, use the provided build.sh:
```bash
bash build.sh
```

Then use this as your start command:

```bash
pdm run start
```

The request flow:

- Client connects via WebSocket → /client/
- LLM sends requests → /{client_id}/...
- Server forwards everything to the correct client session
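As a rough stand-in for the local client side of that flow, the sketch below opens the WebSocket and echoes back whatever the server forwards. This is hypothetical: the real handshake and message format are defined by the ChatFS client repo, and the URL assumes a locally running server.

```python
# Hypothetical stand-in for the local client; the real protocol lives in
# https://github.com/deexor64/chatfs. Requires: pip install websockets
import asyncio
import websockets


async def main():
    async with websockets.connect("ws://localhost:8000/client/") as ws:
        while True:
            request = await ws.recv()          # request forwarded by the server
            await ws.send(f"echo: {request}")  # response relayed back to the LLM


asyncio.run(main())
```

On the LLM side, the matching call is just a plain GET to http://&lt;server&gt;/{client_id}/... : the server looks up the session for that client_id and pipes the request over the open socket.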
- This server does not enforce filesystem safety
- All validation happens in the client
- Treat this as a relay layer, not a secure boundary
If you use a public server, your requests pass through it.
For better privacy, run your own instance and point the client to it.