Status: Challenge Complete ✨
Current Date: December 31, 2025
- Overview
- MVP Architecture
- Daily Progress
- Day 1: Intent Parser
- Day 2: Data Models, Storage & CRUD API
- Day 3: Task Engine, Calendar Logic & Core Enhancements
- Day 4: Weekly View, Conflict Detection & Assistant Insights
- Day 5: Today View, Suggestions Engine & Smart Rescheduling
- Day 6: Assistant Chat v1, Task Matching & Conflict-Aware Responses
- Day 7: Confirmations, Executable Actions & First Web Client
- Day 8: Backend → Mobile Connection & Smarter Task Creation
- Day 9: Today View Refinement & UI Foundation
- Day 10: UI Stability, Bug Fixes & Early Design Planning
- Day 11: Today View UI Alignment & Backend Sync
- Day 12: Check-In System & Energy Status
- Day 13: Reminders & Settings
- Day 14: Calendar View (Month & Week)
- Day 15: Calendar Quick Views & Task Interaction Layer
- Day 16: Authentication & User Identity Foundation
- Day 17: Authentication Emails & Verification Flow
- Day 18: Authentication Hardening & Production Readiness
- Day 19: PostgreSQL Migration & Async Backend Refactor
- Day 20: Closing the PostgreSQL Migration
- Day 21: Assistant Intelligence Foundation
- Day 22: Deep Context Awareness & Progress Insights
- Day 23: Memory Foundation
- Day 24: Context Awareness Signals & Memory Extraction
- Day 25: Selective Memory Injection & Memory-Informed Behavior
- Day 26: Deployment & Production Setup
- Day 27: Cross-Browser Stability & Explore Page
- Day 28: Goal-Aware Intelligence & Voice Input
- Day 29: Smart Task Suggestions & Explore Page Refinements
- Day 30: Production Readiness & Architecture Refinement
- Day 31: Completion & Reflection
LifeOS is a personal project I’m building throughout December as part of a 31-day AI challenge.
My goal is to create a mobile-first AI assistant that helps with planning, organisation, discipline, and self-reflection — something I would actually use every day. I also see this project as a way to build consistency and deepen my skills in AI engineering.
| Component | Technology | Role |
|---|---|---|
| Backend API | FastAPI (Python) | Core server handling AI logic |
| AI Engine | OpenAI (gpt-4o-mini) | Intent parsing & natural language understanding |
| Data Models | Pydantic | Defines structured intents |
| Storage | JSON / SQLite (planned) | Will store tasks, reminders, and diary entries |
| Mobile UI | React Native (planned) | Future mobile-first interface |
Today I implemented the first foundational feature:
the Intent Parser — the component that translates natural language into structured data the app can act on.
This is the “language brain” of LifeOS.
| Capability | Description |
|---|---|
| Intent Detection | Identifies event, reminder, diary, and memory messages |
| Date Understanding | Handles “today”, “tomorrow”, weekdays, and relative phrases |
| Time Extraction | Converts natural language time into standard formats |
| Category Inference | Detects simple categories (health, work, personal, social) |
| Timezone Handling | Uses Europe/London as default |
Input:
“Add gym tomorrow at 6pm”
Output (JSON):
{
"intent_type": "event",
"title": "gym",
"date": "2025-12-02",
"time": "18:00",
"datetime": "2025-12-02 18:00",
"category": "health",
"notes": null
}

This will serve as the foundation for scheduling, reminders, journaling, and future agentic behaviour.
- Python + FastAPI
- OpenAI (gpt-4o-mini)
- Pydantic
- pytz
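To make the parser output concrete, here is a minimal sketch of what the Pydantic schema could look like. Field names mirror the JSON example above; the project's actual `intent.py` model may differ.

```python
from typing import Optional
from pydantic import BaseModel

class Intent(BaseModel):
    """Structured result of the intent parser (illustrative sketch)."""
    intent_type: str                 # "event", "reminder", "diary", or "memory"
    title: str
    date: Optional[str] = None       # "YYYY-MM-DD"
    time: Optional[str] = None       # "HH:MM"
    datetime: Optional[str] = None   # "YYYY-MM-DD HH:MM"
    category: Optional[str] = None   # health / work / personal / social
    notes: Optional[str] = None

# Validate the Day 1 example output against the schema
intent = Intent(
    intent_type="event", title="gym", date="2025-12-02",
    time="18:00", datetime="2025-12-02 18:00", category="health",
)
print(intent.title, intent.datetime)
```

Declaring optional fields with `None` defaults lets the LLM omit anything it cannot extract while still producing a valid object.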
📁 LifeOS/
├── 📁 backend/
│ ├── 📁 app/
│ │ ├── 📁 ai/
│ │ │ └── parser.py # Intent parsing logic
│ │ ├── 📁 models/
│ │ │ └── intent.py # Pydantic schema for Intent
│ │ └── main.py # FastAPI server
│ ├── requirements.txt # Python dependencies
│ └── .env # Environment variables
└── README.md # Project documentation
Today I built the data foundation of LifeOS — giving the system a place to store structured information permanently.
Yesterday, the assistant could understand commands. Today, it can remember them.
1. Core Data Models (Pydantic)
| Model | Purpose |
|---|---|
| Task | Stores events & reminders |
| DiaryEntry | Stores daily reflections |
| Memory | Stores long-term personal preferences |
| BaseItem | Provides unique IDs for all items |
2. JSON Storage Layer
A simple persistent storage file located at `backend/app/db/data.json`.
Initial schema:
{
"tasks": [],
"diary": [],
"memories": []
}

Lightweight now, easily replaced with SQLite later.
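As a sketch of how thin this storage layer can be (the helper names `load_data`/`save_data` are illustrative, not necessarily the project's actual API):

```python
import json
import os

EMPTY_DB = {"tasks": [], "diary": [], "memories": []}

def load_data(path: str) -> dict:
    """Read the whole database, falling back to the empty schema."""
    if not os.path.exists(path):
        return dict(EMPTY_DB)
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def save_data(data: dict, path: str) -> None:
    """Write the whole database back to disk, pretty-printed."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2)
```

Reading and writing the whole file on every operation is fine at this scale, and swapping it for SQLite later only changes these two functions.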
3. Intent → Storage Pipeline
LifeOS can now take a parsed intent and save it as:
- an event
- a reminder
- a diary entry
- a memory
This completes the first full loop:
natural language → structured intent → persistent data
4. CRUD Endpoints (FastAPI)
| Endpoint | Description |
|---|---|
| `GET /tasks` | Return all tasks |
| `GET /diary` | Return diary entries |
| `GET /memories` | Return stored memories |
| `GET /all` | Return entire database |
| `POST /clear` | Reset everything (dev only) |
{
"tasks": [
{
"id": "cdf98f06-07db-4721-b868-40dc6b1faf61",
"type": "reminder",
"title": "buy milk",
"date": "2025-12-02",
"time": "09:00",
"datetime": "2025-12-02 09:00",
"category": "errands",
"notes": null
}
],
"diary": [],
"memories": []
}

LifeOS now has persistent memory, which is a major milestone.
- FastAPI
- Python
- Pydantic
- JSON file storage
- Repository pattern
📁 LifeOS/
├── 📁 backend/
│ ├── 📁 app/
│ │ ├── 📁 ai/ # NLP layer
│ │ │ ├── parser.py
│ │ │ └── processor.py
│ │ ├── 📁 logic/ # Business logic
│ │ │ └── intent_handler.py
│ │ ├── 📁 models/ # Data models
│ │ │ ├── base.py
│ │ │ ├── diary.py
│ │ │ ├── intent.py
│ │ │ ├── memory.py
│ │ │ └── task.py
│ │ ├── 📁 routers/ # API layer
│ │ │ └── intent.py
│ │ ├── 📁 storage/ # Data layer
│ │ │ └── repo.py
│ │ └── main.py
│ ├── requirements.txt
│ └── .env
└── README.md
Today was a big step. LifeOS moved from “storing data” to actually understanding time, organising it, and preparing for future intelligent behaviour.
This was the day LifeOS started behaving like a real personal operating system, not just a parser.
1. Task Engine (the heart of scheduling)
I created a full engine for organising tasks:
- Datetime normalisation — all tasks now have a unified `datetime` field
- Status detection — `today`, `upcoming`, `overdue`, or `unscheduled`
- Sorting — global chronological ordering
- Filtering functions: `get_tasks_today()`, `get_upcoming_tasks()`, `get_overdue_tasks()`, `get_next_task()`
This gives LifeOS the basic intelligence to understand when things happen and how to structure them.
2. Calendar-Friendly Structure
Tasks can now be grouped by date:
{
"2025-12-02": [...],
"2025-12-05": [...],
"2025-12-20": [...]
}

I also added `group_tasks_pretty()` — a UI-friendly version for the future mobile app.
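A grouping helper of this shape is only a few lines (an illustrative sketch, not the project's exact implementation):

```python
from collections import defaultdict

def group_tasks_by_date(tasks: list[dict]) -> dict[str, list[dict]]:
    """Group tasks into {"YYYY-MM-DD": [...]} buckets, chronologically."""
    grouped = defaultdict(list)
    for task in sorted(tasks, key=lambda t: t.get("datetime") or ""):
        grouped[task["date"]].append(task)
    return dict(grouped)
```

Because Python dicts preserve insertion order, iterating over sorted tasks yields date keys in chronological order for free.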
3. New API Endpoints
| Endpoint | Description |
|---|---|
| `GET /tasks/today` | Tasks for today |
| `GET /tasks/upcoming` | Chronological future tasks |
| `GET /tasks/overdue` | Tasks whose time has passed |
| `GET /tasks/next` | The very next upcoming task |
| `GET /tasks/grouped` | Calendar-style grouped tasks |
| `GET /tasks/grouped-pretty` | UI-friendly grouping |
| `GET /tasks/summary` | Daily overview & stats |
LifeOS now has everything needed for a real schedule view.
Today I also added several foundational upgrades that will power future features.
1. Duration & End Time
Added to the Task model:
duration_minutes: Optional[int] = None
end_datetime: Optional[str] = None

This prepares LifeOS for:
- conflict detection
- free-time blocks
- timeline visualisation
2. Event/Reminder Filtering
GET /tasks/events
GET /tasks/reminders
3. Task Completion
POST /tasks/{id}/complete
Allows marking tasks as done — needed for habit-tracking and stats.
Added to the Task model:
- `energy` (low/medium/high)
- `context` (work/home/laptop/outside/errand)
And improved the parser so it now handles:
- “by Friday”
- “in an hour”
- “in 30 minutes”
- “after work”
- “this evening”
These small pieces will become the core of LifeOS’s agentic intelligence later.
- Error handling for malformed dates
- Logging (`app/logging.py`) added across repo, parser, and task engine
- Better `/tasks` sorting (status → datetime)
- Stats endpoint (`GET /stats`)
- Quick-add endpoint (`POST /tasks/add`)
These upgrades make the backend much more stable and professional.
Day 3 was the moment LifeOS stopped being just an LLM wrapper and became a system with memory, structure, behaviour, and time awareness.
Building this reminded me why I started this challenge - to create something I would personally use every day, and to sharpen my engineering skills through real, hands-on work.
Today I moved from looking at tasks “one day at a time” to something closer to how I actually think: weeks, patterns, and gentle insights about my schedule. LifeOS is starting to feel less like a list API and more like a small planning brain.
I added a proper week engine that understands a Monday–Sunday week in Europe/London time.
New logic (`week_engine.py`):
- `get_current_week_boundaries()` → finds this week’s Monday–Sunday
- `get_week_view()` → groups tasks by day for the current week
- `get_tasks_in_range(start, end)` → generic calendar range helper for any date window
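The Monday–Sunday boundary logic reduces to a few lines of date arithmetic. A sketch (the real helper also pins "today" to the Europe/London timezone first):

```python
from datetime import date, timedelta

def get_current_week_boundaries(today: date) -> tuple[date, date]:
    """Return (monday, sunday) of the week containing `today`."""
    monday = today - timedelta(days=today.weekday())  # weekday(): Monday == 0
    return monday, monday + timedelta(days=6)
```

Taking `today` as a parameter instead of calling `date.today()` inside keeps the function deterministic and easy to test.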
New endpoints:
| Endpoint | Description |
|---|---|
| `GET /tasks/week` | Current week view (Mon–Sun), tasks grouped by day |
| `GET /tasks/calendar?start=YYYY-MM-DD&end=YYYY-MM-DD` | Generic calendar range (day/week/month) |
These will power the future weekly and monthly calendar views in the mobile app (e.g. “show me this week”, “show me my holiday week”, etc.).
To prepare for more intelligent planning, I built a light conflict detection engine.
New module: app/logic/conflict_engine.py
What it does:
- Builds time blocks for each scheduled task using `datetime`, `duration_minutes`, and `end_datetime`
- If duration isn’t set, it uses default assumptions:
- events → 60 minutes
- reminders → 15 minutes
- Detects overlapping blocks and returns conflict pairs.
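The core of such a conflict check is plain interval intersection. A sketch using the default durations described above (`time_block` and `overlap` are illustrative names):

```python
from datetime import datetime, timedelta

DEFAULTS = {"event": 60, "reminder": 15}  # fallback durations in minutes

def time_block(task: dict) -> tuple[datetime, datetime]:
    """Build a (start, end) block from a task's datetime and duration."""
    start = datetime.strptime(task["datetime"], "%Y-%m-%d %H:%M")
    minutes = task.get("duration_minutes") or DEFAULTS.get(task["type"], 60)
    return start, start + timedelta(minutes=minutes)

def overlap(a: dict, b: dict):
    """Return (overlap_start, overlap_end) if two blocks intersect, else None."""
    a0, a1 = time_block(a)
    b0, b1 = time_block(b)
    start, end = max(a0, b0), min(a1, b1)
    return (start, end) if start < end else None
```

Two intervals intersect exactly when the later start precedes the earlier end, which is why a single `max`/`min` comparison suffices.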
New endpoint:
| Endpoint | Description |
|---|---|
| `GET /tasks/conflicts` | All overlaps across scheduled tasks |
| `GET /tasks/conflicts?start=...&end=...` | Conflicts only within a specific date range |
Example shape of a conflict:
{
"task_a": { "...": "..." },
"task_b": { "...": "..." },
"overlap_start": "2025-12-05 18:00",
"overlap_end": "2025-12-05 18:30"
}

Later this will allow LifeOS to say things like:
“Your 6pm gym overlaps with dinner at 6:30pm on Friday.”
…without me having to manually spot it.
I also added a week summary engine so LifeOS can see the shape of my week, not just individual tasks.
- `get_week_stats()` → JSON stats for the current week:
- total events / reminders
- tasks per day
- evening tasks (after 18:00)
- busiest day
- fully free days
- `get_week_summary_text()` → short, natural-language overview
| Endpoint | Description |
|---|---|
| GET /tasks/week-summary | JSON statistics for the current week (Mon–Sun) |
| GET /assistant/week-overview | Week stats + human-readable summary |
“This week (2025-12-01 → 2025-12-07) you have 7 tasks in total (4 events and 3 reminders).
Your busiest day is Sunday with 3 task(s).
There are 3 task(s) scheduled for the evening (after 18:00).
You still have fully free days on: Monday, Wednesday, Thursday, Saturday.”
This is exactly the kind of thing a future LifeOS avatar could show during a weekly check-in.
Today I created the first version of the Insight Engine — a lightweight but meaningful intelligence layer.
This is the earliest form of “assistant-like” behaviour:
LifeOS looks at the week and gives small, helpful observations.
app/logic/insight_engine.py
- “You have no tasks scheduled for today.”
- “Your next upcoming task is ‘gym’ at 2025-12-05 18:00.”
- “This week you have 7 tasks (4 events and 3 reminders).”
- “You have 3 evening task(s). Evenings might get busy.”
- “Your busiest day is Sunday with 3 task(s).”
- “You still have fully free days on: Monday, Wednesday, Thursday, Saturday.”
- “You have 1 scheduling conflict(s) that may need attention.”
These are pulled from real data — not templates.
Day 3 was about understanding time. Day 4 was about understanding weeks and patterns.
Today made me realise how much clarity comes from stepping back and looking at patterns, not individual tasks. Implementing the weekly logic and insights taught me how assistants evaluate load, detect conflicts and form summaries. It still isn’t “planning” yet, but the groundwork for actual decision support is now there.
Today’s focus was on moving beyond static insights and giving LifeOS the ability to interpret the day, surface actionable suggestions, and offer rescheduling options. These are the first features that start to feel genuinely “assistant-like” — the system now reacts to what it sees instead of simply reporting facts.
I added a dedicated module for understanding today with more nuance.
This includes:
- grouping tasks into morning / afternoon / evening
- detecting free time blocks between tasks
- estimating load (`empty`, `light`, `medium`, `heavy`)
- preparing data for the future Today screen in the UI
New module: app/logic/today_engine.py
New endpoint:
| Endpoint | Description |
|---|---|
| `GET /assistant/today` | Returns today’s tasks, free blocks, and load level |
Example output:
{
"date": "2025-12-04",
"tasks": [],
"free_blocks": [{"start": "06:00", "end": "22:00"}],
"load": "empty"
}

Even this simple structure is extremely useful for building intelligent behaviour later.
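Free-block detection is a gap scan over the sorted schedule. A sketch assuming the 06:00–22:00 day window that matches the example output:

```python
def free_blocks(busy: list[tuple[str, str]],
                day_start: str = "06:00", day_end: str = "22:00") -> list[dict]:
    """Return gaps between (start, end) "HH:MM" pairs inside the day window.

    Zero-padded "HH:MM" strings sort chronologically, so plain string
    comparison is enough here.
    """
    blocks, cursor = [], day_start
    for start, end in sorted(busy):
        if start > cursor:
            blocks.append({"start": cursor, "end": start})
        cursor = max(cursor, end)
    if cursor < day_end:
        blocks.append({"start": cursor, "end": day_end})
    return blocks
```

With an empty day this degenerates to a single full-window block, exactly the `"load": "empty"` case shown above.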
The assistant can now scan the schedule and offer light, non-intrusive suggestions: nothing pushy, just helpful observations when they make sense.
Suggestions are based on:
- conflicts
- heavy days
- completely free days
- large free blocks
New endpoint:
| Endpoint | Description |
|---|---|
| `GET /assistant/suggestions` | Returns conflict, overload, and free-time suggestions |
Example suggestion:
{
"reason": "conflict",
"message": "'meeting with flatmates' overlaps with 'video call with relatives'."
}

This sets the foundation for future:
“Would you like me to move it?” flows.
I added the first version of a rescheduling helper.
Given a specific task, the assistant now looks for:
- free time blocks
- lighter days in the week
- reasonable alternative times
New endpoint:
| Endpoint | Description |
|---|---|
| `GET /assistant/reschedule-options?task_id=...` | Suggests new times or days for the selected task |
Even at this early stage, it shows that LifeOS can reason about where a task belongs, not just that it exists.
To prepare for the UI layer, LifeOS now exposes a simple category → colour map for consistent visual styling.
New endpoint:
| Endpoint | Description |
|---|---|
| `GET /meta/categories` | Returns colour codes for each category |
Day 5 shifted LifeOS from storing schedules to interpreting them.
Building the Today view, suggestions, and early rescheduling logic made me think more about how assistants spot friction and surface small, meaningful insights.
It’s still simple, but the system is now reacting to the shape of my day rather than just listing tasks — much closer to the behaviour I originally envisioned.
Today I focused on making the assistant reliably understand task references, detect conflicts, and return clean, structured responses that the UI can act on. Instead of adding new features, this day was about tightening behaviour and making the AI predictable and safe to integrate later.
The chat endpoint now returns strict, single JSON objects — no markdown, no extra text.
This makes the assistant’s output consistent and UI-friendly.
I added a safer task-matching utility (`ai/utils.py`) that prevents false matches (e.g., "run" matching "brunch") using:
- exact match
- whole-word match
- fallback contains match
This lets the assistant correctly identify when the user refers to an existing task.
When the user mentions a task and it has a scheduling conflict, the system bypasses the LLM and returns a structured reschedule flow.
Example:
{
"assistant_response": "Your run is scheduled at 10:00 AM tomorrow. Confirm reschedule to 6:00 PM?",
"ui": {
"action": "confirm_reschedule",
"task_id": "...",
"options": ["18:00"]
}
}

This is powered by:
- updated `conflict_engine`
- improved rescheduling helper
- stricter system prompt rules
I added a wrapper (`generate_reschedule_suggestions_for_task`) so both:
- `/assistant/chat`
- `/assistant/reschedule-options`
use the same suggestion logic. This ensures consistent options across the whole system.
- fixed imports (`get_all_tasks`, `load_data`)
- updated main routes to match the new assistant logic
- removed outdated code
- stabilised JSON parsing
The backend is now consistent and ready for UI integration.
Today’s work wasn’t about big new features, it was about making the assistant reliable.
It can now:
- recognise tasks in natural language
- detect when a change affects a scheduled item
- surface rescheduling options cleanly
- speak in predictable JSON the UI can read
This is the first step toward a usable conversational layer.
Today was a turning point.
Instead of adding new endpoints, I focused on making the assistant actually able to act — reliably, deterministically, and without confusion.
This was the foundation needed before building the UI.
I created a full pending action system so LifeOS can now:
- ask for confirmation (“Should I move ‘run’ to 19:00?”)
- wait for the user’s answer
- execute the action only when the user says “yes”
- cancel cleanly on “no”
This added a real assistant behaviour loop for the first time:
NLP → intent → ask → confirm → execute → update UI
New features added:
- `create_pending_action`
- `get_current_pending`
- `clear_current_pending`
- persistent storage inside `data.json`
Inside /assistant/chat, I implemented deterministic routing:
- “yes”, “ok”, “sure”, “confirm” → apply pending action
- “no”, “cancel”, “ignore” → cancel pending action
- otherwise → normal NLP flow
This made the assistant’s behaviour stable and predictable for the UI.
I added backend functions that actually change the schedule:
- apply_reschedule(task_id, new_datetime)
- edit_task(task_id, fields)
- delete_task(task_id)
These directly mutate the JSON store so the UI updates instantly after confirmation.
They are used by both:
- `/assistant/chat`
- `/assistant/confirm`
The `/assistant/confirm` endpoint executes whichever pending action exists.
The UI will later call this when a “Yes” button is pressed.
Example response:
{
"assistant_response": "Okay, I moved 'run'.",
"ui": { "action": "update_task", "task_id": "…" }
}

Throughout the day we resolved several deep issues:
- the assistant confusing “run” with “gym”
- wrong task selected when multiple tasks shared the same title
- pending actions referencing the wrong task ID
- date parser overriding explicit dates incorrectly
- time parser failing on inputs like “8 am”, “6pm”, etc.
Fixes included rewriting the task matcher and the date/time parser.
The task matcher is now:
- time-aware
- date-aware
- strict in title-matching
- safe against fuzzy or misleading matches
(e.g., “run” no longer matches “brunch”)
The date/time parser now correctly interprets:
- tomorrow / today
- weekdays (Monday–Sunday)
- explicit numeric times (“5pm”, “08:30”)
- semantic times (morning, afternoon, evening)
These improvements were essential for correct, predictable, conflict-free rescheduling.
To prepare for the frontend, I finalised stable UI instruction formats such as:
{
"action": "apply_reschedule",
"task_id": "...",
"new_time": "17:00"
}

I also built a minimal web client that:
- sends chat messages to `/assistant/chat`
- displays backend responses
- shows pending actions
- supports Yes/No confirmation flows
This created the first fully working end-to-end conversational assistant loop.
Today’s work transformed LifeOS from a smart parser into a real assistant.
It can now:
- identify tasks accurately
- propose schedule changes
- ask for confirmation
- execute real mutations
- update the UI
- behave consistently across conversations
This stability was essential before continuing into the React Native UI phase.
Today wasn’t about visuals — it was about making LifeOS actually run on a real device and stabilising the conversational engine.
Most of the work focused on connecting, fixing, and making the assistant behave reliably, not designing final UI elements.
Improved natural-language understanding for:
- “add gym at 6pm”
- “work from 9 to 5”
- “add study session tomorrow at 8am”
- “meeting Friday afternoon”
Fixes included:
- detecting natural time ranges (`from X to Y`)
- auto-calculating duration + end time
- improved weekday mapping
- more accurate date/time interpretation
This made task creation consistent and predictable.
Before creating or rescheduling a task, the assistant now:
- checks for overlaps
- warns the user
- suggests the nearest free slot
This is the first step toward full conflict-awareness in the UI.
Created a new unified bootstrap endpoint returning:
- Today view
- Week stats
- Suggestions
- Conflict info
- Category colors
- Pending actions
This prepares the backend for proper app screens (Today, Week, Calendar).
Main achievement of the day.
Completed:
- Created the React Native + Expo project
- Added tab navigation (Today, Assistant, Explore)
- Built a functional chat screen
- Connected the iPhone Expo Go app to FastAPI backend
- Fixed network/localhost issues (using device IP instead of 127.0.0.1)
- Verified real device → backend communication
The UI is still minimal — the focus was functionality.
From the phone, I successfully:
- checked today/tomorrow schedules
- created events
- rescheduled tasks
- confirmed actions
- watched the JSON data update live
LifeOS officially became a mobile assistant, not just a backend.
Day 8 wasn’t a “design” day — it was a plumbing, fixing, and connecting day.
But it achieved something huge:
✨ LifeOS now runs fully on a real device and responds intelligently through natural-language chat.
Today was a lighter day, but an important one for aligning the direction of the mobile UI.
Building on yesterday’s backend → mobile integration, I focused on improving the structure and clarity of the Today View. This included:
- adding the first version of the time-aware greeting
- connecting the load meter from the backend to the UI
- cleaning the layout of the morning / afternoon / evening sections
- rendering free time blocks and early insights
- fixing several UI bugs (component imports, routing, and null mappings)
These are small but necessary steps toward a clean, minimal iOS-style interface.
I also finalised the visual direction for LifeOS:
a calm, neutral, minimal iOS aesthetic with moderately rounded components. This foundation will guide all UI improvements going forward.
Not a big feature day — but still consistent progress, keeping the momentum of the challenge.
Even though today’s progress was small, it felt meaningful. After several days of backend-heavy work, shifting attention toward the mobile UI made the project feel more “real” and closer to what I ultimately want LifeOS to become.
Defining the visual direction (calm, minimal, and iOS-native) created a sense of clarity for the rest of the challenge. The Today View is still early, but the structure is forming, and every small improvement makes the app feel more coherent.
Today was a quiet but important day focused on stabilising the mobile UI and preparing for the next stage of design work.
I spent most of the time:
- fixing UI bugs (imports, routing, undefined elements)
- resolving backend ↔ frontend naming mismatches
- cleaning the Today View layout for consistency
- improving task grouping and free-block rendering
- thinking through the broader app structure (Today → Week → Month)
Not a big visual day, but a strong foundational one — the app feels more stable and ready for the deeper design work ahead.
Even without major new features, today’s progress mattered.
Fixing small issues, improving consistency, and clarifying how the UI should flow helps prevent rework later and keeps the project aligned with the long-term vision.
Today was focused on getting the redesigned LifeOS Today View fully aligned with the backend. After making the strategic choice to use React Web for speed and design consistency (instead of the more complex React Native), I finalised the app’s core layout in Lovable. I then shifted my attention to rebuilding the logic and data flow needed to support this cleaner UI structure.
- Horizontal Day Scroller (WIP): Implemented the UI and began wiring logic to load tasks for specific dates.
- Energy Status Card: Connected the BalanceScoreCard to backend load calculations so the app now reflects daily workload and completion progress in real time.
- Task Structure Update: Replaced the old morning/afternoon/evening buckets with a simpler format:
- Scheduled tasks → tasks with time;
- Anytime tasks → flexible tasks without a time;
- Manual Task Creation: Implemented the AddTask modal; new tasks now appear instantly in the Today View and update the energy/load indicator.
To support the new frontend architecture, several backend components were rebuilt:
- Frontend Adapter Layer: Converts backend task objects into the format expected by the UI.
- Extended Task Model: Added duration, endTime calculation, and value/category mapping.
- Updated Today View Engine: Now returns structured data with grouped tasks, load level, and free blocks.
- New & Updated Endpoints: Added create/update/delete/move task endpoints in frontend-compatible format, and extended `/assistant/today` with optional date support.
This was a structural day, but an important one. The frontend and backend finally speak the same language thanks to the new adapter layer, and the Today View now feels coherent and responsive. There’s still design polishing ahead, but the new React Web foundation is strong enough to confidently build the more complex calendar views.
Today focused on completing two closely related foundations of LifeOS:
the Daily Check-In flow and the Energy Status logic that reflects daily workload honestly and consistently.
Together, these changes moved LifeOS closer to behaving like a real operating system — one that clearly separates execution, load, and reflection.
- Daily Check-In Flow: Completed the multi-step check-in modal allowing users to:
- Review and toggle completed tasks
- Handle incomplete tasks by moving them to future dates
- Write a daily reflection note
- Real-Time Task Updates: Ensured task completion and rescheduling happen immediately when the user interacts, not at the end of the check-in.
- Reflection as Snapshot: Designed the check-in to act as a historical record of the day, capturing what happened without mutating task state retroactively.
- Energy Status Behaviour: Finalised how the Today View energy indicator works so it reflects planned workload, not momentary progress.
- Frontend–Backend Sync: Fully wired the Lovable UI to backend endpoints so state stays consistent across Today, Calendar, Energy Status, and Check-In flows.
Several backend additions and refinements were made to support both check-ins and energy tracking:
- New Check-In Endpoints:
- `POST /checkins` — create or update a daily check-in
- `GET /checkins?date=YYYY-MM-DD` — retrieve a check-in for a specific day
- Check-In Persistence: Check-ins are stored in `data.json` as dated records containing:
- completed task IDs
- incomplete task IDs
- moved tasks (with new dates)
- reflection note
- timestamp
- Energy Status Calculation:
- Calculated fully on the backend
- Step 1: Sum total scheduled minutes for the day (anytime tasks excluded)
- Step 2: Compare against a daily capacity of 480 minutes (8 hours)
- Step 3: If total scheduled minutes ≥ 624 minutes (130% of capacity) → `Prioritize Rest`
- Step 4: Otherwise, compute a weighted task load:
- Anytime tasks → weight 0.5
- Scheduled ≤ 30 min → weight 1.0
- Scheduled 30–90 min → weight 1.5
- Scheduled > 90 min → weight 2.0
- Step 5: Map total weight to `Space Available`, `Balanced Pacing`, or `Prioritize Rest`
- Energy labels are fixed for the day and do not change as tasks are completed
- Task Operations During Check-In:
- Task completion toggles update task state immediately
- Task moves update task dates immediately and track original dates
- Clear Separation of Concerns: Task logic is deterministic and immediate; energy reflects planned demand; check-ins are purely reflective and historical.
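The energy steps above can be sketched as a single pure function. Note the final weight-to-label thresholds below are invented for illustration; the log does not state them:

```python
def energy_status(tasks: list[dict]) -> str:
    """Compute the fixed daily energy label from planned tasks.

    Each task dict: {"time": "HH:MM" or None, "duration_minutes": int}.
    """
    scheduled = [t for t in tasks if t.get("time")]  # anytime tasks excluded
    total_minutes = sum(t.get("duration_minutes", 0) for t in scheduled)
    if total_minutes >= 624:  # 130% of the 480-minute (8h) daily capacity
        return "Prioritize Rest"
    weight = 0.0
    for t in tasks:
        minutes = t.get("duration_minutes", 0)
        if not t.get("time"):
            weight += 0.5        # anytime task
        elif minutes <= 30:
            weight += 1.0
        elif minutes <= 90:
            weight += 1.5
        else:
            weight += 2.0
    if weight <= 3.0:            # assumed threshold
        return "Space Available"
    if weight <= 6.0:            # assumed threshold
        return "Balanced Pacing"
    return "Prioritize Rest"
```

Because the function only reads planned tasks, not completion state, the label stays fixed for the day, which is exactly the “honest load” behaviour described above.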
This was a key architectural day. Energy status, task execution, and daily reflection are now clearly separated, making the system more honest and predictable. Heavy days remain heavy even when handled well, progress is visible without rewriting reality, and check-ins act as trustworthy daily logs rather than hidden control mechanisms.
LifeOS now feels less like a task app and more like an operating system that understands load, action, and reflection as distinct layers.
Today focused on completing two core system layers in LifeOS: Reminders and Settings, with a strong emphasis on simplicity, consistency, and calm UX.
- Full CRUD reminders flow with backend sync
- Grouped views: Today, Upcoming, Past
- Two reminder types:
- Notify → sends a notification at a set time
- Show → appears on Today View only (no time required)
- Reminders surface directly on the Today screen
- Removed urgency levels and multiple notification counts to reduce complexity
- Clean, minimal structure with only essential options:
- Profile (user, timezone, language)
- Task categories & week start
- Reminder defaults
- Data & privacy actions
- Navigation and spacing aligned with Reminders page
- Centralized category color system
- Dynamic color usage across:
- Month Calendar
- Week Timetable
- Today View
- Removed hardcoded colors; added consistent fallback
- Added reminder update endpoint
- Ensured reminders without time are supported
- Backend remains source of truth with frontend sync
LifeOS now handles remembering, showing, and configuring in a calm, intentional way. The system feels more complete, less noisy, and closer to a real personal operating system.
Day 14 focused on implementing LifeOS’s Calendar system, introducing fully functional Month and Week views backed by efficient date-range loading. This completes the core planning layer and moves LifeOS closer to a true personal operating system.
- Month View — high-level overview and pattern recognition
- Week View — time-based execution and scheduling (scheduled tasks only)
GET /tasks/calendar?start={start}&end={end}
- Returns all tasks within a date range
- Optimised for month/week calendar loading
- Reduces per-day API calls
Existing endpoints reused for categories, notes, and check-ins.
Calendar.tsx
- Month / Week toggle
- Date navigation (buttons + swipe)
- Date-range task loading
- Category filtering
MonthCalendar.tsx
- Grid-based monthly layout
- Colored task blocks with truncation (`+N more`)
- Today highlighted with subtle border
- Category filtering
- Swipe to change month
WeekScheduleView.tsx
- Time grid (6:00–21:00)
- Scheduled tasks only
- Tasks positioned and sized by duration
- Today column emphasis
- Swipe to change week
CalendarFilters.tsx
- Collapsible filter UI (default collapsed)
- Multi-select category pills
- Filter state persisted in localStorage
- Added `loadTasksForDateRange()` to the Zustand store
- Cached task merging to avoid duplicates
- Single API call per month/week for performance
LifeOS now has a scalable, performant calendar that supports both long-term planning and daily execution, built on deterministic logic and ready for future intelligence layers.
Day 15 focused on turning the calendar from a read-only planning surface into an interactive execution layer.
The goal was to make Month and Week views usable in real workflows while keeping the system deterministic, calm, and performant.
This work intentionally prioritised interaction patterns, state flow, and consistency over visual novelty.
A dedicated Day Modal was introduced for Month view to support fast inspection and edits without navigating away.
Key behaviour
- Tasks ordered by priority: scheduled → anytime
- Completion progress indicator (completed / total)
- Quick task creation from the modal
- Category colours reduced to subtle left accents to avoid visual overload
- Swipeable tabs: Tasks and Photo & Notes
Context-aware rendering
- Past days: notes + friendly photo placeholder (no uploads)
- Today: full functionality (tasks, notes, photo upload)
- Future days: tasks only (progressive disclosure)
This keeps the modal lightweight while still functional.
The Week view now supports direct task manipulation, aligned with execution-focused use.
- Tap empty time slot → open task modal (scheduled by default)
- Single tap task → edit
- Double tap task → toggle completion
- Only scheduled tasks rendered (Anytime tasks intentionally excluded)
This keeps the timetable readable and avoids mixing intent-based tasks with time-bound execution.
A single AddTaskModal now handles both creation and editing, reused across Month, Week, and Today views.
Features
- Anytime / Scheduled toggle
- Start time + duration with quick presets
- Automatic end-time calculation
- Recurring configuration (weekly, period, custom)
- Reactive category selection from global store
Repeat logic is handled server-side to keep frontend state predictable.
- Added repeat configuration support to task creation flow
- Deterministic task instance generation for recurring rules
- Photo uploads stored on filesystem with JSON references
- Notes model simplified to support a single photo attachment
- Validation and graceful fallbacks for dates and files
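The deterministic instance generation can be sketched roughly like this. This covers only a simplified weekly rule; the real repeat logic also handles period and custom configurations:

```python
from datetime import date, timedelta

def expand_weekly(start: date, weekday: int, weeks: int) -> list[date]:
    """Deterministically generate instance dates for a weekly repeat rule.

    `weekday` follows date.weekday(): Monday == 0. The same rule always
    yields the same dates, so re-running generation is idempotent.
    """
    # Advance to the first occurrence on or after `start`.
    first = start + timedelta(days=(weekday - start.weekday()) % 7)
    return [first + timedelta(weeks=i) for i in range(weeks)]

# Every Wednesday for four weeks, starting Monday 2025-12-01:
dates = expand_weekly(date(2025, 12, 1), weekday=2, weeks=4)
# → [2025-12-03, 2025-12-10, 2025-12-17, 2025-12-24]
```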
This layer makes the calendar practically usable, not just visually complete.
There are still areas to refine — especially around edge cases, interaction polish, and minor UX inconsistencies — but the core interaction model now feels solid enough for daily use.
Most importantly, the system remains deterministic and predictable, which is critical if LifeOS is going to scale beyond a personal project.
Today focused on completing the authentication lifecycle by adding a real transactional email system.
This was a critical step to move LifeOS from “works locally” to a system that supports real users, secure flows, and production-grade UX.
The goal was reliability, clarity, and graceful failure — not just “sending emails”.
- Implemented a dedicated email service (`email_service.py`) using Resend
- Reads configuration from environment variables: `RESEND_API_KEY`, `EMAIL_FROM`, `EMAIL_ENABLED`
- Supports development and production modes
- Gracefully falls back to console logging when email delivery fails
- Explicit error handling for domain verification and delivery issues
- HTML + plain-text templates for:
- Email verification
- Password reset
- Clickable links with secure expiration:
- Verification: 24 hours
- Password reset: 15 minutes
- Personalized with username (e.g. “Hi Feruza,”)
- Verification token generated on signup
- Verification email sent immediately via Resend
- Existing unverified users can re-trigger verification
- Verified emails are enforced before sensitive actions
- Frontend automatically verifies when user clicks email link
- Secure token-based reset flow
- Reset email sent only if email is verified
- Frontend detects reset mode via URL
- Simplified UI: only new password + confirmation shown
- Verify Email page
- Reads token from URL
- Auto-verifies on load
- Clear success / error states
- Manual resend option if needed
- Auth page
- Automatically switches to reset-password mode
- Cleaner forms, fewer steps
- Inline validation and feedback
- `EMAIL_ENABLED` flag allows disabling emails in dev
- `FRONTEND_URL` used to generate correct verification/reset links
- Default sender works without a custom domain during development
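A minimal sketch of the graceful-fallback behaviour described above, with `_deliver` as a hypothetical stand-in for the real Resend call:

```python
import os

def _deliver(to: str, subject: str, html: str) -> None:
    # Stand-in for the real provider call; not configured in this sketch.
    raise RuntimeError("provider not configured")

def send_email(to: str, subject: str, html: str) -> bool:
    """Send via the provider when enabled; otherwise log to the console."""
    if os.getenv("EMAIL_ENABLED", "false").lower() != "true":
        print(f"[email disabled] to={to} subject={subject}")
        return False
    try:
        _deliver(to, subject, html)
        return True
    except Exception as exc:
        # Graceful fallback: an email failure must never break signup.
        print(f"[email failed: {exc}] to={to} subject={subject}")
        return False

ok = send_email("user@example.com", "Verify your email", "<p>Hi</p>")
```

With `EMAIL_ENABLED` unset, the call logs to the console and returns `False` instead of raising.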
The authentication system now supports:
- Real transactional emails
- Secure, time-limited verification and reset links
- Personalized communication
- Graceful fallbacks for development
- End-to-end flows that mirror production systems
The only remaining limitation is Resend’s free tier, which restricts delivery without a verified domain — a known and acceptable constraint at this stage.
This wasn’t about adding “email support” — it was about closing the loop on identity, trust, and recovery.
With email verification and password reset in place, LifeOS now has a complete, realistic authentication lifecycle.
Today was about locking authentication down properly.
Not adding features — but making sure what exists is secure, scalable, and safe to build on.
The goal: finish auth once, so it doesn’t need to be revisited while the rest of LifeOS evolves.
- Rate limiting (slowapi)
- Login / signup: 5 attempts per 15 minutes per IP
- Password reset / verification email: 1 per 5 minutes per IP
- Account lockout
- Locks after 5 failed login attempts
- Auto-unlock after 30 minutes
- No user-existence leakage (same error for all cases)
- Replaced `localStorage` with httpOnly cookies
- `Secure` (prod), `SameSite=Lax`, `httpOnly`
- Short-lived access tokens (30 min)
- Refresh token rotation
- Stored in DB with expiry
- Old token invalidated on use
- Logout revokes refresh tokens
- Frontend auto-refreshes tokens on `401`
- CORS locked down via `ALLOWED_ORIGINS`
- Security headers
- HSTS (prod only)
- X-Frame-Options: DENY
- X-Content-Type-Options: nosniff
- Minimal CSP (non-breaking)
- Structured audit logs for all auth events
- Login, logout, signup, password reset, verification, lockouts
- Includes timestamp, user, IP, user-agent
- Removed legacy / insecure logic
- Centralised auth utilities
- Added full technical documentation: `backend/AUTH.md`
Authentication now supports:
- Brute-force protection
- Secure cookie-based auth
- Refresh token rotation
- Account lockout
- Production-grade headers & CORS
- Full audit trail
Auth is production-ready behind HTTPS and can remain unchanged while:
- The database migrates
- The assistant layer grows
- New features are added
Today I made a deliberate architectural decision to move LifeOS away from JSON-based storage and fully migrate the backend to PostgreSQL.
At this stage of the project, JSON was starting to limit reliability and future scalability. Since I want LifeOS to eventually support real users, multi-device sync, and more advanced assistant logic, a proper database foundation was necessary.
- Migrated the entire backend from JSON file storage to PostgreSQL (Supabase)
- Finalised and deployed a production-ready database schema
- Set up async SQLAlchemy using `asyncpg`
- Built a repository adapter (`db/repo.py`) that:
- preserves the existing dict-based interface
- internally uses ORM models
- allows a clean swap without breaking the API
- Migrated existing data using a safe migration script with:
- dry-run support
- transaction handling
- ID preservation
- Refactored the backend to be fully async:
- authentication flows
- all main API endpoints
- shared business logic modules
- Replaced all JSON repository calls with async PostgreSQL access
The focus today wasn’t adding new features, but making sure the system is stable, scalable, and ready for real usage.
- PostgreSQL schema deployed
- Data successfully migrated
- All endpoints now read/write from the database
- JSON storage kept temporarily as a safety fallback
- Full endpoint and flow testing still pending
This was one of the most important technical days of the challenge so far.
Moving to PostgreSQL wasn’t about optimisation — it was about committing to LifeOS as a real system rather than a prototype. The backend now has a solid foundation, but I still need to test everything end-to-end and make small adjustments where necessary.
Yesterday was about moving LifeOS to PostgreSQL.
Today was about making sure that decision is actually real.
After the initial migration, there were still small remnants of the old JSON world — fallbacks, one-off scripts, temporary logic that made the system look migrated without fully committing to it. Today I removed that ambiguity.
- Finished the PostgreSQL migration end-to-end
- Removed all remaining JSON storage usage from the running application
- Migrated pending actions and assistant state fully into the database
- Updated intent handling and core flows to rely only on PostgreSQL
- Removed one-time migration scripts and temporary fixes
- Fixed a few migration-related bugs that only showed up once everything was database-backed
At this point, LifeOS no longer has two mental models. There is one source of truth, and it's the database.
- 100% PostgreSQL-backed
- No JSON fallbacks in active code
- All endpoints and business logic read/write from the database
- Old migration scripts removed or archived
- App runs consistently across sessions and restarts
Today I moved the LifeOS assistant from a rule-based command parser toward something that actually feels conversational. After finishing the PostgreSQL migration, the foundation was finally stable enough to focus on what the app is really about: the assistant itself.
This is also the day I formally named the assistant SolAI, from Sol, meaning light, clarity, and grounding.
SolAI is the quiet intelligence at the center of LifeOS - a personal assistant designed to help you stay organised, balanced, and intentional. Not something that demands attention, but something that supports it.
- Introduced an LLM-powered assistant module (`intelligent_assistant.py`) that:
- Generates natural responses instead of template strings
- Maintains short-term conversation context (last 10 messages)
- Pulls real user context automatically (tasks, schedule, conflicts, energy)
- Uses a system prompt to anchor SolAI’s calm, intentional tone
- Added conversation memory
- Backend now accepts conversation history
- Frontend sends the last 10 messages with each request
- Enables real follow-up questions instead of one-off commands
- Kept a hybrid architecture
- Language and reasoning handled by the LLM
- Task creation, rescheduling, and updates remain rule-based
- Natural conversation without sacrificing reliability
- Fixed conflict detection for anytime tasks
- Tasks without a specific time were incorrectly treated as conflicts
- Conflict checks now ignore anytime tasks
- Scheduling behaviour now matches real user expectations
- SolAI can hold context-aware conversations
- Follow-up questions feel natural
- Responses reflect the user’s actual day and workload
- Critical task operations remain predictable and safe
- Conflict detection behaves correctly across task types
This felt like a real shift in the project.
Until now, the assistant was useful, but mechanical. Today was about giving it presence and intention.
What matters to me isn’t making SolAI sound impressive, but making it feel trustworthy. Something you can open every day without friction. Something that helps you think more clearly rather than overwhelming you with suggestions or automation.
There's still a lot to refine, but for the first time, SolAI feels aligned with what LifeOS is meant to be: calm, supportive, and quietly capable.
Yesterday SolAI learned to have conversations. Today it learned to see patterns.
The problem I kept running into was that SolAI could only see what was already on screen — today's tasks, upcoming schedule, conflicts. It was helpful, but not insightful. It couldn't answer "How am I doing?" because it didn't know what "doing well" meant for me specifically.
So I built a system that looks back, not just forward.
- Created a historical context engine (`_get_historical_context()`)
- Fetches tasks from the past 30 days, not just today
- Uses a dual-query strategy: gets tasks from the past month OR tasks scheduled for last week (even if created today)
- This handles the case where someone adds tasks retroactively — SolAI still sees them
- Merges everything intelligently, avoiding duplicates by tracking task IDs
- Built a pattern analysis module (`pattern_analyzer.py`)
- Analyzes completion rates across time periods
- Identifies time preferences (when tasks are actually scheduled)
- Tracks category usage and scheduling style (anytime vs scheduled)
- Generates insights only when patterns are meaningful (e.g., only highlights categories if they're 30%+ of tasks)
- Prioritizes insights by importance — completion rates matter more than minor preferences
- Added weekly summary generation
- When you ask "How did my last week go?", SolAI now builds a detailed breakdown
- Groups tasks by day with completion stats
- Shows the actual date range and day names
- Handles edge cases gracefully (no data, limited data, etc.)
- Enhanced the system prompt architecture
- Now includes three new context sections:
- Historical summary (past 30 days stats)
- Patterns and insights (from the analyzer)
- Last week summary (detailed breakdown)
- Instructions emphasize meaningful insights over raw numbers
- Tells SolAI to focus on trends and comparisons, not just listing tasks
- Made responses more concise
- Updated instructions to aim for 2-4 sentences for most responses
- Pattern insights are limited to top 3-4 most meaningful
- Weekly summaries are formatted more compactly
- Progress questions focus on insights, not exhaustive lists
The pattern analysis runs server-side using Python's defaultdict for efficient time distribution tracking. Completion rates are calculated with proper null handling, and insights are filtered by significance thresholds to avoid noise.
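A rough sketch of that single-pass analysis, with illustrative field names rather than the real schema:

```python
from collections import defaultdict

def analyze(tasks: list[dict]) -> dict:
    """Completion rate plus a per-hour scheduling distribution in one pass.

    Null handling: tasks without a time don't enter the distribution,
    and an empty task list yields a 0.0 rate instead of dividing by zero.
    """
    by_hour: dict[int, int] = defaultdict(int)
    done = 0
    for t in tasks:
        done += bool(t.get("completed"))
        if t.get("hour") is not None:
            by_hour[t["hour"]] += 1
    rate = done / len(tasks) if tasks else 0.0
    return {"completion_rate": rate, "by_hour": dict(by_hour)}

stats = analyze([
    {"completed": True, "hour": 7},
    {"completed": True, "hour": 7},
    {"completed": False, "hour": None},  # anytime task
    {"completed": False, "hour": 19},
])
# completion_rate == 0.5; mornings (hour 7) dominate the distribution
```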
The historical context gathering uses SQLAlchemy queries with OR conditions to ensure comprehensive coverage:
```python
tasks_query = select(Task).where(
    and_(
        Task.user_id == UUID(user_id),
        or_(
            Task.date >= cutoff_date,  # Past 30 days
            and_(Task.date >= week_start, Task.date <= week_end),  # Last week
        )
    )
)
```

This dual-query approach means SolAI sees tasks scheduled for last week even if they were added today — important for retroactive planning.
Context merging happens with set-based deduplication, preserving all task metadata while avoiding duplicates. The weekly summary builder filters by date range and groups by day, keeping output concise to prevent token overflow.
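The set-based deduplication is simple enough to show in a few lines; this is a generic sketch rather than the exact implementation:

```python
def merge_tasks(*batches: list[dict]) -> list[dict]:
    """Merge query results, keeping the first occurrence of each task ID
    and preserving the full task metadata."""
    seen: set = set()
    merged: list[dict] = []
    for batch in batches:
        for task in batch:
            if task["id"] not in seen:
                seen.add(task["id"])
                merged.append(task)
    return merged

recent = [{"id": 1, "title": "Gym"}, {"id": 2, "title": "Read"}]
last_week = [{"id": 2, "title": "Read"}, {"id": 3, "title": "Call"}]
merged = merge_tasks(recent, last_week)  # ids 1, 2, 3 with no duplicates
```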
- SolAI can answer "How did my last week go?" with specific data
- Identifies patterns in completion rates, time preferences, and category focus
- Provides insights that go beyond what's visible in today's view
- Compares current performance to historical patterns
- Responses are more concise and focused on meaningful insights
This was a shift from reactive to proactive intelligence.
Before today, SolAI could help you manage what's in front of you. Now it can help you understand how you're actually doing — not just today, but over time. It can spot patterns you might not notice yourself, like "you complete morning tasks at 85% but evening tasks at 40%."
The technical challenge was balancing comprehensiveness with efficiency. I didn't want to load everything into every request, so I limited historical data to 30 days and weekly summaries to 7 days. The pattern analysis runs server-side to keep the frontend responsive.
What I'm most excited about is that this foundation enables something bigger: SolAI can now learn from your patterns and provide personalized insights. Not generic productivity advice, but observations specific to how you actually work.
The assistant feels less like a tool and more like a quiet observer who understands your rhythm.
Today I laid the groundwork for something I've been thinking about for a while: how SolAI should remember things about me over time, not just what I said in the last conversation.
The challenge was building a memory system that feels intentional and respectful, not like surveillance. I didn't want SolAI to remember everything — I wanted it to remember what actually matters, with clear boundaries and high confidence.
I created a complete memory foundation with four core components:
1. Memory Taxonomy
Defined four types of memories SolAI can store:
- Preferences (0.75 confidence threshold) — things I like or prefer
- Constraints (0.85 threshold) — hard boundaries I can't cross
- Patterns (0.70 threshold) — behaviors observed over time
- Values (0.80 threshold) — core principles I prioritize
Each type has clear examples, signals that indicate it, and explicit rules about what NOT to store. This isn't just categorization — it's a framework for what deserves to be remembered.
2. Memory Guardrails
Built validation rules that ensure only high-quality memories get stored:
- Content validation (length, format)
- Confidence thresholds specific to each memory type
- Security checks that block sensitive information
- Temporal validation (rejects temporary preferences)
- Type validation (ensures content actually matches the claimed type)
The system is designed to forget aggressively — if something doesn't meet the threshold, it doesn't get stored. This keeps the memory layer clean and trustworthy.
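A minimal sketch of guardrails like these. The confidence thresholds come from the taxonomy above; the length bounds and the temporal word list are assumptions for illustration:

```python
THRESHOLDS = {  # per-type confidence floors, as defined in the taxonomy
    "preference": 0.75,
    "constraint": 0.85,
    "pattern": 0.70,
    "value": 0.80,
}
TEMPORAL_WORDS = ("today", "this week", "right now")  # assumed signal list

def should_store(memory_type: str, content: str, confidence: float) -> bool:
    """Reject anything below its type's threshold, malformed content,
    or obviously temporary statements. Defaults to forgetting."""
    threshold = THRESHOLDS.get(memory_type)
    if threshold is None or confidence < threshold:
        return False
    if not (10 <= len(content) <= 300):  # assumed length bounds
        return False
    lowered = content.lower()
    return not any(word in lowered for word in TEMPORAL_WORDS)

ok = should_store("preference", "Prefers workouts in the mornings", 0.80)
rejected = should_store("preference", "Wants to focus on emails today", 0.90)
```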
3. Memory Repository
Created retrieval methods that can find relevant memories when needed:
- Get top N most relevant memories (by confidence + recency)
- Filter by type, source, or confidence level
- Bridge between memory candidates and actual storage
Importantly, this is retrieval-only right now. Memories aren't automatically injected into conversations yet — that comes later when I'm confident the extraction and validation work properly.
4. Memory Candidate System
Built a candidate model that represents potential memories before they're persisted. Candidates can be created from conversations, pattern analysis, or explicit user input, but they must pass validation before storage.
This separation between "potential memory" and "stored memory" gives me control over what actually gets remembered, even when extraction becomes automated.
The foundation is there, but the automation comes later. Right now, it's about having the structure, guardrails, and retrieval rules in place so that when I do add extraction, it has clear boundaries to work within.
This was a quiet but important day. Memory is one of those features that can easily become invasive if not designed carefully. By starting with the structure and guardrails instead of the automation, I'm making sure SolAI's memory will be intentional, transparent, and respectful.
Today I finished two major pieces: context awareness signals for behavior adaptation, and LLM-based memory extraction. Both are foundation-only - no UI, no user-facing features, just the intelligence layer working quietly in the background.
I built a system that extracts abstract signals from Notes and Reflections to help SolAI adapt its behavior over time. SolAI should notice when I'm overloaded, adapt its tone, and reduce pressure when I ignore suggestions — all without announcing the analysis.
I extract sentiment (positive/neutral/strained) and recurring themes like work pressure, health/energy, focus/distraction, and relationships using simple keyword-based analysis. The system also detects drift: overload, disengagement, and avoidance. These are silent flags for SolAI's reasoning.
I use photo existence as a signal too — frequent weekend photos suggest meaningful weekends, photo-heavy days mean avoiding scheduling pressure afterward.
All of this feeds into SolAI's system prompt as background context. SolAI adapts its tone, reduces suggestions when ignored, and decreases pressure during overload — all silently. I cache signals weekly to keep things efficient and ensure behavior adapts gradually.
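A toy version of the keyword-based signal extraction; the keyword sets here are illustrative, not the real lists:

```python
import re

THEME_KEYWORDS = {
    "work_pressure": {"deadline", "overtime", "pressure"},
    "health_energy": {"tired", "sleep", "energy"},
}
STRAINED = {"exhausted", "overwhelmed", "tired", "stressed"}
POSITIVE = {"great", "calm", "proud", "rested"}

def extract_signals(text: str) -> dict:
    """Classify sentiment and detect recurring themes from note text."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & STRAINED:
        sentiment = "strained"
    elif words & POSITIVE:
        sentiment = "positive"
    else:
        sentiment = "neutral"
    themes = [name for name, kws in THEME_KEYWORDS.items() if words & kws]
    return {"sentiment": sentiment, "themes": themes}

signal = extract_signals("Felt tired after overtime, deadline pressure all week")
```

The output feeds the system prompt as background context only; it is never surfaced to the user directly.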
I implemented LLM-based memory extraction that quietly identifies potential long-term memories from conversations. The LLM analyzes the user message, assistant response, and context signals, then outputs structured JSON with should_store, memory_type, content, and confidence.
The rules are strict: extract at most one per turn, prefer explicit statements, don't infer from assistant suggestions. I use context signals to down-weight temporary stress — if I'm having a bad week, that shouldn't become a permanent memory.
The extraction runs after the assistant response, converts to a MemoryCandidate, runs through MemoryPolicy validation, and persists only if valid. It fails silently — never blocks responses. I also updated the memories table schema to support this.
I tested with a few examples:
- "I prefer workouts in the mornings" → ✅ Stored as preference (confidence: 0.80)
- "I prioritize family time over work" → ✅ Stored as value (confidence: 0.85)
- "Today I want to focus on emails" → ✅ Correctly rejected (temporary)
The system is working. It's conservative by design — most conversations won't produce memories. Only explicit, high-confidence statements get stored.
This is identity learning, not behavior control. SolAI is quietly building a model of who I am — preferences, constraints, values, patterns. The system forgets aggressively — if something doesn't meet the threshold, it doesn't get stored. This keeps the memory bank sparse, clean, and trustworthy.
I also made SolAI more concise — responses are now 1-2 sentences by default, especially important for mobile users.
Today I closed the loop on memory: SolAI now uses stored memories to shape its behavior, but silently. No mentions, no announcements; memories just make the assistant feel more aligned.
I built relevance-based memory retrieval that finds the top 3-5 memories related to the current conversation. The system extracts keywords from the user's message, scores memories by keyword matches × confidence × recency, and injects only the most relevant ones into the system prompt.
The key is selectivity — not all memories, just the ones that actually matter for this conversation. If you're talking about workouts, it finds workout-related preferences. If you're scheduling, it finds scheduling constraints. Fallback to top memories by confidence if no keywords match.
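A rough sketch of the keyword-match × confidence × recency scoring; the 90-day decay window is an assumption:

```python
from datetime import datetime, timezone

def score(memory: dict, message_words: set, now: datetime) -> float:
    """Relevance = keyword matches x confidence x recency."""
    overlap = len(message_words & set(memory["content"].lower().split()))
    age_days = (now - memory["created_at"]).days
    recency = max(0.1, 1.0 - age_days / 90)  # assumed linear decay
    return overlap * memory["confidence"] * recency

def top_memories(memories: list[dict], message: str, n: int = 3) -> list[dict]:
    now = datetime.now(timezone.utc)
    words = set(message.lower().split())
    # Tie-break on confidence so the strongest memories still surface
    # when no keyword matches (the fallback described above).
    return sorted(
        memories,
        key=lambda m: (score(m, words, now), m["confidence"]),
        reverse=True,
    )[:n]

now = datetime.now(timezone.utc)
memories = [
    {"content": "prefers morning workouts", "confidence": 0.8, "created_at": now},
    {"content": "values family dinners", "confidence": 0.9, "created_at": now},
]
best = top_memories(memories, "When should I do my morning workouts", n=1)
```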
Memories now shape SolAI's judgment without being mentioned. I categorized memories by type and gave each type specific behavioral guidance:
- Preferences bias suggestions: If you prefer morning workouts, SolAI suggests morning times naturally
- Constraints limit proposals: If you can't work after 6pm, SolAI never suggests evening tasks
- Values influence tone: If you value work-life balance, SolAI is gentler about overload
- Patterns inform defaults, but user intent always overrides
The system prompt includes clear instructions: use preferences to bias, use constraints to limit, use values to shape tone. But never mention memories explicitly, they simply make SolAI feel more aligned with the user.
Memories don't control behavior — they inform it. If you explicitly want something that conflicts with a memory, your intent wins. Memories are hints, not rules.
The assistant should feel understood, not managed. When you say "schedule a workout" and SolAI suggests morning times because it knows you prefer mornings, it should feel natural - like the assistant just gets you, not like it's following a script.
Today I focused on getting LifeOS deployed and accessible to real users. I thought it made sense to handle deployment before building new features. This way I can test everything in a real environment, and other people can actually use the app.
- Bought and configured a domain (mylifeos.dev)
- Connected it to Resend for email delivery
- Updated email templates to use the new domain
- Multiple accounts can now sign up and use the app
- Deployed backend to Railway
- Set up PostgreSQL connection (Session mode for better deployment compatibility)
- Configured environment variables
- Backend is live and accessible
- Deployed frontend to Vercel
- Set up PWA configuration (manifest, service worker, icons)
- Configured routing and environment variables
- Frontend is live and accessible
- Fixed deployment issues
- Resolved CORS configuration for cross-domain cookies (Vercel ↔ Railway)
- Updated cookie settings to use `SameSite=None` for production
- Fixed CSS build errors (import order)
- Improved error handling for API calls
Both deployments are successful, but I'm still working through some edge cases:
- Login/signup flow needs final polish
- Mobile device detection for local development
- Some CORS edge cases to resolve
The app is functional in production, but I want to make sure the authentication flow is rock-solid before moving forward.
Deployment always takes longer than expected, but it's worth doing now. Having a real environment to test against makes everything else easier. Plus, it's satisfying to see LifeOS actually running on a real domain instead of just localhost.
Today was about moving from "it works on my machine" to "it works everywhere." Deployment is only half the battle; cross-browser stability, especially with cookies and mobile Safari, was the real challenge today.
- Auth Stability: Refactored login to use a native HTML `<form>` submission with `RedirectResponse`. This bypasses Safari's XHR cookie restrictions by treating the session set as a top-level navigation event.
- Backend Resilience: Performed a full syntax audit and resolved several `IndentationError` bugs in `main.py` that were blocking production startup. Standardized `MonthlyFocusRequest` to allow flexible bulk saving of goals.
- Vite Configuration: Locked the `base` path to `/` to prevent domain-relative URL rewriting in production.
The Explore page (formerly Align) is now the strategic brain of LifeOS.
- SVG Completion Trends: Replaced static bars with a custom SVG Line Chart. It visualizes a 4-week moving completion average calculated from historical check-ins and task states.
- Goals Carousel: Added support for up to 5 concurrent monthly goals. The UI features an auto-rotating carousel (5s) with manual touch/swipe support and a smooth dot-indicator navigation.
- Smart Nudges: The backend now uses a priority-weighted logic engine to suggest improvements based on category drift (e.g., when Health tasks are consistently moved) and peak productivity windows.
- SolAI Precision: Hardened the assistant's date resolution with explicit Calendar Guidance for relative dates (e.g., "next week"). Enforced a strict category selection policy where SolAI guesses but always asks for confirmation, ensuring better data integrity.
- Week Review Fix: Resolved a critical infinite re-render loop in the Weekly Overview by stabilizing date dependencies with `useMemo`.
Production is a stern teacher. The transition to a custom domain revealed several "invisible" bugs—from Safari's strict cookie policies to Vite's path handling. Resolving the backend indentation errors and stabilizing the frontend re-renders makes the OS feel significantly more professional. The Explore page is finally a place for meaningful insight, not just more data.
Today I built a goal-aware intelligence system that connects monthly goals to daily tasks, and added voice input capabilities to SolAI.
The system automatically matches completed tasks to monthly goals using semantic similarity, calculating progress without manual input. When you complete a task like "Reading a book 20 pp", it recognizes it relates to your goal "Read 2 books" and updates progress automatically.
SolAI now acknowledges goals when you create related tasks via chat ("Scheduled! This aligns with your goal to Read 2 books."). For manual task creation or completion, the floating assistant shows subtle popup notifications that auto-hide after 2 minutes - celebrating progress without being intrusive.
The goal matching engine uses keyword overlap, substring matching, and category inference to connect tasks to goals. Progress updates happen in the background when you complete tasks, and the system provides smart suggestions for neglected goals (only when contextually relevant).
Added speech-to-text to both the quick SolAI view and full-screen chat. Users can tap the microphone button to speak instead of typing, making task creation faster on mobile. The system checks browser compatibility and gracefully handles unsupported environments.
- Created `goal_engine.py` for semantic task-goal matching and progress calculation
- Integrated goal awareness into SolAI's system prompt for natural acknowledgments
- Added a temporary notification system in `CoreAIFAB` with 2-minute auto-hide
- Implemented a frontend goal matching utility for real-time detection
- Fixed category management issues (global categories can now be edited, proper persistence)
The goal system is subtle and helpful - it recognizes your work, celebrates progress, and suggests next steps only when it makes sense. No nagging, just intelligent support.
Added intelligent task suggestions to the New Task modal and refined the Explore page visualizations.
SolAI now suggests up to 6 tasks when you open the New Task modal. The system analyzes frequently scheduled tasks from the last 30 days and goal-related tasks, then auto-fills title, time, and category when you select one. The /tasks/suggestions endpoint uses pattern analysis to identify your most common task patterns and their typical scheduling times.
- Category Balance: Redesigned pie chart with gap segments and better visual hierarchy. Balance score displayed in center.
- Productivity Insights: Merged Consistency metrics into this section. Check-in frequency now appears with a progress bar between "Most productive day" and "Overall completion".
- Weekly Reflection: Reflection text centered next to photos, rotates every 5 seconds with highlights from the week.
- Created a `/tasks/suggestions` endpoint for task frequency and goal relationship analysis
- Enhanced category balance calculation to handle multiple category field formats
- Improved pie chart SVG rendering with gap segments
Day 30. We're almost there. Today I stepped back from features and focused on what makes software actually good: architecture, performance, and maintainability. This is the kind of work that doesn't show up in screenshots but makes everything feel faster, cleaner, and more professional.
The app was loading everything upfront—all pages, all components, all the time. Not great for a mobile-first experience. I implemented lazy loading for all heavy routes (Explore, Calendar, Notes, Settings, etc.) with React's lazy() and Suspense. Now only the Today page loads initially; everything else loads on-demand. The difference is noticeable, especially on slower connections.
Added a consistent PageLoader component so users see a smooth loading state instead of a blank screen. Small detail, big impact.
The Explore page had grown to 1,790+ lines. I broke it down into a proper component structure:
- WeeklySummary - Stats card with photos and reflections
- WeeklyPhotos - Photo rotation widget with typing animation
- CategoryBalanceView - Pie chart visualization
- EnergyPatternsView - Weekly energy trends
- ProductivityInsightsView - Completion metrics and peak times
- HabitFocusView - AI-powered forward-looking insights
- RotatingStats - Carousel container managing all stat views
Created two custom hooks to extract logic:
- useExploreData - Centralized data fetching with photo reloading on visibility change
- useRotatingStats - Carousel state management with swipe detection
Each component is now under 200 lines, focused, and testable. The main Explore component will shrink from 1,790 lines to around 400 once fully refactored. This is the kind of refactoring that pays dividends when you need to add features or fix bugs.
Removed debug console.log statements throughout the codebase. Kept error logging for production debugging, but eliminated the noise. The console is now clean and professional.
Spent time testing the lazy loading across different network conditions and verifying that the component split doesn't break any existing functionality. The rotating stats carousel still works, photos still load correctly, and all the analytics views render properly.
Also refined the component interfaces to be more type-safe and added proper prop validation. Small improvements that prevent bugs before they happen.
This is the work that separates a prototype from production software. Features are exciting, but architecture is what makes them sustainable. Breaking down the Explore page wasn't glamorous, but it means I can now add new analytics views without touching 1,700 lines of code. That's the real win.
The lazy loading makes the app feel snappier, especially on mobile. And having a clean component structure means future features will be easier to build and maintain. Day 30 is about setting up Day 31 (and beyond) for success.
We're almost at the finish line. One more day.
31 days ago, LifeOS was a blank script. Today, it's something I actually use.
LifeOS evolved from a simple intent parser into a calm, goal-aware system that helps me notice patterns over time. Not a productivity machine; just a quiet assistant that respects attention and context.
Core Features: Today View with energy status, Calendar (month/week views), SolAI (conversational assistant with memory), Explore (analytics dashboard), Check-Ins (daily reflection), Goals (monthly tracking), Reminders, Notes, and a Memory System.
Technical Foundation: Full-stack TypeScript/Python, PostgreSQL with async SQLAlchemy, deployed on Vercel/Railway, PWA support, secure authentication, AI-powered intent parsing.
Day 31 focused on polish: unified UI menu system, real-time data synchronization, optimized loading states, and improved error handling. These weren't headline features, but they're what separate a prototype from something you'd actually use every day.
The real value of AI isn't just in building it, but in learning how to use it well — in your own domain, in your own life. LifeOS taught me that building AI systems isn't about complexity or impressive demos. It's about creating something that fits quietly into your routine, respects your attention, and helps you see patterns you might miss otherwise.
The challenge ends here. The system doesn't. I'll keep using it, refining it, and learning from it; slowly, through real life.
✅ Challenge Complete — 31 days of consistent building
✅ Production Ready — Deployed and accessible at mylifeos.dev
✅ In Active Use — Daily tool for planning, reflection, and organization
While the 31-day challenge is complete, LifeOS will continue to evolve through real-world use. Key areas for future development:
- Testing & Quality — E2E testing, mobile compatibility, edge cases, automated test suite
- Data Security & Privacy — Security audit, GDPR compliance, data export/deletion, privacy policy
- Mobile Experience — Wrap PWA in native container (Capacitor), push notifications, offline support, app store deployment
- Performance — Further optimization, caching, query improvements, bundle size reduction
- Infrastructure — Monitoring/logging, backups, CI/CD improvements, scalability planning
- Documentation — API docs, setup guides, architecture decisions
The focus moving forward is on stability, security, and real-world usability rather than rapid feature development. LifeOS will grow organically based on actual usage patterns and needs.