ALIProWeb is a production SaaS platform that processes and delivers subscriber PS/ALI (911 location) data to the Washington State ALI DBMS as part of a public safety data pipeline.
- Built and operated independently since 2019
- Supports real-world 911 location update workflows
- Integrates cloud-native processing with business systems
- Designed for correctness, reliability, and controlled execution
The system runs end-to-end in a production environment. The architecture emphasizes managed services and event-driven execution, which keeps the platform efficient, reliable, and operationally lightweight.
This repo focuses on architecture, workflows, and system design.
It does not include proprietary code, but reflects how the production system is structured and operated.
PS/ALI (Public Safety / Automatic Location Identification) data links a telephone number to a physical location.
During a 911 call, dispatchers rely on this data to determine where the caller is and route assistance.
This system sits in a public safety data pipeline.
While it is not a dispatch system, it directly affects the quality and availability of location data used during emergency response.
That drives different priorities:
- correctness of transformations
- reliability of processing
- traceability of changes
- safe handling of malformed input
Downstream systems assume this data is accurate.
ALIProWeb has been running in production since 2019, supporting workflows tied to 911 infrastructure.
It processes customer data and feeds systems relied on during emergency situations. The client base is primarily public sector entities.
The platform operates within NENA-defined standards.
This affects:
- how location data is structured and validated
- how inputs are normalized
- how updates are handled
- how downstream systems consume data
This is not a flexible data model:
- formats are externally defined
- input is often inconsistent
- compliance and normalization must coexist
- interoperability depends on precision
Much of the system complexity comes from these constraints.
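The normalization burden these constraints create can be sketched with a small example. The field width and suffix map below are purely illustrative, not the actual NENA record layout:

```python
# Illustrative normalization pass. The suffix map and the 60-character
# field width are hypothetical placeholders, not the real NENA layout.
SUFFIX_MAP = {"AVENUE": "AVE", "STREET": "ST", "BOULEVARD": "BLVD", "ROAD": "RD"}

def normalize_street(raw: str, width: int = 60) -> str:
    """Uppercase, collapse whitespace, abbreviate the trailing street
    suffix, and pad to a fixed width so downstream parsers always see
    a stable layout regardless of how inconsistent the input was."""
    parts = raw.upper().split()
    if parts and parts[-1] in SUFFIX_MAP:
        parts[-1] = SUFFIX_MAP[parts[-1]]
    value = " ".join(parts)
    if len(value) > width:
        # Externally defined formats cannot flex; reject rather than truncate.
        raise ValueError(f"street exceeds field width {width}: {value!r}")
    return value.ljust(width)
```

The key point is the last branch: because the output format is externally defined, oversized input is rejected rather than silently truncated.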
At a high level, the system ingests data, stages it durably, processes it through a controlled workflow, delivers validated output to downstream ALI systems, and reports status back to the end user.
- Deterministic processing pipeline
- Hybrid execution model (serverless + containers)
- Integration with business systems
- Controlled operational layer for replay and diagnostics
[ Customer Portal / Web API ]
│
▼
[ Object Storage / Ingestion Boundary ]
│
▼
[ Processing Layer ]
│
├───────────────┬───────────────────────┐
▼ ▼ ▼
[ Serverless ] [ EC2 + Containers ] [ Systems Manager ]
│ │ │
└──────┬────────┴──────────────┬────────┘
▼ ▼
[ Business Integrations ] [ Control Plane ]
│
▼
[ Downstream ALI DBMS ]
- Customer ingestion via portal and API
- Durable staging and replayable ingestion
- Data normalization and transformation
- Workflow tracking through business systems
- Controlled delivery to ALI DBMS
- Centralized configuration and secrets
- Remote operational control
- Hybrid compute model
- Infrastructure managed via CloudFormation
All inbound data is written to object storage before processing.
This enables replay, auditability, and separation of intake from execution.
Processing is staged:
- ingestion
- validation
- transformation
- tracking
- submission
- reconciliation
Each stage is observable and recoverable.
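The staged pipeline can be sketched as an ordered stage runner that records per-stage status, so a failed batch resumes at the failing stage instead of re-running from scratch. The handler wiring here is illustrative:

```python
from enum import Enum

class Stage(str, Enum):
    # Stages run in definition order.
    INGESTION = "ingestion"
    VALIDATION = "validation"
    TRANSFORMATION = "transformation"
    TRACKING = "tracking"
    SUBMISSION = "submission"
    RECONCILIATION = "reconciliation"

def run_pipeline(record: dict, handlers: dict) -> dict:
    """Run each stage in order, recording status per stage. A failure
    stops the run at that stage; nothing downstream executes, and the
    status map shows exactly where to resume."""
    status = {}
    for stage in Stage:
        try:
            record = handlers[stage](record)
            status[stage.value] = "ok"
        except Exception as exc:
            status[stage.value] = f"failed: {exc}"
            break
    return status
```

Stopping at the first failure is what makes each stage recoverable: the status map is the resume point.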
Workloads run across multiple environments:
Serverless
- event-driven
- short-lived
- scalable
EC2 + Containers
- long-running
- batch processing
- persistent integrations
Systems Manager
- operator-triggered workflows
- diagnostics
- recovery
Configuration and secrets are centrally managed:
- API credentials
- environment-specific config
- separation from code
- access controlled via IAM
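A minimal sketch of the centralized-config pattern, with the fetcher injected so the caching behavior can be shown without AWS access. Parameter names are hypothetical; in production the fetcher would wrap a call such as SSM Parameter Store's `get_parameter(Name=..., WithDecryption=True)`:

```python
class Config:
    """Cache-on-first-read config lookup. The fetch callable is injected;
    in production it would wrap an SSM Parameter Store client, keeping
    credentials out of code and access governed by IAM."""

    def __init__(self, fetch):
        self._fetch = fetch
        self._cache: dict[str, str] = {}

    def get(self, name: str) -> str:
        if name not in self._cache:
            self._cache[name] = self._fetch(name)
        return self._cache[name]
```

Caching matters for serverless workloads: a warm invocation reuses the value instead of re-hitting the parameter store.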
Operational actions run through controlled interfaces rather than direct infrastructure access.
This includes:
- replay
- retries
- diagnostics
- maintenance
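Replay can be sketched as selecting staged objects by customer and date partition; the key layout mirrors a hypothetical staging scheme, and the actual re-drive would be issued through a Systems Manager workflow rather than direct infrastructure access:

```python
def select_for_replay(keys: list[str], customer_id: str, day: str) -> list[str]:
    """Pick staged objects for an operator-triggered replay, scoped to one
    customer and one date partition. The 'inbound/<customer>/<Y/m/d>/'
    layout is illustrative."""
    prefix = f"inbound/{customer_id}/{day}/"
    return sorted(k for k in keys if k.startswith(prefix))
```

Scoping replay to an explicit partition keeps the blast radius of an operator action small and auditable.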
The platform is defined using CloudFormation.
- environments are reproducible
- changes are versioned
- infrastructure is not manually configured
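As a flavor of the templated approach, a fragment like the following would define a versioned staging bucket. The resource name and bucket naming are hypothetical, not the production template:

```yaml
# Illustrative CloudFormation fragment; names are placeholders.
Resources:
  StagingBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "aliproweb-staging-${AWS::AccountId}"
      VersioningConfiguration:
        Status: Enabled
```

Versioning on the staging bucket complements the replay model: an overwritten object remains recoverable.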
| Workload Type | Execution Model |
|---|---|
| Event-driven ingestion | Serverless |
| Lightweight validation | Serverless |
| Short transformations | Serverless |
| Long-running jobs | EC2 + Containers |
| Batch processing | EC2 + Containers |
| Persistent integrations | EC2 + Containers |
| Retry / replay | Systems Manager |
| Diagnostics | Systems Manager |
docs/ → architecture and design documentation
examples/ → sample payloads
adr/ → design decisions
supplemental/ → supporting materials
- separation of concerns
- auditability
- replayability
- least-privilege security
- operational simplicity
The production application code is maintained in AWS CodeCommit.
This repo focuses on architecture and system design.
The system was designed and implemented end-to-end by a single engineer.
This includes:
- architecture
- workflows
- infrastructure (CloudFormation)
- integrations
- operational tooling
- configuration and secrets
- lifecycle management
The architecture diagram shows how the workflow core connects to identity, state, business systems, and platform control:
- runtime execution across multiple environments
- supporting services for identity, state, and configuration
- integration with CRM and accounting systems
- delivery into the ALI DBMS