New Grad SWE | Backend & ML Systems | MS Data Science @ Texas A&M
📍 United States • Actively looking for opportunities
Research @ Texas A&M – Distributed LLM training pipelines (DeepSpeed, SLURM); reduced multimodal inference latency by 21%; built scalable benchmarking frameworks.
@ Viewzen Labs (2 yrs) – Architected an MLaaS platform handling 80+ concurrent training jobs; built data pipelines processing 5M+ daily events with Kafka and Docker; shipped a production prediction system.
- [SemEval-2025] Constrained Unlearning for LLMs – 🥈 2nd Place, Task 4
- [EACL 2026] Adaptive Helpfulness-Harmlessness Alignment with Preference Vectors
Open to Backend, ML Infra, and Platform Engineering roles.