Engineering AI platforms with rigorous data pipelines and measurable outcomes.
I’m Devarsh Radadia, a computer engineering student focused on building production-grade ML systems. I’m most energized by backend pipelines that turn messy data into reliable, decision-ready intelligence and keep stakeholders from inventing their own metrics.
My recent work spans FDA MAUDE ETL pipelines, RAG systems with citation grounding, and FastAPI services optimized for low-latency inference. I care deeply about traceability, evaluation and designing AI that earns trust. Not just applause.
Outside product delivery, I study system design patterns for agentic workflows, vector search tradeoffs and scalable AI infrastructure with an eye on performance and correctness.
- Dec 2025 to Present · AI Engineering Intern · Neujin Solutions
- 2023 · Software Engineering Intern · Yo4GIS
- B.E. Computer Engineering · GTU · CGPA 8.8
Depth across AI, data engineering and scalable backend systems.
Production-grade systems with measurable impact.
Each project highlights architecture decisions, ML pipelines and reliability tradeoffs from real deployments. No hand-wavy magic. Just systems that work.
ETL platform that ingests FDA MAUDE reports, validates data quality and powers analytics with traceable lineage.
RAG + agentic workflow that extracts structured insights from large PDF corpora and answers queries with citations.
Analytics portal that tracks cloud spend, flags anomalies and recommends cost-saving actions.
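The RAG project above answers queries with citations. A minimal sketch of that citation-grounding idea, using toy keyword-overlap scoring (the corpus, function names and scoring here are illustrative stand-ins, not the project's actual retrieval stack, which would use embeddings and ANN search):

```python
# Toy citation-grounded retrieval: score chunks by keyword overlap with the
# query and return the best matches with their source identifiers attached,
# so every answer can point back to where it came from.

def retrieve_with_citations(query, corpus, top_k=2):
    """corpus: list of (source_id, text) pairs."""
    q_terms = set(query.lower().split())
    scored = []
    for source_id, text in corpus:
        overlap = len(q_terms & set(text.lower().split()))
        scored.append((overlap, source_id, text))
    scored.sort(reverse=True)
    # Each hit carries its source_id forward as the citation.
    return [{"source": sid, "text": txt}
            for score, sid, txt in scored[:top_k] if score > 0]

corpus = [
    ("doc-1", "Device recall reports describe failure modes and corrective actions"),
    ("doc-2", "Cloud spend anomalies are flagged by comparing daily cost baselines"),
]
hits = retrieve_with_citations("device failure reports", corpus)
# hits[0]["source"] identifies the cited document
```

The point of the sketch is the shape of the return value: retrieval results never shed their source identifiers, so the answer layer can always attribute claims.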
Notes on RAG quality, vector search and inference at scale.
Written like a human who has actually debugged a pipeline at 2 AM. Sarcasm is a side effect, not the feature.
RAG Optimization Playbook
A practical blueprint for improving retrieval quality, reranking and source attribution in RAG systems, in plain language.
Vector Search Tradeoffs for Production
How to choose between ANN libraries, index sizes and metadata filters when latency matters and budgets are pretending to be infinite.
Scaling FastAPI for Inference
Concurrency patterns, batching strategies and caching to keep inference under 1s without setting the server on fire.
Evaluating LLM Reliability (Without the Drama)
A framework for measuring hallucinations, attribution quality and regressions that actually matter in production.
RAG Guardrails That Do Not Ruin UX
Guardrails should reduce risk without making users wait 12 seconds for a refusal.
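The FastAPI scaling note above leans on caching to keep inference under 1s. A minimal TTL cache in the cache-aside style is the core of that pattern; everything below is an illustrative sketch (a real deployment would reach for Redis or an async-aware cache, not an in-process dict):

```python
# Minimal TTL (time-to-live) cache in front of an expensive inference call.
# Hypothetical names throughout; this is the pattern, not production code.
import time

class TTLCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        self._store.pop(key, None)  # drop expired or missing entries
        return None

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl_seconds=30)

def predict(prompt):
    cached = cache.get(prompt)
    if cached is not None:
        return cached            # cache hit: skip the model entirely
    result = f"answer:{prompt}"  # stand-in for a real model inference call
    cache.set(prompt, result)
    return result
```

The cache-aside shape (check, miss, compute, store) is what keeps repeated prompts off the model; the TTL bounds staleness when the underlying model or data changes.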
Shipping AI infrastructure with measurable outcomes.
- Built FDA MAUDE ETL pipelines with retries, validation and structured logging.
- Designed RAG pipelines to answer compliance and safety questions with citations.
- Hardened ingestion with idempotent workflows and audit trails.
- Developed FastAPI services powering geospatial workflows.
- Introduced Redis caching and reduced response latency by 35%.
- Improved performance with async IO and query optimization.
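The ETL bullets above name four techniques together: retries, validation-style dedup keys, structured logging and idempotent workflows. They can be sketched in a few lines; every name below is a hypothetical stand-in, not the pipeline's actual code:

```python
# Sketch of an idempotent ETL load step: a record's content hash is the
# dedup key, failed writes are retried with backoff, and every event is
# emitted as a JSON log line for downstream auditing.
import hashlib, json, logging, time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("etl")

_seen = set()  # stand-in for a persistent dedup store

def record_key(record):
    # Canonical JSON so the same content always hashes the same way.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def load_record(record, sink, max_retries=3):
    key = record_key(record)
    if key in _seen:  # idempotent: re-delivered records are skipped
        log.info(json.dumps({"event": "skip_duplicate", "key": key[:12]}))
        return False
    for attempt in range(1, max_retries + 1):
        try:
            sink.append(record)  # stand-in for a database write
            _seen.add(key)
            log.info(json.dumps({"event": "loaded", "key": key[:12], "attempt": attempt}))
            return True
        except Exception as exc:
            log.warning(json.dumps({"event": "retry", "attempt": attempt, "error": str(exc)}))
            time.sleep(0.01 * attempt)  # tiny linear backoff for the sketch
    return False

sink = []
load_record({"report_id": "MW123", "device": "pump"}, sink)
load_record({"report_id": "MW123", "device": "pump"}, sink)  # duplicate, skipped
```

Hashing the whole record makes re-delivery safe without coordination, and JSON log lines are what make the audit trail queryable rather than grep-only.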
Download the full resume or scan the ATS-optimized version below. Yes, it is ATS friendly. You are welcome.
Ready to build reliable AI systems together.
Open to AI/ML internships, research collaborations and backend ML platform work. I respond quickly and value clarity, not 12-page requirements with no problem statement.