AI Engineer building LLM workflows that turn complex business logic into production-grade AI systems.
Currently @ aiOla · Tel Aviv, Israel
I design, evaluate, and ship AI solutions that solve real operational problems, from multi-step LLM pipelines and agentic tagging systems to structured-output validation and evaluation frameworks.
- Saved more than $10,000/month by replacing manual annotation with a multi-model AI tagging pipeline
- Reduced turnaround from weeks to minutes by building LangGraph-based LLM orchestration tools
- Shipped multiple AI systems from prototype into production
AI/ML: LLM Orchestration · Prompt Engineering · RAG · Agentic Systems · Structured Outputs · Embeddings · Model Fine-Tuning · LLM Evaluation
Data: SQL · Snowflake · Data Analysis · Statistics
Tools: Claude Code · Cursor · Codex · OpenRouter · PyTorch
An agentic contract review pipeline built with Python and LangGraph. Ingests contracts, extracts clauses via multi-agent LLMs, classifies risk against a deterministic policy engine, auto-resolves low-risk cases, and routes high-risk clauses to a human review queue — with full audit tracing and business KPI tracking.
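The risk-classification step above can be sketched as a deterministic policy rule followed by a routing decision. This is a minimal illustrative sketch, not the production code: the clause types, policy set, and function names are assumptions.

```python
# Illustrative sketch of the deterministic policy-engine step: classify an
# extracted clause by risk, then route it. Clause types are hypothetical examples.
from dataclasses import dataclass

# Example policy: clause types the (assumed) policy treats as high risk.
HIGH_RISK_TYPES = {"indemnification", "liability_cap", "ip_assignment"}


@dataclass
class Clause:
    clause_type: str  # label produced by the multi-agent LLM extraction step
    text: str


def classify_risk(clause: Clause) -> str:
    """Deterministic rule: risk comes from the policy table, not from the LLM."""
    return "high" if clause.clause_type in HIGH_RISK_TYPES else "low"


def route(clause: Clause) -> str:
    """Low-risk clauses are auto-resolved; high-risk ones go to human review."""
    return "human_review_queue" if classify_risk(clause) == "high" else "auto_resolve"
```

Keeping the risk decision in a deterministic rule table (rather than asking the LLM) is what makes the routing auditable: every outcome traces back to an explicit policy entry.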
Multi-model annotation pipeline with parallel extraction, field-level agreement scoring, judge-model escalation, and selective human review. Replaced large portions of manual transcript tagging — saving ~$12K/month.
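The field-level agreement scoring with judge-model escalation can be sketched roughly as follows. Function names and the agreement threshold are illustrative assumptions, not the actual pipeline code.

```python
# Hypothetical sketch: score per-field agreement across parallel model outputs,
# then decide whether to auto-accept or escalate to a judge model.
from collections import Counter


def score_agreement(annotations: list[dict]) -> dict:
    """For each field, take the majority value and the fraction of models agreeing."""
    fields = set().union(*(a.keys() for a in annotations))
    scores = {}
    for field in fields:
        values = [a.get(field) for a in annotations if field in a]
        value, count = Counter(values).most_common(1)[0]
        scores[field] = {"value": value, "agreement": count / len(values)}
    return scores


def route(scores: dict, threshold: float = 0.67) -> str:
    """Auto-accept only if every field clears the (assumed) agreement threshold."""
    if all(s["agreement"] >= threshold for s in scores.values()):
        return "auto_accept"
    return "judge_model"  # escalate disagreements; humans review what the judge flags
```

For example, if two of three models tag a transcript's `intent` as `refund`, that field scores 0.67 agreement and the record auto-accepts at a 0.6 threshold but escalates at 0.7.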
Human-in-the-loop pipeline combining Triton ASR n-best outputs with Gemini via OpenRouter. Automatically processes clear transcripts, escalates ambiguous ones to LLM review, and routes edge cases to humans.
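The three-way routing described above can be sketched as a confidence-and-margin check over the ASR n-best list. The thresholds and the margin heuristic here are assumptions for illustration, not the production values.

```python
# Illustrative sketch of human-in-the-loop routing over ASR n-best output:
# accept a clearly winning hypothesis, send ambiguous lists to LLM review,
# and route low-confidence edge cases to humans.

def route_transcript(nbest: list[tuple[str, float]],
                     accept_conf: float = 0.90,
                     accept_margin: float = 0.15) -> str:
    """nbest: (hypothesis, confidence) pairs sorted best-first."""
    top_conf = nbest[0][1]
    runner_up = nbest[1][1] if len(nbest) > 1 else 0.0
    if top_conf >= accept_conf and (top_conf - runner_up) >= accept_margin:
        return "auto_accept"      # clear transcript: process automatically
    if top_conf >= 0.5:
        return "llm_review"       # ambiguous: let the LLM adjudicate the n-best list
    return "human_review"         # edge case: route to a person
```

The design point is that the LLM only ever sees the ambiguous middle band, which keeps both LLM cost and human workload proportional to actual uncertainty.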
"I build AI systems that don't just demo well — they reduce cost, cut turnaround, and run in production."