I am a seasoned Data Engineer with a proven track record of designing, building, and optimizing modern data platforms that enable scalable analytics and AI/ML use cases. My experience spans both enterprise environments and fast-paced startups, where I have consistently delivered robust, production-ready data infrastructure from scratch.
In my current role, I have led the migration of legacy SSIS and SQL-based workflows to a modern Azure lakehouse built on Databricks, Spark, and Unity Catalog. This involved implementing a medallion architecture, optimizing Delta Lake tables for performance, and building automated CI/CD pipelines in Azure DevOps. By introducing dependency handling, retries, and alerting via Azure Data Factory and Databricks Jobs, I significantly improved both reliability and operational visibility.
I have deep expertise in SQL, Python, and Scala, with hands-on experience developing complex ETL/ELT pipelines, implementing data quality checks, and integrating large-scale external datasets via APIs, S3, and FTP. My work has included both batch and streaming solutions, leveraging Confluent Kafka and Flink to deliver low-latency, real-time data pipelines.
Previously, at Dream Games, I played a pivotal role in architecting the company’s data platform from the ground up on Google Cloud. This included designing ingestion and storage layers, implementing partitioning strategies, and supporting analytics with Looker dashboards. The platform became a key enabler of Dream Games’ rapid growth, helping the company reach unicorn status within six months.
Beyond technical delivery, I am passionate about governance and security. I have implemented role-based access control through Unity Catalog, integrated secret management with Azure Key Vault, and built data-expectation frameworks for schema evolution and quality monitoring. I also mentor junior engineers, sharing best practices in data modeling, pipeline optimization, and cloud-native development.
My approach balances technical depth with pragmatic decision-making: I start small, deliver incrementally, and scale systems to meet growing business needs. I thrive in collaborative, cross-functional environments, working closely with product managers, analysts, and data scientists to translate requirements into efficient technical solutions.
With hands-on expertise across Azure, Databricks, Spark, Kafka, Iceberg, and modern orchestration tools, and a strong focus on scalability, governance, and operational excellence, I bring the skills and mindset to design and lead next-generation data infrastructure. I am eager to contribute this experience to Syrup’s mission of transforming inventory decision-making with AI-powered, predictive systems.