I’m a detail-oriented Data Engineer with hands-on experience building reliable, scalable, and efficient data pipelines. I specialize in turning raw data into business insights using modern cloud technologies and data engineering tools.
My core strengths are in cloud platforms such as Azure and AWS, where I’ve built robust ETL workflows with Azure Data Factory, Databricks (PySpark), and Synapse Analytics. I’m also skilled at handling large datasets with SQL, Python, and Apache Spark, and I have experience with Delta Lake and modern data lakehouse architectures.
I have delivered end-to-end solutions covering data ingestion, transformation, cleansing, and loading into data warehouses and visualization tools such as Power BI. I’m comfortable in both batch and streaming environments, and I follow best practices for data security and automation using Azure Key Vault, CI/CD pipelines, and version control.
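As a small illustration of the ingest → cleanse → load pattern described above, here is a sketch in plain Python (rather than PySpark, so it runs anywhere); the field names ("order_id", "amount") are hypothetical examples, not from a real project:

```python
# Illustrative only: a minimal ingest -> cleanse -> load pipeline.
# Field names ("order_id", "amount") are hypothetical.

def cleanse(records):
    """Drop incomplete rows and normalize types."""
    clean = []
    for row in records:
        if row.get("order_id") is None or row.get("amount") in (None, ""):
            continue  # cleansing step: skip rows missing required fields
        clean.append({
            "order_id": str(row["order_id"]).strip(),
            "amount": float(row["amount"]),
        })
    return clean

def load_totals(records):
    """Aggregate cleansed rows into a simple summary, standing in for a warehouse load."""
    totals = {}
    for row in records:
        totals[row["order_id"]] = totals.get(row["order_id"], 0.0) + row["amount"]
    return totals

raw = [
    {"order_id": " A1 ", "amount": "10.5"},
    {"order_id": "A1", "amount": 4.5},
    {"order_id": None, "amount": 99},  # dropped during cleansing
    {"order_id": "B2", "amount": ""},  # dropped during cleansing
]
print(load_totals(cleanse(raw)))  # {'A1': 15.0}
```

In production, the same logic scales out as PySpark DataFrame transformations on Databricks, with the load targeting Delta Lake or a Synapse warehouse instead of an in-memory dict.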
I enjoy solving data challenges and take pride in delivering clean, maintainable, and well-documented code. Whether you need to migrate data, build pipelines, or set up a cloud-based data platform, I can help bring your data vision to life.
Let’s connect to discuss how I can support your project goals with modern, efficient data engineering solutions.