I am a Data Engineer experienced in building scalable data pipelines and data processing solutions with Databricks, PySpark, and Delta Lake. I specialize in designing ETL pipelines, transforming large datasets, and implementing efficient data workflows for analytics and reporting.
I also work with big data technologies such as Apache Spark, cloud data platforms, and modern data architectures like the Medallion Architecture. My work focuses on turning structured and semi-structured data, such as JSON, CSV, and log files, into reliable, optimized pipelines.
I am passionate about data engineering and continuously learn new technologies to improve data processing performance and scalability. I focus on writing clean, maintainable code and delivering reliable solutions that help businesses make better data-driven decisions.
Key Skills:
- Databricks, PySpark, and Apache Spark
- Delta Lake and the Medallion Architecture
- ETL pipeline design and data workflows for analytics and reporting
- Structured and semi-structured data (JSON, CSV, logs)
- Cloud data platforms