I am a Data Engineer with experience building data pipelines, lakehouse platforms, and reporting systems. I have worked on projects in banking, finance, and the airline industry, as well as on internal analytics systems.
My main skills include AWS, Databricks, PySpark, SQL, Airflow, dbt, Kafka, MinIO, Iceberg, ClickHouse, Trino, PostgreSQL, Oracle DB, Docker, Kubernetes, and Terraform.
I have experience designing and developing batch data pipelines, processing change data capture (CDC) data, building staging/silver/gold layers, creating dimension and fact tables, and optimizing data for analytics and reporting. I also work with Business Analysts to gather requirements, analyze data schemas, and implement transformation logic.
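As a small, hypothetical sketch of the CDC-to-silver pattern described above (all names are invented for illustration; in practice this would typically be a PySpark/Delta `MERGE` rather than plain Python):

```python
def apply_cdc(silver, events):
    """Apply ordered CDC events (insert/update/delete) to a silver table.

    silver: dict keyed by primary key -> current row
    events: list of CDC records, each with an "op" field and an "id" key
    Returns the new table state without mutating the input.
    """
    table = dict(silver)  # copy current state, keyed by primary key
    for e in events:
        if e["op"] in ("insert", "update"):
            # upsert: latest event for a key wins
            table[e["id"]] = {k: v for k, v in e.items() if k != "op"}
        elif e["op"] == "delete":
            table.pop(e["id"], None)
    return table
```

The same last-event-wins upsert logic is what a Delta/Iceberg `MERGE INTO` statement expresses declaratively when landing CDC feeds into a silver layer.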
In previous projects, I helped migrate data jobs from AWS to Databricks, built on-premises lakehouse platforms, and processed large-scale data for business reporting. I focus on delivering clean, reliable, and maintainable data solutions.