Ashutosh Jasrotia

$15/hr
Building fast, scalable ETL pipelines using Python, AWS, and Databricks.
Reply rate:
66.67%
Availability:
Full-time (40 hrs/wk)
Age:
23 years old
Location:
Kangra, Himachal Pradesh, India
Experience:
5 years
About

I am a Data Engineer with over five years of hands-on experience designing, automating, and scaling data systems across cloud environments. My focus is building reliable, high-performance data pipelines that transform raw, scattered data into clean, actionable insights. I work with AWS, Databricks, Redshift, Python, SQL, and modern orchestration tools to create end-to-end workflows that power analytics, machine learning, and real-time decision-making.

Throughout my career, I have led multiple cloud migration initiatives, modernizing legacy ETL systems and redesigning them into efficient, cost-effective architectures. I have built and optimized more than 50 ETL pipelines, improving performance, reducing operational costs, and ensuring that teams always have access to high-quality, trusted data. My experience spans both batch and streaming systems, where I use technologies like Spark, Delta Lake, and Airflow to build pipelines that scale with business needs.

I also bring strong backend engineering capabilities. I’ve developed and deployed REST APIs using frameworks such as FastAPI, Flask, and Ray Serve, supporting real-time data access and ML model-serving across GPU-backed clusters. Using Docker and Kubernetes, I’ve containerized workloads, automated deployments, and built distributed systems capable of handling large-scale data and demanding compute environments.

Beyond engineering, I have worked closely with analytics teams to implement data governance, improve data reliability, and enable self-service analytics using tools like Power BI and Tableau. I believe in creating systems that are not only powerful but also easy to maintain and extend.

My recent work includes contributing to LLM training, building realistic multi-turn conversation datasets, integrating enterprise tools, and improving tool-use reasoning for large AI models. This has expanded my skill set into the AI and model training domain, giving me a well-rounded understanding of both data engineering and AI pipeline development.

I take pride in solving complex problems with simple, elegant solutions. Whether it’s migrating a data warehouse, optimizing a pipeline, integrating APIs, deploying ML services, or automating workflows, I approach every project with a focus on clarity, reliability, and long-term scalability.

I am available for short-term projects, long-term collaborations, or specialized consulting. If you need a data engineer who can deliver clean, dependable, production-ready systems, let's work together.
