KEY SKILLS AND CHARACTERISTICS
Operating Systems: Windows 95/98/2000/NT/XP/7/8/10, Windows Server 2003/2008, UNIX (Linux).
Programming Languages: SQL, PL/SQL, Unix shell scripting, Python.
Databases: Oracle 10g/11g, Snowflake, MS SQL Server, MySQL (MariaDB), Hive, Impala
ETL Tools: Informatica 9.x/10.x, SAP BODS, Ab Initio, Azure Data Factory/Data Lake, SAS
Data Modeling: Star-Schema Modeling, Snowflake Modeling, Fact & Dimension Tables
Big Data/Spark Ecosystem: Hive, PySpark, Spark SQL
WORK EXPERIENCE
Senior Data Engineer
Client: Royal Bank of Canada, Toronto, Canada
Sept 2024 – Present
Role and Responsibilities
•Designed and built robust PySpark and Python-based frameworks to support flexible, scalable ETL pipelines.
•Managed end-to-end handling of sensitive client data, ensuring compliance with regulatory standards and RBC’s internal data governance policies.
•Developed and maintained parameterized, modular data transformation layers to support multiple downstream consumers and reporting tools (see the sketch below)
•Collaborated with cross-functional teams (security, infra, analytics) to align data architecture with evolving business and compliance needs.
•Enhanced existing data workflows with unit testing, validation logic, and comprehensive logging to support better debugging and traceability.
Environment:
PySpark, Spark SQL, Hive, HDFS, Cloudera, Oracle, SQL Server, Python, Unix/Linux, Git, Jira, Confluence
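
Illustrative sketch (Python/PySpark) of the parameterized transformation-layer pattern referenced above; the table names, columns, and config structure are assumptions for illustration, not RBC specifics:

    from pyspark.sql import SparkSession, DataFrame
    from pyspark.sql import functions as F

    # Hypothetical job config; in practice this would be loaded from YAML/JSON.
    CONFIG = {
        "source_table": "staging.transactions",        # placeholder name
        "target_table": "curated.transactions_daily",  # placeholder name
        "partition_col": "txn_date",
        "required_cols": ["account_id", "txn_date"],
    }

    def transform(df: DataFrame, cfg: dict) -> DataFrame:
        """Reusable validation + enrichment step shared by all pipelines."""
        df = df.dropna(subset=cfg["required_cols"])  # basic validation rule
        return df.withColumn("load_ts", F.current_timestamp())

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("etl_framework").enableHiveSupport().getOrCreate()
        out = transform(spark.table(CONFIG["source_table"]), CONFIG)
        out.write.mode("overwrite").partitionBy(CONFIG["partition_col"]).saveAsTable(CONFIG["target_table"])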
Hadoop Data Engineer (Maintenance)
Client: Royal Bank of Canada, Toronto, Canada / TATA CONSULTANCY SERVICES
Mar 2022 – Sept 2024
Role and Responsibilities
•Developed and managed data pipelines using Spark Scala and Spark SQL to process large-scale datasets in a Hadoop ecosystem.
•Created reusable ETL frameworks to automate ingestion and transformation from multiple structured and semi-structured sources.
•Installed and configured Cloudera CDH 7.8.1 clients on development and production edge nodes.
•Wrote complex Hive/Impala queries, optimized Spark jobs, and ensured high-performance data retrieval and transformation.
•Implemented data quality checks and monitoring mechanisms to maintain accuracy and consistency (see the sketch below).
•Worked with mainframe systems to manage JCL jobs and integrated legacy data sources.
•Participated in Agile ceremonies including daily stand-ups, sprint planning, retrospectives, and collaborated across teams for issue resolution.
•Contributed to DevOps efforts, incident resolution, and UCD deployment processes.
•Mentored junior developers and participated in architectural discussions to scale data solutions.
Environment:
Hive, Flat Files, PySpark SQL, Mainframe JCL, Unix, Jira (Confluence), Oracle, MS SQL Server
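
A minimal sketch of the data quality checks mentioned above, assuming a Hive-backed table; the table, key column, and tolerance are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq_checks").enableHiveSupport().getOrCreate()

    TABLE = "curated.accounts"   # hypothetical table
    KEY = "account_id"           # hypothetical key column
    MAX_NULL_RATIO = 0.01        # hypothetical tolerance

    df = spark.table(TABLE)
    total = df.count()

    # Check 1: the key column must be (near-)complete.
    nulls = df.filter(F.col(KEY).isNull()).count()
    assert nulls <= MAX_NULL_RATIO * max(total, 1), f"{KEY}: too many nulls ({nulls})"

    # Check 2: the key column must be unique.
    dupes = df.groupBy(KEY).count().filter("count > 1").count()
    assert dupes == 0, f"{dupes} duplicate keys in {TABLE}"

    print(f"DQ checks passed for {TABLE}: {total} rows")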
Hadoop Data Engineer
Client: Economical Insurance, WFH, India / TATA CONSULTANCY SERVICES
June 2021 – Feb 2022
Role and Responsibilities
•Attended daily stand-up, sprint planning, and retrospective meetings
•Analyzed the existing SAS code and architecture and proposed a new architecture.
•Worked with business analysts to understand requirements and delivered solutions every sprint
•Designed SQL scripts for each requirement as reusable, YAML-driven Hadoop/Spark code (see the sketch below)
•Led and guided the entire team in understanding the existing architecture and SAS code.
•Applied SQL and PySpark SQL best practices across the implementation.
•Designed and executed highly complex SQL in Hive/Impala using Hive built-in functions.
•Mitigated data risks when handling different source systems: Oracle, SQL Server, and flat files
•Determined, committed and hardworking individual with strong communication, interpersonal and organizational skills.
Environment:
Hadoop Eco System, Impala, Hive, Flat Files, PySpark SQL, Autosys, Unix, Jira (Confluence), Oracle, MS SQL Server, SAS
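
A minimal sketch of the YAML-driven reusable SQL pattern described above; the YAML layout, file name, and query are assumptions for illustration:

    import yaml  # PyYAML
    from pyspark.sql import SparkSession

    # Hypothetical job spec, e.g. jobs/policy_summary.yaml:
    #   target: curated.policy_summary
    #   sql: |
    #     SELECT policy_id, SUM(premium) AS total_premium
    #     FROM staging.policies
    #     GROUP BY policy_id

    def run_job(spark: SparkSession, path: str) -> None:
        """Load a YAML job spec and execute its SQL against Hive tables."""
        with open(path) as f:
            job = yaml.safe_load(f)
        spark.sql(job["sql"]).write.mode("overwrite").saveAsTable(job["target"])

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("yaml_sql_runner").enableHiveSupport().getOrCreate()
        run_job(spark, "jobs/policy_summary.yaml")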
Azure Data Engineer
Client: Euroclear (Brussels, Belgium), WFH, India / TATA CONSULTANCY SERVICES
Mar 2021 – May 2021
Role and Responsibilities
•Attended daily stand-up, sprint planning, and retrospective meetings
•Created a new architecture using Azure Data Factory/Data Lake for the existing solution
•The engagement centered on designing an architecture to implement Azure cloud for the Cloudera source while adding new investment-banking Liquidity and Reference Data layers
•Attended daily client knowledge-transfer meetings covering their existing architecture and the investment-banking capital-markets domains of liquidity and reference data (corporate actions)
•Designed the transformations using Azure Data Factory/Data Lake
Environment:
Azure Data Factory, Azure Data Lake, SQL
ETL Developer
Client: BNP Paribas Fortis, WFH, India / TATA CONSULTANCY SERVICES
Feb 2020 – Feb 2021
Role and Responsibilities
•Attended daily stand-up, sprint planning, and retrospective meetings
•Created reusable transformations to move data from the operational data source to the data warehouse.
•Used variables and parameters in the mappings to pass values between mappings and sessions.
•Performed a performance evaluation of the ETL for the full load cycle.
•Designed complex SQL and stored procedures in the Snowflake database and called them from Informatica mappings (Stored Procedure transformation) or workflows (Command task); see the sketch below
•Designed multiple UNIX shell scripts to streamline mapping creation, address performance issues, and automate data validation
•Designed the ETL processing in Snowflake stored procedures using JavaScript.
•Prepared project-level documents to improve knowledge sharing
•Worked on performance tuning and issue fixes
Environment:
Informatica 10.4, Oracle, MS SQL Server, Unix, Autosys, Jira (Confluence), Snowflake Database
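
On the project the Snowflake procedure was invoked from Informatica; purely as an illustration, the same call issued from Python via the Snowflake connector, with the connection details and procedure name hypothetical:

    import snowflake.connector  # pip install snowflake-connector-python

    # All connection details and the procedure name below are placeholders.
    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="***",        # use a secrets manager in practice
        warehouse="ETL_WH",
        database="DW",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        # Call a JavaScript stored procedure that performs one ETL step.
        cur.execute("CALL DW.PUBLIC.LOAD_DAILY_FACTS(%s)", ("2021-01-31",))
        print(cur.fetchone())  # the procedure's return/status value
    finally:
        conn.close()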
ETL/TDM Developer / Onshore Lead
Client: Elisa Corporation, Helsinki, Finland / TATA CONSULTANCY SERVICES
Dec 2017 – Jan 2020
Role and Responsibilities
•Led and guided the team from the client location (onshore) through the overall requirements design in Agile methodology
•As Informatica administrator, installed hotfixes, maintained the repositories, and fixed Informatica server issues
•Performed Informatica installations (patch fixes, hotfixes, and software), bounced/restarted services, and validated the changes.
•Performed Informatica installations for PowerCenter tools (Mapping Designer, Workflow Designer, Workflow Monitor, and Repository Manager) and Informatica TDM tools.
•Analyzed the PII elements and the data model
•Implemented the GDPR design using Informatica TDM (see the masking sketch below)
•Developed the data masking rules and masking plans with the corresponding Informatica workflows
•Designed UNIX shell scripts to automate data validation
•Designed reusable code in Windows batch script and PowerShell scripting
•Prepared project-level documents to improve knowledge sharing
•Worked on performance tuning and issue fixes
Environment:
Informatica 10.2, Informatica TDM, MySQL (MariaDB), Oracle, MS SQL Server, Windows PowerShell Scripting, Jira (Confluence), Informatica Admin
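
The production masking was built with Informatica TDM rules and plans; purely as an illustration of the underlying idea, a minimal Python sketch of deterministic PII pseudonymization, with the salt and column names assumed:

    import hashlib

    SALT = "rotate-me"                # hypothetical salt; manage via a vault
    PII_COLUMNS = {"email", "phone"}  # hypothetical PII columns

    def mask_value(value: str) -> str:
        """Deterministically pseudonymize a value (same input -> same token)."""
        digest = hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()
        return "MASKED_" + digest[:12]

    def mask_record(record: dict) -> dict:
        """Return a copy of the record with PII columns masked."""
        return {k: mask_value(v) if k in PII_COLUMNS and v else v
                for k, v in record.items()}

    print(mask_record({"customer_id": "42", "email": "a@example.com", "phone": "555-0100"}))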
ETL/TDM Developer / Technical Lead
Client: Bank of America, Hyderabad, India / TATA CONSULTANCY SERVICES
Oct 2015 – Nov 2017
Role and Responsibilities
•Analyzed the existing MS Excel VB macro code and architecture and proposed a new architecture.
•Designed the mapping documents and guided the business analyst on the technical aspects.
•Led and guided the entire team in understanding the existing architecture and Excel VB macro code.
•Designed the high-level and low-level architecture and delivered a document for every phase to the client.
•Involved in the physical architecture implementation and guided the team.
•Proposed the best solutions for the newly proposed architecture.
•Implemented the Informatica code across a range of new scenarios, following Informatica coding best practices.
Environment:
Informatica 10.2, MS Excel VB, Oracle, Unix Shell Scripting, Autosys, Sybase
ETL Developer
Client: Kaiser Permanente, Hyderabad, India / COGNIZANT TECHNOLOGY SOLUTIONS
May 2014 – Sept 2015
Role and Responsibilities
•Prepared technical designs/specifications for data extraction, transformation, and loading.
•Worked with the Informatica utilities Source Analyzer, Warehouse Designer, Mapping Designer, Mapplet Designer, and Transformation Developer.
•Analyzed the sources and transformed, mapped, and loaded the data into targets using Informatica PowerCenter Designer.
•Performed performance tuning to improve data extraction, processing, and load times without affecting business runtime in the production environment.
•Created reusable transformations to load data from operational data source to Data Warehouse and involved in capacity planning and storage of data.
•Used Variables and Parameters in the mappings to pass the values between mappings and sessions.
•Performed the performance evaluation of the ETL for full load cycle.
•Designed multiple shell scripts in the Unix environment to streamline mapping creation and address performance issues.
•Implemented best practices as per the standards while designing technical documents and developing Informatica ETL process.
Environment:
Informatica 9.8, Oracle, Unix Shell Scripting, Autosys, Ab Initio
ETL Developer
Client: Amex, Hyderabad, India / COGNIZANT TECHNOLOGY SOLUTIONS
Feb 2014 – Apr 2014
Role and Responsibilities
•Worked with a consultative mindset to gather requirements from marketing sponsors and other team professionals and identify the right requirements
•Implemented Hadoop based data warehouses, integrated Hadoop with Enterprise Data Warehouse systems.
•Built real-time Big Data solutions (MapReduce) using HBase handling billions of records.
•Implemented Big Data analytical solutions that 'close the loop' and provide actionable intelligence.
•Pulled data into Hive for interactive querying and modeling, and used Pig for data-insight solutions.
•Visualized geographical tracking of customers in Tableau.
•Used the Agile testing methodology to ensure the application was thoroughly tested throughout development and before release.
•Participated in status calls for regular updates and releases.
Environment:
Pig Latin, Hive, Sqoop, Apache Storm, Tableau
ETL Developer
Client: Metropolitan Life Insurance Company, Hyderabad, India / COGNIZANT TECHNOLOGY SOLUTIONS
Jul 2013 – Jan 2014
Role and Responsibilities
•Participated with end users in requirements gathering and converted requirements into technical documentation
•Translated requirements into designs and developed robust solutions using state-of-the-art tools and methodologies.
•Led data modeling and database design and development; led Informatica ETL solution design and development using various industry-standard and custom-built tools
•Collaborated with the data architect on remodeling the physical design of the existing ETL architecture, designing centralized and distributed systems that address the users' requirements and perform efficiently and effectively
•Covered the full Software Development Life Cycle (SDLC) of the data warehousing project: project planning, business requirement analysis, data analysis, logical and physical database design, setting up the warehouse physical schema and architecture, developing reports, security, and deployment to end users
•Recommended new ways of coding, suggested improvements to the existing coding architecture, and worked as a business partner with client users to evaluate and improve business processes and arrive at mutually cost-effective solutions.
Environment:
Informatica 9.0, Cognos 10, MySQL, IBM Tivoli (JS)
Production Support Engineer
Client: Avon Products Inc, Hyderabad, India / COGNIZANT TECHNOLOGY SOLUTIONS
Oct 2012 – Jun 2013
Role and Responsibilities
•Ensured regular loads completed on time, reported status and performance of data stores and loads, and ensured SCD data was updated.
•Monitored performance of data warehouse ETL processes and implemented tuning as needed to address scalability, recoverability, and performance issues.
•Demonstrated capability and willingness to work in a very dynamic and challenging environment.
•Identified and documented the impact of changes to application modules; documented and maintained technical specifications, system design specifications, data flows and job flows, SLAs, and the data model.
•Created or modified scripts when required as a result of production support or related activities.
•Provided on-call, out-of-hours support and escalation during severity-1 incidents on a roster basis.
•Diagnosed and resolved customer implementation issues in a timely and effective manner
•Delivered knowledge transfer (KT) to the stakeholders, earning their appreciation.
•Identified recurring issues and resolved them through code-change enhancements to PL/SQL procedures and data flows, in line with client business standards.
Environment:
Toad for Oracle, SQL/PLSQL, Autosys
ETL Developer
Client: Avon Products Inc, Hyderabad, India / COGNIZANT TECHNOLOGY SOLUTIONS
Dec 2011 – Sept 2012
Role and Responsibilities
•Assisted in requirements gathering and data analysis per the client-specified document.
•Understood business analytics requirements and translated them into solutions providing an optimal user experience, developing SCD Type-1 mappings in the Informatica ETL tool (an equivalent sketch follows the Environment list below)
•Implemented the client-level KPIs through the different transformations.
•Handled deliverables to the client and avoided escalations.
•Worked on the different issues raised by the reporting team.
•Developed code in the Informatica ETL tool using transformations such as Filter, Expression, Sorter, Source Qualifier, Update Strategy, and SQL transformations.
•Was involved in developing the quality goals and targets in the organization's strategic plan.
Environment:
Informatica, SAP BODS, Oracle, MicroStrategy, Autosys
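
The SCD Type-1 mapping above was developed in Informatica; purely as an illustration of the pattern, a minimal PySpark equivalent (matched rows overwritten, no history kept), with table and column names assumed:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("scd1_sketch").enableHiveSupport().getOrCreate()

    KEY = "customer_id"                                  # hypothetical key
    incoming = spark.table("staging.customer_updates")   # placeholder; same schema as the dimension
    dim = spark.table("dw.dim_customer")                 # placeholder

    # SCD Type-1: keep dimension rows with no incoming match, take incoming rows as-is.
    unchanged = dim.join(incoming.select(KEY), on=KEY, how="left_anti")
    merged = unchanged.unionByName(incoming)

    # Write to a staging table, then swap, since Spark cannot overwrite a table it is reading.
    merged.write.mode("overwrite").saveAsTable("dw.dim_customer_new")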
DECLARATION & CONSENT
I consider myself a quick and agile learner and enjoy the challenge of discovering and working on new tasks. I hereby declare that the above information is correct to the best of my knowledge.
Place: Toronto (ON), CA
Date: 18-SEPT-2023 Sasanka Srikakolapu