GNANENDRA REDDY
Mobile : -
Email:-
Professional Summary:
• An experienced IT professional with 8+ years in the industry, including around 4 years of relevant experience in Azure cloud, Azure Data Factory, Azure Databricks, PySpark, data analysis, ETL, development, maintenance, testing and documentation.
• Experience in using Azure Data Factory (ADF) to design pipelines with control flow activities such as Get Metadata, Copy, If Condition, For Each, Delete, Validation and sending email notifications.
• Experience in creating ADF pipelines that execute Databricks Notebook activities to carry out transformations.
• Debugging data pipelines, investigating issues, fixing failures and scheduling pipelines using Schedule triggers in Azure Data Factory (ADF).
• Experience in developing Spark applications using Spark SQL in Databricks for data extraction, transformation and aggregation from multiple file formats, analysing and transforming the data to uncover insights into customer usage patterns.
• Good experience on Databricks with PySpark for data migration and ETL project implementation using Databricks Notebooks and scheduled jobs.
• Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, driver node, worker nodes, stages, executors and tasks.
• Expertise in coding Oracle SQL and PL/SQL.
• Experience in implementing UNIX shell scripting.
• Experience in writing PL/SQL constructs such as packages, procedures, functions, triggers, cursors, exception handling and collections.
• Strong experience in performance testing and tuning in Oracle Database.
• Expertise in the Retail and Banking domains, including SCM, a Client Mortgages System and an Enterprise Governance Risk and Compliance Platform (EGRCP) with the GRC, Audits, Risk and ISM modules.
• Good experience with utilities such as Import/Export, SQL*Loader and SQL*Plus.
• Excellent communication and interpersonal skills with the ability to quickly learn new technologies and concepts.
• Co-operative, helpful team member who shares knowledge with colleagues.
Educational Qualifications:
• Completed Bachelor of Technology (Computer Science) from J.N.T University, Anantapur, with 70% aggregate.
Technical Skills:
Cloud Big Data Technologies : Azure Databricks, Azure Data Factory (ADF), Spark, ADLS
Programming Languages : PySpark, Apache Hive, Oracle SQL & PL/SQL, Shell Scripting
Database & DWH : Azure SQL DB, Oracle, Delta Lake, SQL Server 2014, Snowflake
Productivity Tools : TOAD, SQL Developer, SQL Server Management Studio
Tools : Jira, Bitbucket, GitHub, SVN, Jenkins, Datadog, SQL*Loader, Confluence
Professional Experience:
Project#1 : Freelancer – Rainbow program
Duration : Nov 2023 to Present
Technologies : Azure Data Factory (ADF), Azure Databricks, PySpark, ADLS, SQL
Description:
This program migrates data from the existing reporting platform to Azure to support the business vision, align with the data strategy and reduce the cost of operating the Microsoft reporting platform. The project objective is to migrate off the legacy Microsoft SQL platform, redirect the legacy ETL pipelines to the cloud and replicate all downstream reporting and analytics output.
Within the program, Merchant Services establishes a reusable process to onboard and deliver data from the target solution to end users, implements a defined and transparent framework for capturing metadata and business definitions to support merchant data requirements, and reduces resource utilization while accelerating project delivery for the business.
Responsibilities:
• Analyze, design and build modern data solutions using Azure PaaS services to support visualization of data; understand the current production state of the application and determine the impact of new implementations on existing business processes.
• Developed Spark applications using PySpark and Spark SQL for data extraction, transformation and aggregation from multiple file formats, analysing and transforming the data to uncover insights into customer usage patterns (a minimal sketch follows this list).
• Created pipelines in ADF using Linked Services, Datasets and Pipelines to extract, transform and load data from different sources such as Azure SQL, Blob storage and Azure SQL Data Warehouse, and to write data back.
• Involved in the design of the DW tables in Azure Synapse Analytics to support the reporting and analytics platform, performing transformations from the Bronze layer to the Silver layer.
• Understanding the client requirements and working on priorities according to the assigned user stories.
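For illustration, a minimal PySpark sketch of the kind of notebook transformation used to move data from the Bronze (raw) layer to the Silver (curated) layer; the storage paths, container names and column names are hypothetical placeholders, not the project's actual ones.

```python
# Minimal PySpark sketch of a Bronze-to-Silver transformation in a Databricks
# notebook. Container, path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw merchant transactions landed in the Bronze (raw) layer on ADLS.
bronze_df = spark.read.format("delta").load(
    "abfss://bronze@examplestorage.dfs.core.windows.net/merchant/transactions"
)

# Basic cleansing and conforming: drop duplicates, standardise types,
# keep only valid records and add an audit column.
silver_df = (
    bronze_df.dropDuplicates(["transaction_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("transaction_date").isNotNull())
    .withColumn("processed_ts", F.current_timestamp())
)

# Write the conformed data to the Silver (curated) layer as Delta.
silver_df.write.format("delta").mode("overwrite").save(
    "abfss://silver@examplestorage.dfs.core.windows.net/merchant/transactions"
)
```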
Project#2 : EDP - Retail
Organization : Synechron Technologies
Duration : Aug 2020 to Nov 2022
Technologies : Azure Data Factory (ADF), Azure Databricks, Snowflake, Blob, ADLS, SQL, PL/SQL
Description:
The Enterprise Data and Analytics Platform is designed to provide near real-time insights to the business and stakeholders. It is used to optimize business performance and pursue company goals including profit, new item listings, deduction management and sales. To better predict the number of articles that might be sold, the business combines preprocessing, analytical and organizational data from the available data sources to support retail reporting and manage deliverables.
Responsibilities:
• Involved in designing pipelines using Spark and Python.
• Created Databricks PySpark notebooks for validating and processing source system data into Azure storage and the Snowflake DWH for use by the Power BI reports team.
• Implemented pipelines to copy data from external sources to the Landing (L0) layer and built Databricks notebooks to apply transformations and write to the L1 and L2/Snowflake analytics layers (a sketch follows this list).
• Scheduled pipelines in ADF using triggers and built parent-child dependency execution with the Execute Pipeline activity.
• Debugging data pipelines, investigating issues and fixing failures.
• Understanding the client requirements and working on priorities according to the assigned user stories.
• Maintaining previous versions of all source code for recovery management.
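For illustration, a minimal PySpark sketch of the Landing (L0) to L1/Snowflake flow described above; paths, connection options and table names are hypothetical placeholders, and the Snowflake write assumes the Spark-Snowflake connector is available on the cluster.

```python
# Minimal PySpark sketch of moving data from the Landing (L0) layer to a
# curated (L1) Delta table and a Snowflake analytics table. Paths, option
# values and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read files copied into the Landing (L0) container by the ADF copy pipeline.
l0_df = spark.read.option("header", "true").csv(
    "abfss://landing@examplestorage.dfs.core.windows.net/sales/"
)

# Simple validation: discard rows missing the business key and tag the load date.
l1_df = (
    l0_df.filter(F.col("order_id").isNotNull())
    .withColumn("load_date", F.current_date())
)

# Persist the validated data to the L1 layer as Delta.
l1_df.write.format("delta").mode("append").save(
    "abfss://curated@examplestorage.dfs.core.windows.net/sales/"
)

# Publish the analytics layer to Snowflake via the Spark-Snowflake connector
# (connection options shown here are placeholders).
sf_options = {
    "sfUrl": "example.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "L2",
    "sfWarehouse": "ETL_WH",
}
l1_df.write.format("snowflake").options(**sf_options) \
    .option("dbtable", "SALES_ORDERS").mode("append").save()
```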
Project#3 : Sony, e-Dolphin
Organization : Indecomm Digital Services
Duration : May 2019 to Jul 2020
Technologies : PL/SQL, SQL, Unix shell scripting, Pro*C and Control-M
Description:
e-Dolphin is a supply chain management application that deals with factories and OEMs. It covers order generation, invoice creation, shipping, inventory, prices, new model creation and reports. It is used by factory users who supply parts, semi-finished goods and finished goods (OEM) to the factories, as well as by business users.
Responsibilities:
• Implementing change requests for the current version of the product based on the functional requirements.
• Developing database objects such as packages, procedures, functions, cursors, exception handling and collections.
• Implemented shell scripts to call Pro*C programs and database objects to meet the requirements.
• Resolved multiple Control-M job failures by providing hotfixes within agreed timelines.
• Implemented an order migration tool to upload bulk data for front-end customers.
Project#4 : 1) RDA2 and Deutsche Bank, 2) Royal Bank of Scotland (RBS)
Organization : Infosys Limited
Duration : Dec 2015 to Mar 2019
Technologies : PL/SQL, SQL, Unix shell scripting and Control-M
Description:
1) Rosetta is a solution to the challenge of constructing a single enterprise view of reference data. It aims to accelerate construction of that view while the existing golden sources are re-architected in line with the division of responsibilities. Rosetta sources data from the current golden sources, then matches and merges it into a single record. The solution uses enterprise standards where appropriate, such as DIF for standardized inbound and outbound interfaces.
2) Client Management System (CMS): the CMS application in RBS was identified for migration to the new bank Williams & Glynn. Through three processes (sales, setup and service), bank operators handle customers' offset mortgage applications via CMS.
Responsibilities:
• Interaction with the clients to gather requirements in the form of JIRAs and translate those business requirements into technical specifications.
• Developing procedures, functions and packages in Oracle 12c to meet the customer requirements.
• Writing complex SQL queries using joins, subqueries and correlated subqueries to retrieve JSON data from the database.
• Created database objects with the IS JSON check constraint to handle JSON data in the Oracle 12c database.
• Following Agile methodology, took complete ownership of assigned tasks and gave a demo of the required changes to the client at the end of each sprint.
• Implemented SYS_REFCURSORs to expose JSON data to the DIF application (a sketch of consuming such a cursor follows this list).
• Wrote shell scripts to call the database objects and scheduled the jobs in the Control-M tool.
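For illustration only, a minimal Python sketch (using the python-oracledb driver) of how a consumer could read JSON rows exposed through a SYS_REFCURSOR out parameter; the connection details and the procedure name get_client_json are hypothetical placeholders, and in this project the actual consumer was the DIF application rather than Python.

```python
# Hypothetical sketch: consuming a PL/SQL procedure that exposes JSON data
# through a SYS_REFCURSOR out parameter, using the python-oracledb driver.
# Connection details and the procedure name are placeholders.
import oracledb

conn = oracledb.connect(user="app_user", password="********",
                        dsn="dbhost/ORCLPDB1")
cur = conn.cursor()

# The out parameter is bound as another cursor; the procedure opens the
# SYS_REFCURSOR over rows whose single column holds JSON documents.
ref_cursor = conn.cursor()
cur.callproc("get_client_json", [ref_cursor])

for (json_payload,) in ref_cursor:
    print(json_payload)

conn.close()
```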
Project#5 : CITI IA (Internal Audits) - CITI Bank (Group)
Duration : Jul 2013 to Nov 2015
Technologies : PL/SQL, SQL, Unix and SQL*Loader
Description:
The Internal Audit Committee of Citigroup Inc. is a standing committee of the Board. The purpose of the Committee is to assist the Board in fulfilling its oversight responsibility relating to:
• The performance of the internal audit function (Internal Audits).
• Policy standards and guidelines for risk assessment and risk management.
• Citigroup’s compliance with legal and regulatory requirements, including Citigroup’s disclosure controls and procedures.
Audit plans are created respectively for Branch and Non-Branch audits (risk based, third party, etc.).
Responsibilities:
• Interaction with the clients to gather requirements and translate business requirements into technical specifications.
• Developing procedures, functions and packages to meet the customer requirements.
• Writing complex SQL queries using joins, subqueries and correlated subqueries to retrieve data from the database.
• Monitoring the UNIX jobs.
• Involved in the design of technical documentation.