Krishnamurti V Upadhyaya

$10/hr
Hadoop Admin and Linux Admin
Reply rate:
-
Availability:
Hourly ($/hour)
Age:
33 years old
Location:
Bangalore, Karnataka, India
Experience:
6 years
About
  • Currently working as a Hadoop Admin at Tata Consultancy Services Ltd., Bangalore, since August 2013.
  • Providing L3 application-level support to keep the cluster environment running smoothly.
  • Red Hat certified: RHCE and RHCSA (RHEL 7).
  • Demonstrated continual improvement in individual performance.
  • Strong verbal and written communication skills; able to communicate in a clear, constructive, and professional manner.
  • Excellent analytical and problem-solving skills.

Skills:

·      Monitoring and managing different cluster environments: Dev, ITG, and Prod.

·      Worked across the Hadoop ecosystem (Hive, HDFS, YARN, Tez View, Ranger, Kerberos, MapReduce, SmartSense), including performance tuning.

·      Upgraded Apache Ambari and SmartSense from older to newer versions across all environments.

·      Clearing disk-utilization alerts and managing disk usage across all environments.

·      Commissioning and decommissioning cluster nodes.

·      Performing MySQL database backups and restores.

·      Rebalancing data across the cluster.

·      Taking complete ownership of issues and resolving them without missing SLAs.

·      Working with the systems engineering team to propose and deploy new hardware and software environments for Hadoop and to expand existing ones.

·      Planning RFCs (Requests for Change) for application service changes and Hive maintenance activities on the Dev, ITG, and Prod clusters.

·      Involved in designing and implementing Hive/HBase databases and tables.

·      Creating access policies for service users in the Ranger HDFS, Hive, and HBase plugins.

·      Creating and adding principal keytabs for generating tokens in Kerberos.

·      Creating principals for service users on the Kerberos master server.

·      Handling day-to-day cluster issues, such as identifying long-running jobs and diagnosing why user jobs are stuck.

·      Running DDL and DML script files through Beeline connections to Hive databases.

·      Troubleshooting performance issues such as long-running and failed application jobs.

·      Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.

·      Backing up cluster metadata and other ecosystem metadata.

·      Performing data migration between clusters when needed.

·      Managing customer calls and analyzing issues reported by users.

·      Cluster maintenance: service restarts, troubleshooting errors, and root-cause analysis.
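As a rough illustration of the tasks above, the following is a minimal sketch of typical commands for HDFS rebalancing, Kerberos principal/keytab management, Beeline scripts, and metastore backups; every host name, principal, database name, and path here is a placeholder assumption, not taken from this profile, and these commands require a live cluster to run.

```shell
# Rebalance HDFS block data across DataNodes (10% utilization threshold).
hdfs balancer -threshold 10

# Check overall HDFS health and disk usage before clearing utilization alerts.
hdfs dfsadmin -report
hdfs dfs -du -h /

# Create a service-user principal and export its keytab on the Kerberos master
# (placeholder principal and realm).
kadmin.local -q "addprinc -randkey svcuser/node1.example.com@EXAMPLE.COM"
kadmin.local -q "ktadd -k /etc/security/keytabs/svcuser.keytab svcuser/node1.example.com@EXAMPLE.COM"

# Obtain a ticket from the keytab to verify it works.
kinit -kt /etc/security/keytabs/svcuser.keytab svcuser/node1.example.com@EXAMPLE.COM

# Run a DDL/DML script against Hive via Beeline (placeholder JDBC URL and file).
beeline -u "jdbc:hive2://hiveserver2.example.com:10000/default" -f maintenance.sql

# Back up a MySQL-backed Hive metastore (placeholder database name).
mysqldump -u root -p hive_metastore > hive_metastore_backup.sql
```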
