Mandeep Singh

$50/hr
DevOps, SRE
Reply rate:
-
Availability:
Hourly ($/hour)
Age:
35 years old
Location:
Hapur, Uttar Pradesh, India
Experience:
10 years
DevOps Lead, 9 Years of Experience
Mandeep Singh
Email: -  Contact No: -  LinkedIn: linkedin

Technical Skills & Summary
Operating Systems: Linux (Ubuntu, CentOS, Amazon Linux), Unix, Windows
Cloud Platforms: AWS, GCP, OpenStack
CI/CD: Jenkins, GitHub Actions, GitLab CI/CD, TeamCity
Infrastructure as Code: CloudFormation, Terraform
Container Tools: Docker, Kubernetes, EKS, ECS
Storage: S3, EBS, EFS
Web Servers: Apache, Nginx, IIS
Databases: MySQL, Aurora, MongoDB
Configuration Management: Ansible, Puppet
Languages: Bash/Shell, Groovy, Python
Security: AWS Macie, GuardDuty, IAM
Backup and Disaster Recovery: AWS Backup plans
Development Tools: GitHub, Bitbucket, Jira

AWS Experience:
- Proficient in managing and configuring EC2 instances, including creating AMIs, managing security groups, and configuring instance profiles.
- Experienced in using ECS to deploy and manage Docker containers, including setting up task definitions, creating services, and configuring load balancing.
- Proficient in using EKS to deploy and manage Kubernetes clusters, including configuring worker nodes, setting up networking, and deploying applications.
- Proficient in using CloudFront to distribute and deliver content to users around the world, including setting up origin servers, configuring caching behavior, and monitoring performance.
- Experienced in using CloudFormation to manage and automate infrastructure resources, including creating and updating stacks, defining templates, and using AWS CloudFormation Designer.
- Proficient in using S3 to store and manage data, including setting up buckets, configuring access policies, and using lifecycle rules (see the sketch after the GCP section below).
- Experienced in using AWS load balancers, including creating and configuring ALBs and NLBs, setting up target groups, and configuring health checks.
- Experienced in using Auto Scaling to automatically adjust the number of instances based on traffic, including configuring scaling policies, setting up alarms, and monitoring scaling activity.
- Proficient in using AWS Macie to automatically discover, classify, and protect sensitive data in S3, including configuring policies, setting up alerts, and reviewing findings.
- Proficient in using AWS target groups to direct traffic to different instances based on URL path, HTTP headers, or hostnames, including configuring target group attributes, setting up listeners, and managing rules.
- Experienced in using EFS to provide scalable and highly available file storage for EC2 instances, including setting up file systems, configuring access policies, and monitoring performance.
- Proficient in using VPC to create and manage isolated network environments, including setting up subnets, configuring route tables, and managing security groups.
- Experienced in using IAM to manage user and application access to AWS resources, including setting up policies, creating roles, and configuring identity providers.
- Proficient in using WAF to protect web applications from common attacks, including setting up rules, creating web ACLs, and monitoring traffic.

GCP Experience:
- Proficient in using GCP Compute Engine to create and manage virtual machines, including configuring networking, storage, and security.
- Experienced in using GCP Kubernetes Engine (GKE) to deploy and manage Kubernetes clusters, including configuring node pools, setting up networking, and deploying applications.
- Proficient in using GCP App Engine to deploy and manage web applications, including configuring automatic scaling, setting up custom domains, and managing traffic splitting.
- Experienced in using GCP Cloud Storage to store and manage data, including setting up buckets, configuring access policies, and using lifecycle management.
- Proficient in using GCP Identity and Access Management (IAM) to manage user and application access to GCP resources, including setting up policies, creating roles, and managing service accounts.
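Illustrative example (not part of the original resume): a minimal boto3 sketch of the kind of S3 bucket setup described in the S3 bullet above, assuming default AWS credentials; the bucket name, prefix, and retention periods are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "example-artifacts-bucket"  # hypothetical bucket name
s3.create_bucket(Bucket=bucket)

# Access policy hardening: block all public access to the bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Lifecycle rule: move objects under logs/ to Infrequent Access after 30 days
# and expire them after 90 days (example values only).
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-and-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 90},
            }
        ]
    },
)
```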
WORK EXPERIENCE

Telus International, Noida | Feb 2020 - till date
DevOps Lead
Tasks:
- Led a team of 5 DevOps engineers responsible for implementing and managing continuous integration and delivery pipelines using Jenkins, and successfully migrated to GitHub Actions for improved efficiency and scalability.
- Managed the migration of Bitbucket repositories to GitHub, including configuring repositories, migrating code, and setting up access controls.
- Successfully migrated an on-premises Java Spring Boot application to AWS, including configuring infrastructure, setting up database connections, and ensuring high availability and scalability.
- Led the modernization of legacy applications, including containerization using Docker and deployment on Kubernetes clusters.
- Deployed and configured Elasticsearch, Logstash, and Kibana (ELK) for log analytics and application monitoring, integrated with AWS Lambda and CloudWatch; the logs and metrics were then archived to an S3 bucket using a Lambda function (see the sketch after this role).
- Integrated AWS DynamoDB with AWS Lambda to store item values and back up DynamoDB streams.
- Implemented load-balanced, highly available, fault-tolerant, auto-scaling Kubernetes infrastructure on AWS for microservice container orchestration.
- Worked with Terraform templates and modules to automate AWS IaaS virtual machines and deployed virtual machine scale sets in the production environment.
- Configured the Terraform Kubernetes provider to interact with Kubernetes resources and create Deployments, Services, Ingress rules, ConfigMaps, Secrets, etc. in different namespaces.
- Integrated Docker container-based test infrastructure into the Jenkins CI test flow and set up the build environment with Git and Jira to trigger builds using webhooks and slave machines.
- Implemented the Docker Maven plugin and Maven POMs to build Docker images for all microservices, and later used Dockerfiles to build the images from the Java JAR files.
- Worked with the Red Hat OpenShift Container Platform for Docker and Kubernetes.
- Used Kubernetes to deploy, scale, load balance, and manage Docker containers across multiple namespaces.
- Implemented Kubernetes clusters and created Pods, replication controllers, Namespaces, Deployments, Services, labels, health checks, and Ingress resources and controllers by writing YAML files; integrated them using Weave, Flannel, and Calico SDN networking.
- Deployed Kubernetes clusters on top of servers using kOps.
- Managed local deployments in Kubernetes, creating local clusters and deploying application containers.
- Built and maintained Docker container clusters managed by Kubernetes and deployed to Kubernetes using Helm charts.
- Developed microservice onboarding tools leveraging Python and Jenkins, allowing for easy creation and maintenance of build jobs and Kubernetes Deployments and Services.
- Managed Ansible roles using tasks, handlers, vars, files, and templates to install, configure, and deploy the web server application.
- Wrote several Ansible playbooks in YAML for the automation defined through tasks and ran Ansible scripts to provision dev servers.
- Used Jenkins as the continuous integration tool to deploy the Spring Boot microservices to the AWS cloud.
- Handled Jenkins plugins and administration end to end using Groovy scripting, including setting up CI for new branches, build automation, plugin management, securing Jenkins, and setting up master/slave configurations.
- Deployed and configured Git repositories with branching, forks, tagging, and notifications.
- Worked with Maven for building the application and wrote Maven and shell scripts to automate the build process.
- Performed daily maintenance of Git source repositories and builds.
- Supported development and QA teams in troubleshooting issues related to infrastructure, deployment, and application performance.
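Illustrative example (not part of the original resume): a minimal Python sketch of the log-archiving step described in the ELK/Lambda bullet above, assuming the Lambda function is subscribed to a CloudWatch Logs group; the ARCHIVE_BUCKET environment variable and its default value are hypothetical.

```python
import base64
import gzip
import json
import os

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ.get("ARCHIVE_BUCKET", "example-log-archive")  # hypothetical bucket


def handler(event, context):
    """Decode a CloudWatch Logs subscription payload and archive it to S3."""
    # CloudWatch Logs delivers a base64-encoded, gzip-compressed JSON payload.
    payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    data = json.loads(payload)

    # Key the archive object by log group and invocation ID.
    key = f"{data['logGroup'].strip('/')}/{context.aws_request_id}.json"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(data["logEvents"]).encode("utf-8"),
    )
    return {"archived_events": len(data["logEvents"])}
```

In a setup like this, the function would typically be wired to the log group through a CloudWatch Logs subscription filter, with metrics handled the same way or pushed via a separate scheduled export.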
Adobe Systems, Noida | Jan 2019 - Feb 2020
Site Reliability Engineer (SRE)
Tasks:
- Managed AWS instances across multiple regions, ensuring high availability and scalability for Adobe Campaign's marketing automation platform.
- Configured and maintained AWS components such as EC2, ELB, Route 53, S3, AMIs, security groups, and CloudFront to support Adobe Campaign's infrastructure.
- Provisioned and decommissioned new and existing customer infrastructure, working closely with customers to understand their requirements and provide effective solutions.
- Troubleshot application- and infrastructure-level issues across production and other environments, using tools such as Splunk and Nagios to identify and resolve issues quickly.
- Configured SSL certificates on EC2 instances and ELBs to ensure secure communication between Adobe Campaign and customers' systems.
- Configured SFTP to enable secure file transfer between Adobe Campaign and customers' systems.
- Managed product version and build upgrades, ensuring smooth deployment and minimizing downtime for customers.
- Configured GPG to enable encryption and decryption of sensitive data in Adobe Campaign's platform.

Scalemonks Technologies, Noida | May 2016 - Jan 2020
DevOps Engineer
Tasks:
- Administered LAMP/LEMP stacks and documented Linux scripts for future reference, ensuring smooth operation and easy maintenance of production servers.
- Maintained and monitored production servers, using tools such as Nagios and Zabbix to ensure high availability and uptime.
- Created and maintained the Docker environment, enabling efficient deployment and scaling of applications.
- Created and maintained an inventory server, enabling effective asset management and resource planning.
- Set up, configured, and debugged network configurations for Ubuntu servers, ensuring reliable and secure communication between servers.
- Maintained a dev environment based on virtualization (KVM), creating instances (Ubuntu/CentOS) as required by developers.
- Developed infrastructure on AWS, employing services such as EC2, RDS, CloudFront, CloudWatch, and VPC to provide reliable and scalable infrastructure for clients.
- Created EBS volumes for storing application files for use with EC2 instances, ensuring reliable and efficient storage for applications.
- Deployed an SAP HANA application on AWS, ensuring optimal performance and availability for clients.
- Created snapshots to back up the volumes and images to store launch configurations of the EC2 instances, ensuring reliable and efficient disaster recovery (see the sketch after this role).
- Designed AMI images of EC2 instances by employing the AWS CLI and GUI, enabling efficient deployment of instances and reducing deployment time.
- Maintained the firewall (pfSense), ensuring the security and integrity of the network infrastructure.
- Deployed the open-source Graylog tool on the client side, providing efficient log management and analysis for clients.
- Provided technical support to Linux users, troubleshooting issues and providing effective solutions to maintain smooth operation of servers.
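Illustrative example (not part of the original resume): a minimal boto3 sketch of the snapshot and AMI backup workflow described in the two backup bullets above; the region, volume ID, and instance ID are hypothetical placeholders.

```python
import datetime

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Hypothetical resource IDs, for illustration only.
VOLUME_ID = "vol-0123456789abcdef0"
INSTANCE_ID = "i-0123456789abcdef0"

stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d")

# Snapshot an EBS data volume for disaster recovery.
snapshot = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description=f"Nightly backup {stamp}",
    TagSpecifications=[
        {"ResourceType": "snapshot", "Tags": [{"Key": "backup-date", "Value": stamp}]}
    ],
)

# Register an AMI that preserves the instance's launch configuration.
image = ec2.create_image(
    InstanceId=INSTANCE_ID,
    Name=f"app-server-{stamp}",
    NoReboot=True,  # avoid restarting the running instance
)

print(snapshot["SnapshotId"], image["ImageId"])
```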
Aditya Infotech, Noida | July 2014 to Feb 2016
Technical Support Engineer
Tasks:
- Tested and troubleshot network video recorders and IP and analog CCTV systems.
- Troubleshot biometric devices (analog and IP).
- Troubleshot local as well as remote networks to bring devices online on the cloud, so they could be accessed from anywhere, by configuring routers through TeamViewer or Ammyy Admin.
- Installed and configured servers, routers, switches, and stand-alone and PC-based DVRs and NVRs.
- Dealt with clients to troubleshoot network issues.
- Configured devices on different CMS platforms (Mac or Windows).
- Worked with the R&D team to develop and analyse new features for various security products.

EDUCATION
Bachelor's in Information Technology, UPTU University, Uttar Pradesh, India.

CERTIFICATIONS
- AWS Certified Solutions Architect (Validation Number: P6EYEYLKKM1Q1P33)
- GCP Professional Cloud Architect