Anish Anandan

Senior Cloud DevOps/Cloud Engineer
Richmond, VA

Objectives

To obtain a DevOps Engineer role where I can utilize my experience in CI/CD pipeline design, cloud infrastructure, and automation to drive operational efficiency and improve system reliability.

Summary

Senior Cloud/DevOps Engineer with over 11 years of experience in the IT industry, with a significant focus on cloud resource utilization, infrastructure automation, continuous integration, continuous delivery, and continuous deployment, and on automating infrastructure and deployment processes. My responsibilities were to deploy the infrastructure required by applications and automate the process with CI/CD pipelines, deploy applications with container orchestration, and collaborate with development teams on code releases and testing. Hands-on experience building the infrastructure for Apigee platform services and Terraform Enterprise, and automating application deployments through UCD and Terraform. Experience working on platforms such as Unix, Linux, and Windows. Also worked as a data analyst for a short term, and as a production support engineer supported 50+ commercial applications, performed end-to-end transition of applications and documentation upgrades, and served as a single point of contact for escalations. Have contributed support for several critical bank applications using various software tools and utilities such as GitHub, Jenkins, Oracle SQL Developer, HP Service Manager, ServiceNow, Control-M, Arow, Quality Center, and the ITIL process. Have undergone intensive training in AWS, Solaris, Linux, and monitoring tools.

Overview

11+ years of professional experience

Skills

  • Strong experience implementing CI/CD pipelines
  • Experience with Docker/ECS/Kubernetes
  • Experience integrating security into the DevOps pipeline
  • Continuous deployment using Terraform, with good experience on Terraform Enterprise/Open Source
  • Experience writing Terraform modules and Sentinel policies
  • Experience writing Python scripts for Lambda functions to automate processes
  • Hands-on experience creating pipelines from scratch and troubleshooting cloud configuration issues
  • Experience with DevOps tools such as Jenkins, Terraform, SonarQube, Checkmarx, and JFrog
  • Experience with monitoring tools such as Splunk, Datadog, and New Relic
  • Good understanding of DevOps processes, standards, and best practices followed at the enterprise level, with Jira and Confluence

Work History

CapitalOne Bank

Senior DevOps Engineer/Cloud Engineer
10.2021 - Current

Job overview

• Worked on the DevOps project, where my key tasks included defining and setting development, testing, release, updating, and support processes for DevOps operations.
• Worked on the GEN3 migration activity; as part of the migration, provisioned new infrastructure in the AWS accounts using the Bogie pipeline (Terraform, Jenkins).
• Worked with the platform team to identify gaps in the manual deployment process, automate them with a CI/CD pipeline using GitHub, Terraform, and Jenkins, and train the team to use the pipeline for day-to-day tasks.
• Performed rehydrations for EC2-based ECS instances to keep AMIs and security group updates current (a minimal check is sketched after this list).
• Experience creating and managing infrastructure as code using Terraform
• Familiarity with Terraform modules; wrote Terraform modules to provision AWS resources such as EC2, EBS, Lambda, Kinesis, and SNS.
• Deployed the application that pulls images from Artifactory and provisions the required ECS/EKS clusters with their tasks and services.
• Wrote Dockerfiles and Kubernetes configuration files for deployments and services in YAML format and deployed them.
• Knowledge of Kubernetes concepts such as pods, services, and deployments
• Familiarity with kubectl, eksctl, and kubeadm commands.
• Worked on pipeline automation by creating new Jenkins jobs to deploy resources based on Terraform scripts.
• Migrated Datadog monitors to New Relic, created New Relic dashboards and metrics for infrastructure resources, and integrated them with PagerDuty for critical notifications.
• Experience with deploying code to various environments, such as development, staging, and production
• Experience with managing and maintaining GitHub organization, teams, and permissions
• Familiarity with Jenkins pipeline as code and how to use Jenkins file for defining the pipeline
• Mentored and guided team members and shared knowledge with other teams on how to use the automated pipelines.
• Applied troubleshooting techniques and fixed code bugs.
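
As a hedged illustration of the ECS rehydration work above, the following minimal sketch compares each container instance's AMI with the latest ECS-optimized AMI that AWS publishes as a public SSM parameter. The cluster name is a hypothetical placeholder, not an actual environment name:

    import boto3

    # Hypothetical cluster name; the real cluster names are not recorded here.
    CLUSTER = "sample-ecs-cluster"

    ssm = boto3.client("ssm")
    ecs = boto3.client("ecs")
    ec2 = boto3.client("ec2")

    # Latest recommended ECS-optimized AMI, published by AWS in Parameter Store.
    latest_ami = ssm.get_parameter(
        Name="/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
    )["Parameter"]["Value"]

    # Map each ECS container instance back to its EC2 instance and compare AMIs.
    arns = ecs.list_container_instances(cluster=CLUSTER)["containerInstanceArns"]
    if arns:
        details = ecs.describe_container_instances(
            cluster=CLUSTER, containerInstances=arns
        )
        ids = [ci["ec2InstanceId"] for ci in details["containerInstances"]]
        for res in ec2.describe_instances(InstanceIds=ids)["Reservations"]:
            for inst in res["Instances"]:
                stale = inst["ImageId"] != latest_ami
                print(inst["InstanceId"],
                      "needs rehydration" if stale else "up to date")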

Fannie Mae

Senior Cloud Engineer/DevOps Engineer
04.2020 - 09.2021

Job overview

• Worked on an enterprise team responsible for developing cloud infrastructure for enterprise-level tools such as Apigee and Terraform Enterprise.
• This was achieved using DevOps tools such as UCD, Jenkins, and Bitbucket/GitHub.
• Worked on creating the required infrastructure (EC2 instances/ALBs) for Apigee platform services.
• Experience working with Apigee Edge, Developer Portal, and the Apigee BaaS platform.
• Skilled in infrastructure development and operations involving AWS Cloud platforms: EC2, EBS, S3, VPC, RDS, SES, ELB, Auto Scaling, CloudFront, CloudFormation, ElastiCache, CloudWatch, and SNS.
• Used Route53 to route traffic between different regions
• Involved in the AWS Cloud IaaS stage with components EC2, VPC, ELB, Auto Scaling, Security Groups, Route53, IAM, EBS, AMI, RDS, S3, SNS, SQS, CloudWatch, CloudFormation, CloudFront, and Direct Connect.
• As part of the Terraform infrastructure team, wrote multiple modules and Sentinel policies.
• Experience with build automation tools such as Jenkins and Maven.
• Expert in using source code version control tools such as Git and Bitbucket.
• Experience writing Python programs to automate resources through Lambda.
• Worked on writing multiple Lambda functions to perform desired tasks.
• Wrote Python scripts to perform tasks for Terraform deployment (a sketch follows this list).
• Worked on integrating application logs with Splunk and wrote several custom Splunk queries for monitoring and alerting.
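
As a sketch of the Python scripting around Terraform deployments mentioned above (a minimal example under assumed names, not the actual tooling; the working directory is hypothetical):

    import subprocess
    import sys

    # Hypothetical module directory; not a path from the actual project.
    WORKDIR = "infrastructure/apigee"

    def terraform(*args: str) -> int:
        """Run a terraform subcommand in WORKDIR and return its exit code."""
        return subprocess.run(["terraform", *args], cwd=WORKDIR).returncode

    # `terraform plan -detailed-exitcode` exits 0 for no changes,
    # 1 on error, and 2 when a diff is pending.
    if terraform("init", "-input=false") != 0:
        sys.exit("terraform init failed")
    code = terraform("plan", "-detailed-exitcode", "-input=false", "-out=plan.tfplan")
    if code == 2:
        print("Changes pending; applying the saved plan.")
        sys.exit(terraform("apply", "-input=false", "plan.tfplan"))
    elif code == 0:
        print("No changes to apply.")
    else:
        sys.exit("terraform plan failed")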

CapitalOne Bank

Cloud Engineer (Onsite)/DevOps Engineer
03.2017 - 04.2020

Job overview

  • Designing and building generic data pipelines and a distributed data processing framework using distributed computing architectures such as AWS services (EC2, EMR, Elasticsearch, CloudFormation), Hadoop, and Python.
  • Interacting with business end users, architects, and other stakeholders to understand the business requirements, prepare functional design documents based on them, and prepare a detailed design/technical document that encompasses all the requirements.
  • Providing technical thought leadership on Big Data strategy, adoption, architecture, and design, as well as data engineering and modeling.
  • Created Lambda functions to trigger EC2 instances, on which all the required installation setup is done by calling a shell script from the Lambda.
  • Migrating application data from on-premises to the cloud with the help of the built pipeline.
  • Designing highly scalable, fault-tolerant, highly available, and secure distributed computing services using EC2 instances, EMR, EBS, S3, RDS, Auto Scaling, Lambda, Snowflake, Redshift, etc.
  • Creating Python scripts integrated with the boto3 module to develop services in AWS such as S3, EC2, EMR, and VPC.
  • Developing a Python script to stop all AWS instances with a specific tag using Lambda functions, scheduled daily through CloudWatch Events (a sketch appears after this list).
  • Designing and creating complete CloudFormation templates (JSON/YAML) to implement the whole AWS infrastructure through scripting.
  • Designing different kinds of performance testing, including regression, failover, stress, capacity, spike, soak, and segmented testing, to address performance issues.
  • Participating in software release and post-release activities, including support for product launch evangelism (e.g., developing demonstrations and/or samples) and competitive analysis for subsequent product build/release cycles.
  • Responsible for leading the team and helping team members manage their careers.
  • Guided all the teams across the commercial LOB through the One Lake migration and prepared clear enterprise-level documentation.
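
A minimal sketch of the tag-based instance scheduler described above, written as a Lambda handler; the tag key and value are hypothetical placeholders, and in the original setup the function was triggered daily by a CloudWatch Events rule:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical tag; the actual key/value used in the project are not recorded here.
    TAG_KEY = "AutoStop"
    TAG_VALUE = "true"

    def lambda_handler(event, context):
        """Stop every running EC2 instance that carries the AutoStop tag."""
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": f"tag:{TAG_KEY}", "Values": [TAG_VALUE]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
        instance_ids = [i["InstanceId"]
                        for r in reservations for i in r["Instances"]]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return {"stopped": instance_ids}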

CapitalOne Bank

Hadoop Developer
07.2014 - 02.2017

Job overview

  • Developed MapReduce programs to parse the raw data, populate staging tables and store the refined data in partitioned tables in the EDW.
  • Provided design recommendations and thought leadership to sponsors/stakeholders that improved review processes and resolved technical problems.
  • Creating Hive tables and working on them using HiveQL.
  • Managed and reviewed Hadoop log files.
  • Tested raw data and executed performance scripts.
  • Shared responsibility for administration of Hadoop, Hive and Pig.
  • Managed and maintained Hadoop clusters for uninterrupted job execution.
  • Good knowledge of Hadoop cluster architecture and cluster monitoring.
  • Responsible for creating Hive tables, loading data and writing Hive queries.
  • Dealt with huge volumes of data, analyzing them and deriving insights using Spark SQL (a sketch follows this list).
  • Worked in teams and as an individual contributor, and trained several people in Hadoop and Spark.
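
To illustrate the Spark SQL analysis mentioned above, a minimal PySpark sketch; the database, table, and column names are hypothetical, not taken from the actual EDW:

    from pyspark.sql import SparkSession

    # Hive support lets Spark SQL query the partitioned warehouse tables directly.
    spark = (
        SparkSession.builder
        .appName("edw-insights")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Hypothetical table and columns; aggregate refined data per partition.
    insights = spark.sql("""
        SELECT partition_date, COUNT(*) AS record_count
        FROM edw.refined_transactions
        GROUP BY partition_date
        ORDER BY partition_date
    """)
    insights.show()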

CapitalOne Bank

Production Support Engineer
02.2012 - 06.2014

Job overview

  • Troubleshooting technical issues reported in the production environment and fixing them.
  • Troubleshoot the problems related to Control M batch jobs for the various customer-specific applications on UNIX and Windows platforms.
  • Performing analysis of recurring problems, troubleshooting them, and advising the client on how to prevent repeat failures.
  • Documenting all the processes daily.
  • Actively contributed to handling sessions on Unix, storage, and client-specific applications.
  • Performing various changes involving deployments and application upgrades on Unix and Windows servers, and restarting applications as and when required.
  • Worked on the Peregrine and HPSM tools to provide periodic status of work on tickets related to batch job issues.
  • Support server infrastructure and application availability with the help of HP SiteScope and BSM monitoring.
  • Coordinating with the onsite team, providing training on the supported client-specific applications, and mentoring new joiners to perform business as usual.
  • Coordinating with different teams working on different applications to provide them with data as per their requirements.
  • Helping and guiding other team members to carry out their responsibilities and meet their deadlines.
  • Handling high-severity bridge calls and engaging the required teams to drive issues to resolution.

Education

Vellore Institute of Technology (VIT)

Master of Science in Information Technology
07.2015

University Overview

GPA: 76

Nazareth College of Arts And Science

Bachelor of Science in Computer Science and Programming
04.2011

University Overview

GPA: 78

CH PORT & DOCK

HSC
03.2008

University Overview

GPA: 72

Shebha Matric Higher Secondary School

SSLC
03.2006

University Overview

GPA: 72
