
Yedunandan Vitakula - DevOps Engineer
[email protected]
Location: Dallas, Texas, USA
Visa: H1B

PROFESSIONAL SUMMARY:

12+ years of IT experience as a Cloud, DevOps/CI-CD, and Data Engineer across domains including Telecom, Banking, Finance, Healthcare and Clinical Research, and Public Education.
Experience in software configuration management, build, and release.
Experience with SCM tools such as Subversion (SVN) and GitHub.
Experienced with Ansible Tower, which provides an easy-to-use dashboard and role-based access control; developed Ansible playbooks for managing application/OS configuration files in GitHub, integrated them with Jenkins and verified them with Jenkins plugins, and deployed applications in Linux environments.
Experience working with Groovy scripts in Jenkins to execute jobs in a continuous integration pipeline, using the Groovy Jenkins Plugin and the Groovy Post Build Action Plugin as build steps and post-build actions.
Extensively worked on Jenkins and Bamboo, installing, configuring, and maintaining them for Continuous Integration (CI) and end-to-end automation of all builds and deployments, including CI/CD for databases.
Proficient with Docker Hub, Docker Engine, Docker images, Docker Weave, Docker Compose, Docker Swarm, and Docker Registry; used containerization to make applications portable when moved into different environments.
Profound experience in designing strategies to increase the velocity of development and release for continuous integration, delivery, and deployment using technologies like Bamboo and Jenkins. Also experienced with SCM tools such as Git, Bitbucket, and TFS on Linux platforms, maintaining, tagging, and branching versions across multiple environments.
Used Kubernetes as a platform for automating the deployment, scaling, and operation of containers across clusters of hosts, and managed containerized applications using its nodes, ConfigMaps, selectors, and services.
Hands-on experience with AWS cloud services such as EC2, IAM, CodeCommit, and CodeStar.
Responsible for managing, implementing, and troubleshooting Apache Tomcat servers/instances in all environments, and for application monitoring using AppDynamics.
Configured distributed and multi-platform server monitoring using Nagios.
Used JIRA as the project tracking tool to provide updates to management.
Used log analysis tools such as Splunk and Datadog.
Knowledge of scripting languages such as Shell, Bash, and Python.
Expertise in Azure scalability and availability: built VM availability sets using the Azure portal to provide resiliency for IaaS-based solutions, and Virtual Machine Scale Sets (VMSS) using Azure Resource Manager (ARM) to manage network traffic.
Experience working with AWS CodePipeline and creating CloudFormation JSON templates to build custom-sized VPCs and migrate production infrastructure into AWS using CodeDeploy, CodeCommit, and OpsWorks.
Hold an AWS certification at the professional level (or equivalent experience), demonstrating advanced proficiency in AWS services and best practices.
Experience writing Infrastructure as Code (IaC) in Terraform, Azure Resource Manager, and AWS CloudFormation; created reusable Terraform modules in both Azure and AWS cloud environments (see the workflow sketch after this summary).
Previously worked with the New Mexico Public Education Department.
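
A minimal sketch of the reusable-module Terraform workflow mentioned above; the per-environment directory layout and file names are illustrative assumptions, not from any specific project:

```bash
#!/usr/bin/env bash
# Hypothetical wrapper around a per-environment Terraform layout.
set -euo pipefail

ENV="${1:-dev}"              # target environment: dev | stage | prod
cd "envs/${ENV}"             # assumed layout: one folder per environment, calling shared modules

terraform init -input=false
terraform plan -input=false -var-file="${ENV}.tfvars" -out=tfplan
terraform apply -input=false tfplan
```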

TECHNICAL SKILLS:

Cloud Environments: Microsoft Azure, Amazon Web Services, Google Cloud Platform.
Build and Testing Tools: Maven, ANT, Gradle, Selenium, JIRA.
Databases: Oracle 9i/10g/11g, Teradata, MS SQL Server, MySQL, Snowflake.
Scripting Languages: Python, Shell Scripting, Bash, PowerShell, YAML.
Operating Systems: Unix, Linux, Ubuntu, macOS, Windows NT/2000/2003/XP/7/8/10.
Other Tools: WinSCP, IBM UCD, Active Directory, AMW.
Monitoring Tools: Autosys, M-Control.
IDE Tools: Microsoft Visual Studio, NetBeans, Eclipse, PyCharm, Oracle SQL DBA.
Web Technologies: HTML5, CSS3, Bootstrap, JSON, jQuery, JavaScript, C#, ASP.NET, XML.
Monitoring and Bug Tracking Tools: Nagios, Splunk, AppDynamics, Datadog.
Version control: SVN, Git, GitHub.
Configuration management: Chef, Puppet, Ansible, Terraform.
Deployment Tools: Bamboo, Jenkins, GitLab Pipelines, Azure Pipelines.
Container Tools: Docker, Kubernetes.
Networking Protocols: DNS, DHCP, FTP/TFTP, NFS, SMTP, TCP/IP, HTTP/HTTPS, WAN, LAN.

Education and Certification:
Bachelor of Technology, ECE, JNTUK, 2011.
AWS Certified Solutions Architect
Microsoft Certified: Azure Developer Associate (AZ-204)


PROFESSIONAL EXPERIENCE:

Role: Senior Cloud Engineer (October 2021 - Present)
Company: AT&T, Dallas, TX.
Project: Customer Discovery
Responsibilities:
Implement and maintain core AWS products, ensuring optimal performance and reliability of infrastructure components such as Kubernetes, VPC, EC2, S3, RDS, ELB/ALB, IAM, Lambda, SQS, EBS, networking, and VPNs.
Build AWS infrastructure resources, from physical machines and VMs to Docker containers, from code using Terraform (Infrastructure as Code).
Designed and implemented Terraform configurations to provision and manage AWS infrastructure resources, including EC2 instances, S3 buckets, RDS databases, VPCs, subnets, and IAM roles.
Implemented CI/CD for the EKS environment using Jenkins, deploying through Helm charts and Kubernetes manifest files (see the deploy sketch at the end of this section).
Configured and maintained Akamai services such as Web Performance Optimization, Content Delivery Network (CDN), Cloud Security, and Edge Servers.
Managed local deployments in Kubernetes, creating local cluster and deploying application containers.
Implemented caching strategies and edge logic, and ensured efficient delivery of content from Akamai's CDN edge servers.
Configured and resolved LAN, WAN, and TCP/IP issues, and generated reports showing resource utilization and user/CPU/network load.
Set up F5 BIG-IP for high-performance service proxy and load balancer widely used in enterprise environments to manage traffic between clients and applications.
Deployed applications containerized with Docker onto a Kubernetes cluster managed by Amazon Elastic Container Service for Kubernetes (EKS); configured kubectl to interact with the Kubernetes infrastructure and used Terraform modules to launch a cluster of worker nodes on Amazon EC2 instances.
Wrote Terraform templates for AWS Infrastructure as a code to set up build & automation for Jenkins.
Create and maintain fully automated CI/CD pipelines for code deployment using Jenkins.
Worked on GitLab configurations and practices that comply with industry standards and organizational policies.
Involved in design, implementation and modifying the Python code.
Set up a Jenkins server and implemented the Jenkins CodeDeploy plugin to deploy to AWS, automating the build process and deploying the application to ECS and EKS.
Migrated AWS infrastructure from Elastic Beanstalk to Docker containers orchestrated with Kubernetes.
Worked on F5 SPK to work seamlessly with Kubernetes clusters, acting as a proxy for managing service-to-service traffic within the cluster and ingress/egress traffic.
Implemented Istio, a service mesh, for security and network enhancements in EKS applications.
Utilized Terraform's AWS provider to interact with AWS APIs and automate the deployment of infrastructure components across multiple AWS regions and availability zones.
Built and troubleshot systems on various Linux distributions (Red Hat, Ubuntu, CentOS) and Windows.
Managed cloud architecture on OpenShift with a focus on bare-metal infrastructure, deploying the containerized platform directly on physical servers without relying on a virtualized environment.
Deployed 3-tier architecture infrastructure on the AWS cloud using Terraform (IaC).
Automated setting up server infrastructure for DevOps services using Ansible, shell, and Python scripts.
Supervised the implementation and maintenance of DHCP, NFS, NIS and DNS.
Automated Akamai configurations and deployments through tools like Akamai CLI, Terraform, or custom scripts using Akamai API.
Created scripts in Python to automate log rotation of multiple logs from web servers.
Involved in developing test environments on Docker containers and configuring those containers using Kubernetes.
Wrote various MSSQL scripts and stored procedures to support applications.
Added tasks to build pipelines for code analysis, version tracking, and security checks.
Worked on deployment automation of all microservices, pulling images from the private Docker registry and deploying to a Docker Swarm cluster using Ansible.
Managed automated build systems such as ANT and Maven, implementing new scripts for the build system, and initiated deployment of build artifacts to the WebLogic application server using Maven.
Designed and documented CI/CD tools configuration management.
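
A rough illustration of the Jenkins-to-EKS deploy flow described in this section; the account ID, region, cluster, and chart names are hypothetical:

```bash
#!/usr/bin/env bash
# Sketch of a Jenkins deploy stage: build/push an image, then upgrade a Helm release on EKS.
set -euo pipefail

IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/customer-discovery:${BUILD_NUMBER:-latest}"

docker build -t "$IMAGE" .
aws ecr get-login-password --region us-east-1 |
  docker login --username AWS --password-stdin "${IMAGE%%/*}"
docker push "$IMAGE"

# Point kubectl at the EKS cluster, then roll the release via its Helm chart.
aws eks update-kubeconfig --name customer-discovery-eks --region us-east-1
helm upgrade --install customer-discovery ./charts/customer-discovery \
  --namespace prod --set image.tag="${BUILD_NUMBER:-latest}" --wait
```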


Role: Sr. Cloud DevOps Engineer (May 2021 - October 2021)
Client: Chewy
Project: Pet cart
Responsibilities:
Configure and manage AWS services such as DynamoDB, S3, and Cognito to handle data storage, authentication, and authorization requirements for cloud applications.
Virtualized servers using Docker for test and dev environment needs, automated configuration using Docker containers, and worked with Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
Deployed applications containerized with Docker onto a Kubernetes cluster managed by Amazon Elastic Container Service for Kubernetes (EKS); configured kubectl to interact with the Kubernetes infrastructure and used Terraform to launch a cluster of worker nodes on Amazon EC2 instances.
Exposed APIs from Kubernetes environments with F5's advanced API management capabilities.
Developed microservice onboarding tools leveraging Python and Jenkins, allowing easy creation and maintenance of build jobs and Kubernetes deployments and services.
Worked on deployment automation of all microservices, pulling images from the private Docker registry and deploying to a Docker Swarm cluster using Ansible.
Maintained and updated documentation related to GitLab configurations, best practices, and troubleshooting guides.
Relied on solid systems network stack experience DNS, DHCP, TCP/IP.
Developed build workflows using Gradle, GitLab CI, Docker, and Kubernetes.
Design and implement serverless architectures on AWS using services like Lambda, API Gateway, DynamoDB, and S3 to build scalable and cost-effective applications.
Created multiple Terraform modules to manage configurations, applications, services and automated the complete deployment environment on AWS.
Used Ansible to manage web applications, environment configuration files, users, and mount points; integrated Terraform with Ansible and Packer to create and version the AWS infrastructure.
Implemented Istio as a service mesh for security and network enhancements in EKS applications.
Worked on migrating an entire project, including issues and merge requests, to GitLab.
Implemented core AWS products such as Kubernetes, VPC, EC2, S3, RDS, ELB/ALB, IAM, Lambda, SQS, EBS, networking, and VPNs.
Implemented CI/CD for the EKS environment using GitLab, deploying through Helm charts and Kubernetes manifest files.
Developed, built, and deployed scripts using ANT and Maven as build tools and Jenkins to move from one environment to another, and created new jobs and branches through Jenkins.
Created multiple Python, Bash, shell, and Ruby scripts for various application-level tasks.
Worked on integrating SonarQube code analysis, code coverage, etc. into CI/CD pipelines (see the scanner sketch at the end of this section).
Worked on integrating different kinds of tests (unit, smoke, regression, etc.) into CI/CD pipelines.
Developed TypeScript/JavaScript applications for AWS environments, leveraging AWS SDKs and frameworks like AWS Amplify for rapid development and deployment.
Monitored application insights and logs in Splunk by triggering respective functions and pushing events to Splunk, using Splunk search, WMI issue analysis, crash logs, and alert scripts for real-time analysis and visualization.
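
A sketch of the SonarQube step referenced in this section, as it might appear in a CI job script; the project key, server URL, and token variable are assumptions:

```bash
#!/usr/bin/env bash
# Run tests first so coverage reports exist, then run static analysis.
set -euo pipefail

mvn -B clean verify

# Fails the job if the server-side quality gate is not met.
sonar-scanner \
  -Dsonar.projectKey=petcart-api \
  -Dsonar.host.url="https://sonarqube.example.com" \
  -Dsonar.login="${SONAR_TOKEN:?SONAR_TOKEN must be set}" \
  -Dsonar.qualitygate.wait=true
```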

Role: Sr. Cloud DevOps (Feb 2020 - April 2021)
Client: New Mexico Public Education Department, Santa Fe, NM.
Company: BVM Technologies Associates.
Project: Teachcert
Responsibilities:
Implemented serverless architecture using API Gateway, Lambda, and DynamoDB, and deployed AWS Lambda code from Amazon S3 buckets (see the deploy sketch at the end of this section).
Supervised the implementation and maintenance of Google Cloud DNS.
Implemented CI/CD pipelines using Jenkins to automate the build, test and deploy containerized applications using Helm on OpenShift.
Worked with Git/GitHub for code check-ins/checkouts, branching, etc.
Extensive hands-on experience in AWS services including Compute (EC2, EMR), Storage (S3, EBS), Databases (RDS, DynamoDB), Data Integration (Glue), and Lambda, driving successful migration projects from on-premises to cloud environments.
Managed Kubernetes charts using Helm, Created reproducible builds of the Kubernetes applications, managed Kubernetes manifest files and Managed releases of Helm packages.
Managed local deployments in Kubernetes, creating local cluster and deploying application containers.
Contributed to the development of new features and enhancements in GitLab, depending on the organization's needs.
Wrote Terraform templates for AWS Infrastructure as Code to build staging and production environments and set up build automation for Jenkins.
Designed Terraform modules to automate building the EC2, S3, Auto Scaling, VPC and multiple resources in AWS.
Executed migration in the production environment, using the same GitLab pipeline.
Ran OpenShift on bare metal giving direct access to hardware resources like CPU, memory, and storage, minimizing the overhead introduced by EC2 instances.
Resolved Merge Conflicts, configured triggers and queued new builds within the release pipeline.
Worked on migrating Infrastructure as Code (IaC) pipelines to GitLab CI/CD.
Responsible for creation and implementation of AWS security guidelines and storage mechanism.
Worked on Docker-Compose, Docker-Machine to create Docker containers for testing applications in the QA environment and automated the deployments, scaling and management of containerized applications across clusters of hosts using Kubernetes.
Leveraged AWS CloudFormation Designer templates and Terraform to automate infrastructure and OpenShift.
Created OpenShift clusters consisting of master and worker nodes, where the master nodes control orchestration and the worker nodes host the container workloads; bare-metal deployment allowed assigning physical servers to these roles based on performance requirements.
Worked closely with QA Teams, Business Teams, and DBA team and Product Operations teams to identify QA and UAT cycles release schedule to non-prod and prod environments.
Created Build definition and Release definition for Continuous Integration and Continuous Deployment.
Handled installation, administration, upgrades, and troubleshooting of console and database issues for AppDynamics.
Identified critical applications and monitored system resource utilization (CPU, memory, threads, etc.) and JVM heap size using AppDynamics.
Worked on Automating End-to-end Application testing using Selenium QA Automation.
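
A minimal sketch of the S3-to-Lambda deploy step described in this section; the bucket, key, and function names are hypothetical:

```bash
#!/usr/bin/env bash
# Stage a build artifact in S3, then point the Lambda function at it.
set -euo pipefail

BUCKET="teachcert-artifacts"
KEY="lambda/teachcert-api-${VERSION:?VERSION must be set}.zip"

aws s3 cp build/teachcert-api.zip "s3://${BUCKET}/${KEY}"
aws lambda update-function-code \
  --function-name teachcert-api \
  --s3-bucket "$BUCKET" \
  --s3-key "$KEY" \
  --publish
```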




Role: DevOps Engineer (July 2018 - Feb 2020)
Client: American Heart Association, Bangalore, India.
Company: IQVIA RDS (India) Private Limited (Sep 2019 - Feb 2020)
Company: Skill Demand (July 2018 - Sep 2019)
Project: AHA/ACS
Responsibilities:
Managed security groups on AWS, focusing on high availability, fault tolerance, and auto-scaling using Terraform templates; also gained hands-on experience architecting legacy data migration projects such as Teradata to AWS Redshift and on-premises to AWS Cloud.
Configured inbound/outbound rules in AWS security groups according to requirements (see the rule sketch at the end of this section).
Proficient in Infrastructure as Code (IaC) tools such as Terraform and CloudFormation for automating the provisioning and management of AWS resources.
Configured continuous integration and deployment (CI/CD) of code onto the AWS cloud.
Configured and managed various AWS services including EC2, RDS, VPC, S3, Glacier, CloudWatch, CloudFront, and Route 53, among others.
Worked on Docker-Compose, Docker-Machine to create Docker containers for testing applications in the QA environment and automated the deployments, scaling and management of containerized applications across clusters of hosts using Kubernetes.
Designed a distributed private cloud system solution using Kubernetes (Docker) on CoreOS and used it to deploy, scale, load-balance, and manage Docker containers with multiple namespaced versions.
Used Kubernetes to deploy, scale, and load-balance applications, and worked with Docker Engine, Docker Hub, Docker images, and Docker Compose, handling images for installations and domain configurations.
Used Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage change.
Integrated Jenkins with various DevOps tools such as Nexus, SonarQube, and Ansible, and used a Jenkins CI/CD system on a Kubernetes container environment, utilizing Kubernetes and Docker as the runtime environment to build, test, and deploy.
Implemented a High Availability setup with the help of the AppDynamics CoE team.
Integrated AppDynamics with ServiceNow for auto-ticketing and incidents.
Wrote PowerShell scripts to automate Azure cloud system creation, including end-to-end infrastructure, VMs, storage, and firewall rules.
Involved in installing and administering CI/CD tools like Jenkins for managing weekly Build, Test and Deploy chain, GIT with Test/Prod Branching Model for weekly releases.
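
A sketch of the security-group rule configuration mentioned in this section; the group ID, ports, and CIDR ranges are illustrative assumptions:

```bash
#!/usr/bin/env bash
set -euo pipefail

SG_ID="sg-0abc123def456"   # hypothetical security group

# Allow HTTPS in from the corporate CIDR only.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 443 --cidr 10.20.0.0/16

# Allow the app tier out to the database subnet on the PostgreSQL port.
aws ec2 authorize-security-group-egress \
  --group-id "$SG_ID" --protocol tcp --port 5432 --cidr 10.20.40.0/24
```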




Role: AWS DevOps Engineer (March 2017 - June 2018)
Client: Barclays Bank UK, Pune, India.
Company: Virtusa Polaris Pvt Ltd.
Project: Mpay
Responsibilities:
Set up scripts for creating new snapshots and deleting old snapshots in Amazon S3, and set up lifecycle policies to back up AWS S3 bucket data (see the snapshot sketch at the end of this section).
Created Build definition and Release definition for Continuous Integration and Continuous Deployment.
Worked with Git/GitHub for code check-ins/checkouts, branching, etc.
Resolved Merge Conflicts, configured triggers and queued new builds within the release pipeline.
Involved in installing and administering CI/CD tools like Jenkins for managing weekly Build, Test and Deploy chain, GIT with Test/Prod Branching Model for weekly releases.
Migrated infrastructure to AWS.
Automated AWS components like EC2 instances, Security groups, ELB, RDS, IAM through Terraform.
Worked on provisioning Kubernetes clusters in EKS and managed the clusters and nodes using command-line utilities such as kubectl.
Experience creating alarms and notifications for EC2 instances using CloudWatch.
Experience with the Elasticsearch, Logstash, and Kibana (ELK) stack.
Created Lambda functions to automate snapshot backups on AWS and set up scheduled backups.
Worked with Terraform templates to automate the AWS IaaS VPN using Terraform modules, and deployed virtual machine scale sets in production environments.
Managed AWS design architectures with AWS IaaS/PaaS, DevOps, storage, and database components; worked with cloud platform teams on implementing new features on the AWS platform; and designed and developed scripts and automations for the AWS cloud.
Monitored and tracked deployments using DataDog and CloudWatch.
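
A minimal sketch of the snapshot create-and-prune job described in this section; the volume ID and retention window are hypothetical, and GNU date is assumed:

```bash
#!/usr/bin/env bash
set -euo pipefail

VOLUME_ID="vol-0abc123def456"
CUTOFF="$(date -d '-14 days' +%Y-%m-%d)"   # retention window

# Take today's snapshot.
aws ec2 create-snapshot --volume-id "$VOLUME_ID" \
  --description "nightly backup $(date +%F)"

# Delete this volume's snapshots that are older than the cutoff.
for snap_id in $(aws ec2 describe-snapshots --owner-ids self \
    --filters "Name=volume-id,Values=${VOLUME_ID}" \
    --query "Snapshots[?StartTime<'${CUTOFF}'].SnapshotId" --output text); do
  aws ec2 delete-snapshot --snapshot-id "$snap_id"
done
```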


Role: Site Reliability Engineer (Feb 2015 - March 2017)
Client: Deutsche Bank Germany, Bangalore, India.
Company: HCL Technologies Pvt Ltd.
Project: Grimismis
Responsibilities:
Involved in infrastructure development and operations on the AWS cloud platform: EC2, EBS, ECS, S3, VPC, RDS, SES, ELB, Auto Scaling, CloudFront, CloudFormation, ElastiCache, CloudWatch, and SNS. Strong experience with the AWS platform and its dimensions of scalability, including VPC, EC2, ELB, S3, EBS, and Route 53.
Managed AWS design architectures with AWS IaaS/PaaS, DevOps, storage, and database components; worked with cloud platform teams on implementing new features on the AWS platform; and designed and developed scripts and automations for the AWS cloud.
Set up scripts for creating new snapshots and deleting old snapshots in Amazon S3, and set up lifecycle policies to back up AWS S3 bucket data.
Created automated pipelines in AWS CodePipeline to deploy Docker containers in AWS ECS using services like CloudFormation, CodeBuild, CodeDeploy, S3, and Puppet.
Built and deployed Docker images on AWS ECS and automated the CI/CD pipeline.
Worked on the AWS Cloud IaaS stack with components including VPC, ELB, Auto Scaling, EBS, AMI, ECS, EMR, Kinesis, Lambda, CloudFormation templates, CloudFront, CloudTrail, the ELK stack, Elastic Beanstalk, CloudWatch, EKS, and DynamoDB.
Built and configured a virtual data center in the AWS cloud to support Enterprise Data Warehouse hosting including Virtual Private Cloud (VPC), Public and Private Subnets, Security Groups, Route Tables and Elastic Load Balancer.
Worked on creating custom Docker container images, tagging and pushing the images, and using Docker consoles to maintain the application life cycle (see the image flow sketch at the end of this section). Implemented Docker containers to create images of the applications and dynamically provision agents for Jenkins CI/CD pipelines.
Managed Ansible playbooks with Ansible roles and created services in Ansible for automation of continuous deployment.
Used Jenkins as a continuous integration tool: created new jobs, managed required plugins, configured jobs (source code management tool, build triggers, build system, and post-build actions), scheduled automatic builds, and distributed build reports.
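
A rough sketch of the image build/tag/push flow described in this section; the registry and image names are hypothetical:

```bash
#!/usr/bin/env bash
set -euo pipefail

REGISTRY="registry.example.com/grimismis"   # hypothetical private registry
TAG="$(git rev-parse --short HEAD)"         # tag images by commit

docker build -t "${REGISTRY}/app:${TAG}" .
docker tag "${REGISTRY}/app:${TAG}" "${REGISTRY}/app:latest"
docker push "${REGISTRY}/app:${TAG}"
docker push "${REGISTRY}/app:latest"
```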

Role: Build & Release Engineer (Feb 2013 - Feb 2015)
Client: GE Healthcare, India.
Company: LiveCode
Project: Paragon
Responsibilities:
Automated test, build, and deployment using Jenkins, Maven, Tomcat, and shell scripts for existing proprietary systems (see the deploy sketch at the end of this section).
Participated in the release cycle of the product, involving environments such as development, QA, and production.
Involved in setting up builds using Chef as a configuration management tool.
Established Chef best-practice approaches to system deployment with tools like Vagrant, managing Chef cookbooks as independently version-controlled units of software deployment.
Involved in developing and building shell scripts.
Managed all the bugs and changes into a production environment using the JIRA tracking tool.
Assisted end-to-end release process from the planning of release content through to actual release deployment to production.
Wrote various SQL, MSSQL and PL/SQL scripts and stored procedures to support applications.
Deployed application packages onto the Apache Tomcat server and coordinated with software development and QA teams.
Performed clean builds according to scheduled releases.
Managed all the bugs and changes into a production environment using the ServiceNow tracking tool.
Deployed the build artifacts into environments like QA, UAT according to the build life cycle.
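
A minimal sketch of the Maven build and Tomcat deploy step described in this section; the host, user, and paths are assumptions:

```bash
#!/usr/bin/env bash
set -euo pipefail

mvn -B clean package   # produces target/paragon.war

# Stop Tomcat, swap in the new WAR, start it back up.
ssh deploy@tomcat-host 'sudo systemctl stop tomcat'
scp target/paragon.war deploy@tomcat-host:/opt/tomcat/webapps/paragon.war
ssh deploy@tomcat-host 'sudo systemctl start tomcat'
```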
