Vinod Kumar - Sr. Python Developer
[email protected]
Location: Springfield, Illinois, USA
Relocation: no
Visa: H1B
Vinod Kumar Bellamkonda | [email protected]
Senior Python Developer | Phone: 217-220-5102
LinkedIn Profile: https://www.linkedin.com/in/vinod-bellamkonda-339176221/
Professional Summary:
Python Developer with 9+ years of experience specializing in data engineering, backend development, and cloud-based solutions.
Proficient in Python and SQL, with expertise in frameworks and libraries such as Django, FastAPI, and PySpark.
Strong background in building scalable ETL pipelines, distributed data processing, and backend APIs.
Skilled in cloud platforms including AWS and GCP, leveraging services like AWS Glue, S3, Lambda, Athena, Redshift, DynamoDB, SQS, and SNS.
Experienced in CI/CD pipelines with Git, Jenkins, Ansible, Docker, Kubernetes, and Terraform.
Strong database expertise in RDBMS and NoSQL databases, along with ORM tools like SQLAlchemy.
Proficient in backend testing using pytest and unittest, and in implementing message queues like RabbitMQ for distributed task execution (see the Celery sketch at the end of this summary).
Experienced in log monitoring and performance optimization using Prometheus, Grafana, and CloudWatch.
Extensive experience in designing and implementing robust ETL pipelines using Python, AWS services, and Databricks for scalable and efficient data processing.
Strong background in data cleaning, transformation, and analysis using Pandas and Numpy.
Expertise in Generative AI (Gen AI) applications for data engineering, leveraging LLMs (AWS Bedrock, OpenAI API) to automate data ingestion, transformation, anomaly detection, and natural language querying.
Experience in developing AI-powered data quality checks, predictive scaling strategies, and self-healing ETL workflows.
Good knowledge of migrating on-premises data to the cloud, implementing pre- and post-migration validation of the data.
Well versed in source code management tools like Git and Bitbucket.
Work with teams to resolve high-priority production incidents, ensuring minimal downtime.
Identify and fix bottlenecks in database queries, API responses, and background jobs.
Involved in sprint planning and grooming meetings to understand and plan upcoming work.
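
A minimal sketch of the RabbitMQ-backed distributed task pattern mentioned above, using Celery; the broker URL, task name, and retry policy are illustrative assumptions rather than details of any specific project.

```python
# Minimal sketch: a Celery task using RabbitMQ as the broker.
# Broker URL, task body, and retry settings are illustrative assumptions.
from celery import Celery

app = Celery("etl_tasks", broker="amqp://guest:guest@localhost:5672//")


@app.task(bind=True, max_retries=3)
def load_batch(self, batch_id: str) -> str:
    """Load one data batch; retry with a delay on transient failures."""
    try:
        # ... fetch, transform, and persist the batch here ...
        return f"batch {batch_id} loaded"
    except Exception as exc:
        raise self.retry(exc=exc, countdown=30)
```

Assuming the module is saved as etl_tasks.py, a worker would be started with "celery -A etl_tasks worker" and work enqueued with load_batch.delay("2024-06-01").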
TECHNICAL SKILLS

Programming/Scripting Languages: Python 3, Python 2.7, JavaScript, Bash
Python Libraries: Pandas, Matplotlib, PySpark, BeautifulSoup, NumPy, SciPy, Pytest, Airflow, PyMongo
Relevant Services/Technologies/Tools: RESTful APIs, Celery, WebSockets, MS SQL, Teradata, PostgreSQL, Terraform, Ansible, Kubernetes, Docker, Databricks, Apache Spark, AWS Glue, Snowflake, PySpark
Web Frameworks: Flask, Django; adaptable to JavaScript frameworks (React)
Databases: MS SQL, Teradata, PostgreSQL
Operating Systems: Windows, Linux, Unix, macOS
Cloud Technologies: AWS (Glue, S3, DMS, Lambda functions), Terraform


Work Experience:
Senior Python Developer | June 2024 - Till Date
Blue Cross Blue Shield Association - Chicago, IL

Responsibilities:
Designed and implemented Gen AI-enhanced, scalable data ingestion pipelines using AWS Glue, AWS Lambda, S3, and Databricks, incorporating AI-driven anomaly detection for incoming data streams.
Leveraged LLMs (Large Language Models) via AWS Bedrock and OpenAI API to perform automated data classification, summarization, and metadata tagging before ingestion into AWS S3.
Developed and optimized AI-augmented ETL workflows in Databricks (PySpark & Spark SQL), where LLMs assist in dynamic schema evolution, auto-mapping transformations, and error resolution suggestions.
Integrated Databricks with AWS services (S3, Glue, Redshift, Athena) to enable distributed data processing, real-time analytics, and cost-effective data lake solutions.
Built and managed Delta Lake tables in Databricks to ensure ACID compliance, schema enforcement, and efficient data storage (see the PySpark/Delta sketch following this section).
Identified and integrated multiple data sources (internal databases, APIs, streaming sources) into AWS S3 and Databricks, ensuring seamless data ingestion and transformation.
Integrated Gen AI-based predictive scaling to dynamically allocate compute resources in Databricks, improving performance and cost efficiency.
Implemented serverless ETL pipelines using AWS Glue and Lambda, leveraging Step Functions for workflow orchestration.
Optimized Spark jobs (written in PySpark and Scala) using partitioning, bucketing, caching, and query optimization.
Managed secure data storage in AWS S3, enforcing encryption, access control policies, and versioning to meet compliance and security standards.
Developed real-time streaming data pipelines using AWS Kinesis and Databricks Structured Streaming, ensuring low-latency data processing and event-driven architectures.
Implemented CI/CD pipelines using CloudFormation, Git, and CodePipeline for automated deployment of Databricks notebooks, AWS Glue jobs, and infrastructure provisioning.
Monitored pipeline performance and resource utilization using AWS CloudWatch, Datadog, and Databricks Performance Monitoring, setting up alerts and logging for proactive issue resolution.
Set up AI-driven log analysis using Datadog to detect patterns in pipeline failures and suggest optimization strategies.
Integrated Gen AI-powered feature engineering pipelines, automating data transformation and feature extraction for ML models.
Worked closely with data scientists and analysts to ensure high-quality, well-structured, and query-optimized datasets for advanced analytics and machine learning use cases.
Environment: PySpark, Spark SQL, Databricks, AWS Glue, Lambda, Step Functions, S3, Redshift, Athena, CloudFormation, Bedrock, Delta Lake, Datadog.
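
For illustration, a minimal PySpark/Delta Lake sketch of the partitioned, schema-enforced ingestion pattern referenced above, assuming a Databricks or Delta-enabled Spark session; the bucket paths, column names, and app name are hypothetical placeholders.

```python
# Minimal sketch (not the production pipeline) of a partitioned Delta Lake append
# with schema enforcement. Bucket paths, column names, and app name are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_ingest_sketch").getOrCreate()

# Read raw JSON landed in S3 (hypothetical bucket/prefix).
raw = spark.read.json("s3://example-raw-bucket/claims/2024/")

# Light cleanup: de-duplicate on a hypothetical key and stamp a load date for partitioning.
cleaned = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("load_date", F.current_date())
)

# Append into a Delta table; Delta enforces the existing table schema on write
# and rejects records whose columns do not match it.
(
    cleaned.write.format("delta")
           .mode("append")
           .partitionBy("load_date")
           .save("s3://example-curated-bucket/delta/claims/")
)
```

Because the write uses the Delta format in append mode, batches whose columns conflict with the existing table schema are rejected, which is the enforcement behavior the bullet above relies on.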
Mid-Level Software Developer | May 2022 to April 2024
Technicolor India Pvt. Ltd. - Bangalore, India

Responsibilities:
Collaborated with cross-functional teams to design, develop, and implement high-quality data solutions on AWS that meet business requirements using Python, PySpark, and AWS Glue.
Utilized PySpark for distributed data processing and handling large-scale datasets across multiple nodes on the AWS ecosystem.
Built and optimized ETL pipelines using AWS Glue to collect, transform, and load data from various sources into data lakes, data warehouses, and other target destinations.
Developed data wrangling tasks, including cleaning, transforming, and merging datasets using PySpark and Python.
Implemented automated data processing workflows using AWS Glue, reducing manual intervention and ensuring scalability.
Designed and deployed data validation checks in AWS Glue pipelines to ensure data accuracy, integrity, and consistency.
Worked closely with the data science and analytics teams to understand their requirements and provide technical support using AWS Glue and other AWS services such as S3, Athena, and Redshift.
Created unit and integration tests for AWS Glue jobs and data pipelines using Pytest, ensuring the robustness and reliability of data solutions.
Leveraged AWS Glue's job bookmarking and partitioning features to optimize data processing and reduce processing times (see the Glue job sketch following this section).
Built and managed CI/CD pipelines, ensuring smooth release management and automated deployments using Jenkins and AWS CodePipeline.
Worked within an Agile/Scrum framework to deliver continuous improvements to data pipelines and foster a collaborative development environment.
Provided ongoing maintenance, troubleshooting, and enhancements for existing AWS Glue ETL jobs and data pipelines to accommodate evolving business requirements and ensure timely delivery of data products.

Environment: Python, AWS Glue, Amazon S3, PySpark, CloudWatch, Git
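A hedged sketch of an AWS Glue ETL script with job bookmarks enabled, as referenced above; the catalog database, table name, and S3 target path are illustrative assumptions.

```python
# Hedged sketch of an AWS Glue ETL script with job bookmarks enabled.
# The catalog database, table name, and S3 target path are illustrative assumptions.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import Filter
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# transformation_ctx is what lets Glue job bookmarks track already-processed data.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db",
    table_name="example_orders",
    transformation_ctx="source",
)

# Simple validation step: drop rows missing the primary key.
valid = Filter.apply(frame=source, f=lambda row: row["order_id"] is not None)

glue_context.write_dynamic_frame.from_options(
    frame=valid,
    connection_type="s3",
    connection_options={"path": "s3://example-target-bucket/orders/"},
    format="parquet",
    transformation_ctx="sink",
)

job.commit()  # commits bookmark state so the next run only picks up new data
```

The transformation_ctx arguments together with job.commit() are what allow Glue bookmarks to record which source data has already been processed, so subsequent runs handle only new files.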

Software Engineer | Jan 2021 - May 2022
L&T Technologies and Services - Bangalore, India
Responsibilities:

Interacted with key business users, project stakeholders, technical teams, and functional consultants to gather integration business requirements.
Implemented the server-side logic using Python and Django, ensuring the back-end code is well organized and scalable enough to handle increasing user loads.
Design and develop RESTful API endpoints using Django REST Framework, providing the necessary data and functionalities for the front-end.
Implement Create, Read, Update, and Delete (CRUD) operations for managing job statuses and other related data (see the Django REST Framework sketch following this section).
Used the Django ORM to communicate with the database and managed migrations to apply schema changes without losing data.
Create and implement intuitive user interfaces using React, ensuring a smooth and engaging user experience.
Used React to build reusable components, making the development process more efficient and codebase easier to maintain.
Involved in the database schema design that supports business requirements and ensure data integrity. Used indexing to optimize the performance.
Performed regular tasks such as backups, indexing, and performance tuning to ensure the database runs smoothly and efficiently.
Develop reusable and modular components to streamline the development process and maintain consistency across the application.
Implement secure authentication mechanisms, such as JWT or OAuth, to protect user data and ensure secure access to the API.
Ensure all sensitive data is securely handled and stored, following best practices for data protection and privacy.
Write automated tests for API endpoints to ensure reliability and prevent regressions.
Conduct manual testing to identify and resolve any issues that automated tests may miss.
Set up and maintain CI/CD pipelines for deploying updates and new features to the production environment.
Implement monitoring and logging solutions to track the application's performance and identify any issues in real-time.
Created a unit test framework to verify individual units of code and their interactions, using testing frameworks like PyTest.
Used Git to track changes in the codebase, allowing multiple developers to collaborate without conflicts.
Provide comprehensive documentation for the API, facilitating ease of use for front-end developers and other stakeholders.
Environment: Python, Django, PostgreSQL, HTML, CSS, ReactJS, Celery, PyTest, Git, JIRA.
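
An illustrative Django REST Framework sketch of the job-status CRUD endpoints described above; the JobStatus model, its fields, and the route prefix are hypothetical assumptions.

```python
# Illustrative DRF sketch of a job-status CRUD API.
# The JobStatus model, its fields, and the route prefix are hypothetical assumptions.
from django.db import models
from rest_framework import permissions, serializers, viewsets


class JobStatus(models.Model):
    name = models.CharField(max_length=100)
    state = models.CharField(max_length=30, default="pending")
    updated_at = models.DateTimeField(auto_now=True)


class JobStatusSerializer(serializers.ModelSerializer):
    class Meta:
        model = JobStatus
        fields = ["id", "name", "state", "updated_at"]


class JobStatusViewSet(viewsets.ModelViewSet):
    """ModelViewSet supplies list/retrieve/create/update/delete in one class."""
    queryset = JobStatus.objects.all()
    serializer_class = JobStatusSerializer
    permission_classes = [permissions.IsAuthenticated]


# urls.py would register the viewset, e.g. with rest_framework.routers.DefaultRouter:
# router = DefaultRouter(); router.register("job-statuses", JobStatusViewSet)
# urlpatterns = router.urls
```

Pairing a ModelViewSet with a DefaultRouter yields the full set of CRUD routes without hand-writing individual views.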

Software Developer | Sept 2016 to December 2020
Technicolor India Pvt. Ltd. - Bangalore, India

Responsibilities:
Participated in the requirement gathering and analysis phase in documenting the business requirements by conducting meetings with various business users.
Design and maintain a consistent API layer across various cloud service providers (AWS, Google Cloud, Azure).
Develop strategies to efficiently utilize preemptible VMs, ensuring cost-effective deployment and operation.
Implement solutions to deploy and manage VMs across multiple regions and availability zones, enhancing reliability and scalability.
Develop and maintain the REST API that allows Plough CLI to interact with Clearsky, ensuring seamless communication and integration.
Designed and implemented RESTful APIs using Django REST framework, enabling seamless communication between Command line tool and back-end components.
Provide comprehensive documentation for the REST API, facilitating ease of use for developers and users.
Ensure the Plough CLI can effectively instruct Clearsky via the REST API, providing a user-friendly interface for managing the cloud fleet.
Performed code reviews to maintain code quality, adherence to Django coding standards, and knowledge sharing within the Django development team.
Worked with containerization and orchestration technologies, such as Docker and Kubernetes. Collaborated with the DevOps team.
Implemented authentication and authorization mechanisms such as OAuth2, JWT, and API key authentication to secure APIs.
Implementing security measures such as authentication and authorization using GCP Service account, OAuth, and API keys to secure APIs and ensure data integrity.
Proficiently utilized Python and the Boto3 library to develop robust AWS automation scripts and custom solutions, resulting in a 30% reduction in manual tasks and improved operational efficiency (see the Boto3 sketch following this section).
Gather user feedback on the Plough CLI and Clearsky interactions, iterating on the design to improve usability and functionality.
Ensure Clearsky integrates seamlessly with the Tractor scheduler, facilitating the automated management of cloud resources.
Implement mechanisms within Clearsky to receive and act on target scale instructions from Plough, dynamically creating, maintaining, and destroying cloud resources as required.
Implemented unit tests using the Python Pytest and unittest modules.

Environment: Python, GCP, AWS, Pytest, Git, Django, Django REST Framework, BigQuery, Jira.
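
A hedged Boto3 sketch in the spirit of the AWS automation scripts described above: stopping running EC2 instances by tag. The region, tag key, and tag value are assumptions for illustration, not details from the actual codebase.

```python
# Hedged Boto3 sketch of tag-driven fleet automation.
# Region, tag key ("Fleet"), and tag value are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")


def stop_idle_fleet(tag_value: str = "render-fleet") -> list:
    """Stop all running instances tagged Fleet=<tag_value> and return their IDs."""
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[
            {"Name": "tag:Fleet", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids
```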

Education:
Bachelor's in Electronics and Communication Engineering from JNTUA, 2016.