Prashanth - GenAI/Data Engineer
[email protected]
Location: Dallas, Texas, USA
Relocation: Open to relocating anywhere in the USA
Prashanth Reddy
(614) 407-4176 | [email protected]
PROFESSIONAL SUMMARY:
Generative AI and Data Engineer with 8 years of experience, including 2+ years focused on agentic orchestration and LLM systems across the AWS and Azure ecosystems. Proven track record of architecting scalable, multi-cloud AI solutions with AWS Bedrock, Azure AI Foundry, and Azure OpenAI Service, and of designing AI-driven workflows for document extraction, data quality automation, and business process optimization. Skilled in context management and knowledge grounding, with hands-on experience integrating agentic components into enterprise applications and architecting multi-agent workflows using Python, Semantic Kernel, LangChain, and FastAPI. Well versed in building, optimizing, and maintaining Azure-based data pipelines, leveraging modern cloud services and DevOps practices to deliver reliable, high-performance solutions in dynamic environments. Brings strong collaborative skills, with a history of cross-functional teamwork and technical leadership supporting continuous improvement and operational excellence.
EDUCATION:
Master of Science in Data Science, University of Cincinnati, Ohio - May 2023
Bachelor of Technology, Mahatma Gandhi Institute of Engineering and Technology - 2017
CERTIFICATIONS:
Databricks Certified Generative AI Engineer Associate, 2025
Microsoft Certified Azure AI Engineer Associate, 2025
TECHNICAL SKILLS:
Data Warehousing: ETL, Metadata, SQL Server, Synapse Analytics, Databricks, Snowflake
GenAI Stack & Agentic Orchestration: Azure (Azure AI Foundry, OpenAI Service, Cognitive Services, Azure Functions, Synapse, AKS, Azure ML, Azure DevOps, Blob Storage, Event Hub, Azure Data Factory, Azure Data Lake Storage), GCP, AWS (S3, Lambda, AWS Bedrock), Docker, Kubernetes, Neo4j, Pinecone
Languages: SQL, Python, PySpark, PL/SQL, Scala
Integration & Messaging: Luminate APIs, Event Hub, Microsoft Teams
Version Control: Git, GitHub, Azure DevOps
IDE & Build Tools: Eclipse, Visual Studio, Spring IDE, Shell Scripting
Databases & Storage: Databricks Volume, ADLS, MS SQL Server (on-prem), Azure SQL DB, Azure Synapse, MS Excel, MS Access, Cosmos DB, Snowflake
Professional Experience:
Role: GenAI Engineer (Agentic AI, Document Processing)
Client: British Petroleum
Duration: 08/2024 - Present
Responsibilities:
Developed Multi-Agent system utilizing AWS Bedrock (Claude 3.5) and Azure OpenAI via Semantic Kernel, automating the extraction of unstructured PDF data to streamline enterprise workflows across environments.
Engineered 'LLM-as-a-Tool' agents within Azure AI Foundry, implementing cross-platform schema mapping to categorize transactional data stored in AWS S3 and Azure Data Lake Storage for consolidated revenue reporting.
Designed cloud-agnostic reasoning loops (ReAct/CoT) to handle complex, multi-step document verification, integrating AI-driven insights into unified data pipelines accessible from both AWS and Azure consoles.
Developed autonomous Function-Calling tools that trigger Databricks jobs and ADF pipelines to query cross-cloud sources, enabling seamless data movement between Amazon S3 buckets, Blob Storage, and on-premises SQL databases.
Integrated Python-based AI systems with frameworks like Flask/FastAPI for serving ML models as REST APIs in production environments.
Migrated the codebase to Java 17, adopting modern language features and enhancing compile-time safety.
Implemented Advanced RAG (Retrieval-Augmented Generation) using semantic chunking and hybrid indexing, optimizing retrieval grounding by leveraging AWS OpenSearch and Azure AI Search for high-scale, reliable data processing.
Built and scaled Hybrid-Cloud AI solutions, focusing on high-availability architectures that utilize AWS Bedrock for model diversity and Azure for enterprise integration, significantly reducing vendor lock-in risk.
Automated dynamic data routing by developing custom classification models that migrate extracted tabular data into designated destinations, including AWS Redshift and Azure SQL Database, enhancing ETL efficiency.
Standardized CI/CD and Infrastructure-as-Code (IaC) workflows using GitHub Actions and Terraform to ensure seamless deployment of agentic AI solutions across both AWS and Azure production environments.
Developed custom middleware and API connectors to facilitate secure data exchange between on-premise systems and AWS/Azure cloud stacks, supporting multi-cloud agentic workflows and real-time data retrieval.
Environment: Azure AI Foundry, Azure OpenAI Service, Azure Data Factory (ADF), Azure Data Lake Storage, Azure SQL Database, Python, Spark SQL, Azure Databricks, Snowflake, Power BI, Synapse Analytics, GitHub, Azure DevOps (CI/CD), and Agile methodologies.
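The function-calling agent loop described in this role can be sketched as follows. This is a minimal illustration, not the production code: the tool, the tool registry, and the stubbed call_llm (which stands in for a real AWS Bedrock or Azure OpenAI endpoint) are all assumed names for demonstration.

```python
# Sketch of a ReAct-style function-calling loop for document extraction.
# `call_llm` is a deterministic stub standing in for a real model endpoint.
import json

def extract_invoice_total(document):
    """Toy tool: pull a 'Total: <amount>' field out of raw PDF text."""
    for line in document.splitlines():
        if line.lower().startswith("total:"):
            return {"total": float(line.split(":", 1)[1].strip())}
    return {"total": None}

TOOLS = {"extract_invoice_total": extract_invoice_total}

def call_llm(observation, document):
    """Stub model: first turn requests the tool, second turn finishes.
    A real agent would send the conversation to an LLM endpoint here."""
    if observation is None:
        return {"action": "extract_invoice_total", "args": {"document": document}}
    return {"action": "finish", "answer": json.loads(observation)}

def run_agent(document, max_steps=5):
    """Loop: ask the model for an action, run the tool, feed back the result."""
    observation = None
    for _ in range(max_steps):
        decision = call_llm(observation, document)
        if decision["action"] == "finish":
            return decision["answer"]
        tool = TOOLS[decision["action"]]
        observation = json.dumps(tool(**decision["args"]))
    raise RuntimeError("agent did not converge")

print(run_agent("Invoice #42\nTotal: 1250.50"))  # {'total': 1250.5}
```

In a real deployment the stubbed call would be replaced by the model invocation, and the tool registry would expose the Databricks/ADF triggers mentioned above.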
Role: BI Engineer (Agentic AI)
Client: Hershey
Duration: 04/2023 - 07/2024
Responsibilities:
Engineered an Autonomous Data Quality Agent using ReAct prompting to self-correct schema drifts and metadata anomalies across diverse ingestion pipelines (Luminate APIs, Event Hub, Blob, S3, SQL on-prem, ADLS).
Architected a 'Supervisor-Worker' agent pattern in which specialized agents audit diverse pipelines (Event Hub/Luminate APIs) in parallel, detecting schema drift, missing metadata, and data quality anomalies, significantly reducing manual validation effort and accelerating downstream analytics.
Built production-ready RAG pipelines using vector databases for semantic search, reducing model hallucinations and improving factual accuracy.
Developed scalable RESTful APIs using Spring Boot to serve GenAI models in production.
Implemented Long-term Memory (Vector Store based) and Conversation Summary Buffer Memory for agents to track historical quality trends, enabling proactive detection and remediation of data inconsistencies over time.
Integrated agentic AI modules with Azure Data Factory (ADF) and Databricks, enabling real-time monitoring and automated issue flagging within enterprise data pipelines.
Developed Python-driven visualizations and dashboards to illustrate model insights and evaluation metrics for stakeholders.
Integrated external enterprise knowledge bases (documents, APIs, databases) into RAG architectures to enable domain-specific intelligent query responses beyond generic LLM knowledge.
Integrated LLM APIs (OpenAI / Azure OpenAI / Hugging Face) into Java microservices.
Automated generation of actionable recommendations and auto-fixes for data transformation issues, streamlining data preparation and enhancing pipeline reliability.
Wrote custom Python unit and integration tests for AI projects to validate model output consistency and reliability.
Authored technical documentation and best practices for RAG workflows, including indexing, retrieval, augmentation, and generation stages.
Orchestrated backend workflows in Python, leveraging robust model integration and memory management techniques to ensure scalable, reliable performance for large-scale data processing.
Delivered structured quality reports and notifications to engineering teams via Microsoft Teams and Azure DevOps, improving collaboration and incident response times.
Achieved a 40% reduction in manual QA by deploying 'Self-Healing' data agents that generate and execute auto-remediation scripts, preventing downstream report failures and delivering tangible business value and operational efficiency.
Championed best practices in version control, CI/CD automation (Azure DevOps, GitHub), and agile methodologies, supporting continuous improvement of scalable Azure-based AI solutions in a dynamic environment.
Environment: Azure AI Foundry, Azure OpenAI Service, Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage (ADLS), Amazon S3, AWS Lambda, AWS Bedrock, SQL Server (on-prem), Azure SQL Database, Luminate APIs, Event Hub, Python, Spark SQL, Snowflake, Power BI, Synapse Analytics, Microsoft Teams, Azure DevOps (CI/CD), GitHub, Agile methodologies, Version Control, CI/CD Automation.
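The 'Supervisor-Worker' audit pattern from this role can be sketched as below. The worker checks, record shapes, and anomaly labels are illustrative assumptions; the production agents ran against live Event Hub and Luminate API pipelines rather than in-memory batches.

```python
# Sketch of a Supervisor-Worker data quality audit: the supervisor fans a
# batch out to specialised workers in parallel and merges their findings.
from concurrent.futures import ThreadPoolExecutor

def audit_schema(batch):
    """Worker: flag records missing expected columns (toy schema)."""
    expected = {"id", "sku", "qty"}
    return [("schema_drift", i) for i, rec in enumerate(batch)
            if not expected.issubset(rec)]

def audit_metadata(batch):
    """Worker: flag records whose source metadata field is empty."""
    return [("missing_metadata", i) for i, rec in enumerate(batch)
            if not rec.get("source")]

WORKERS = [audit_schema, audit_metadata]

def supervise(batch):
    """Supervisor: run all workers concurrently and collect anomalies."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda w: w(batch), WORKERS)
    return sorted(issue for found in results for issue in found)

batch = [
    {"id": 1, "sku": "A", "qty": 2, "source": "EventHub"},
    {"id": 2, "qty": 5, "source": ""},  # missing sku and source
]
print(supervise(batch))
```

An LLM-backed version would replace the rule-based workers with agents that reason over pipeline metadata, but the fan-out/merge orchestration stays the same.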
Role: Data Scientist
Client: EDF Renewables
Duration: 08/2020 - 08/2022
Responsibilities:
Developed a Predictive Siting Engine using Deep Learning (ANN + LSTM) and LLM-assisted geospatial data interpretation.
Engineered a Legal-NLP pipeline for 'Policy-to-Code' translation, converting unstructured regulatory PDFs into machine-readable constraints and policy documents.
Integrated financial datasets into a multi-factor neural network prediction model to estimate multiyear profitability for wind, solar, and hybrid energy assets.
Developed a technology recommendation module to automatically propose the best asset type (onshore wind, solar PV, hybrid, or BESS) for each candidate location based on energy yield, regulatory fit, and ROI score.
Created a location ranking framework combining environmental, regulatory, financial, and operational parameters into an overall siting score for informed capital allocation decisions.
Automated compliance validation by translating extracted policy text into machine-readable constraints, reducing manual errors in feasibility studies.
Collaborated with planning, GIS, and finance teams to operationalize the model into EDF's strategic asset planning workflow, improving decision-making speed and accuracy.
Optimized capital allocation by 45% through a Multi-Factor Neural Ranking system for renewable project site selection enabling data-backed selection of high-yield, low-risk renewable project sites.
Environment: Python, TensorFlow, PyTorch, Scikit-learn, Spark, SQL Server, Azure Data Lake Storage (ADLS), Azure Databricks, Azure DevOps (CI/CD), GitHub, Power BI, ArcGIS, Jupyter, Pandas, Numpy, ANN, LSTM, Bi-LSTM, REST APIs, PDF parsing, Geospatial analytics, Agile methodologies, Version Control.
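The multi-factor siting score described above can be illustrated with a simple weighted combination. The factor names and weights here are placeholders; the production system used a neural ranking model (ANN/LSTM), and this sketch only shows how normalized factors roll up into one ranking score.

```python
# Sketch of a multi-factor siting score for candidate renewable sites.
# Weights are illustrative; operational_risk counts against the score.
WEIGHTS = {"energy_yield": 0.4, "regulatory_fit": 0.25,
           "financial_roi": 0.25, "operational_risk": -0.10}

def siting_score(factors):
    """Combine normalized (0-1) factor values into one ranking score."""
    return round(sum(WEIGHTS[k] * factors[k] for k in WEIGHTS), 4)

def rank_sites(sites):
    """Return candidate locations ordered best-first by siting score."""
    return sorted(sites, key=lambda s: siting_score(s["factors"]), reverse=True)

sites = [
    {"name": "site_a", "factors": {"energy_yield": 0.9, "regulatory_fit": 0.7,
                                   "financial_roi": 0.8, "operational_risk": 0.2}},
    {"name": "site_b", "factors": {"energy_yield": 0.6, "regulatory_fit": 0.9,
                                   "financial_roi": 0.5, "operational_risk": 0.1}},
]
print([s["name"] for s in rank_sites(sites)])  # ['site_a', 'site_b']
```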
Role: Software Engineer / NLP Engineer
Client: Citibank
Duration: 08/2017 - 07/2020
Responsibilities:
Architected an NLP-driven Intent Analysis engine to score leads based on unstructured Relationship Manager (RM) interaction logs.
Implemented Sentiment Analysis and Named Entity Recognition (NER) to identify life-event triggers and churn risks in CRM data.
Drove a 20% increase in cross-sell opportunities by automating customer signal detection using Topic Modeling (LDA) and Sentiment Scoring.
Automated identification of high-value prospects and churn-risk customers, improving sales targeting and retention strategies.
Enabled early detection of potential churn through sentiment and intent analysis in RM notes.
Collaborated with cross-functional Data and Cloud teams to deliver scalable AI-driven solutions for business growth.
Increased cross-sell opportunities for financial products, including credit cards, loans, and insurance, by surfacing key customer signals.
Streamlined the sales pipeline by providing Relationship Managers with prioritized leads and actionable insights.
Utilized advanced NLP techniques to identify customer life events, such as salary hikes and home purchase interest, for targeted marketing.
Supported digital transformation initiatives by migrating legacy processes to AI-powered, data-driven workflows.
Environment: Azure, Jupyter Notebooks, Azure Machine Learning, Azure Data Factory, Azure SQL Database, Azure Blob Storage, Python, Scikit-learn, Pandas, NLTK, Power BI, Git, Agile methodologies, Version Control, CI/CD Automation.
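The lead-scoring idea from this role can be sketched in miniature. The production pipeline used NLTK sentiment analysis, NER, and LDA topic models; this lexicon-based version (with made-up cue lists and weights) only illustrates how detected signals roll up into a lead priority.

```python
# Sketch of lead scoring from Relationship Manager notes: life-event cues
# raise a lead's priority, churn cues lower it. Cues/weights are illustrative.
LIFE_EVENTS = {"salary hike": 3, "home purchase": 3, "new job": 2}
CHURN_CUES = {"unhappy": -2, "switching": -3, "complaint": -2}

def score_lead(note):
    """Sum life-event and churn signals found in one interaction note."""
    text = note.lower()
    score = sum(w for cue, w in LIFE_EVENTS.items() if cue in text)
    score += sum(w for cue, w in CHURN_CUES.items() if cue in text)
    return score

def prioritise(notes):
    """Best cross-sell prospects first, churn risks last."""
    return sorted(notes, key=score_lead, reverse=True)

notes = [
    "Customer mentioned a salary hike and home purchase plans.",
    "Customer unhappy with fees, considering switching banks.",
]
print([score_lead(n) for n in notes])  # [6, -5]
```

A statistical version would replace the fixed lexicons with LDA topics and a trained sentiment model, but the signal-to-priority aggregation is the same shape.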