
Sanjana - GCP Data Architecture
[email protected]
Location: Denton, Texas, USA
Relocation: YES
Visa: GC
Sanjana Kunta
[email protected] | 9452240497

PROFESSIONAL SUMMARY:
Over 10 years of experience in GCP data architecture, building data-intensive applications and tackling challenging architectural and scalability problems.
Leveraged Databricks to build and manage scalable data pipelines and machine learning models, improving analytics efficiency.
Applied SQL concepts, Presto SQL, Hive SQL, Python (Pandas, NumPy, SciPy, Matplotlib), and PySpark to cope with increasing data volumes.
Designed and developed ETL processes using Java, leveraging libraries such as Apache Camel and Spring Batch to extract, transform, and load data efficiently from various sources into data warehouses and lakes.
Hands-on knowledge of Master Data Management (MDM) concepts and methodologies, with the ability to apply this knowledge in building MDM solutions.
Proficient with Informatica MDM Architecture and various modules involved.
Experience in application development using Python, Django, HTML5, CSS, Git, Java/J2EE, JavaScript, Oracle, PostgreSQL, and SQLite.
Hands-on experience in GCP: GCS buckets, Cloud Functions, Cloud Dataflow, the gsutil and bq command-line utilities, and Stackdriver.
Implemented robust data validation and cleansing mechanisms in Java to identify and rectify data anomalies, ensuring accurate and reliable data for downstream analytics and reporting.
Optimized data processing performance by tuning Java applications, improving memory management, and leveraging multithreading and parallel processing techniques to handle large datasets efficiently.
Automated data workflows using Java and Apache Airflow, scheduling and managing ETL jobs, data transfers, and transformations to enhance operational efficiency and reduce manual intervention.
Experience with AWS services (EC2, IAM, Subnets, VPC, CloudFormation, AMI, S3, SNS, SES, Redshift, CloudWatch, SQS, Route 53, CloudTrail, Lambda, Kinesis, and RDS), achieving high availability and fault tolerance for AWS EC2 instances using Elastic IP, EBS, and ELB.
Developed and maintained Docker images for ETL processes, improving deployment efficiency and reducing setup times.
Orchestrated containerized data applications using Kubernetes, ensuring high availability and scalability.
Automated data pipeline deployments and continuous integration using Jenkins, improving development efficiency.
Experience in implementing Lakehouse solutions on Azure Cloud using Azure Data Lake and Databricks Delta Lake.
Worked on a migration project to migrate data from different sources to Google Cloud Platform (GCP) using UDP framework and transforming the data using Spark Scala scripts.
Experienced in web application development using Django/Python, Flask/Python, Node.js, Angular.js, and jQuery, with HTML/CSS/JS for server-side rendered applications.
Developed a fully automated continuous integration system using Git, Gerrit, Jenkins, MySQL, and custom tools developed in Python and Bash.
Thorough knowledge of various front-end tools like HTML, DHTML, CSS, JavaScript, XML, jQuery, AngularJS, and AJAX.
Extensive experience in developing enterprise web applications using Python, PHP4 and PHP5, Flask, Jinja2, Django, HTML, CSS, JavaScript, JQuery, Ajax, MySQL.
Prepared and delivered customized solution and product delivery demos.
Managed version control for data engineering projects using Git, ensuring code integrity and collaboration.
Integrated Git with CI/CD pipelines to automate testing and deployment processes.
Hands-on experience with different programming languages such as Python and SAS.
Strong working experience with Lakehouse architecture, Delta Lake, Databricks, Data Factory, SQL, and Amazon Redshift.
Experience in data cleansing with MDM and Data Profiling.
Experience in handling the Python and Spark contexts when writing PySpark programs for ETL.
Developed interactive dashboards in Tableau, delivering actionable insights and improving decision-making across various departments.
Proficiency with all aspects of MDM implementation such as Data Profiling, Data Quality, hierarchy management, data enrichment with external sources, workflow development and management.
Developed advanced PostgreSQL queries and stored procedures to support complex data analysis and reporting.
Implemented PostgreSQL performance tuning strategies, including indexing and query optimization, to enhance database efficiency.
Hands-on experience developing Spark applications using PySpark DataFrames, RDDs, and Spark SQL (a minimal illustrative sketch follows this summary).
Strong knowledge of SQL, ETL processes, and data warehousing concepts.
Demonstrated expertise in troubleshooting and resolving database issues promptly, minimizing downtime and ensuring uninterrupted data availability for critical business operations.
Implemented robust security measures such as data masking and secured views to safeguard sensitive information and mitigate potential security risks within the Snowflake environment.
Experience in developing Spark applications using Spark SQL, PySpark, and Delta Lake in Databricks for data extraction.
Experienced in working with various python integrated development environments like PyCharm, PyStudio, PyDev, and Sublime.
Managed Hadoop clusters for large-scale data processing, ensuring high availability and performance.
Designed and implemented ETL pipelines from various relational databases to the data warehouse (DWH) using Apache Airflow.
Designed and developed data warehouse models using Snowflake and star schemas.
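A minimal, hypothetical PySpark sketch of the DataFrame/Spark SQL ETL pattern referenced above; all paths, table names, and columns are illustrative placeholders, not taken from any engagement listed in this resume:

```python
# Minimal PySpark ETL sketch (illustrative only; paths and column names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV data from a hypothetical landing location.
raw_df = spark.read.option("header", True).csv("s3://example-bucket/landing/orders/")

# Transform: basic cleansing and an aggregate, using the DataFrame API and Spark SQL.
clean_df = (
    raw_df
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount").isNotNull())
)
clean_df.createOrReplaceTempView("orders")
daily_totals = spark.sql(
    "SELECT date(order_ts) AS order_date, SUM(amount) AS total_amount "
    "FROM orders GROUP BY date(order_ts)"
)

# Load: write the curated output as Parquet, partitioned by date.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_totals/"
)
```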

TECHNICAL SKILLS
Languages: Python, PySpark, Java, SQL, MySQL, TSQL, PostgreSQL, Shell Scripting
Cloud: Azure, AWS, GCP
ETL/Reporting Tools: Power BI, SSIS, SSAS, SSRS, Azure Data Factory, Snowflake
Big Data Tools: Hive, Pig, MapReduce, Hadoop, Apache Spark, Apache Kafka, Sqoop, HDFS
Analytics Tools: Tableau, Power BI, Microsoft SSIS, SSAS and SSRS
OLAP Tools: Business Objects and Crystal Reports 9
Data Modelling Tools: Erwin Data Modeler, ER Studio v17
IDEs: Eclipse, IntelliJ IDE, PyCharm IDE, Notepad++ and Visual Studio
Operating System: Windows, Unix, Linux
CI/CD, DevOps Tools: Git, GitHub, Docker, Jenkins, Kubernetes, Splunk, Grafana
Databases: SQL DB, SQL Server 2019/2016/2014/2012, Oracle 12C/11gR2/10g/9i
Methodologies: RAD, JAD, System Development Life Cycle (SDLC), Agile

CERTIFICATIONS:
AWS Cloud Practitioner
Tableau Desktop Specialist
Google Data Analytics Professional Certification
Microsoft Fabric Analytics Engineer Certification
Microsoft Certified: Azure Fundamentals (AZ-900)

PROFESSIONAL EXPERIENCE:
Paccar, Dallas, TX || GCP Data Architect || Feb 2025 - Present
Responsibilities:
Heavily involved in data architecture and application design using cloud and big data solutions on AWS and Microsoft Azure.
Designed and developed data integration pipelines in Azure Data Factory to ingest 50 TB of data daily from on-prem SQL servers to Azure SQL Data Warehouse.
Involved with ETL team to develop Informatica mappings for data extraction and loading the data from source to MDM Hub Landing tables.
Experience working with product teams to create various store-level metrics and supporting data pipelines written on GCP's big data stack.
Experience with GCP Dataproc, Dataflow, Pub/Sub, GCS, Cloud Functions, BigQuery, Stackdriver, Cloud Logging, IAM, and Data Studio for reporting.
Engaged directly with the customer's development team to understand their specific business and technology challenges in the area of distributed ledger integration in new product delivery and services.
Developed and optimized complex T-SQL queries and stored procedures within Azure Synapse, enhancing query performance and enabling efficient data retrieval and manipulation.
Experience in developing Spark applications using Spark SQL, PySpark, and Delta Lake in Databricks for data extraction (see the illustrative sketch at the end of this section).
Vast experience in identifying production bugs in the data using Stackdriver logs in GCP.
Experience in GCP Dataproc, GCS, Cloud Functions, Cloud SQL, and BigQuery.
Used the Cloud Shell SDK in GCP to configure services such as Dataproc, Cloud Storage, and BigQuery.
Designed and managed Snowflake data warehouses, improving scalability and performance for large-scale data analytics.
Developed pipeline for POC to compare performance/efficiency while running pipeline using the AWS EMR Spark cluster and Cloud Dataflow on GCP.
Utilized Snowflake's SQL capabilities for complex data queries and reporting, enhancing data insights.
Leveraged Azure Synapse Link to enable seamless integration between Azure Synapse Analytics and Azure Cosmos DB.
Worked with ETL Developers to integrate data from varied sources like Oracle, DB2, flat files and SQL databases and loaded into landing tables of Informatica MDM Hub.
Understood the customer/vendor/item (MDM) data model as per the client architecture.
Developed the back-end web services using Python and Django REST framework.
Developed streaming applications using PySpark to read from Kafka and persist the data to NoSQL databases such as HBase and Cassandra.
Conducted performance monitoring and troubleshooting activities within Azure Synapse, proactively identifying and resolving performance bottlenecks and ensuring uninterrupted data processing and analytics operations.
Monitored and managed Airflow DAGs, addressing issues and optimizing pipeline performance.
Designed and implemented Java solutions for integrating with data warehouses such as Snowflake and Azure Synapse, optimizing data loading processes and ensuring efficient data storage and retrieval.
Utilized Java libraries such as Apache POI for handling Excel files and Jackson for JSON processing, facilitating effective data manipulation and transformation as part of ETL workflows.
Create batch groups in Informatica MDM Hub to run the staging, match and merge and load jobs as per the dependencies.
Engaged with customers and prospects to demonstrate products and effectively communicate key differentiators.
Designed data lake (Delta Lake, Lakehouse), data warehouse schemas (Data Vault, 3NF model), and data mart schemas (dimensional model).
Worked on numerous Python modules; built DB context, run context, and other Python objects reused by the application. Optimized code using smart pointers, profilers, and the C++ Standard Template Library.
Worked on the resulting application reports and Tableau reports. Deployed the product delivery site using Apache servers with mod_python on AWS.
Knowledge of the Software Development Life Cycle (SDLC), Agile and Waterfall Methodologies.
Expertise in using Python libraries such as os, pickle, and sqlite3. Built Vagrant/Docker boxes for dev and test environments.
Extensive experience in web application development using Python, Django and web technologies (HTML, HTML5, DHTML, CSS, CSS3, XML and JavaScript) to create scalable and robust common components which can be used across the whole framework.
Implemented monitoring and logging for Java-based data applications using tools such as Log4j and SLF4J, ensuring visibility into application performance and facilitating troubleshooting and issue resolution.
Developed interactive notebooks in Databricks for collaborative data exploration and analysis.
Created many reports using the xlrd/xlwt Python packages and ReportLab 2.7/3.3. Used the json and simplejson Python modules to call web services.
Responded to tenders, RFIs, RFPs, and proposals with respect to product delivery solution information.
Designed and implemented Snowflake data warehouse solutions for scalable and performant analytics. Implemented Data Vault modeling for creation of the enterprise data warehouse.
Implemented robust data quality checks and validations within ETL processes to ensure accuracy and completeness of healthcare data, minimizing errors and discrepancies in downstream analytics.
Proficient in implementing robust security measures and access controls in Snowflake, ensuring data privacy, compliance, and governance standards are met.
Responsible for estimating the cluster size, monitoring, and troubleshooting of the Spark Databricks cluster.
Integrated Snowflake with various data sources and BI tools, streamlining data access and visualization
Optimized Snowflake data warehouse performance through efficient data modeling, indexing strategies, and query optimization techniques, improving analytics query response times and overall system efficiency.
Environment: Hadoop, GCP, Azure Data Factory, Azure Data Lake, Azure Storage, Azure SQL, Azure SQL Data Warehouse, Azure Databricks, Azure PowerShell, Azure Synapse, MapReduce, Hive, Spark, Python, YARN, Tableau, Kafka, Sqoop, Scala, HBase.
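A minimal sketch of the PySpark plus Delta Lake extraction pattern referenced in this section; it assumes Delta Lake is available on the cluster (as on Databricks), and every path and column name is a hypothetical placeholder:

```python
# Illustrative PySpark + Delta Lake sketch for a Databricks-style environment.
# Table and path names are hypothetical; assumes Delta Lake is available on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-extract-sketch").getOrCreate()

# Read a raw bronze Delta table (hypothetical location).
bronze = spark.read.format("delta").load("/mnt/datalake/bronze/telemetry")

# Light transformation into a curated silver table.
silver = (
    bronze
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates(["event_id"])
)

# Write back as Delta, partitioned by date, so downstream SQL/BI can query it.
(
    silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("/mnt/datalake/silver/telemetry")
)
```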

United Airlines, Chicago, IL || GCP Data Architect || Jul 2021 - Jan 2025
Responsibilities:
Involved in complete Software Development Life Cycle (SDLC) - Business Requirements Analysis, preparation of Technical Design documents, Data Analysis, Logical and Physical database design, Coding, Testing, Implementing, and deploying to business users.
Developed ETL pipelines using AWS Glue to integrate diverse data sources.
Used Amazon RDS (Relational Database Service) to manage relational databases such as MySQL and PostgreSQL, handling structured data from various sources.
Used Kinesis Firehose for efficient delivery of processed data to Amazon S3 and Redshift.
Designed and implemented infrastructure as code (IaC) using Terraform to automate the provisioning and configuration of Kinesis streams, ensuring consistent deployment across environments.
Proficient in Terraform for automating and managing AWS infrastructure deployments.
Led SRE efforts to design and onboard legacy applications to GCP. Designed and implemented GKE clusters with Istio. Worked with DevOps, architects, and application teams to build fully automated GCP resource creation and application deployment by leveraging Jenkins/Helm/Terraform.
Conducted POCs and initial assessments of GCP products to create design patterns per customer requirements for enterprise-wide adoption. Partnered with infrastructure, security, and operations teams to scope, define, and execute application migration to GCP.
Provided thought leadership and strategic vision to senior and program management and technical teams on GCP, cloud-native solutions, monitoring, logging, DevSecOps, security, microservices, and cloud design patterns.
Configured Jenkins pipelines for building, testing, and deploying data applications, ensuring high code quality.
Implemented Hadoop ecosystem tools (Hive, Pig) for advanced data querying and transformation.
Designed and enforced IAM policies to control access to AWS resources based on the principle of least privilege.
Designed and deployed multi-tier applications with an emphasis on high availability, fault tolerance, and auto-scaling using AWS CloudFormation and AWS services such as EC2, AWS Glue, Athena, Lambda, S3, RDS, DynamoDB, SNS, SQS, and IAM.
Implemented PySpark scripts using Spark SQL to access Hive tables in Spark for faster data processing.
Played a key role in setting up a complete data analytics platform for the HBO Max release, including products such as Alation, SnapLogic, Snowflake, AWS EC2, EMR instances, TigerGraph, and Looker.
Created packages as per business needs to pull data from MDM.
Created advanced data analysis scripts using Python libraries (Pandas, NumPy), enhancing data insights and decision-making.
Monitored and maintained Kubernetes environments, addressing issues and ensuring reliable operations.
Designed and implemented ETL pipelines from various relational databases to the data warehouse using Apache Airflow (a minimal DAG sketch follows this section).
Developed and deployed outcomes using Spark and Scala code on Hadoop clusters running on GCP.
Led end-to-end workstreams for Cyber, Ticketmaster, Concessions, IDQ, MDM, and Marketing (ACS, GAM & GA360), which involved leading cross-functional team meetings and gathering, analyzing, and prioritizing requirements.
Implemented PostgreSQL performance tuning strategies, including indexing and query optimization, to enhance database efficiency.
Designed Lakehouse architecture using Azure Delta Lake and Databricks.
Developed streaming and batch processing applications using PySpark to ingest data from the various sources into HDFS Data Lake.
Worked on NoSQL Databases such as HBase and integrated with PySpark for processing and persisting real-time streaming.
Utilized PostgreSQL extensions and tools (e.g., PostGIS) to support spatial data and advanced analytics.
Implemented data transformations and cleansing routines to ensure data quality and consistency.
Utilized Python/Java/Scala for scripting and coding ETL processes, leveraging AWS Glue's capabilities for scalable data processing.
Authored requirements, mappings, architecture, sequence diagrams, and design documents detailing the solution and approach for an end-to-end MDM solution.
Configured and integrated the required AWS services in accordance with business requirements to build Infrastructure as Code (IaC) on the AWS cloud platform from scratch.
Performed data analysis and visualization using AWS Athena and QuickSight.
Experienced in working with various database technologies and tools on Amazon RDS, such as MySQL, PostgreSQL, and Aurora.
Worked on automating data ingestion into the Lakehouse and transformed the data, used Apache Spark for leveraging the data, and stored the data in Delta Lake.
Integrated Snowflake with various data sources and BI tools, streamlining data access and visualization.
Designed and deployed several applications using practically all AWS services, with an emphasis on high availability, fault tolerance, and auto-scaling in AWS CloudFormation, including EC2, Redshift, S3, RDS, DynamoDB, SNS, and SQS.
Developed custom Kafka producers and consumers for publishing to and subscribing from a variety of Kafka topics.
Wrote code to optimize performance of AWS services used by application teams and provided code-level application security for clients (IAM roles, credentials, encryption, etc.).
Used Amazon EMR for MapReduce jobs and tested locally using Jenkins. Performed data extraction, aggregation, and consolidation of Adobe data within AWS Glue using PySpark.
Created external tables with partitions using Hive, AWS Athena, and Redshift. Developed PySpark code for AWS Glue jobs and for EMR.
Good understanding of other AWS services such as S3, EC2, IAM, and RDS; experience with orchestration and data pipelines such as AWS Step Functions, Data Pipeline, and Glue.
Environment: Hive, Spark, GCP, Python, YARN, Tableau, Kafka, Sqoop, Scala, HBase, AWS, EC2 (Elastic Compute Cloud), S3, RDS, Glue, Lambda, Redshift, CloudWatch, Snowflake, SQL, PySpark, ETL.
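A minimal Apache Airflow DAG sketch of the relational-database-to-warehouse pipeline pattern referenced in this section; the connection handling is stubbed out, and the DAG id, task names, and schedule are hypothetical placeholders:

```python
# Minimal Apache Airflow DAG sketch for a relational-DB-to-warehouse load.
# Connection IDs, table names, and the load logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull incremental rows from the source relational database.
    print("extracting rows from source DB for", context["ds"])


def load_to_warehouse(**context):
    # Placeholder: bulk-load the extracted batch into the data warehouse.
    print("loading batch into warehouse for", context["ds"])


with DAG(
    dag_id="orders_db_to_dwh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load
```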

Duke Energy, Charlotte, NC || Azure Data Engineer || Jan 2020 - Jun 2021
Responsibilities:
Designed, deployed, and maintained scalable cloud data solutions on Microsoft Azure to support Duke Energy's smart grid modernization, predictive maintenance, and customer analytics initiatives.
Developed serverless data processing pipelines using Azure Functions with HTTP triggers and Application Insights to monitor real-time grid sensor data and perform load testing through Azure DevOps Services.
Built robust CI/CD pipelines leveraging Docker, Jenkins, GitHub, and Azure Container Services to automate deployment of critical data integration workloads for energy consumption forecasting and compliance reporting.
Automated infrastructure provisioning for Duke Energy's Azure environment using Terraform, optimizing virtual machine scale sets to support dynamic energy demand analysis.
Implemented Ansible for infrastructure configuration management, ensuring consistent deployment of IoT gateways and data ingestion pipelines; integrated real-time monitoring with Nagios and ELK stack for system health and grid reliability.
Migrated legacy on-premises energy management services to Azure Cloud, collaborating with cross-functional teams to ensure smooth transition and minimal service disruption.
Provisioned core Azure components including Virtual Networks, Application Gateway, Storage Accounts, and affinity groups to securely handle high-volume smart meter and grid telemetry data.
Designed Java-based integrations between Duke Energy's operational data stores and enterprise data warehouses such as Snowflake and Azure Synapse, enhancing real-time data availability for predictive analytics.
Utilized Apache POI and Jackson libraries in Java to process diverse file formats (Excel, JSON) as part of ETL workflows supporting energy usage analytics and regulatory reporting.
Implemented monitoring and logging for Java-based ETL applications using Log4j and SLF4J, ensuring proactive detection of issues in energy data pipelines.
Developed and deployed microservices architectures in Kubernetes to enable modular, scalable data processing for Duke Energy's IoT data streams.
Managed Kubernetes clusters to maintain continuous data ingestion from smart grid devices and address operational issues promptly.
Configured Jenkins pipelines for automated build, test, and deployment of big data applications, ensuring high code quality and faster time-to-insight.
Integrated machine learning models for predictive maintenance of grid infrastructure and energy load forecasting, enabling smarter operational decisions.
Ensured strict compliance with regulatory requirements (e.g., NERC CIP standards) across all ETL processes, safeguarding critical energy infrastructure data.
Worked with databases like PostgreSQL, SQL Server, Oracle, and MySQL for storing and managing operational and customer data, including capacity planning and performance tuning.
Developed Databricks notebooks using PySpark to transform large volumes of smart grid data, loading curated datasets into Azure SQL for downstream analytics (see the illustrative sketch at the end of this section).
Optimized Snowflake performance for large-scale energy usage queries through partitioning and query tuning.
Implemented Databricks Delta Lake for reliable, ACID-compliant data lake operations supporting Duke Energy's data governance framework.
Prepared capacity and architecture plans to migrate Duke Energy's legacy applications and databases to Azure, modernizing infrastructure for future scalability.
Implemented PostgreSQL solutions for managing energy transaction data and ensuring high availability of critical systems.
Performed one-time migration of multistate grid data from SQL Server to Snowflake using Python and SnowSQL to centralize analytics workloads.
Administered and optimized PostgreSQL databases used by field operations and customer information systems.
Developed and maintained CI/CD pipelines using Jenkins and Groovy scripts to automate deployments for Duke Energy's data integration solutions.
Utilized Kubernetes and Docker as the runtime for CI/CD processes, streamlining build, test, and deploy cycles for big data applications.
Created Jenkins jobs for deploying containerized applications to Kubernetes clusters handling real-time grid telemetry data.
Managed Terraform Cloud/Enterprise to provision and maintain Duke Energy's infrastructure securely and collaboratively.
Environment: Azure Data Factory, Azure Data Lake, Azure Storage, Azure SQL, Azure Synapse, Azure Databricks, Hadoop, Spark, Hive, Python, Scala, Yarn, MapReduce, Tableau, Kafka, Sqoop, HBase, Azure PowerShell.
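An illustrative Databricks-style PySpark sketch of the transform-and-load-to-Azure-SQL pattern referenced in this section; it assumes a SQL Server JDBC driver is available on the cluster, and all paths, server names, tables, and credentials are hypothetical placeholders (a real job would pull secrets from a secret scope):

```python
# Illustrative Databricks/PySpark sketch: transform raw smart-meter readings and
# load the curated result into Azure SQL over JDBC. All names, URLs, and secrets
# shown here are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("meter-curation-sketch").getOrCreate()

# Read raw readings from a hypothetical data lake path.
raw = spark.read.parquet("/mnt/datalake/raw/meter_readings/")

# Curate: parse timestamps, drop obviously bad rows, aggregate to hourly usage.
hourly = (
    raw
    .withColumn("reading_ts", F.to_timestamp("reading_ts"))
    .filter(F.col("kwh") >= 0)
    .groupBy("meter_id", F.date_trunc("hour", "reading_ts").alias("reading_hour"))
    .agg(F.sum("kwh").alias("kwh"))
)

# Load into Azure SQL via JDBC (assumes the SQL Server JDBC driver is on the cluster).
(
    hourly.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://example-server.database.windows.net:1433;database=analytics")
    .option("dbtable", "dbo.hourly_usage")
    .option("user", "etl_user")           # placeholder; use a secret scope in practice
    .option("password", "<from-secret>")  # placeholder
    .mode("append")
    .save()
)
```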

Bank of America, Plano, TX || Data Engineer || Jun 2015 - Dec 2019
Responsibilities:
Used AWS Athena extensively to ingest structured data from S3 into other systems such as Redshift and to produce reports (a minimal query sketch follows this section).
Worked with the Snowflake cloud data warehouse and AWS S3 buckets to integrate data from multiple source systems, including loading nested JSON-formatted data into Snowflake tables.
Worked on the code transfer of a quality monitoring program from AWS EC2 to AWS Lambda, as well as the creation of logical datasets to administer quality monitoring on Snowflake warehouses.
Used the Spark Streaming APIs to perform on-the-fly transformations and actions for building the common learner data model, which receives data from Kinesis in near real time.
Performed end- to-end Architecture & implementation assessment of various AWS services like Amazon EMR, Redshift, S3, Athena, Glue and Kinesis.
Built external table schemas in Hive, the primary query engine of EMR, for the data being processed.
Using AWS Glue, designed and deployed ETL pipelines over S3 Parquet files in a data lake.
Created, developed, and tested environments for different applications by provisioning Kubernetes clusters on AWS using Docker, Ansible, and Terraform.
Optimized PySpark jobs through performance tuning and efficient resource management.
Worked on deployment automation of all the micro services to pull image from the private Docker registry and deploy to Docker Swarm Cluster using Ansible.
Integrated Snowflake with various data sources and BI tools, streamlining data access and visualization
Worked on scalable distributed data system using Hadoop ecosystem in AWS EMR.
Migrated on-premises database structures to the Confidential Redshift data warehouse and loaded data into this application using Hadoop technologies such as Pig and Hive.
Used JSON schema to define table and column mappings from S3 data to Redshift.
Built an on-demand, secure EMR launcher with custom spark-submit steps using S3 events, SNS, KMS, and a Lambda function.
Monitored and tuned Hadoop clusters to maintain performance and resource efficiency.
Used multi-node Redshift to implement columnar data storage, advanced compression, and massively parallel processing.
Environment: AWS, EC2, S3, RDS, Glue, Lambda, Redshift, CloudWatch, Snowflake, SQL, Python, Apache Airflow, AWS Glue, Talend, Java, Informatica, Apache NiFi, Microsoft Azure Data Factory, Apache Spark, Fivetran, Stitch, Matillion, dbt, DataStage, Apache Flink.
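A minimal boto3 sketch of running an Athena query over S3 data, as referenced at the top of this section; the region, database, table, and result-bucket names are hypothetical placeholders:

```python
# Illustrative boto3 sketch: run an Athena query over S3 data and write results
# to an S3 location for downstream use. Database, table, and bucket names are
# hypothetical placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = "SELECT account_type, COUNT(*) AS txn_count FROM transactions GROUP BY account_type"

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/reports/"},
)
execution_id = response["QueryExecutionId"]

# Poll until the query finishes (simple polling loop; production code would add timeouts).
while True:
    state = athena.get_query_execution(QueryExecutionId=execution_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print("Athena query finished with state:", state)
```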

Media Mint, India || Data Analyst || Jun 2011 - Jul 2014
Responsibilities:
Actively involved in gathering requirements from end users, involved in modifying various technical & functional specifications.
Designed and implemented data models and reports in Power BI to help clients analyze data to identify market trends, competition, and customer behaviors.
Closely worked with ETL to implement Copy activity, Custom Azure Data Factory Pipeline Activities for On-cloud ELT processing. Created Azure DevOps pipeline for Power BI report deployments.
Worked on the design and development of end-user applications for data presentation and analytics, including scorecards, dashboards, reports, monitors, and graphic presentations using Power BI and Tableau.
Used Microsoft Power BI to design dashboards and published them to the server using the data gateway concept.
Developed and published reports and dashboards using Power BI and wrote DAX formulas and expressions.
Utilized Power Query in Power BI to pivot and unpivot the data model for data cleansing and data massaging.
Created several user roles and groups for end users and provided row-level security to them. Worked with table and matrix visuals and with different levels of filters, such as report-level, visual-level, and page-level filters.
Developed various solution driven views and dashboards by developing different chart types including Pie Charts, Bar Charts, Tree Maps, Circle Views, Line Charts, Area Charts, and Scatter Plots in Power BI.
Worked on data transformations such as adding calculated columns, managing relationships, creating measures, removing rows, replacing values, splitting columns, and handling date & time columns.
Involved in designing/building complex stunning reports/dashboards using Filters (Slicers), Drill-down Reports, Sub reports and Ad-Hoc reports in Power BI Desktop.
Provided continued maintenance and development of bug fixes for the existing and new Power BI Reports.
Environment: Power BI, Azure, RDS, Snowflake, SQL, Python, Apache Airflow, AWS Glue, Talend, Java, Informatica, Apache NiFi, Microsoft.
