LAKSHMI KUMAR KATIKALA - Data Engineer - Remote only
[email protected]
Location: Denver, Colorado, USA
Relocation: No
Visa: GC
LAKSHMI KUMAR KATIKALA
Sr. Data Engineer| [email protected] |
678) 707-2060 | linkedin.com/in/lakshmikumark
Professional Summary
12+ years of experience in Data Engineering and Business Intelligence, developing and managing complex data solutions across diverse platforms.
Extensive experience with AWS services such as S3, Glue, EC2, IAM, Secrets Manager, EMR, Athena, Glue Crawler and Catalog, CloudWatch, Lambda, and Step Functions. Created and managed data storage and workflows effectively, using AWS Transfer Family for secure file transfers.
Extensive experience in designing and implementing ETL pipelines using Informatica PowerCenter, IICS, AWS Glue, and Databricks. Skilled in relational databases and data warehouses such as Oracle, SQL Server, Teradata, and Snowflake.
Extensive experience working with Snowflake tables, various kinds of views, procedures, tasks, stages, loading and unloading of data, and Streamlit apps.
Good experience working with Databricks notebooks, performance optimization techniques, notebook scheduling, Auto Loader, and the medallion architecture (bronze, silver, and gold layers).
Good experience with Azure services such as Blob Storage, ADLS Gen2, Azure Data Factory, and Azure DevOps.
Experienced in working with dbt for data transformations and integrating it with Matillion for orchestration.
Good experience with orchestration tools such as Matillion and Airflow.
Good experience in data modeling for relational data and for dimensions and facts (star and snowflake schemas), and in architecting data pipelines.
Managed and developed data pipelines in Azure Databricks using Python, PySpark, and SQL. Proficient with cloud-based ETL tools and experienced in handling data from various sources including Oracle, flat files, SFTP sources, and APIs.
Developed business intelligence reports using Power BI, Tableau, and Business Objects, providing critical insights and data visualizations to stakeholders.
Experience in leading teams through all phases of the software development lifecycle, adhering to Agile methodologies including the onsite-offshore model. Ensured thorough testing and reliable production support.
Experienced in Kafka and Spark Streaming for real-time data analytics and processing.
Experienced in CI/CD pipelines to integrate and migrate pipelines between environments, enhancing code integrity and deployment efficiency. Used GitLab and SVN.
Good experience in all phases of the software development life cycle, including requirements gathering, development, testing (unit, integration, and UAT), and production support in Agile methodology.
Good experience in assessing and maintaining data quality.
Professional Experience
Janus Henderson Investors, Denver, CO July 2019 - Jan 2025
Role: Sr. Data Engineer
Designed and implemented ETL operations using AWS Glue scripts and Databricks notebooks for data extraction and transformation, loading data into Snowflake for analytics and Power BI for reporting.
Managed data modeling, built scalable data pipelines, and integrated systems across Snowflake and Azure Databricks, ensuring seamless data flow and efficient storage solutions using AWS S3.
Scheduled and monitored data pipelines in Databricks, optimizing performance, and handled migrations and higher-environment deployments using Azure DevOps (Git).
Managed and optimized Snowflake resources, including tables, views, stored procedures, and tasks, streamlining complex data transformation and loading processes.
Integrated dbt (data build tool) to orchestrate data flows and improve data transformation efficiency.
Served as a key technical resource supporting production systems, and integrated Databricks and Snowflake, enabling advanced data processing and preparing data for critical reporting in Power BI.

Charles Schwab, Lone Tree, CO Aug 2018 - June 2019
Role: Sr. Data Engineer
Used Informatica PowerCenter and IICS for ETL operations on data from various source systems such as Oracle, flat files, XML, JSON, and CSV files, incorporating various business rules.
Worked on tuning the performance of Informatica sessions and several SQL queries and stored procedures.
Created partitions in Informatica sessions, created PL/SQL procedures and packages, and scheduled Informatica workflows, BTEQ scripts, and PL/SQL procedures using the AppWorx scheduler.
Restricted data access for users using row-level security and user filters.
Developed visualizations and dashboards using Tableau.
Supported data loads on both weekdays and weekends.
Arrow Electronics, Centennial, CO Sep 2015 - July 2018
Role: Data Engineer
Created Informatica PowerCenter mappings and workflows and scheduled them to extract data from various sources such as Oracle, flat files, XML, and CSV files.
Provided production support and resolved production issues on time.
Gathered statistics regularly and coordinated with DBAs to identify any system bottlenecks.
Created Teradata BTEQ scripts to load data from the staging layer to the semantic layer.
Worked with Azure Blob Storage to store various files and process them later in Teradata.
Worked on tuning the performance of Informatica sessions and several SQL queries and stored procedures.
Met with business partners regularly to understand their requirements and demonstrate work progress.
Created Autosys job scripts to schedule Informatica workflows, BTEQ scripts, and PL/SQL procedures.
Apple Inc, Cupertino, CA May 2009 - Aug 2015
Role: ETL Developer
Created tables and stored procedures in Teradata. Tuned the performance of data loads and scripts.
Created a semantic layer (reporting layer) for dashboards and reports to access data with sub-second response times.
Provided production support and resolved production issues on time.
Provided weekly status updates to Technical and Product management teams.
Gathered statistics regularly and coordinated with DBAs to identify any system bottlenecks.
Created BTEQ scripts to load data from the staging layer into the data warehouse.
Developed Business Objects universes and reports.
Trained new joiners on Apple domain, Informatica, Teradata and Business Objects.
Created Autosys job scripts to schedule Informatica workflows, BTEQ scripts, and PL/SQL procedures.
Created documentation for all work performed.
Skills
AWS: S3, Athena, Glue, EMR, Glue Crawler, Step Functions, Lambda, CodeCommit, CloudWatch, Secrets Manager, IAM, EC2, AWS Transfer Family
Azure: Blob Storage, Data Factory, DevOps
Databricks: Notebook creation (SQL and PySpark), scheduling, data sharing, integrations with AWS and Snowflake
Snowflake: Tables, stored procedures, performance tuning, cloning, SnowSQL, data sharing
Informatica: Informatica PowerCenter, IICS, Data Quality, Axon, Enterprise Data Catalog
Programming languages: Python, PySpark, Unix shell scripting
Streaming: Spark Streaming, Kafka basics
Orchestration tools: Autosys, AppWorx, AWS Step Functions, Matillion, Databricks Workflows
Source control: GitLab and SVN

Certifications
AWS Certified Solutions Architect
AWS Certified Cloud Practitioner
Certified Scrum Product Owner (CSPO)
Certified Scrum Master (CSM)
Data Scientist Nanodegree
Education

Name: Master's in Data Science
University: Eastern University, PA, USA

Name: Bachelor of Technology (Computer Science & Engineering)
University: Sri Venkateswara University, India
Year: 2008