
Lalit - Data Architect
[email protected]
Location: Owings Mills, Maryland, USA
Relocation: Open to nearby MD locations
Visa: H1B
Lalit Praveen Ayyagari
Big Data Architect
(443) 370-5652 / (469)480-7979
[email protected]
LinkedIn: https://www.linkedin.com/in/lalit-ayyagari-97136a112/

EXPERIENCE SUMMARY
Led teams to plan, design, and implement applications and software in Hadoop Big Data ecosystems, sourcing data from various source systems.
Collaborated with business analysts, developers, and technical support teams to define project requirements and specifications.
Designed, developed, and managed web-based applications, databases, network accounts, and programs that run on HDFS.
Launched complex recovery solutions to safeguard mission-critical data.
Translated technical specifications into project scopes of work and product requirements while spearheading design and development of databases and enterprise solutions.
Implemented application developments, resolved performance issues, and provided end-user training on hardware and software.
17 years of IT experience in analyzing, designing, developing, implementing, and testing software applications; currently working as a Hadoop Architect/Senior Hadoop Developer.
Hands-on experience in software design and development in Big Data/Hadoop (HDFS, Pig, Hive, HBase, MongoDB, Sqoop, MapReduce, Spark, PySpark, Kafka, Storm, Snowflake, and Scala).
Hands-on experience in software design and development using Core Java and JDBC on the z/OS operating system on mainframes.
Experience in XML parsing using the DOM parser.
Good knowledge of collection frameworks for handling structured data.
Good experience with REST web services to pull data from cloud-based APIs on different servers.
Extensively worked on Hadoop and the Hadoop ecosystem.
Excellent understanding of Hadoop architecture and the different components of Hadoop clusters (JobTracker, TaskTracker, NameNode, and DataNode).
Data Ingestion to HDFS from various data sources.
Analyzed large data sets by running Hive queries and Pig scripts.
Good experience in writing the pig scripts.
Optimization of Hive Queries.
Ability to analyze different file formats.
Good exposure in cluster maintenance.
Loading data from LINUX file system to HDFS.
Importing and exporting data between relational databases, NoSQL databases, and HDFS using Sqoop.
Very good knowledge of HBase, MongoDB, and Cassandra.
Good knowledge of TES and Dollar ($) job schedulers.
Automated Sqoop, Hive, and Pig jobs using Oozie scheduling.
Configuration and deployment modules.
Knowledge in OOZIE workflows.
Good knowledge on writing and using the user defined functions in HIVE and PIG.
Developed multiple Kafka Producers and Consumers from scratch as per the business requirements.
Responsible for creating, modifying and deleting topics (Kafka Queues) as and when required by the Business team.
Working on implementing the Spark and Storm frameworks.
Knowledge of the MapReduce framework.
Extensive exposure to all aspects of the Software Development Life Cycle (SDLC): requirements definition for customization, prototyping, coding (Java, COBOL, DB2), and testing.
Migrating systems written in COBOL to JAVA in order to reuse JAVA programming skills and class libraries.
Deployed Java tasks that periodically query a database and write the results to a dataset.
Using Java programs to access APIs such as SOAP/Web services, WebSphere MQ client API, Java Database Connectivity (JDBC) databases, custom Transmission Control Protocol/Internet Protocol (TCP/IP) socket services, and so forth.
Passing datasets created by traditional job steps to Java programs, which convert the data to XML, as well as reading and writing MVS datasets from Java.
Proficient in SYNCSORT for sorting of data files.
Flexible configuration of the Java Virtual Machine (JVM) and environment variables.
Routing JVM output directly to JES SYSOUT datasets.
Controlling output encoding.
Passing condition codes between Java and non-Java steps.
Executing the JVM under the original batch address space.
Communicating with the MVS system console.
All phases of the Software Development Life Cycle, starting with analysis and followed by design, development, and testing.
Acquired good knowledge of mainframes and extensively worked with VS COBOL II and JCL; well versed in DB2 and file systems such as PS files and VSAM, having worked across different application domains.

TECHNICAL PROFILE

Key Skills (Familiar / Proficient)
Programming Languages: Familiar - C, C++, Core/Advanced Java | Proficient - Scala, Python, Java, COBOL, JCL, SQL, Unix
Data Handling: Familiar - Oozie, Ambari, Flume | Proficient - Hadoop, HDFS, MapReduce, YARN, Pig, Hive, HBase, Sqoop, ZooKeeper, Flume, Impala
Scripting Languages: Familiar - Perl | Proficient - Shell scripting
Databases: Familiar - MS SQL 2008 (explored) | Proficient - Oracle 11g, DB2, MongoDB, PostgreSQL
File Formats: Parquet, ORC, RC, Text, JSON, XML, Avro
Descriptive and Predictive Analytics: Familiar - NLP, R | Proficient - Spark, PySpark
Software Experience: Oracle Data Miner (ODM), Visual Studio C++, Eclipse, Oracle SQL Developer, Web Services MQ communication
Libraries/Tools Explored: Eclipse, MQSeries, Changeman, TSO, File-AID, Panvalet, LCS, STS, NDM, SPUFI, QMF
Web Related: HTML, Extended JavaScript, Servlets, JavaScript, JSP, XQuery, XPath, XML, XSLT
Operating Systems: Familiar - Android, iOS | Proficient - Microsoft Windows, Unix/Solaris, Linux, MVS OS/390, MS Windows 2000/XP, MS-DOS



ACADEMIC PROFILE
Bachelor of Engineering in Mechanical Engineering, First Class, Velammal Engineering College (Anna University), Chennai, TN, India, 2007.

WORK/PROFESSIONAL EXPERIENCE

Client : Marriott International 08/2023 to Present
Project Title : Modern Data Platform
Role : Big Data Engineering Architect, contracted through LTIMindtree
Architect, design, code, test, and implement the next-generation data analytics platform (MDP) using software engineering best practices. MDP is a modern data platform that uses technologies such as Spark, Scala, Databricks, EMR, AWS S3, SNS, SQS, Lambda, Snowflake, DBT, and Fivetran.
Lead a 7-member team to plan, design, and implement the migration of applications to MDP.
Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation across multiple file formats.
Architect the orchestration through StoneBranch and create the data pipelines.
Architect the orchestration through Airflow and create the pipelines and DAGs.
Trigger Apache Spark jobs through the EMR API using Airflow hooks and operators (a sketch follows this project's bullet list).
NoSQL technologies such as Cassandra, HBase, and DynamoDB.
MPP databases such as PostgreSQL and Snowflake.
Use Snowpark pipelines in Snowflake and integrate them with Airflow.
Use serverless AWS Lambda, invoking the Lambdas through SNS and SQS.
Trigger Airflow DAGs through SNS and SQS via Lambda.
Spark integration with Big Data (Hadoop on-prem), Amazon EMR, Databricks, and Apache Zeppelin.
Provide software expertise in these areas: Spark-based applications using Spark with Scala, Java application integration, logging, web services, and cloud computing.
Author Python (PySpark) scripts for custom UDFs covering row/column manipulations, merges, aggregations, stacking, data labeling, and all cleaning and conforming tasks; migrate data from on-premises storage to AWS S3 buckets.
Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Snowflake and Fivetran, ensuring data from diverse sources is effectively transformed and loaded into the data warehouse or data lake.
Implement and manage data models in DBT, guarantee accurate data transformation and alignment with business needs.
Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting.
Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance.
Establish DBT best practices and processes to improve performance, scalability, and reliability.
Expertise in SQL and a strong understanding of Data Warehouse concepts and Modern Data Architectures.
Migrate legacy transformation code and Spark EMR code into modular DBT data models.
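
Below is a minimal sketch of the Airflow-to-EMR orchestration pattern described above (triggering a Spark job through the EMR API with Airflow operators). It assumes the Airflow Amazon provider package is installed; the cluster ID, S3 paths, and connection IDs are illustrative placeholders rather than this project's actual values.

# Hedged sketch: an Airflow DAG that submits a Spark step to an existing EMR
# cluster and waits for it to finish. All identifiers below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr import EmrStepSensor

SPARK_STEP = [{
    "Name": "example-spark-step",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": ["spark-submit", "--deploy-mode", "cluster",
                 "s3://example-bucket/jobs/transform.py"],   # placeholder script
    },
}]

with DAG(
    dag_id="emr_spark_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    add_step = EmrAddStepsOperator(
        task_id="add_spark_step",
        job_flow_id="j-EXAMPLECLUSTER",      # hypothetical EMR cluster ID
        steps=SPARK_STEP,
        aws_conn_id="aws_default",
    )

    wait_for_step = EmrStepSensor(
        task_id="wait_for_spark_step",
        job_flow_id="j-EXAMPLECLUSTER",
        step_id="{{ task_instance.xcom_pull(task_ids='add_spark_step', key='return_value')[0] }}",
        aws_conn_id="aws_default",
    )

    # The sensor only starts polling once the step has been submitted.
    add_step >> wait_for_step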


Client : Toyota, Texas(Remote) 01/2023 to 08/2023
Project Title : Manufacturing Internal Logistics
Role : Senior Data Engineer
Internal Logistics applications are part of the manufacturing domain and are SQL Server based. To maintain the data for analytical reports, the applications were brought onto the TBDP platform, which uses technologies such as SQL Server, Spark, Scala, Databricks, EMR, AWS S3, SNS, and SQS.
Architect, design, code, test, and implement the next-generation data analytics platform TBDP using software engineering best practices with the latest technologies: EMR, Databricks, Scala, and Spark.
Led a 7-member team to plan, design, and implement the migration of applications to TBDP.
Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation across multiple file formats.
Architected the orchestration through StoneBranch and created the data pipelines.
Architected the orchestration through Airflow and created the pipelines and DAGs.
Triggered Apache Spark jobs through the EMR API using Airflow hooks and operators.
NoSQL technologies such as Cassandra, HBase, and DynamoDB.
MPP databases such as PostgreSQL and Snowflake.
Used Snowpark pipelines in Snowflake and integrated them with Airflow.
Used serverless AWS Lambda, invoking the Lambdas through SNS and SQS.
Triggered Airflow DAGs through SNS and SQS via Lambda (a sketch follows this project's bullet list).
Spark integration with Big Data (Hadoop on-prem), Amazon EMR, and Apache Zeppelin.
Provided software expertise in these areas: Spark-based applications, Java application integration, logging, web services, and cloud computing.
Authored Python (PySpark) scripts for custom UDFs covering row/column manipulations, merges, aggregations, stacking, data labeling, and all cleaning and conforming tasks; migrated data from on-premises storage to AWS S3 buckets.
Developed solutions to enable metadata/rules-engine driven data analytics applications leveraging open source and/or cloud-native components.
Developed solutions in a highly collaborative and agile environment.
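
Below is a minimal sketch of the SNS/SQS-to-Lambda-to-Airflow trigger described above. It assumes the Airflow 2.x stable REST API is reachable from the Lambda; the Airflow URL, DAG ID, and credentials are illustrative placeholders.

# Hedged sketch: an AWS Lambda handler that reacts to SQS (or SNS) events and
# triggers an Airflow DAG run via the Airflow 2.x REST API. All values are
# placeholders supplied through environment variables.
import json
import os
import urllib.request

AIRFLOW_URL = os.environ.get("AIRFLOW_URL", "https://airflow.example.com")
DAG_ID = os.environ.get("DAG_ID", "emr_spark_example")        # hypothetical DAG
AUTH_HEADER = os.environ.get("AIRFLOW_AUTH", "Basic ...")     # placeholder auth

def lambda_handler(event, context):
    # Each SQS record (or SNS notification) becomes the DAG run's configuration.
    for record in event.get("Records", []):
        body = record.get("body") or record.get("Sns", {}).get("Message", "{}")
        payload = json.dumps({"conf": {"source_message": body}}).encode("utf-8")

        req = urllib.request.Request(
            f"{AIRFLOW_URL}/api/v1/dags/{DAG_ID}/dagRuns",
            data=payload,
            headers={
                "Content-Type": "application/json",
                "Authorization": AUTH_HEADER,
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print(f"Triggered {DAG_ID}: HTTP {resp.status}")
    return {"statusCode": 200}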


Client : ManTech International in collaboration with CMS, Owings Mills, MD 05/2019 to 01/2023
Congensys Corp
Project Title : NGMC - Next Generation Measures [ESRD QIP]

Role : Senior Full Stack Big Data Engineer
ESRD QIP is an existing Java-based platform whose applications run on a traditional MVC architecture; NGMC is the application built from scratch by remodeling all the measures in Spark, Scala, Databricks, EMR, AWS S3, SNS, SQS, Lambda, and Snowflake.
Architected, designed, coded, and implemented a next-generation data analytics platform using software engineering best practices with the latest technologies: Apache Spark, PySpark, Scala, Java, R, EMR, and Databricks.
Architected the orchestration through Airflow and created the pipelines and DAGs.
Triggered Apache Spark and Databricks jobs through the EMR API using hooks in Airflow.
Used a graph database for scheduling the workflows.
NoSQL technologies such as Cassandra, HBase, and DynamoDB.
Utilized SQL, Python, and Snowflake's SnowSQL for data extraction, cleansing, and analysis from sources like Oracle and SQL Server, integrating with Azure services (a PySpark sketch follows this project's bullet list).
Spark integration with Big Data (Hadoop), Amazon EMR, and Apache Zeppelin.
Provided software expertise in these areas: Spark-based applications, Java application integration, web services, and cloud computing.
Developed solutions to enable metadata/rules-engine driven data analytics applications leveraging open source and/or cloud-native components.
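
Below is a minimal PySpark sketch of the extract-and-cleanse pattern described above: reading a source table over JDBC, applying basic cleansing, and landing the result as Parquet. The JDBC URL, table, column names, and output path are illustrative placeholders, not the project's actual values.

# Hedged sketch: JDBC extract, simple cleansing, and Parquet landing in PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("extract_and_cleanse").getOrCreate()

raw = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//db.example.com:1521/ORCLPDB")  # placeholder
    .option("dbtable", "CLAIMS.MEASURE_INPUT")                         # placeholder
    .option("user", "etl_user")
    .option("password", "********")
    .option("fetchsize", "10000")
    .load()
)

cleansed = (
    raw.dropDuplicates(["claim_id"])                     # hypothetical key column
       .withColumn("claim_date", F.to_date("claim_date"))
       .filter(F.col("claim_amount").isNotNull())
)

# Land the curated data as Parquet in object storage (placeholder path).
cleansed.write.mode("overwrite").parquet("s3://example-bucket/curated/measure_input/")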

Client : Johnson & Johnson Supply Chain, Bridgewater, NJ 10/2018 to 04/2019
Congensys Corp
Project Title : EDG-SnappyData/TAX&GMED
Role : Senior Hadoop Developer
J&J has its ERP data ingested from the source ERPs into an enterprise data grid. SnappyData is the Hadoop cluster that pulls data from the various regions of the grid and integrates it in the Snappy cluster for the downstream reporting team to build their reports in Tableau/Spotfire/Alteryx.
Designed, configured, implemented, and managed the SnappyData cluster platform that processes ERP data with optimal performance and ease of maintenance.
Created scripts that ingest the data from the various regions of the grid.
Troubleshot and resolved various process or data related issues; provided on-call and off-hour support as needed.
Assisted in the ongoing development and documentation of standards for the system and data processes.
Created project plans, managed milestones, created and distributed reports, and managed risks.
Communicated effectively with senior management, direct reports, and customers.
Developed Hive and Pig scripts for data transformation.
Developed Python, shell, Java, and HQL scripts for data flow orchestration.

Client : NPD market research Company, Port Washington,NY 04/2018 to 09/2018
Congensys Corp
Project Title : Ab Initio to Hadoop Conversion Projects
Role : Senior Hadoop Developer
NPD Group is optimizing its infrastructure costs, as Ab Initio licensing is very expensive for building its applications and reports. NPD Group is implementing and building its enterprise data lake on Hadoop, where its machine learning and cognitive intelligence data models run.
Responsibilities:
Designed, configured, implemented, and managed the Hadoop HBase platform that processes point-of-sale data with optimal performance and ease of maintenance.
Created an application framework that ingests data from various sources using the Spark framework (a sketch follows this project's bullet list).
Used Scala/Python as the programming languages in the framework to ingest into HBase/Phoenix.
Used Apache Phoenix as a SQL projection on top of the NoSQL HBase data.
Created the data models and tables that ingest data into HBase/Phoenix.
Troubleshot and resolved various process or data related issues; provided on-call and off-hour support as needed.
Assisted in the ongoing development and documentation of standards for the system and data processes.
Created project plans, managed milestones, created and distributed reports, and managed risks.
Communicated effectively with senior management, direct reports, and customers.
Developed Hive and Pig scripts for data transformation.
Developed Python, shell, Java, and HQL scripts for data flow orchestration.
Used Maven as the build tool to compile and promote the code, and SVN as the version control tool.
Worked with the SQL Server metadata system.
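
Below is a minimal sketch of the config-driven Spark ingestion framework described above, writing into HBase through Phoenix. It assumes the phoenix-spark connector is available on the Spark classpath; the source paths, table names, and ZooKeeper quorum are illustrative placeholders.

# Hedged sketch: each source is described by a small config dict and landed into a
# Phoenix table over HBase via the phoenix-spark connector (assumed available).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pos_ingestion_framework").getOrCreate()

SOURCES = [  # hypothetical framework configuration
    {"path": "s3://example-bucket/pos/us/", "format": "parquet", "table": "POS.SALES_US"},
    {"path": "s3://example-bucket/pos/eu/", "format": "csv",     "table": "POS.SALES_EU"},
]

ZK_URL = "zk1.example.com:2181"  # placeholder ZooKeeper quorum

for source in SOURCES:
    df = (
        spark.read.format(source["format"])
        .option("header", "true")          # only relevant for the CSV source
        .load(source["path"])
    )
    (
        df.write.format("org.apache.phoenix.spark")
          .mode("overwrite")
          .option("table", source["table"])
          .option("zkUrl", ZK_URL)
          .save()
    )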

Client : Inovalon, Herndon,VA 08/2017 to 01/2018
Congensys Corp
Project Title : iPortHD
Role : Hadoop Developer
The Data Integration and Aggregation Solution provides a secure, cloud-based solution that can be quickly implemented, establishing connections to incorporate baseline and historic warehouse data as well as ongoing data imports into the Client Data Lake (CDL), leveraging more than 1,100 data quality validations. Ingested data from various health insurance clients and predicted risk analytics for the processed data.
Processed health care related patient data and performed analytics on the data.
Developed Hive and Pig scripts for data transformation.
Exposure to ETL batch processing and data warehousing concepts.
Developed Hadoop jobs through schedulers, using the SSIS orchestration engine as well as Oozie.
Developed Python, shell, Java, and HQL scripts for data flow orchestration.
Managed software builds when needed through Microsoft TFS and Git.
Supported REST-based ETL Hadoop software in higher environments such as UAT and Production.
Built the SSIS packages that orchestrate the Greenplum jobs and troubleshot SSIS packages when needed.
Worked with the SQL Server metadata system.
Troubleshot the ASP.NET Web API based REST layer.
Architected, designed, and developed Hadoop ETL using Kafka.
Created Spark, Pig, and Hive jobs using Python REST orchestration.
Built MapReduce API programs used in combination with Hive and HBase.
Worked with Greenplum (Postgres DB) as the store for the transformed data.
Created MongoDB collections for persistent storage in MongoDB.
Developed multiple Java-based Kafka producers and consumers from scratch as per the business requirements (a sketch follows this project's bullet list).
Worked on XML, text, and JSON formatted data.
Used Avro schemas for Hive tables.
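
Below is a minimal sketch of a Kafka producer and consumer pair like those described above, written here with the kafka-python library for illustration (the project's producers and consumers were Java-based). Broker addresses and the topic name are placeholders.

# Hedged sketch: JSON-serializing producer and matching consumer with kafka-python.
import json

from kafka import KafkaProducer, KafkaConsumer

BROKERS = ["broker1.example.com:9092"]   # placeholder broker list
TOPIC = "patient-events"                 # hypothetical topic

# Producer: serialize records as JSON and publish them to the topic.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)
producer.send(TOPIC, {"member_id": "123", "event": "claim_received"})
producer.flush()

# Consumer: read from the beginning of the topic and deserialize each message.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
for message in consumer:
    print(message.value)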

Client : Verizon, Boston,MA 02/2017 till 08/2017
Congensys Corp
Project Title : Verizon Service Assurance VPNS
Role : Big Data Application Senior Developer and Solutions Architect
Architected and implemented the Big Data integration platform for integrating diverse data sources, applying transformations, and storing to various data sinks using Hadoop. The present architecture includes streaming unstructured data as well as data that arrives in batches in various formats such as XML. Used various integration tools to ingest the data into the Hadoop platform from source systems such as Oracle and Linux application servers.

Responsibilities:
SME on Big Data technologies (HDFS, YARN, MapReduce, Impala, Hive, Oozie, Spark, Sqoop, HBase, platform architecture); worked with the Hortonworks technical team in resolving issues.
Evaluated client needs and translated business requirements into functional specifications, thereby onboarding them onto the Hadoop ecosystem.
Worked on designing the MapReduce and YARN flow, writing MapReduce scripts, performance tuning, and debugging.
Single point of contact for the Lambda architecture developed on the Hadoop platform.
Developed multiple Java-based Kafka producers and consumers from scratch as per the business requirements.
Responsible for creating, modifying, and deleting topics (Kafka queues) as and when required by the business team.
Worked on implementing the Spark and Storm frameworks to ingest data in real time and apply transformations in Scala.
Created Hive tables, loaded the data, and wrote Hive queries that run internally as MapReduce jobs.
Established data lineage in Hadoop to track data back to where it was ingested; sound knowledge of various tools for determining that lineage.
Imported data using Sqoop to load data from Oracle to HDFS on a regular basis.
Configured schedulers for the scripts.
Wrote Hive queries for data analysis to meet the business requirements.
Created HBase tables to store variable data formats coming from different portfolios.
Implemented HBase custom co-processors, observers to implement data notifications.
Used HBase thrift API to implement Real time analysis on HDFS system.
Developed Pig scripts to implement ETL transformations including Cleaning, load and extract.
Developed PIG UDFs to incorporate external business logic into pig scripts.
Developed HIVE UDFs to incorporate external business logic into hive scripts
Developed join data set scripts using HIVE join operations.
Developed join data set scripts using Pig Latin join operations.
Designed and implemented Map Reduce-based large-scale parallel relation-learning system.
Implemented the data ingestion process using Flume sources, Flume consumers, and Flume interceptors.
Validated the performance of Hive queries on Spark against running them traditionally on Hadoop (a sketch follows this project's bullet list).
Involved in Testing and coordination with business in User testing.
Importing and exporting data into HDFS and Hive using Sqoop.
Written Hive jobs to parse the logs and structure them in tabular format to facilitate effective querying on the log data
Involved in creating Hive tables loading data and writing queries that will run internally in MapReduce way.
Used Pig as ETL tool to do transformations, event joins, filter and some pre-aggregations.
Involved in processing ingested raw data using MapReduce, Apache Pig and HBase.
Involved in developing Pig Scripts for change data capture and delta record processing between newly arrived data and already existing data in HDFS.
Used Hive to analyze the partitioned and bucketed data to compute various metrics for reporting.
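
Below is a minimal sketch of the Hive-on-Spark validation mentioned above: running a HiveQL query through a Hive-enabled SparkSession and timing it so the result can be compared with the traditional MapReduce execution of the same query. The database, table, and query are illustrative placeholders.

# Hedged sketch: run a Hive query via Spark SQL and measure elapsed time.
import time

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive_query_validation")
    .enableHiveSupport()      # reuse the existing Hive metastore tables
    .getOrCreate()
)

QUERY = """
    SELECT circuit_id, COUNT(*) AS alarm_count
    FROM assurance.alarms
    WHERE event_date = '2017-06-01'
    GROUP BY circuit_id
"""  # placeholder query over a hypothetical table

start = time.time()
result = spark.sql(QUERY)
row_count = result.count()    # force execution of the query
elapsed = time.time() - start

print(f"Spark SQL returned {row_count} rows in {elapsed:.1f}s")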

Client : American Express, Phoenix, AZ 03/2016 to 02/2017
Infosys
Congensys Corp
Project Title : Global Force and Open Technology
Role : Big Data Application Architect
Worked in the AmEx credit card fraud department, where I architected and implemented the Cornerstone Big Data integration platform for integrating diverse data sources, applying transformations, and storing to various data sinks using Hadoop. This project built an innovative application that reduces the cost of the existing data warehouses, mainframes, and Salesforce cloud applications and delivers the data in a quicker and more transformed manner.

Responsibilities:
SME on Big Data technologies (HDFS, YARN, MapReduce, Impala, Hive, Oozie, Spark, Sqoop, Syncsort ingestion, platform architecture); worked with the Cornerstone technical team in resolving issues.
Evaluated client needs and translated business requirements into functional specifications, thereby onboarding them onto the Hadoop ecosystem.
Worked on designing the MapReduce flow, writing MapReduce scripts, performance tuning, and debugging.
Involved in creating Hive tables, loading the data, and writing Hive queries that run internally as MapReduce jobs.
Established data lineage in Hadoop to track data back to where it was ingested; sound knowledge of various tools for determining that lineage.
Imported data using Sqoop to load data from Oracle to HDFS on a regular basis.
Imported data from Teradata to HDFS through Informatica maps using Unix scripts.
Configured Dollar View schedulers to run the Hive and Pig scripts.
Written Hive queries for data analysis to meet the business requirements.
Created HBase tables to store variable data formats coming from different portfolios.
Implemented HBase custom co-processors, observers to implement data notifications.
Used HBase thrift API to implement Real time analysis on HDFS system.
Developed Pig scripts to implement ETL transformations including Cleaning, load and extract.
Developed PIG UDFs to incorporate external business logic into pig scripts.
Developed HIVE UDFs to incorporate external business logic into hive scripts
Developed join data set scripts using HIVE join operations.
Developed join data set scripts using Pig Latin join operations.
Designed and implemented Map Reduce-based large-scale parallel relation-learning system.
Implemented the data ingestion process using Flume sources, Flume consumers, and Flume interceptors.
Validated the performance of Hive queries on Spark against running them traditionally on Hadoop
Tested and coordinated with business in User testing.
Importing and exporting data into HDFS and Hive using Sqoop.
Written Hive jobs to parse the logs and structure them in tabular format to facilitate effective querying on the log data
Involved in creating Hive tables loading data and writing queries that will run internally in MapReduce way.
Used Pig as ETL tool to do transformations, event joins, filter and some pre-aggregations.
Involved in processing ingested raw data using MapReduce, Apache Pig and HBase.
Involved in developing Pig Scripts for change data capture and delta record processing between newly arrived data and already existing data in HDFS.
Involved in scheduling Oozie workflow engine to run multiple Hive and pig jobs.
Used Hive to analyze the partitioned and bucketed data to compute various metrics for reporting.

Client : CISCO, San Jose, CA 08/2012 to 03/2016
Infosys
Working location: Hyderabad
Project Title : 360DF
Role : Big Data Technology Analyst
Cisco Systems, Inc. is an American technology company headquartered in San Jose, California, that designs, manufactures, and sells networking equipment worldwide. It has traditional, software-based packet processing architectures with various source systems, including mainframes, Oracle, Teradata, and traditional data warehouses. The jobs on these source systems were migrated to the Hadoop platform.

Responsibilities:
Recreated the existing mainframe functionality in HDFS.
Worked on analyzing Hadoop cluster and different Big Data analytic tools including Pig, Hive, HBase and Sqoop.
Migrated all the data from the mainframes to HDFS through the FTP protocol to a local Unix box.
Migrated all the programs, jobs, and schedules to Hadoop.
Configured the Dollar View schedulers for the jobs migrated from the mainframe scheduler.
Worked on designing the MapReduce flow, writing MapReduce scripts, performance tuning, and debugging.
Involved in creating Hive tables, loading the data and writing hive queries that will run internally in a map reduce way.
Established data lineage in Hadoop to track data back to where it was ingested; sound knowledge of various tools for determining that lineage.
Designed the rules that dynamically create the XML parsing using the DOM parser.
Implemented and navigated the structured data using collection frameworks.
Used REST web services to pull data from different servers of a cloud-based API.
Imported data using Sqoop to load data from Oracle to HDFS on regular basis.
Imported data from Teradata to HDFS through Informatica maps using Unix scripts.
Configured Dollar View schedulers to run the Hive and Pig scripts.
Written Hive queries for data analysis to meet the business requirements.
Created HBase tables to store variable data formats coming from different portfolios.
Implemented HBase custom co-processors, observers to implement data notifications.
Used HBase thrift API to implement Real time analysis on HDFS system.
Developed Pig scripts to implement ETL transformations including Cleaning, load and extract.
Developed PIG UDFs to incorporate external business logic into pig scripts.
Developed HIVE UDFs to incorporate external business logic into hive scripts
Developed join data set scripts using HIVE join operations.
Developed join data set scripts using Pig Latin join operations.
Designed and implemented Map Reduce-based large-scale parallel relation-learning system.
Implemented the data ingestion process using Flume sources, Flume consumers, and Flume interceptors.
Implemented Kafka to collect the logs from the Hive jobs.
Configured, deployed, and maintained multi-node Dev and Test Kafka clusters.
Developed multiple Kafka producers and consumers from scratch, implementing the organization's requirements.
Responsible for creating, modifying, and deleting topics (Kafka queues) as and when required, with varying configurations involving replication factors, partitions, and TTL.
Designed and developed tests and POCs to benchmark and verify data flow through the Kafka clusters.
Validated the performance of Hive queries on Spark against running them traditionally on Hadoop
Involved in Testing and coordination with business in User testing.
Importing and exporting data into HDFS and Hive using Sqoop.
Wrote Hive jobs to parse the logs and structure them in tabular format to facilitate effective querying on the log data.
Involved in creating Hive tables loading data and writing queries that will run internally in MapReduce way.
Used Pig as ETL tool to do transformations, event joins, filter and some pre-aggregations.
Involved in processing ingested raw data using MapReduce, Apache Pig and HBase.
Involved in developing Pig Scripts for change data capture and delta record processing between newly arrived data and already existing data in HDFS.
Populated HDFS and HBASE with huge amounts of data using Apache Kafka.
Involved in scheduling Oozie workflow engine to run multiple Hive and pig jobs.
Used Hive to analyze the partitioned and bucketed data to compute various metrics for reporting.
Experienced in managing and reviewing the Hadoop log files.
Expertise with NoSQL databases like HBase and MongoDB.
Carried out POC work using Spark, Storm, and Kafka for real-time processing.
Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, Python, and Scala.
Used Spark SQL for querying and quick data analysis.
Real-time streaming of data using Spark with Kafka.
Configured Spark Streaming to receive real-time data from Kafka and store the stream data to HDFS using Scala (a sketch follows this project's bullet list).
Designed the technical solution for real-time analytics using Kafka and HBase.
Gained knowledge in creating QlikView and Tableau dashboards for reporting analyzed data.
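
Below is a minimal PySpark Structured Streaming sketch of the Kafka-to-HDFS flow described above (the project implemented it in Scala). It assumes the spark-sql-kafka connector is available; the broker list, topic, and HDFS paths are illustrative placeholders.

# Hedged sketch: consume a Kafka topic and append the records to HDFS as Parquet.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_to_hdfs_stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1.example.com:9092")  # placeholder
    .option("subscribe", "hive-job-logs")                           # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; keep the value as a string line.
lines = stream.selectExpr("CAST(value AS STRING) AS log_line")

query = (
    lines.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/streams/hive_job_logs/")          # placeholder
    .option("checkpointLocation", "hdfs:///checkpoints/hive_job_logs/")
    .outputMode("append")
    .start()
)

query.awaitTermination()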

Client : Charles Schwab,US Brokerage firm.
Project Title : AMT(Asset Management technologies)
Employer : Infosys Technologies Limited.
Type : Development&Maintenance-Business Application
Technology /Software : JAVA,COBOL,JCL,DB2,VSAM,EZYTRIEVE,CICS,ORACLE
Role : Team Member.
Duration : 12/2010 - 10/2012

The Charles Schwab Corporation (NYSE: SCHW) is an American brokerage and banking company based in San Francisco, California. Schwab offers the same services as a traditional brokerage, but with lower commissions and fees. The company serves 7.9 million client brokerage accounts, with $1.65 trillion in assets (as of September 2011), from over 300 offices in the U.S., one office in Puerto Rico, and one branch in London. Asset Management Technology (AMT) supports the business functions associated with mutual fund order management for all Schwab enterprises. AMT comprises various applications that support mutual fund order entry, aggregation, placement, pricing, and execution processes, and the data is passed to other downstream applications for further processing.
Responsibilities:
Analysis and understanding of the HLD
Involved in preparing the LLD
Preparation of the UTP and unit test cases
Participation in making the code changes
Executing the Unit test cases and obtaining the unit test results
Peer Reviews
Promoting the components to system testing
Supporting the system testing team
Promoting the components from Dev to Production with proper approvals
Analysis of the defect change request/live defect when received
Migrating systems written in COBOL to Java in order to reuse Java programming skills and class libraries.
Employing long-running started tasks for Java that periodically query a database to find new work to process.
Using Java programs to access APIs such as SOAP/Web services, WebSphere MQ client API, Java Database Connectivity (JDBC) databases, custom Transmission Control Protocol/Internet Protocol (TCP/IP) socket services, and so forth.
Passing datasets created by traditional job steps to Java programs, which convert the data to XML.
Flexible configuration of the Java Virtual Machine (JVM) and environment variables.
Routing output directly to JES SYSOUT datasets
Controlling output encoding.
Passing condition codes between Java and non-Java steps
Reading and writing MVS datasets from Java
Executing the JVM under the original batch address space.
Communicating with the MVS system console
Running Java batch jobs with BPXBATCH and Running Java batch jobs with a custom JVM launcher.

As a Technical Associate for AT&T Southeast, US (Tech Mahindra)
Worked in the CCB-SE billing applications of AT&T. Involved in many phases of the knowledge transfer given by Accenture and Amdocs. Worked on the steady-state activities of all invoice billing applications such as FLEX, BIG, IDB (AT&T), IRB, and PRB.

Project Title : Customer Care Billing for South East
Employer : Tech Mahindra Ltd.
Type : Development&Maintenance-Business Application
Technology /Software : JAVA,COBOL,JCL,DB2,VSAM,ORACLE
Role : Team Member
Duration : 06/2007- 12/2010

AT&T provides an enterprise bill format for the MCI (IRB), Sprint (PRB), and AT&T (IDB) carriers. Aligned Sprint and MCI for SE with the format in other AT&T regions, aligned AT&T IDB with the desired standards to be more similar to the format in other AT&T regions, and improved consistency with other AT&T affiliate sections sharing the AT&T bill pages. FLEX receives customer billing invoices, adjustments, and text from BST affiliates and IXCs and adds them to the AT&T Southeast bill pages. Processing associated with this system includes text code processing, invoice and adjustment processing, data validation, matching accounts to customers, formatting the bills, and creating data for input to the IXC billing process.
AT&T launched the new LEC to provide advanced features in the services being provided to customers. DTV is one of the major services provided to end users. This service comes into AT&T applications through the FLEXIBLE invoice application.
Responsibilities:
Analysis and understanding of the HLD
Involved in preparing the LLD
Preparation of the UTP and unit test cases
Participation in making the code changes
Executing the Unit test cases and obtaining the unit test results
Peer Reviews
Promoting the components to system testing
Supporting the system testing team.
Promoting the components from Dev to Production with proper approvals
Analysis of the defect change request/live defect received from Vantive.
Involved in Design, Development, Testing and Integration of the application.
Involved in development of user interface modules using HTML, CSS and JSP.
Involved in writing SQL queries
Involved in coding, maintaining, and administering Servlets, and JSP components to be deployed on Apache Tomcat application servers
Database access was done using JDBC. Accessed stored procedures using JDBC.
Worked on bug fixing and enhancements on change requests.
Coordinated tasks with clients, support groups and development team.
Worked with QA team for test automation using QTP
Participated in weekly design reviews and walkthroughs with project manager and development teams.
Migrating systems written in COBOL to Java in order to reuse Java programming skills and class libraries.
Employing long-running started tasks for Java that periodically query a database to find new work to process.
Using Java programs to access APIs such as SOAP/Web services, WebSphere MQ client API, Java Database Connectivity (JDBC) databases, custom Transmission Control Protocol/Internet Protocol (TCP/IP) socket services, and so forth.
Passing datasets created by traditional job steps to Java programs, which convert the data to XML.
Flexible configuration of the Java Virtual Machine (JVM) and environment variables
Routing output directly to JES SYSOUT datasets
Controlling output encoding
Passing condition codes between Java and non-Java steps
Reading and writing MVS datasets from Java
Executing the JVM under the original batch address space
Communicating with the MVS system console
Running Java batch jobs with BPXBATCH and Running Java batch jobs with a custom JVM launcher.