Big Data | Columbus, Ohio 43215 | Hybrid | 10 yr exp | local to Columbus, Ohio, USA only
Email: [email protected]
Posting: http://bit.ly/4ey8w48 | https://jobs.nvoids.com/job_details.jsp?id=929935&uid=

From: Naveen Goud, Halcyon Solutions Inc. <[email protected]>
Reply to: [email protected]

Hello, greetings!

We have an urgent requirement for the role below. If you are available and interested, please reply with your updated resume.

Job Title: Big Data (726022)
Location: 50 West Town Street, Columbus, Ohio 43215 (Hybrid)
Duration: Long-term contract

This resource is required to be on-site Tuesdays and Thursdays and may work remotely the remaining days.

The Technical Specialist will be responsible for Medicaid Enterprise Data Warehouse (EDW) design, development, implementation, migration, maintenance, and operations. The candidate will work closely with the Data Governance and Analytics team and will be one of the key technical resources for Enterprise Data Warehouse projects, building critical data marts and ingesting data into the Big Data platform for analytics and exchange with State and Medicaid partners. This position is a member of Medicaid ITS and works closely with the Business Intelligence & Data Analytics team.

REQUIRED skill sets:
- 8+ years of experience with Big Data and Hadoop on data warehousing or data integration projects.
- 8-9 years of analysis, design, development, support, and enhancement of ETL/ELT in a data warehouse environment with Cloudera Big Data technologies (Hadoop, MapReduce, Sqoop, PySpark, Spark, HDFS, Hive, Impala, StreamSets, Kudu, Oozie, Hue, Kafka, YARN, Python, Flume, ZooKeeper, Sentry, Cloudera Navigator), along with Oracle SQL/PL-SQL, Unix commands, and shell scripting.
- Strong development experience (minimum of 8-9 years) creating Sqoop scripts, PySpark programs, HDFS commands, HDFS file formats (Parquet, Avro, ORC, etc.), StreamSets pipelines, job scheduling, Hive/Impala queries, and Unix shell scripts.
- 8-9 years of experience writing Hadoop/Hive/Impala scripts, including gathering statistics on tables after data loads (see the first sketch below).
- Strong SQL experience (Oracle and Hadoop Hive/Impala): writing complex SQL queries and tuning them based on Hadoop/Hive/Impala explain-plan results (see the second sketch below).
- Proven ability to write high-quality code.
- Experience building data sets; familiarity with PHI and PII data.
- Expertise implementing complex ETL/ELT logic.
- Ability to develop and enforce a strong reconciliation process (see the third sketch below).
- Accountable for ETL/ELT design documentation.
- Good knowledge of Big Data, Hadoop, Hive, and Impala databases, data security, and dimensional model design.
- Basic knowledge of UNIX/Linux shell scripting.
- Use of ETL/ELT standards and practices toward establishing and following a centralized metadata repository.
- Good working experience with Visio, Excel, PowerPoint, Word, etc.
- Effective communication, presentation, and organizational skills.
- Familiarity with project management methodologies such as Waterfall and Agile.
- Ability to establish priorities and follow through on projects, paying close attention to detail with minimal supervision.

Required education: BS/BA degree or a combination of education and experience.

Naveen Goud, Senior US IT Recruiter
Halcyon Solutions Inc.
Direct: | Email: [email protected]
Naveen Goud | LinkedIn
Delivering the very best in IT talent and consulting for private, public, and federal sector clients since 1992.
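To illustrate the kind of ETL/ELT work the posting describes, here is a minimal PySpark sketch that loads a delimited extract into a partitioned Parquet-backed Hive table and gathers table statistics after the load. This is not part of the posting; all paths, table names, and column names are hypothetical.

```python
# Minimal PySpark ETL sketch (hypothetical names throughout): load a
# pipe-delimited extract, write it to a partitioned Parquet Hive table,
# and gather table statistics after the load.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("edw_claims_ingest")   # hypothetical job name
    .enableHiveSupport()            # required for Hive table access
    .getOrCreate()
)

# Read the source extract from a hypothetical HDFS landing path.
raw = (
    spark.read
    .option("header", "true")
    .option("sep", "|")
    .csv("hdfs:///landing/medicaid/claims/")
)

# Light transformation: trim the key column and derive a load-date
# partition column.
claims = (
    raw.withColumn("claim_id", F.trim(F.col("claim_id")))
       .withColumn("load_dt", F.current_date())
)

# Write as Parquet, partitioned by load date (hypothetical table name).
(
    claims.write
    .mode("append")
    .format("parquet")
    .partitionBy("load_dt")
    .saveAsTable("edw.claims_stg")
)

# Post-load step called out in the posting: gather statistics so the
# Hive/Impala optimizer has fresh row counts and column stats.
spark.sql("ANALYZE TABLE edw.claims_stg COMPUTE STATISTICS")
spark.sql("ANALYZE TABLE edw.claims_stg COMPUTE STATISTICS FOR COLUMNS claim_id")

spark.stop()
```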
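The posting also asks for tuning based on explain-plan results. A sketch of how one might inspect a query plan from PySpark, assuming the hypothetical table from the previous sketch:

```python
# Sketch of explain-plan-driven tuning from PySpark (hypothetical table).
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

query = """
    SELECT provider_id, COUNT(*) AS claim_cnt
    FROM edw.claims_stg
    WHERE load_dt = DATE '2023-12-12'
    GROUP BY provider_id
"""

# Print the formatted plan. With fresh statistics (previous sketch),
# the plan should show partition pruning on load_dt rather than a
# full table scan.
spark.sql(query).explain(mode="formatted")
```

The same idea applies in Hive or Impala directly, where EXPLAIN output is reviewed for scan sizes, join strategies, and partition pruning before rewriting a query.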
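Finally, the posting requires developing and enforcing a strong reconciliation process. One minimal form of this is a post-load row-count check between source and target, sketched below with the same hypothetical names:

```python
# Sketch of a simple post-load reconciliation check (hypothetical names):
# compare source extract row count against the rows landed in today's
# partition, and fail loudly on a mismatch.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

source_cnt = (
    spark.read.option("header", "true").option("sep", "|")
    .csv("hdfs:///landing/medicaid/claims/")
    .count()
)

target_cnt = (
    spark.table("edw.claims_stg")
    .where(F.col("load_dt") == F.current_date())
    .count()
)

if source_cnt != target_cnt:
    raise ValueError(
        f"Reconciliation failed: source={source_cnt}, target={target_cnt}"
    )
print(f"Reconciliation passed: {source_cnt} rows")
```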
Posted: 08:05 PM, 12-Dec-23