Hiring: Azure Databricks Data Engineer, Louisville, Kentucky, USA (day 1 onsite)
Email: [email protected]
http://bit.ly/4ey8w48 https://jobs.nvoids.com/job_details.jsp?id=594766&uid=

From: Swati, Siri Info Solutions <[email protected]>
Reply to: [email protected]

Hello, have a nice day! Greetings from Siri Info Solutions. I am Swati with Siri Info Solutions. Siri Info Solutions is a global staff augmentation firm providing a wide range of on-demand talent and total workforce solutions. We have an immediate opening for the below position with one of our premium clients.

Job Title: Azure Databricks Data Engineer
Work Location: Louisville, Kentucky (day 1 onsite)
Interview: WebEx
Hire Duration: Long term

Job Description:
The Senior Data Engineer will be responsible for building the Enterprise Data Platform: setting up data pipelines that are scalable, robust, and resilient, and building pipelines that validate, ingest, normalize/enrich, and apply business-specific processing to healthcare data. The role will build an Azure Data Lake leveraging Databricks technology to consolidate all data across the company and serve that data to various products and services. The scope of this role includes working with engineering, product management, program management, and operations teams to deliver the pipeline platform, foundational and application-specific pipelines, and the Data Lake in collaboration with business and other teams.

Responsibilities
- Design, develop, operate, and drive a scalable and resilient data platform that addresses the business requirements
- Drive technology and business transformation through the creation of the Azure Data Lake
- Ensure industry best practices around data pipelines, metadata management, data quality, data governance, and data privacy
- Partner with Product Management and business leaders to drive Agile delivery of both existing and new offerings; assist with leadership and collaboration with engineering organizations within Change to manage and optimize the portfolio

Required Skills
- 10+/7+ years of working experience in data processing / ETL / big data technologies such as Informatica, Hadoop, and Cloudera
- 3+/2+ years of working experience in Databricks (essential) and Python
- Experience with cloud/Azure architectural components
- Experience building data pipelines and infrastructure
- Deep understanding of data warehousing, reporting, and analytical concepts
- Experience with the big data tech stack, including Hadoop, Java, Spark, Scala, Hive, and NoSQL data stores
- Bachelor's degree in Mathematics, Physical Sciences, Engineering, or Computer Science

Nice-to-Have Skills
- Experience leading the design and development of large systems
- Strong drive to learn and advocate for development best practices
- Proven track record of building and delivering enterprise-class products
- Full-stack experience (end-to-end development)
- Crisp and effective executive communication skills, including significant experience presenting cross-functionally and across all levels

Please share your employer details, documents (visa copy & DL copy), and your updated resume, along with:
- Current location
- Visa status
- Preferred time to call you
- Contact no.
- Skype ID
- LinkedIn URL
- Education details for both Bachelor's and Master's (university, stream, year of passing)
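As a point of reference for the ingest/validate/enrich Databricks work described in the job description above, here is a minimal, illustrative PySpark sketch of such a pipeline step writing to Delta on an Azure Data Lake. The storage path, column names, and table name are hypothetical placeholders, not details from the client's requirements.

# Minimal sketch (PySpark on Databricks): ingest raw healthcare records,
# validate, enrich, and persist them to a Delta table in the Azure Data Lake.
# All paths, columns, and table names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in a Databricks notebook

# Ingest: read raw CSV files landed in a (hypothetical) ADLS Gen2 "raw" container.
raw_path = "abfss://raw@<storage-account>.dfs.core.windows.net/healthcare/claims/"
raw_df = spark.read.option("header", "true").csv(raw_path)

# Validate: keep only rows with a claim id and a parsable service date.
valid_df = (
    raw_df
    .filter(F.col("claim_id").isNotNull())
    .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
    .filter(F.col("service_date").isNotNull())
)

# Normalize/enrich: standardize codes and stamp ingestion time for lineage.
enriched_df = (
    valid_df
    .withColumn("diagnosis_code", F.upper(F.trim(F.col("diagnosis_code"))))
    .withColumn("ingested_at", F.current_timestamp())
)

# Serve: append to a Delta table (hypothetical "curated" schema) that
# downstream products and services can query.
enriched_df.write.format("delta").mode("append").saveAsTable("curated.claims")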
01:30 AM 01-Sep-23