Openings for the Position - Redpoint Technical Lead / Data Engineer ETL Lead at Remote, USA
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=908571&uid=

From: Sameera, Avance Consulting <[email protected]>
Reply to: [email protected]

Job Description

Job Title: Redpoint Tech Lead
Job Location: Remote (USA)

Technical Skills:

RPI (RedPoint Interaction):
- Build real-time, personalized web experience management.
- Build selection rules.
- Build customer journey orchestration, including:
  o Template creation
  o Customer attribute creation
  o Preferences customization
  o Selection criteria
  o Segmentation
- Target audiences, inbound and outbound campaigns, and promotions.
- Generate demographic-specific reports, etc.
- Customize metadata.
- A/B testing and associated interactions.
- Customer dashboards and layouts.

RPDM (RedPoint Data Management):
- Create schemas and entities.
- Create custom tables.
- Establish relationships with Golden Records.
- Map source and target systems.
- Establish real-time connectivity.
- Omnichannel connectivity of upstream and downstream systems.
- Create data files and folders.

Job Description / Requirements:
- 8+ years of strong ETL experience with Redpoint Data Management.
- 8+ years of hands-on software engineering experience.
- 8+ years of experience integrating technical processes and business outcomes, specifically data, and prior experience troubleshooting complex system issues, handling multiple tasks simultaneously, and translating user requirements into technical specifications.
- Experience working in an offshore/onshore team model.
- Strong database fundamentals, including SQL, performance tuning, and schema design.
- Strong understanding of programming languages such as Java, Scala, or Python.
- Design and implement data security and privacy controls.
- Experience with Git or equivalent source control software.
- Prior experience in a fast-paced agile development environment is a plus.
- Process analysis, data quality metrics/monitoring, data architecture, developing policies/standards, and supporting processes.
- Designing and building data pipelines (batch and streaming); extensive experience with Apache Spark, Spark Streaming, and Kafka (see the sketch after this list).
- Hands-on coding skills to build POCs and prototypes.
- Awareness of the various aspects of data pipelining.
- Knowledge of profiling/prototyping.
- Experience designing solutions for large data warehouses, with a good understanding of cluster and parallel architectures, high-scale or distributed RDBMS, and NoSQL platforms.
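For context on the Spark Streaming and Kafka requirement above, the following is a minimal PySpark Structured Streaming sketch of the kind of pipeline described. It is not part of the posting: the broker address, topic name, event schema, and output paths are assumptions made for illustration, and running it requires the spark-sql-kafka connector package on the classpath.

# Illustrative sketch only: read customer events from Kafka, parse JSON,
# and append the parsed records to Parquet in micro-batches.
# Broker, topic, schema, and paths are placeholders, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = (
    SparkSession.builder
    .appName("customer-event-stream")  # hypothetical job name
    .getOrCreate()
)

# Assumed event schema for the example.
event_schema = StructType([
    StructField("customer_id", StringType()),
    StructField("channel", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "customer_events")            # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Write each micro-batch to Parquet; the checkpoint makes the stream restartable.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/customer_events")                       # placeholder path
    .option("checkpointLocation", "/data/checkpoints/customer_events")
    .outputMode("append")
    .start()
)
query.awaitTermination()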
Posted: 07:28 PM, 05-Dec-23