
Need profiles for a Data Engineer / ETL Developer role, Remote, USA
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=903507&uid=

From:

Sameera,
Avance Consultant Service
[email protected]

Reply to: [email protected]

Job Description

Job Title: Redpoint Tech Lead

Job Location: Remote (USA)

Technical Skills:

RPI:

- Build real-time, personalized web experience management.
- Build selection rules.
- Build customer journey orchestration, including:
  - Template creation
  - Customer attribute creation
  - Preference customization
  - Selection criteria
  - Segmentation
- Target audiences, inbound and outbound campaigns, and promotions.
- Demographic-specific report generation.
- Customization of metadata.
- A/B testing and associated interactions.
- Customer dashboards and layouts.
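The selection rules and segmentation above boil down to filtering customer records by attribute conditions. The sketch below is purely illustrative (plain Python, not Redpoint's actual rule engine or API; all field and function names are invented) to show the kind of logic such rules express:

```python
# Illustrative only: evaluate a simple attribute-based selection rule
# against customer records to build a target segment.

def matches(customer: dict, rule: dict) -> bool:
    """Return True when every condition in the rule holds for the customer."""
    ops = {
        "eq": lambda a, b: a == b,
        "gt": lambda a, b: a > b,
        "in": lambda a, b: a in b,
    }
    return all(ops[op](customer.get(attr), value)
               for attr, op, value in rule["conditions"])

def build_segment(customers, rule):
    """Select the target audience: IDs of customers matching the rule."""
    return [c["id"] for c in customers if matches(c, rule)]

customers = [
    {"id": 1, "age": 34, "channel": "email", "region": "US"},
    {"id": 2, "age": 19, "channel": "sms",   "region": "US"},
    {"id": 3, "age": 41, "channel": "email", "region": "EU"},
]

# Rule: US customers over 21 who prefer the email channel
rule = {"conditions": [
    ("region", "eq", "US"),
    ("age", "gt", 21),
    ("channel", "in", {"email"}),
]}

print(build_segment(customers, rule))  # -> [1]
```

A production rule engine would add operator validation and handle missing attributes; this sketch only shows the shape of the evaluation.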

RPDM:

- Creation of schemas and entities.
- Custom table creation.
- Establishing relationships with golden records.
- Mapping of source and target systems.
- Establishing real-time connectivity.
- Omnichannel connectivity for upstream and downstream systems.
- Creation of data files and folders.
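The schema, custom table, and golden-record tasks above follow a common data-management pattern: source-system records linked by foreign key to a single deduplicated golden record. A minimal sketch using plain SQLite (not RPDM itself; table and column names are invented for illustration):

```python
# Illustrative only: a golden-record schema in SQLite, showing the
# source-to-golden relationship the RPDM tasks describe.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE golden_record (
    golden_id   INTEGER PRIMARY KEY,
    full_name   TEXT NOT NULL,
    email       TEXT UNIQUE
);
CREATE TABLE source_record (
    source_id   INTEGER PRIMARY KEY,
    system_name TEXT NOT NULL,          -- upstream system of origin
    raw_name    TEXT,
    golden_id   INTEGER REFERENCES golden_record(golden_id)
);
""")

conn.execute(
    "INSERT INTO golden_record VALUES (1, 'Ada Lovelace', '[email protected]')")
conn.executemany(
    "INSERT INTO source_record VALUES (?, ?, ?, ?)",
    [(10, "crm", "A. Lovelace", 1), (11, "web", "ada lovelace", 1)],
)

# Map source records back to their golden record.
rows = conn.execute("""
    SELECT g.full_name, COUNT(s.source_id)
    FROM golden_record g JOIN source_record s USING (golden_id)
    GROUP BY g.golden_id
""").fetchall()
print(rows)  # -> [('Ada Lovelace', 2)]
```

The real product manages this mapping through its own tooling; the point here is only the relationship between custom source tables and the golden record.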

Job Description / Requirements

- 8+ years of strong ETL experience in Redpoint Database Management.
- 8+ years of hands-on software engineering experience.
- 8+ years of experience integrating technical processes with business outcomes, specifically around data. Prior experience troubleshooting complex system issues, handling multiple tasks simultaneously, and translating user requirements into technical specifications.
- Experience working in an offshore/onshore team model.
- Strong database fundamentals, including SQL, performance tuning, and schema design.
- Strong understanding of programming languages such as Java, Scala, or Python.
- Ability to design and implement data security and privacy controls.
- Experience with Git or an equivalent source-control system.
- Prior experience in a fast-paced agile development environment is a plus.
- Process analysis, data quality metrics/monitoring, data architecture, and developing policies, standards, and supporting processes.
- Designing and building data pipelines (batch and streaming); extensive experience with Apache Spark, Spark Streaming, and Kafka. Hands-on coding skills to build POCs and prototypes.
- Awareness of the various aspects of data pipelining.
- Knowledge of profiling/prototyping.
- Experience designing solutions for large data warehouses, with a good understanding of cluster and parallel architecture, high-scale or distributed RDBMS, and knowledge of NoSQL platforms.
- A bachelor's degree in Computer Science or Computer Engineering is required.
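The batch-and-streaming pipeline requirement above is typically met with Spark Structured Streaming reading from a Kafka topic. Since that stack needs a running cluster, here is a dependency-free Python sketch of just the underlying micro-batch windowing pattern (a plain-Python stand-in, not Spark; all names are illustrative):

```python
# Illustrative only: group a stream of (timestamp, key, value) events
# into fixed time windows and aggregate each window -- the micro-batch
# pattern Spark Structured Streaming applies to a Kafka source.
from collections import defaultdict

def micro_batch(events, window_seconds=10):
    """Aggregate events into per-window sums keyed by event type."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key, value in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        windows[window_start][key] += value
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [
    (0, "clicks", 1), (3, "clicks", 2),   # window [0, 10)
    (12, "clicks", 5), (15, "views", 7),  # window [10, 20)
]
print(micro_batch(events))
# -> {0: {'clicks': 3}, 10: {'clicks': 5, 'views': 7}}
```

In an actual Spark job, the same shape comes from a windowed `groupBy` over a Kafka read stream; this sketch only demonstrates the windowing logic a candidate would be expected to reason about.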

Posted: 02:22 AM, 02-Dec-23

