Data Engineer - Remote (Remote, USA)

From: Ajay, Guardian Info Group ([email protected])
Reply to: [email protected]

Job Title: Data Engineer (Contract)

Position Type: Contract

Visa: H1B, GC & USC

Job Description :

All the must-haves are also bold and underlined in the original JD below.

Spark/Scala and Azure are a must. The role involves data analysis, so SQL is a must as well. They also use Jenkins, Git, and Power BI. The position is contract-only at this time, so visa candidates should be fine, and it is fully remote. They are ready to interview and hire ASAP. The hire will be on the EIP project.

Must be able to do data analysis with Databricks (see the sketch after these notes).
Python with Scala/Spark is the main need, with a little PySpark thrown in, too.
Azure Data Factory experience is important. They already have AWS folks on the team, and the need for Azure is a driving factor for opening the reqs.
FHIR healthcare knowledge would be very helpful, but not necessary.
Two rounds of interviews; he doubts there will be a live coding exercise, but is not 100% sure.
The manager said the need is immediate.
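
For a sense of the Databricks data-analysis work called out in these notes, here is a minimal, illustrative PySpark sketch. It is only a sketch: the claims table and its columns are hypothetical and not taken from the posting.

    # Minimal sketch of Databricks-style data analysis in PySpark. The "claims"
    # table and its columns are hypothetical, used only for illustration.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("claims-analysis").getOrCreate()

    claims = spark.table("claims")  # hypothetical source table

    # Aggregate paid amounts per provider and surface high-volume providers.
    summary = (
        claims
        .groupBy("provider_id")
        .agg(
            F.count("*").alias("claim_count"),
            F.sum("paid_amount").alias("total_paid"),
        )
        .filter(F.col("claim_count") > 100)
        .orderBy(F.col("total_paid").desc())
    )
    summary.show(20)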

Key Responsibilities:

Design and implement scalable data processes focused on fulfilling product requirements.

Design large, complex components, influencing overall product architecture and patterns.

Autonomously implement component designs in line with pre-defined data and ELT/ETL architectural patterns.

Partner with Business, Technical and Strategic Product stakeholders to manage project commitments in an agile framework; rapidly delivering value to our customers via technology solutions.

Develop, construct, test, document and maintain data pipelines.

Identify ways to improve data reliability, efficiency, and quality.

Design and develop resilient, reliable, scalable and self-healing solutions to meet and exceed customer requirements.

Ensure that all parts of the application eco-system are thoroughly and effectively covered with telemetry.

Focus on automation, quality and streamlining new and existing data processing.

Create data monitoring capabilities for each business process and work with data consumers on updates to data processes.

Develop data pipelines using Python, Spark and/or Scala.

Automate and orchestrate data pipelines using Azure Data Factory or Delta Live Tables (see the pipeline sketch after this list).

Help maintain the integrity and security of the company data.

Communicate clearly and effectively in oral and written forms and be able to present and demonstrate work to technical and non-technical stakeholders.
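
As a rough illustration of the pipeline responsibilities above, here is a minimal PySpark batch-pipeline sketch. The storage path, column names, and target table are hypothetical placeholders, not details from the posting.

    # Sketch of a batch data pipeline in PySpark: ingest raw files, apply a
    # simple cleanup, and write a Delta table. The path, columns, and table
    # name below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("encounter-pipeline").getOrCreate()

    # Hypothetical raw landing zone in Azure Data Lake Storage.
    raw = spark.read.json("abfss://raw@examplestorage.dfs.core.windows.net/encounters/")

    cleaned = (
        raw
        .filter(F.col("encounter_id").isNotNull())  # drop unusable records
        .dropDuplicates(["encounter_id"])           # de-duplicate on the key
        .withColumn("ingested_at", F.current_timestamp())
    )

    # Write as a Delta table; on Azure, a job like this might be scheduled and
    # orchestrated by an Azure Data Factory pipeline (e.g., via a Databricks
    # notebook activity) or re-expressed as a Delta Live Tables pipeline.
    cleaned.write.format("delta").mode("overwrite").saveAsTable("curated.encounters")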

Required Qualifications:

Undergraduate degree or equivalent experience.

Minimum 5 to 7 years of IT experience in Software Engineering.

Minimum 3 to 5 years of experience in big data processing for batch and/or streaming data; data includes file systems, data structures/databases, automation, security, messaging, movement, etc.

Minimum 3 to 5 years of experience with Python and Spark in developing data processing pipelines.

Minimum 3 to 5 years of ETL programming experience in Databricks using Scala/Java.

Minimum 3 to 5 years of experience supporting extensive data analysis using advanced SQL concepts and window functions (see the SQL sketch after this list).

Minimum 2 to 3 years of experience working on large scale programs, with multiple concurrent projects.

Minimum 2 to 3 years of experience with Agile methodologies and Test-Driven Development.

Experience with Continuous Integration/Continuous Delivery (CI/CD) and DevOps tooling (Jenkins, Git, Azure DevOps, GitHub Actions).

Strong written and oral communication skills, along with presentation and interpersonal skills.

Ability to lead and delegate work across other members of the data engineering team.
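
As an illustration of the advanced-SQL/window-function analysis required above, here is a sketch run through Spark SQL. The claims table and its columns are hypothetical.

    # Sketch of an "advanced SQL with window functions" analysis via Spark SQL.
    # The "claims" table and its columns are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("claims-sql").getOrCreate()

    # Latest claim per member, plus each member's total paid amount.
    latest = spark.sql("""
        SELECT claim_id, member_id, paid_amount, member_total
        FROM (
            SELECT
                claim_id,
                member_id,
                paid_amount,
                ROW_NUMBER() OVER (
                    PARTITION BY member_id
                    ORDER BY service_date DESC
                ) AS rn,
                SUM(paid_amount) OVER (PARTITION BY member_id) AS member_total
            FROM claims
        ) ranked
        WHERE rn = 1
    """)
    latest.show()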

Preferred Qualifications:

Minimum 2 to 3 years of cloud experience, preferably Azure.

Minimum 2 to 3 years' experience with Databricks or other big data platforms.

Minimum 2 to 3 years of automation/orchestration experience using Azure Data Factory.

Experience and familiarity with data across Healthcare Provider domains (e.g., Patient, Provider, Encounter, Billing, Claims, Eligibility).

Familiarity/experience with clinical data exchange formats such as HL7, FHIR, CCD, etc.

Familiarity/experience with FHIR resources and the FHIR data model (see the FHIR sketch after this list).

Experience in big data processing in the healthcare domain.

Cloud development and computing.

Knowledge of cutting-edge technologies (AI, ML, Blockchain, Wearables, IoT).

Knowledge/Experience with Microservice design, Java Spring, Kafka, RabbitMQ.

Ability/Willingness to explore/learn new technologies and techniques on the job. 
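
For the FHIR items above, here is a sketch of reading FHIR Patient resources from NDJSON (the format used by FHIR bulk export) into Spark and flattening a few standard Patient fields. The input path is a hypothetical placeholder.

    # Sketch: load FHIR Patient resources from NDJSON and flatten a few fields.
    # The input path is a hypothetical placeholder.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("fhir-patients").getOrCreate()

    # Spark's JSON reader expects one JSON object per line, which matches NDJSON.
    patients = spark.read.json("/mnt/fhir/Patient.ndjson")

    flat = patients.select(
        F.col("id").alias("patient_id"),
        F.col("gender"),
        F.col("birthDate").alias("birth_date"),
        # In FHIR, Patient.name is an array of HumanName; take the first family name.
        F.col("name")[0]["family"].alias("family_name"),
    )
    flat.show()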
