| Data Engineer at Remote, Remote, USA |
| Email: [email protected] |
|
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=3057046&uid=1b08ff4b43c54488bc75770ac8e310d2

From: Pradeep Kumar, Vizon Inc [email protected]
Reply to: [email protected]

Hello,

Hope you are doing well.

Job Description - LinkedIn, genuine candidates, EST candidates

Job Title: Data Engineer
Interview: Skype
Work mode: Remote
Consultants in the EST time zone preferred. Experience setting up a Cleanroom is required.

Your role will involve collaboration with cross-functional teams, leveraging cutting-edge technologies, and ensuring scalable, efficient, and secure data engineering practices.

Top skills: comfortable operating independently, PySpark, Python, cleanroom experience, Databricks experience

Requirements:
- Bachelor's degree, typically in Computer Science, Management Information Systems, Mathematics, Business Analytics, or another STEM field.
- Proven experience working within cross-functional teams and managing complex projects from inception to completion.
- 3+ years of professional data development experience.
- 3+ years of experience with SQL and NoSQL technologies.
- 2+ years of experience building and maintaining data pipelines and workflows.
- 2+ years of experience developing with Python and PySpark.
- Experience developing within Databricks.
- Experience with CI/CD pipelines and processes.
- Experience with automated unit, integration, and performance testing.
- Experience with version control software such as Git.
- Full understanding of ETL and data warehousing concepts.
- Strong understanding of Agile principles (Scrum).
- Experience with Snowflake.
- Experience building out marketing cleanrooms.
- Knowledge of Structured Streaming (Spark, Kafka, EventHub, or similar technologies).
- Experience with GitHub SaaS / GitHub Actions.
- Experience with service-oriented architecture.
- Experience with containerization technologies such as Docker and Kubernetes.

Responsibilities:
Take ownership of systems, processes, and the tech stack while driving features to completion through all phases of the entire 84.51 SDLC. This includes internal and external facing applications as well as process improvement activities:
- Provide Technical Leadership: Offer technical leadership to ensure clarity between ongoing projects and facilitate collaboration across teams to solve complex data engineering challenges.
- Build and Maintain Data Pipelines: Design, build, and maintain scalable, efficient, and reliable data pipelines to support data ingestion, transformation, and integration across diverse sources and destinations, using tools such as Kafka, Databricks, and similar toolsets (a minimal illustrative sketch follows this listing).
- Drive Innovation: Leverage innovative technologies and approaches to modernize and extend core data assets, including SQL-based, NoSQL-based, cloud-based, and real-time streaming data platforms.
- Implement Automated Testing: Design and implement automated unit, integration, and performance testing frameworks to ensure data quality, reliability, and compliance with organizational standards (see the second sketch below).
- Optimize Data Workflows: Optimize data workflows for performance, cost efficiency, and scalability across large datasets and complex environments.
- Mentor Team Members: Mentor team members in data principles, patterns, processes, and practices to promote best practices and improve team capabilities.
- Draft and Review Documentation: Draft and review architectural diagrams, interface specifications, and other design documents to ensure clear communication of data solutions and technical requirements.
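The listing names the pipeline and Structured Streaming stack (Spark, Kafka, Databricks, Delta) but does not specify any implementation. The sketch below is purely illustrative of that kind of work and is not part of the original posting; the broker address, topic, schema, checkpoint path, and table name are all hypothetical.

```python
# Illustrative sketch only: a minimal Structured Streaming pipeline of the kind
# the posting describes (Kafka ingestion into a Delta table on Databricks).
# All names below (topic, schema, paths, table) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# On Databricks a SparkSession already exists as `spark`; the builder call is
# only needed when running outside that environment.
spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

# Schema for the incoming JSON payload (hypothetical fields).
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw stream from Kafka.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream-events")
    .load()
)

# Parse the JSON value column and apply a watermark for late-arriving events.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withWatermark("event_ts", "10 minutes")
)

# Append to a Delta table with checkpointing so the job can recover on restart.
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/clickstream")
    .outputMode("append")
    .toTable("analytics.clickstream_events")
)
query.awaitTermination()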
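The automated-testing requirement is likewise only named. As a second illustrative sketch, here is what a small pytest unit test for a PySpark transformation might look like; the dedupe_latest helper and its columns are invented for the example and are not taken from the posting.

```python
# Illustrative sketch only: an automated unit test for a small PySpark
# transformation. The helper and column names are hypothetical.
import pytest
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window


def dedupe_latest(df, key_col, ts_col):
    """Keep only the most recent row per key (hypothetical helper)."""
    w = Window.partitionBy(key_col).orderBy(F.col(ts_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
        .filter(F.col("_rn") == 1)
        .drop("_rn")
    )


@pytest.fixture(scope="module")
def spark():
    # Local single-threaded session keeps the test fast and self-contained.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_dedupe_latest_keeps_newest_row(spark):
    df = spark.createDataFrame(
        [("u1", "2024-01-01"), ("u1", "2024-02-01"), ("u2", "2024-01-15")],
        ["user_id", "updated_at"],
    )
    result = dedupe_latest(df, "user_id", "updated_at").collect()
    assert len(result) == 2
    latest_u1 = [r.updated_at for r in result if r.user_id == "u1"][0]
    assert latest_u1 == "2024-02-01"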
Keywords: continuous integration, continuous deployment, Data Engineer
| 06:44 AM 16-Jan-26 |