AWS Data Engineer at Remote, Remote, USA
Email: [email protected]
From: DEBASISH PATTNAIK, MRTECHNOSOFT [email protected]
Reply to: [email protected]

AWS Data Engineer, Boston, MA

Job Description

Job Overview: We are seeking a highly skilled AWS Data Engineer with expertise in AWS Redshift, RDS, and Databricks to design, build, and optimize scalable data solutions. The ideal candidate should have experience in cloud-based data architectures, ETL processes, and large-scale data management.

Key Responsibilities:
- Design, develop, and maintain ETL/ELT data pipelines using AWS services (Redshift, RDS, Glue, S3, Lambda, and Kinesis).
- Build and optimize data lakes and data warehouses to ensure high performance and cost efficiency.
- Develop and manage AWS Redshift clusters, optimizing queries and storage strategies.
- Work with AWS RDS (PostgreSQL, MySQL, or SQL Server) for transactional data management.
- Implement and optimize data processing workflows in Databricks using Apache Spark/PySpark.
- Ensure real-time and batch data ingestion from multiple sources.
- Collaborate with data scientists, analysts, and business teams to provide scalable and reliable data solutions.
- Implement data governance, security policies, and compliance standards.
- Automate infrastructure management using Terraform, CloudFormation, or CDK.
- Monitor and troubleshoot data pipelines, ensuring high availability and performance.

Required Skills & Qualifications:
- Strong expertise in AWS Redshift, RDS, and Databricks.
- Proficiency in SQL for data modeling, transformations, and performance tuning.
- Hands-on experience with Python, PySpark, or Scala for data processing.
- Experience with ETL/ELT frameworks (AWS Glue, Apache Airflow, or Talend).
- Knowledge of big data processing and distributed computing using Spark (Databricks, EMR, or standalone clusters).
- Strong understanding of database optimization, indexing, and partitioning strategies in Redshift and RDS.
- Experience with AWS services such as S3, Lambda, IAM, CloudWatch, and Step Functions.
- Familiarity with streaming data pipelines (Kafka, Kinesis, or Flink).
- Experience with CI/CD and DevOps for data engineering projects.

Nice to Have:
- Experience with Snowflake or BigQuery.
- Exposure to machine learning pipelines in Databricks and AWS SageMaker.
- Hands-on experience with data visualization tools (QuickSight, Power BI, or Tableau).
- Knowledge of real-time data analytics and event-driven architectures.

Education & Experience:
- Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field.
- [Specify required years] years of experience in data engineering with AWS technologies.

Thanks,
Debasish Pattnaik
[email protected]
www.mrtechnosoft.com
Posted: 02:01 AM, 05-Feb-25