
Hadoop Admin _ Phoenix, AZ _ Contract (ONSITE - Locals Only) at Phoenix, Arizona, USA
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=383823&uid=

From:

Srikanth,

Insoursys Inc

[email protected]

Reply to: [email protected]

Hi Professional,

Hope you are doing well.

I have an urgent opening for a Hadoop Admin position with my client. Please send me your resume with full name, contact details, salary, and availability date.

Requirement details:

Title : Hadoop Admin

Location : Phoenix, AZ (DAY 1 ONSITE)

Duration : Contract

Technical Skills & Knowledge:

Primary Skills: Hadoop Admin

Responsibilities:

Hands-on experience with Apache Hadoop.

Distribution tools such as Cloudera: UI installation and awareness of tool configuration.

Sound knowledge of installation, configuration, and setup.

Primary focus on HDFS, YARN, Hive, and Spark; other tools such as Sqoop, Kafka, Elasticsearch, etc.

MapR Distribution or Cloudera Data Platform (CDP) administration.

Installation, configuration, patching & maintenance of various Hadoop components such as HDFS, Hive, Spark, HBase, MapR-DB, ZooKeeper, Oozie, Pig, Flume, and Sqoop.

Job description

MapR/Cloudera Hadoop Administrator.

What happens to an entry if it is deleted from Active Directory, and how to handle that situation.

Configuring Spark workflows on YARN from scratch, including dynamic allocation.

YARN workflow.

End-to-end steps to configure Kerberos.

Procedure for granting a new user access to the gateway node.

LVM and its partitioning.

Handling P1 issues.

Usage of the awk and traceroute commands.

Difference between HDFS blocks and input splits.

Troubleshooting job issues in Impala, Spark, and Hive.

Experience in Apache Hadoop, MapR Distribution or Cloudera Data Platform (CDP) administration; installation, configuration, patching & maintenance of various Hadoop components such as HDFS, Hive, Spark, HBase, MapR-DB, ZooKeeper, Oozie, Pig, Flume, and Sqoop.

Work closely with the Big Data Dev Team, Application Support Team, Network Team, Analytics Team, and Database Team to make sure that all big data applications are highly available and performing as expected.

Collaborate with application teams to install operating system and MapR updates, patches, and version upgrades when required.

Monitor the cluster connectivity and performance.

Implement cluster observability using different monitoring tools such as Icinga, Splunk, Grafana, and Prometheus.

Handle issues related to MapR volumes, quota management, and volume replication.

Manage & resolve ongoing core MapR issues and work with vendors to fix issues permanently.

Experience in understanding and managing Hadoop Log Files.

Help application, production support, and development teams troubleshoot different types of job issues in MapReduce, Spark, Tez, Hive, Oozie, Sqoop, etc.

Manage scheduled backup and recovery tasks, resource and security management.

Develop automation for the installation and monitoring of Hadoop ecosystem components.

Capacity planning and estimating the requirements for lowering or increasing the capacity of the Hadoop cluster.

Administration & configuration of monitoring tools such as Icinga, Splunk, Grafana, and Prometheus.

Automation of different services using Bash/Python scripting.

Implement MapR data security using groups/roles and integration with other Hadoop platforms.

Implement and manage cluster security and cluster maintenance, including the creation and removal of nodes; performance tuning of Hadoop clusters and Hadoop MapReduce routines.

Help fix ongoing security vulnerabilities and help improve platform security.

Screen cluster job performance and capacity planning; find gaps and implement the required monitoring and observability for Hadoop clusters.

Experience with Hadoop's multiple data processing engines, such as interactive HQL, real-time streaming, data science, and batch processing, to handle data stored on the YARN platform.

Diligently teaming with the infrastructure, network, database, application and business analytical/intelligence teams to guarantee high data quality and availability.

Develop documentation and playbooks to operate Hadoop infrastructure; update Confluence pages.

Thanks & Regards

Srikanth Lingala

Insoursys Inc

T: 972-440-2123

Email: [email protected] || www.insoursys.com

08:33 PM 27-Feb-23



Location: Phoenix, Arizona