
Urgent Requirement || Big Data Engineer || Austin, TX (Day 1 onsite/3 days in office) at Austin, Texas, USA
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=857711&uid=

Hi,

This email is regarding an excellent job opportunity with one of our clients. We are looking for a Big Data Engineer, Location: Austin, TX (Day 1 onsite/3 days in office), per the job description below. If you are interested, please call me ASAP, or reply to this email with your updated resume, contact details, and the best time to reach you.

NOTE:

We need a BA ID for TCS submission.

Please share a resume only if your company has placed a consultant at TCS in the last 2 years; we need a BA ID to submit your consultant.

Please also share your updated hotlist at ([email protected]). If I don't answer your call, please drop a message on LinkedIn (https://www.linkedin.com/in/mohd-tariq-053242247/).

Attached: JD

Position: Big Data Engineer

Location: Austin, TX (Day 1 onsite/3 days in office)

Duration: Contract

Experience: 8-12 years

Technical/Functional Skills:

Big Data

Tableau

Kafka

Hadoop

Spark

Roles & Responsibilities:

10 years of professional experience in the analysis, design, development, deployment, and maintenance of software and Big Data applications.

Experience in Big Data implementation, with strong experience in its major components: Iceberg, Tableau, Kafka, Superset, Druid, Hive Metastore, Apache Ranger, security, and AWS.

Experience in creating Iceberg tables and loading data from different file formats.

Good experience importing and exporting data to Hive and HDFS with Sqoop.

Experience in using the Producer and Consumer APIs of Apache Kafka.

Skilled in integrating Kafka with Spark Streaming for faster data processing.

Experience in using the Spark Streaming programming model for real-time data processing.

Experience working with file formats such as text files, SequenceFiles, JSON, Parquet, and ORC.

Extensively used Apache Kafka to collect the logs and error messages across the cluster.

Excellent
knowledge and understanding of Distributed Computing and Parallel
processing frameworks.

Experienced with analytics using the Hive Metastore.

Experience
with Superset, Druid.

Experience working with EC2 (Elastic Compute Cloud)
cluster instances, setup data buckets on S3 (Simple Storage Service),
setting up EMR (Elastic MapReduce).

Good experience working with Tableau and enabling JDBC/ODBC data connectivity from it to the Hive Metastore.

Good with version control systems like Git.

Strong knowledge of UNIX/Linux commands.

Adequate knowledge of the Python scripting language.

Adequate knowledge of Scrum, Agile, and Waterfall methodologies.

Highly motivated and committed to the highest levels of
professionalism.

Exhibited strong written and oral communication skills. Learns and adapts quickly to emerging technologies and paradigms.

Thanks &
Regards

Tariq Ahmad

Email:
[email protected]

LinkedIn: https://www.linkedin.com/in/mohd-tariq-053242247/

Office: 626-247-8041 Ext 161

WhatsApp: 332-228-3588

08:21 PM 15-Nov-23

