Direct Client: Database Administrator at Cleveland, Ohio, USA
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=655798&uid=

From:

Santosh Yambari,

Comtech Global

[email protected]

Reply to: [email protected]

Role: Database Administrator

Location: Cleveland, Ohio (Hybrid)

Skills

Strong knowledge of SQL to aid in data visualization. Python experience is a bonus.

Working knowledge of Hadoop tools, such as Spark, Impala, Hue, and Kafka, to create, query, and manage ETL data flows for batch and streaming ingestion.

Ideally, working knowledge of data connectors for embedding data visualizations from Tableau Server or PowerBI into SharePoint.

Knowledge of Tableau Server and/or Microsoft PowerBI is helpful.

Practical understanding and experience with data cleaning/cleansing.

Conceptual understanding of networking schematics and data flows for ETL purposes.

Ability to conceptualize the development of data mart environments using batch data for agency business analysts.

Some experience with newer data analysis tools, such as TensorFlow, is helpful.

Responsibilities

85%

Work with resources to provision necessary access to data sources for ingestion into the agency's Hadoop platform.

Utilizing service accounts, develop ETL pipelines through Spark and/or StreamSets to bring data from disparate source systems into the agency's Hadoop platform. Validate data sets in Hadoop to ensure ingestion loads are as close to 1:1 as possible.
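The 1:1 validation step above can be sketched as a simple row-count comparison between the source extract and the ingested Hadoop table. This is a minimal illustration only; the function name and tolerance threshold are assumptions, not part of the posting:

```python
def validate_ingestion(source_count: int, target_count: int,
                       tolerance: float = 0.001) -> bool:
    """Hypothetical check: ingested row count is within tolerance of source."""
    if source_count == 0:
        return target_count == 0
    drift = abs(source_count - target_count) / source_count
    return drift <= tolerance

# In practice the counts could come from the source system and Spark, e.g.:
#   source_count = cursor.execute("SELECT COUNT(*) FROM src").fetchone()[0]
#   target_count = spark.table("ingested_table").count()
print(validate_ingestion(1_000_000, 999_900))  # True: within 0.1% drift
```

In a real pipeline the comparison would typically also cover checksums or per-partition counts, not just totals.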

Commit table transformations within the Hadoop platform where needed to allow agency analysts to develop requested visuals. Ensure the data refresh schedule is consistent with departmental needs. Validate and test connections between datasets and the cloud analytics software.

Create a Data Mart DB for rolled-up and simplified tables for agency analyst consumption. One set of tables may be created for Sales staff, another may be created for Executive staff, etc. Further direction to be given once this stage is reached.
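As an illustration of the rolled-up tables described above, the following sketch aggregates a hypothetical sales table into a summary table using Python's built-in sqlite3. All table and column names here are invented for the example; the actual schemas and audience-specific table sets would be defined once that stage is reached:

```python
import sqlite3

# Hypothetical source table; the real data mart would sit in the agency's DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100.0), ("East", 50.0), ("West", 75.0)])

# Rolled-up table for, e.g., Executive staff: totals by region.
conn.execute("""CREATE TABLE sales_summary AS
                SELECT region, SUM(amount) AS total
                FROM sales GROUP BY region""")

rows = dict(conn.execute("SELECT region, total FROM sales_summary"))
print(rows)  # {'East': 150.0, 'West': 75.0}
```

A separate set of summary tables (e.g., per-rep detail for Sales staff) would follow the same pattern with different groupings.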

Assist with the automation of several existing reports and visualizations within Tableau Server.

Document work completed and publish FAQ on agency wiki. Work with department head to conduct agency training on data usage.

Validate and ingest streaming agency data, when available. Working with the data analyst, assist in transforming the data to create more on-demand, near-live visualizations of agency activity.

10%

Work with enterprise and corporate resources to determine how outside data and software (i.e., outside of the Microsoft ecosystem) can be read within Microsoft SharePoint Server.

Deploy data connectors where necessary to connect IOP datasets to the Microsoft ecosystem, and, if necessary, to connect the Tableau Server environment to the Microsoft ecosystem.

5%

Document work completed and publish FAQ on agency wiki. Conduct training, as directed, throughout.

Qualifications:

Bachelor's or Master's degree in computer science, software or computer engineering, information systems, or a similar academic background.

Working practical experience in developing and deploying ETL pipelines in an enterprise big data environment.

Thanks and Regards

Santosh Yambari

Keywords: database
09:09 PM 19-Sep-23



