
Fresh Coding & Programming Jobs at Tiny Companies


  Startup Developer Jobs  

Bringing the best, highest-paying job offers near you


Data Engineer 3

MOALINK

This is a Full-time position in Little Rock, IL posted January 2, 2020.

Big Data Engineer

Responsible for completing our transition to fully automated operational reports across different functions within Retail, and for bringing our Retail big data capabilities to the next level by designing and implementing a new analytics governance model, with emphasis on architecting consistent root-cause-analysis procedures that enhance operational and customer-engagement results.

Big Data Engineers serve as the backbone of the Strategic Analytics organization, ensuring both the reliability and applicability of the team’s data products to the entire organization.

They have extensive experience with ETL design, coding, and testing patterns as well as engineering software platforms and large-scale data infrastructures.

Big Data Engineers have the capability to architect highly scalable end-to-end pipelines using a range of open-source tools, including building and operationalizing high-performance algorithms.

Big Data Engineers understand how to apply technologies to solve big data problems, with expert knowledge of programming languages and technologies such as Java, Python, Linux, PHP, Hive, Impala, and Spark.

Extensive experience working with both 1) big data platforms and 2) real-time / streaming delivery of data is essential.

Big data engineers implement complex big data projects with a focus on collecting, parsing, managing, analyzing, and visualizing large sets of data to turn information into actionable deliverables across customer-facing platforms.

They have a strong aptitude for deciding on the needed hardware and software design, and can guide the development of such designs through both proofs of concept and complete implementations.

Additional qualifications should include:
• Ability to tune Hadoop solutions to improve performance and the end-user experience
• Proficiency in designing efficient and robust data workflows
• Experience documenting requirements and resolving conflicts or ambiguities
• Experience working in teams and collaborating with others to clarify requirements
• Strong coordination and project-management skills for handling complex projects
• Excellent oral and written communication skills

Job Responsibilities (Big Data Engineer):
• Translate complex functional and technical requirements into detailed designs

• Design, construct, install, test, and maintain highly scalable data management systems
• Hadoop technical development and implementation
• Loading from disparate data sets by leveraging various big data technologies, e.g. Kafka
• Pre-processing using Hive, Impala, Spark, and Pig
• Design and implement data models
• Maintain security and data privacy in an environment secured using Kerberos and LDAP
• High-speed querying using in-memory technologies such as Spark
• Following and contributing to best engineering practices for source control, release management, deployment, etc.
• Production support: job scheduling/monitoring, ETL data quality, and data freshness reporting

Skills Required:
• 5+ years of Python development experience
• 3+ years of demonstrated technical proficiency with Hadoop and big data projects
• 5-8 years of demonstrated experience and success in data modeling
• Fluent in writing shell scripts [bash, korn]
• Writing high-performance, reliable, and maintainable code
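The production-support duty above (data freshness reporting) boils down to comparing each dataset's last load time against an SLA. A minimal sketch in plain Python; the table names, the 24-hour SLA, and the `freshness_report` helper are hypothetical, not part of the posting or any specific stack:

```python
from datetime import datetime, timedelta

def freshness_report(last_loaded, max_age_hours=24, now=None):
    """Return {table: age_in_hours} for every dataset older than the SLA."""
    now = now or datetime.utcnow()
    stale = {}
    for table, loaded_at in last_loaded.items():
        age = now - loaded_at
        if age > timedelta(hours=max_age_hours):
            stale[table] = round(age.total_seconds() / 3600, 1)
    return stale

# Hypothetical example data: one fresh table, one stale table.
now = datetime(2020, 1, 2, 12, 0)
loads = {
    "sales_daily": datetime(2020, 1, 2, 3, 0),    # 9 h old: within SLA
    "inventory":   datetime(2019, 12, 30, 3, 0),  # 81 h old: stale
}
print(freshness_report(loads, max_age_hours=24, now=now))  # → {'inventory': 81.0}
```

In production the load timestamps would come from job-scheduler metadata rather than a literal dict, and the stale list would feed an alert or report.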

• Ability to write MapReduce jobs
• Ability to set up, maintain, and implement Kafka topics and processes
• Understanding and implementation of Flume processes
• Good knowledge of database structures, theories, principles, and practices
• Understand how to develop code in an environment secured using a local KDC and OpenLDAP
• Familiarity with and implementation knowledge of loading data using Sqoop
• Knowledge of and ability to implement workflows/schedulers within Oozie
• Experience working with AWS components [EC2, S3, SNS, SQS]
• Analytical and problem-solving skills applied to the big data domain
• Proven understanding of and hands-on experience with Hadoop, Hive, Pig, Impala, and Spark
• Good aptitude for multi-threading and concurrency concepts
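The "ability to write MapReduce jobs" requirement above can be illustrated with a toy word count in plain Python. This is only a sketch of the map and reduce phases; a real job would implement Mapper and Reducer classes running on a Hadoop cluster, with the framework handling the shuffle between the two phases:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in one input line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts per key, as a reducer would after the shuffle.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big pipelines", "data quality"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
print(reduce_phase(pairs))  # → {'big': 2, 'data': 2, 'pipelines': 1, 'quality': 1}
```

The same shape (stateless per-record map, keyed aggregation in reduce) is what interviewers usually probe for with this requirement.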

• B.S. or M.S. in Computer Science or Engineering

Must-Haves:
• Very strong data modeling skills
• Analytical and problem-solving skills and experience, applied to the big data domain (a MUST)
• 1+ years of hands-on experience with SQL, Hadoop, Hive, Pig, Impala, and Spark; Hive is a mandatory skill
• 5-8 years of Python, Scala, or Java/J2EE development experience
• 3+ years of demonstrated technical proficiency with Hadoop and big data projects

Day-to-Day Duties:
• Data collection: gather information and required data fields

• Data manipulation: join data from multiple data sources and build ETLs to be sent to Tableau for reporting purposes
• Measure and improve: implement success indicators to continuously measure and improve, while providing relevant insight and reporting to leadership and teams
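The data-manipulation duty above (joining data from multiple sources before handing it to Tableau) reduces to a keyed join. A minimal in-memory sketch; the `orders`/`customers` sources and their field names are made up for illustration, and a real ETL would do this in Spark or SQL rather than in Python lists:

```python
def inner_join(left, right, key):
    # Index the right-hand rows by join key, then merge each matching pair.
    index = {row[key]: row for row in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

# Hypothetical rows from two different source systems.
orders = [{"customer_id": 1, "total": 40.0},
          {"customer_id": 2, "total": 15.5},
          {"customer_id": 3, "total": 9.0}]
customers = [{"customer_id": 1, "region": "Midwest"},
             {"customer_id": 2, "region": "South"}]

print(inner_join(orders, customers, "customer_id"))
# → [{'customer_id': 1, 'total': 40.0, 'region': 'Midwest'},
#    {'customer_id': 2, 'total': 15.5, 'region': 'South'}]
```

Note that the unmatched order (customer 3) is dropped, as in a SQL inner join; an ETL feeding a dashboard would choose inner vs. left join depending on whether unmatched rows should still be reported.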
