
Fresh Coding & Programming Jobs at Tiny Companies


  Startup Developer Jobs  

Bringing the best, highest-paying job offers near you


Data Engineer – Spark/ETL

Stefanini US

This is a Full-time position in Garland, ME posted January 2, 2020.

Stefanini, Inc. is hiring!

We are looking for solid Data Engineers in Plano, TX!

These are Contract-to-Hire / Long Term Contract job opportunities and relocation will be provided for the right candidate!

Must take and pass a technical assessment.

Do you want to be part of a BIG data transformation journey?

Do you love exploring new avenues and pioneer things in the technology space?

Do you love designing and implementing business critical data management & engineering solutions using emerging technologies?

Do you enjoy solving complex business problems in a fast-paced, collaborative, and iterative delivery environment?

If this excites you, then keep reading!

Help us use technology to bring ingenuity, simplicity, and humanity to banking. To spark this, we're seeking a hands-on Data Engineer who can design, code, and provide architecture solutions for the team.

In this role, you will be responsible for building generic data pipelines and frameworks using open-source tools on public cloud platforms.

The right candidate for this role is someone who is passionate about technology, interacts with product owners and technical stakeholders, thrives under pressure, and is hyper-focused on delivering exceptional results, with strong teamwork skills.

The candidate will have the opportunity to influence and interact with fellow technologists beyond their team and influence technology partners across the enterprise.

The Job & Expectations:

- Partner with product owners, peers, and end users to understand business requirements
- Provide technical guidance concerning business implications of application development projects
- Strong ETL programming skills in Python, Spark, and Scala
- Experience with cloud computing, preferably AWS
- Exposure to AWS-native technologies (S3, EMR/EC2, and Lambda functions)
- Leverage DevOps techniques and practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable rapid delivery of working code, using tools like Jenkins, Nexus, Git, and Docker
- Ability to handle multiple responsibilities in an unstructured environment where you're empowered to make a difference
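To give a flavor of the extract-transform-load work described above, here is a minimal, hypothetical ETL sketch in plain Python. In a real role the extract step would read from S3 via Spark and the load step would write to Snowflake or Parquet; those dependencies are omitted here, and all field names are illustrative.

```python
import csv
import io

# Extract: a production pipeline would read from S3 via Spark;
# an in-memory CSV keeps this sketch self-contained.
RAW = """id,amount,currency
1,100.50,usd
2,25.00,USD
3,,usd
"""

def extract(raw: str) -> list[dict]:
    """Parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Drop rows missing an amount and normalize types and currency codes."""
    return [
        {"id": int(r["id"]),
         "amount": float(r["amount"]),
         "currency": r["currency"].upper()}
        for r in rows
        if r["amount"]
    ]

def load(rows: list[dict], sink: list) -> None:
    """Stand-in for writing to a warehouse: collect the cleaned records."""
    sink.extend(rows)

sink: list[dict] = []
load(transform(extract(RAW)), sink)
print(sink)
```

The same three-stage shape (extract, transform, load) carries over directly to a PySpark job, with DataFrames in place of lists of dicts.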

In that context, you will be expected to research and develop cutting edge technologies to accomplish your goals.

Experience working in an Agile environment

Basic Qualifications:

Top 3 skills: Spark, ETL, and Snowflake, plus strong fundamentals for a metadata analytic environment and how to analyze and categorize data

- Must have experience in Spark Streaming (primary skill/technology)
- Must have experience with Snowflake infrastructure
- Must have knowledge of how ETL works and how metadata is managed
- Must have experience in AWS Cloud
- Must have experience in data analytics and metadata
- Agile/Sprint: guidelines of tasks for development, testing, and production
- Bachelor's degree or military experience
- At least 1 year of experience developing, deploying, and testing in the AWS public cloud
- At least 1 year of experience in systems analysis
- At least 3 years of professional work experience delivering big data solutions using open-source tools
- At least 5 years of professional work experience on large-scale data projects
- At least 4 years of professional work experience in data management and data engineering

Preferred Qualifications:

- SQL experience is nice to have
- Kafka experience for data integration is nice to have
- 7+ years' experience developing software solutions to solve business problems
- 1+ year of experience in AWS cloud computing
- 2+ years of experience in an Agile delivery environment
