M+E Technology Job Board

Big Data Engineer

  • Full Time
  • Seattle, WA
  • Applications have closed

AWS

DESCRIPTION

AWS Business Technology & Solutions (BTS) is seeking a Big Data Engineer to join our Business Performance Management team, building a new Sales Revenue data solution. Our vision is to collect and process billions of usage and billing transactions every single day and relate them to the largest data feed supported by Salesforce.com. We apply business logic to transform this raw data into the daily and monthly Sales Revenue figures used for AWS Sales Revenue reporting and for processing quarterly Sales Commissions for AWS Sales on incentive plans.

We are leading the way to disrupt the big data industry. We are accomplishing this vision by bringing to bear Big Data technologies like Elastic MapReduce (EMR) in addition to data warehouse technologies like Spectrum to build a data platform capable of scaling with the ever-increasing volume of data produced by AWS services.

You should have deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms. You should have excellent business and interpersonal skills, allowing you to work with business owners to understand data requirements and to build the ETL that ingests their data into the data lake. You should be an authority on designing, implementing, and operating stable, scalable, low-cost solutions that flow data from production systems into the data lake. Above all, you should be passionate about working with huge datasets and love bringing data together to answer business questions and drive growth.

BASIC QUALIFICATIONS

· This position requires a Bachelor’s Degree in Computer Science or a related technical field, and 5+ years of relevant employment experience.
· 5+ years of work experience with ETL, Data Modeling, and Data Architecture.
· Expert-level skills in writing and optimizing SQL.
· Experience with Big Data technologies such as Hive/Spark.
· Proficiency in a scripting language – Python, Ruby, shell scripting (e.g., Bash), or similar.
· Experience operating very large data warehouses or data lakes.

PREFERRED QUALIFICATIONS

· Authoritative in ETL optimization; experience designing, coding, and tuning big data processes using Apache Spark or similar technologies.
· Experience building data pipelines and applications that stream and process datasets at low latency.
· Demonstrated efficiency in handling data – tracking data lineage, ensuring data quality, and improving the discoverability of data.
· Sound knowledge of distributed systems and data architecture (e.g., the lambda architecture) – able to design and implement batch and stream data processing pipelines, and to optimize the distribution, partitioning, and MPP (massively parallel processing) of high-level data structures.
· Knowledge of Engineering and Operational Excellence using standard methodologies.
· Meets/exceeds Amazon’s leadership principles requirements for this role.
· Meets/exceeds Amazon’s functional/technical depth and complexity for this role.