Description
At Nielsen, we believe that career growth is a partnership. You ultimately own, fuel and set the journey. By joining our team of nearly 14,000 associates, you will become part of a community that will help you to succeed. We champion you because when you succeed, we do too. Embark on a new initiative, explore a fresh approach, and take license to think big, so we can all continuously improve. We enable your best to power our future.
The Global Outcomes team is looking for a Senior Data Engineer to help us bring data-intensive products to market, maintain existing products for our clients, and work closely with our data science teams in a cloud-native, Python-, Java-, and Spark-heavy big data stack. A typical day in this role includes attending a standup with data engineers, data scientists, and our product owner; discussing the data sources we are integrating into our machine learning pipelines with upstream Nielsen teams; guiding data scientists in how to access that data; and turning their analyses into production-ready Spark code that runs on Airflow.
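The hand-off described above, factoring a data scientist's exploratory analysis into a testable pipeline step, might be sketched in plain Python as follows. This is an illustrative sketch only: the function name and record schema are assumptions, and in production the same logic would be expressed as a Spark job scheduled by Airflow.

```python
from collections import defaultdict

def aggregate_impressions(records):
    """Roll raw impression events up to per-campaign totals.

    A notebook analysis often does this ad hoc; extracting it into a
    pure function makes it unit-testable before porting it to Spark.
    `records` is an iterable of dicts with 'campaign_id' and
    'impressions' keys (an assumed schema, not from the posting).
    """
    totals = defaultdict(int)
    for rec in records:
        totals[rec["campaign_id"]] += rec["impressions"]
    return dict(totals)

if __name__ == "__main__":
    events = [
        {"campaign_id": "a", "impressions": 3},
        {"campaign_id": "b", "impressions": 1},
        {"campaign_id": "a", "impressions": 2},
    ]
    print(aggregate_impressions(events))  # {'a': 5, 'b': 1}
```

Keeping the transform pure like this means the same assertions run in unit tests locally and against sampled data in CI before the Spark version ships.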
Role Details:
- Work with other data engineers, data scientists, architects, and product owners on an agile scrum team that delivers products to production.
- Gather, analyze and convert business requirements into AWS (Amazon Web Services) cloud-based solutions.
- Design and build systems that load and transform a large volume of structured and semi-structured data.
- When we say “big data” we don’t mean a few gigabytes – we work with multi-terabyte datasets on a daily basis.
- Build and test cloud-based data pipelines and applications (primarily in Python and Apache Spark + SQL) for new and existing backend systems.
- Write reusable, well-tested code and components (e.g. RESTful APIs, Python packages) that can be used by multiple project teams.
- Assist in troubleshooting and debugging of ETL code and resolving data integrity issues alongside our data scientists and client-facing customer success teams.
- Work in a serverless environment: we don't maintain VMs or manually deploy infrastructure.
- Automation and scalability are critical.
- Write code with performance, maintainability, scalability, and reliability in mind.
- Our tech stack: Python, Apache Spark, SQL, Apache Airflow, Hive, AWS Glue, AWS Athena, AWS EC2, AWS S3, AWS CodeBuild, AWS CloudFormation, YARN, Git, RESTful Microservices, Kubernetes (k8s).
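Several pieces of this stack (S3, Hive, AWS Glue, Athena) meet at Hive-style partitioned data layouts. As a small, hedged illustration of that convention, here is a helper that builds the `year=/month=/day=` key prefix Glue and Athena expect for date-partitioned tables; the bucket and table names are placeholders, not anything from the posting:

```python
from datetime import date

def partition_path(bucket, table, day):
    """Build a Hive-style partitioned S3 key prefix for one day of data.

    Zero-padding month and day keeps partitions lexically sortable,
    which string-based partition pruning relies on.
    """
    return (
        f"s3://{bucket}/{table}/"
        f"year={day.year}/month={day.month:02d}/day={day.day:02d}/"
    )

print(partition_path("example-bucket", "impressions", date(2023, 4, 7)))
# s3://example-bucket/impressions/year=2023/month=04/day=07/
```

A pipeline writing with this layout can register each prefix as a partition, so Athena queries filtered by date scan only the relevant keys.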
Role Qualifications:
- Minimum requirements: a Master's degree in computer science, engineering, or a related field with an information technology focus (foreign equivalent degree acceptable) plus 3 years of experience in software design and development, or a Bachelor's degree in one of those fields (foreign equivalent degree acceptable) plus 5 years of such experience.
- This must include 3 years of experience delivering end-to-end applications and pipelines, including architecting open source-based ETL pipelines and designing, building, and implementing big data solutions.
- 2 years of experience with AWS, Azure, or Google Cloud Platform, preferably in a serverless tech stack.
- Designing and developing Apache Spark-based applications using Python (PySpark) or Scala and Spark SQL.
- Comfort with the Linux command line, Git, Agile Scrum, and at least one data orchestration tool (e.g. Apache Airflow, Luigi, Azkaban, AWS Data Pipeline, or Oozie).
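The orchestration tools named above all solve the same core problem: running tasks in an order that respects a dependency graph (a DAG). A minimal illustration of that idea using only the Python standard library, with hypothetical task names rather than any real pipeline from the posting:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# A toy DAG in the spirit of an Airflow pipeline:
# each key maps a task to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "validate": {"transform"},
}

# static_order() yields a dependency-respecting execution order.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Real orchestrators add scheduling, retries, and parallel execution of independent tasks (here, `load` and `validate` could run concurrently), but the topological ordering is the common foundation.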