PepsiCo

Architect - Data Engineering

Remote (Hyderabad, India)
Machine Learning, Azure, AWS, SQL, Python, Scala
Description
Overview

As a member of the data engineering team, you will be a key technical expert developing and overseeing PepsiCo's data product build and operations, and driving a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around finance. You will work closely with process owners, product owners, and business users, in a hybrid environment spanning in-house, on-premise data sources as well as cloud and remote systems.

Responsibilities

  • Work with product owners, scrum masters, and the technical committee to define the three-month road map for each program increment (sprint by sprint).
  • Manage and scale data pipelines responsible for ingestion and data transformation.
  • Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
  • Prototype new approaches and build solutions at scale.
  • Research state-of-the-art methodologies.
  • Create documentation for learnings and knowledge transfer.
  • Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
  • Maintain already-live pipelines in the production environment.
Qualifications

  • Bachelor's degree in Computer Science, MIS, Business Management, or a related field
  • 8+ years of experience in Information Technology
  • 4+ years of experience with Azure, AWS, and cloud technologies
  • Good written and verbal communication skills, along with collaboration and listening skills
  • Experience dealing with multiple vendors as necessary
  • Hands-on experience writing complex SQL queries
  • Big Data (Hadoop, HBase, MapReduce, Hive, HDFS, etc.) and Spark/PySpark
  • Sound skills and hands-on experience with Azure Data Lake, Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Storage Explorer
  • Proficiency in creating Data Factory pipelines for on-cloud ETL processing: copy activity, custom Azure development, etc.
  • Knowledge of languages such as Python and Scala
  • Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines
  • Experience with data profiling and data quality tools
  • Experience building and operating highly available, distributed systems for extraction, ingestion, and processing of large data sets
  • Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake

Mandatory "Non-Technical" Skills

  • Excellent remote collaboration skills
  • Experience working in a matrix organization with diverse priorities
  • Enthusiasm for learning functional knowledge specific to the finance business
  • Ability to work with virtual teams (remote work locations) within a team of technical resources (employees and contractors) based in multiple global locations
  • Participation in technical discussions, driving clarity on complex issues and requirements to build robust solutions
  • Strong communication skills to meet with delivery and business-facing teams, understand sometimes-ambiguous needs, and translate them into clear, aligned requirements
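As an illustration only (not part of the posting), the "complex SQL queries" and aggregation-heavy ETL work the qualifications describe might resemble the following minimal sketch. The table, columns, and data are hypothetical, and SQLite stands in for the MPP engines named above.

```python
# Hypothetical example: per-region revenue aggregation, the kind of
# analytical SQL this role calls for, run against an in-memory SQLite table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("NA", 100.0), ("NA", 50.0), ("EU", 75.0)],
)

# Aggregate revenue per region, largest first
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
    """
).fetchall()

print(rows)  # [('NA', 150.0), ('EU', 75.0)]
conn.close()
```

In a production pipeline the same aggregation would typically run in Spark SQL or an MPP warehouse rather than SQLite; the query shape is what carries over.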


