Job Description:
Role Title: AVP, Principal Data Engineer (L11)
Company Overview:
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more.
We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 of India’s Best Workplaces in Building a Culture of Innovation for All by GPTW, and in the Top 25 of Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top-Rated Companies for Women, and among Top-Rated Financial Services Companies.
Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members.
We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being.
We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview: This role will be part of the Data Architecture & Analytics group within the CTO organization.
The Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL).
The team owns and manages the tools and platforms that provide an environment for designing and building data solutions.
It collaborates with cross-functional teams to integrate new data sources and ensure data quality and consistency.
It also builds and maintains data models to facilitate data access and analysis by Data Scientists and Analysts.
Role Summary/Purpose:
We are looking for a Principal Data Engineer to be part of Agile scrum teams and perform functional and system development for Data Warehouse and Data Lake applications supporting key business domains.
This role will be instrumental in transforming legacy systems into modern data platforms. It offers an exciting, fast-paced, constantly changing, and challenging work environment, and will play an important part in resolving and influencing high-level decisions across Synchrony.
Key Responsibilities:
Design, develop, and implement ETL/ELT for Data Warehouse applications using Ab Initio, and for the data lake platform (Cloudera Hadoop cluster/containers) using Ab Initio, Spark, Hive, Kafka, RDBMS (Oracle, MySQL), NoSQL databases (Cassandra), and public cloud solutions (an illustrative sketch follows this list).
Participate in the agile development process including backlog grooming, coding, code reviews, testing and deployment.
Provide data analysis for data ingestion, standardization, and curation efforts, ensuring all data is understood in its business context.
Work with team members to achieve business results in a fast-paced and quickly changing environment.
Build batch and real-time data pipelines in a DevOps environment using Ab Initio and Spark.
Work closely with Product Owners, Product Managers, Program Managers, and Scrum Masters in a Scaled Agile framework.
Partner with architects to efficiently design data applications with scalability, resiliency and speed.
Profile data to assist with defining data elements, propose business term mappings, and define data quality rules.
Work with the Data Office to ensure that data dictionaries for all ingested and created data sets are properly documented in Collibra and any other data dictionary repository.
Ensure the lineage of all data assets is properly documented in the appropriate enterprise metadata repositories.
Assist with the creation and implementation of data quality rules.
Ensure the proper identification of sensitive data elements and critical data elements.
Create source-to-target data mapping documents.
Test current processes and identify deficiencies.
Investigate program quality and make improvements to achieve better data accuracy.
Apply technical knowledge, industry experience, expertise, and insights to contribute to the development & execution of Engineering capabilities.
Stay up to date on the latest trends in data engineering, recommend best practices, and develop innovative frameworks that avoid redundancy by promoting automation.
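To give a flavor of the batch ingest-transform-load work described in this list, the sketch below shows a minimal PySpark job that reads a Hive table, standardizes it, and writes a partitioned curated table back to the data lake. It is illustrative only; the database, table, and column names are hypothetical, and the role's actual pipelines also rely on Ab Initio and the other tooling named above.

# Hypothetical sketch of a batch ETL step; names are placeholders, not
# part of any actual Synchrony environment.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-etl")
    .enableHiveSupport()          # assumes a configured Hive metastore
    .getOrCreate()
)

# Ingest: read a raw Hive table (hypothetical name)
raw = spark.table("raw_db.card_transactions")

# Transform: standardize a column name, drop bad records, derive a partition column
curated = (
    raw.withColumnRenamed("txn_amt", "transaction_amount")
       .filter(F.col("transaction_amount").isNotNull())
       .withColumn("txn_date", F.to_date("txn_ts"))
)

# Load: write a partitioned curated table back to the data lake
(
    curated.write
           .mode("overwrite")
           .partitionBy("txn_date")
           .saveAsTable("curated_db.card_transactions")
)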
Required Skills/Knowledge:
Bachelor's degree in Computer Science or a similar technical field of study and a minimum of 6 years of work experience, or, in lieu of a degree, a minimum of 8 years of work experience.
Minimum of 6 years of experience managing large-scale data platform environments (Data Warehouse/Data Lake/Cloud).
Minimum of 6 years of programming experience with ETL tools (Ab Initio or Informatica) and data lake technologies (Hadoop, Spark, HDFS, Hive, Kafka).
Hands-on experience with ETL tools (Ab Initio or Informatica) and data lake technologies (Hadoop, Hive, Spark, Kafka).
Working knowledge of cloud data platforms and services such as AWS S3, Redshift, and Snowflake.
Familiar with scheduling tools like Stonebranch.
Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution.
Hands-on experience writing shell scripts and complex SQL queries.
Familiarity with data management tools (e.g., Collibra).
Proficient with databases such as MySQL, Oracle, Teradata.
Desired Skills/Knowledge:
Demonstrated ability to work effectively in an agile team environment.
Experience with batch and real-time data pipelines in a DevOps environment.
Must be willing to work in a fast-paced environment with globally located Agile teams working in different shifts.
Ability to develop and maintain strong collaborative relationships at all levels across IT and Business Stakeholders.
Excellent written and oral communication skills. Adept at presenting complex topics, influencing, and executing with timely, actionable follow-through.
Experience in designing ETL pipelines to enable automated data loads into AWS S3 and Redshift (a minimal sketch follows this list).
Prior work experience in a Credit Card/Banking/FinTech company.
Experience dealing with sensitive data in a highly regulated environment.
Demonstrated implementation of complex and innovative solutions.
AWS Solutions Architect or Data Engineer certification is nice to have.
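For the S3/Redshift item above, here is a minimal, hypothetical Python sketch of an automated load: stage an extract file in S3 with boto3, then issue a Redshift COPY through psycopg2. The bucket, table, cluster, credential, and IAM role values are placeholders, not real configuration.

# Hypothetical sketch of an automated S3-to-Redshift load; all names and
# credentials below are placeholders.
import boto3
import psycopg2

BUCKET = "example-data-bucket"                                # placeholder
KEY = "staging/transactions.csv"                              # placeholder
IAM_ROLE = "arn:aws:iam::123456789012:role/redshift-copy"     # placeholder

# Stage the extract file in S3
s3 = boto3.client("s3")
s3.upload_file("transactions.csv", BUCKET, KEY)

# Issue a Redshift COPY to bulk-load the staged file
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)
with conn, conn.cursor() as cur:
    cur.execute(
        f"""
        COPY staging.transactions
        FROM 's3://{BUCKET}/{KEY}'
        IAM_ROLE '{IAM_ROLE}'
        FORMAT AS CSV
        IGNOREHEADER 1;
        """
    )
conn.close()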
Eligibility Criteria:
Bachelor's degree in Computer Science or a similar technical field of study and a minimum of 6 years of work experience, or, in lieu of a degree, a minimum of 8 years of work experience.
Minimum of 6 years of experience managing large-scale data platform environments (Data Warehouse/Data Lake/Cloud).
Minimum of 6 years of programming experience with ETL tools (Ab Initio or Informatica) and data lake technologies (Hadoop, Spark, HDFS, Hive, Kafka).
Work Timings: 3:00 PM to 12:00 AM IST
(WORK TIMINGS: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.)
For Internal Applicants:
Understand the criteria or mandatory skills required for the role before applying
Inform your manager and HRM before applying for any role on Workday
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format)
Must not be under any corrective action plan (First Formal/Final Formal, PIP)
Only L9+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible.
L09+ Employees can apply
Grade/Level: 11
Job Family Group:
Information Technology