Duties and Responsibilities - Architecture Design
- Plan, design, and evolve data platform solutions within a Data Mesh architecture, ensuring decentralized data ownership and scalable, domain-oriented data pipelines.
- Apply Domain-Driven Design (DDD) principles to model data, services, and pipelines around business domains, promoting clear boundaries and alignment with domain-specific requirements.
- Collaborate with stakeholders to translate business needs into robust, sustainable data architecture patterns.
Duties and Responsibilities - Software Development & DevOps
- Develop and maintain production-level applications primarily using Python (Pandas, PySpark, SnowPark), with the option to leverage other languages (e.g., C#) as needed.
- Implement and optimize DevOps workflows, including Git/GitHub, CI/CD pipelines, and infrastructure-as-code (Terraform), to streamline development and delivery processes.
- Containerize and deploy data and application workloads on Kubernetes, leveraging KEDA for event-driven autoscaling and ensuring reliability, efficiency, and high availability.
Duties and Responsibilities - Big Data Processing
- Handle enterprise-scale data pipelines and transformations, with a strong focus on Snowflake or comparable technologies such as Databricks or BigQuery.
- Optimize data ingestion, storage, and processing performance to ensure high-throughput and fault-tolerant systems.
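Fault tolerance in ingestion and processing is commonly implemented with retries and exponential backoff around flaky pipeline steps. A minimal sketch in plain Python (the decorator, its parameters, and the `load_batch` step are illustrative, not from any specific framework):

```python
import time
from functools import wraps

def retry(max_attempts=3, base_delay=0.1):
    """Retry a transiently failing step with exponential backoff (illustrative)."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # exhausted retries: surface the failure
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

# Example: a hypothetical load step that fails twice before succeeding.
calls = {"n": 0}

@retry(max_attempts=3, base_delay=0.01)
def load_batch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "loaded"

result = load_batch()
```

In production this logic usually lives in the orchestrator (e.g., task retries in Airflow or Prefect) rather than hand-rolled decorators, but the mechanism is the same.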
Duties and Responsibilities - Data Stores
- Manage and optimize SQL/NoSQL databases, Blob storage, Delta Lake, and other large-scale data store solutions.
- Evaluate, recommend, and implement the most appropriate storage technologies based on performance, cost, and scalability requirements.
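Storage evaluations like this are sometimes made explicit with a simple weighted scoring model. A hypothetical sketch (the criteria, weights, and per-option scores below are invented purely for illustration, not a real benchmark):

```python
# Hypothetical weighted-scoring helper for comparing storage options.
# Weights sum to 1.0; per-option scores are on a 1-5 scale (made up).
WEIGHTS = {"performance": 0.5, "cost": 0.3, "scalability": 0.2}

OPTIONS = {
    "blob_storage": {"performance": 3, "cost": 5, "scalability": 4},
    "delta_lake":   {"performance": 4, "cost": 3, "scalability": 5},
    "nosql_store":  {"performance": 5, "cost": 2, "scalability": 3},
}

def score(option_scores):
    """Weighted sum of criterion scores for one option."""
    return sum(WEIGHTS[c] * s for c, s in option_scores.items())

# Rank options from best to worst total score.
ranked = sorted(OPTIONS, key=lambda name: score(OPTIONS[name]), reverse=True)
```

The value of writing the trade-off down this way is less the arithmetic than forcing the team to agree on weights before arguing about technologies.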
Duties and Responsibilities - Data Orchestration & Event-Driven Architecture
- Build and orchestrate data pipelines across multiple technologies (e.g., dbt, Spark), employing tools like Airflow, Prefect, or Azure Data Factory for macro-level scheduling and dependency management.
- Design and integrate event-driven architectures (e.g., Kafka, RabbitMQ) to enable real-time and asynchronous data processing across the enterprise.
- Leverage Kubernetes & KEDA to orchestrate containerized jobs in response to events, ensuring scalable, automated operations for data processing tasks.
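The event-driven choreography described above can be sketched with a minimal in-memory publish/subscribe bus (plain Python standing in for Kafka or RabbitMQ; topic names and handlers are illustrative):

```python
from collections import defaultdict

class EventBus:
    """Tiny in-memory pub/sub bus illustrating event-driven choreography."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        # Every subscriber to the topic reacts independently.
        for handler in self._handlers[topic]:
            handler(payload)

bus = EventBus()
processed = []

# Downstream consumers react to events rather than being invoked by a
# central orchestrator -- the essence of choreography.
bus.subscribe("file.landed", lambda e: processed.append(f"validated:{e['path']}"))
bus.subscribe("file.landed", lambda e: processed.append(f"indexed:{e['path']}"))

bus.publish("file.landed", {"path": "raw/orders.parquet"})
```

With KEDA, the same pattern runs at infrastructure level: consumers are containerized jobs scaled from zero in response to queue depth rather than in-process callbacks.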
Duties and Responsibilities - Scrum Methodologies
- Participate fully in Scrum ceremonies, leveraging tools like JIRA and Confluence to track progress and collaborate with the team.
- Provide input on sprint planning, refinement, and retrospectives to continuously improve team efficiency and product quality.
Duties and Responsibilities - Cloud
- Deploy and monitor data solutions in Azure, leveraging its native services for data and analytics.
Duties and Responsibilities - Collaboration & Communication
- Foster a team-oriented environment by mentoring peers, offering constructive code reviews, and sharing knowledge across the organization.
- Communicate proactively with technical and non-technical stakeholders, ensuring transparency around progress, risks, and opportunities.
- Take ownership of deliverables, driving tasks to completion and proactively suggesting improvements to existing processes.
Duties and Responsibilities - Problem Solving
- Analyze complex data challenges, propose innovative solutions, and drive them through implementation.
- Maintain high-quality standards in coding, documentation, and testing to minimize defects and maintain reliability.
- Exhibit resilience under pressure by troubleshooting critical issues and delivering results within tight deadlines.
Required Education and Experience
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field (or equivalent professional experience).
- Proven experience with Snowflake (native Snowflake application development is essential).
- Proficiency in Python for data engineering tasks and application development.
- Experience deploying and managing containerized applications using Kubernetes (preferably on Azure Kubernetes Service).
- Understanding of event-driven architectures and hands-on experience with event buses (e.g., Kafka, RabbitMQ).
- Familiarity with data orchestration and choreography concepts, including scheduling/orchestration tools (e.g., Airflow, Prefect) and eventual-consistency/distributed-systems patterns that avoid centralized orchestration at the platform level.
- Hands-on experience with cloud platforms (Azure preferred) for building and operating data pipelines.
- Solid knowledge of SQL and database fundamentals.
- Strong ability to work in a collaborative environment, including cross-functional teams in DevOps, software engineering, and analytics.
Preferred Education and Experience
- Master’s degree in a relevant technical field.
- Certifications in Azure, Snowflake, or Databricks (e.g., Microsoft Certified: Azure Data Engineer, SnowPro, Databricks Certified Data Engineer).
- Experience implementing CI/CD pipelines for data-related projects.
- Working knowledge of infrastructure-as-code tools (e.g., Terraform, ARM templates).
- Exposure to real-time data processing frameworks (e.g., Spark Streaming, Flink).
- Familiarity with data governance and security best practices (e.g., RBAC, data masking, encryption).
- Demonstrated leadership in data engineering best practices or architecture-level design.
Supervisory Responsibilities
- This position may lead project-based teams or mentor junior data engineers, but typically does not include direct, ongoing management of staff.
- Collaborates with stakeholders (Data Architects, DevOps engineers, Data Product Managers) to set technical direction and ensure high-quality deliverables.