Optiver

Site Reliability Engineer - Kafka

Sydney, Australia
Python Scala Kafka Streaming AWS Bash Java Terraform Ansible
This job is closed.
Description

WHO WE ARE

Optiver is a tech-driven trading firm and leading global market maker. For over 35 years, Optiver has been improving financial markets around the world, making them more transparent and efficient for all participants. With more than 1,600 employees in offices around the world, we’re united in our commitment to improve the market through competitive pricing, execution and thorough risk management. By providing liquidity on multiple exchanges across the world, we actively trade on 70+ exchanges, where we’re trusted to always provide accurate buy and sell pricing – no matter the market conditions. 

Optiver’s Sydney office is one of the primary players within Asian markets, trading a range of products. Established in 1996, we're an active participant on the Hong Kong, Korea, Singapore, Taiwan and Japan exchanges, and act as Optiver’s APAC head office. 

WHAT YOU'LL DO

We are seeking a skilled Site Reliability/Data Engineer with expertise in Kafka to join our team. In this role, you will focus on developing consumer/producer libraries and maintaining our data ingestion pipelines that support our mission-critical applications. You will collaborate cross-functionally, working closely with our Data Science and Engineering teams to ensure the high quality, performance, and security of our data pipelines.

As a Site Reliability/Data Engineer focused on Kafka, you will play a crucial role in the design, development and management of our Kafka-based data ingestion pipelines, ensuring they meet our organization's evolving needs while adhering to best practices. Your strong technical background in Apache Kafka, combined with your excellent communication and problem-solving skills, will contribute to the overall success of our projects and help drive our company's growth.

The ideal candidate will have a deep understanding of data pipeline architecture, ETL processes, and data quality assurance. You will not be expected to manage the deployment and maintenance of the hardware, but you should have an understanding of the underlying infrastructure. Your primary focus will be to build robust, efficient, and maintainable data pipelines to support our data-driven initiatives. You will play a critical role in helping us make data-driven decisions and meeting our company's strategic objectives.

Other responsibilities include: 

  • Design, build, and maintain data pipelines using Kafka for high availability, scalability, and performance.
  • Develop and manage consumer/producer libraries around our Kafka ecosystem.
  • Monitor Kafka pipelines and data ingestion health and performance, and optimise configurations to ensure high throughput and data quality.
  • Collaborate with teams to integrate data streaming applications and ensure optimal Kafka performance.
  • Develop and implement data quality assurance strategies around the pipelines.
  • Design strategies to deliver data to AWS S3 and support ingestion into our lakehouse infrastructure.
  • Participate in data security and access control policies for Kafka pipelines.
  • Stay current with industry best practices and evolving technology related to Kafka and data engineering.
  • Document data pipeline design, configurations, and operational procedures.
  • Provide training and support on Kafka-related technologies and best practices.
  • Evaluate and select new technologies and tools to enhance data streaming capabilities.
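As a rough illustration of the consumer/producer library work described above, here is a minimal sketch of a shared serialisation helper that a producer wrapper might use: it validates records against a schema and encodes them before they are handed to a Kafka client. All names and schema fields here are hypothetical, not Optiver's actual libraries:

```python
import json
import time

# Hypothetical required schema for records entering the pipeline.
REQUIRED_FIELDS = {"event_id", "source", "payload"}


def encode_record(record: dict) -> bytes:
    """Validate a record against the schema and encode it as UTF-8 JSON,
    ready to pass as the value of a Kafka producer's send/produce call."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"record missing required fields: {sorted(missing)}")
    # Stamp ingestion time so downstream consumers can measure pipeline lag.
    enriched = {**record, "ingested_at": time.time()}
    return json.dumps(enriched, sort_keys=True).encode("utf-8")


def decode_record(raw: bytes) -> dict:
    """Consumer-side inverse of encode_record."""
    return json.loads(raw.decode("utf-8"))
```

With a client library such as confluent-kafka, a producer wrapper would then call something like `producer.produce(topic, value=encode_record(rec))`, keeping validation and encoding consistent across every team that writes to the pipeline.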


WHO YOU ARE

  • An Australian or New Zealand Citizen, Australian Permanent Resident or able to provide evidence of full working rights.
  • You have a strong background in managing and maintaining complex Kafka clusters in large-scale production environments, with a focus on high availability, scalability, and performance.
  • You are comfortable with C++ and proficient in at least one scripting or other programming language, such as Python, Bash, Java, or Scala, and have experience with Infrastructure-as-Code (IaC) tools like Terraform, CloudFormation or Ansible.
  • You are passionate about solving challenges related to data streaming, distributed systems, and optimising Kafka infrastructure.
  • You have an excellent understanding of computer science fundamentals, such as networking, operating systems, data structures, and algorithms.
  • You have a proven ability to work through the full DevOps lifecycle, from designing and implementing infrastructure to monitoring, troubleshooting, and continuous improvement.
  • You have experience making architectural recommendations for Kafka cluster management, including security best practices, performance optimisation, and disaster recovery planning.
  • You have strong collaboration and communication skills, enabling you to work effectively with cross-functional teams and provide guidance and training on Kafka-related technologies and best practices.


WHAT YOU'LL GET

  • The chance to work alongside diverse and intelligent peers in a rewarding environment.
  • Competitive remuneration, including an attractive bonus structure and additional leave entitlements.
  • Training, mentorship and personal development opportunities.
  • Gym membership, plus weekly in-house chair massages.
  • Daily breakfast, lunch and an in-house barista.
  • Regular social events, including an annual company trip.
  • A work-from-home allowance and support.

As an intentionally flat organisation, we believe that great ideas and impact can come from everyone. We are passionate about empowering individuals and creating diverse teams that thrive. Every person at Optiver should feel included, valued and respected, because we believe our best work is done together.

Our commitment to diversity and inclusion is hardwired through every stage of our hiring process. We encourage applications from candidates from any and all backgrounds, and we welcome requests for reasonable adjustments during the process to ensure that you can best demonstrate your abilities.
