Telstra

Data Engineer

Description

Employment Type

Permanent

Closing Date

29 June 2024 11:59pm

Job Title

Data Engineer

Job Summary

As a Data Specialist, you thrive on finding patterns in large datasets and translating business requirements into structured datasets. You collaborate with your colleagues to drive data-led decision making across Telstra. By providing the right insights, in the right way and at the right time, you're central to our success.

Job Description

Key Responsibilities:

The Data Engineer's role is to coordinate and execute all activities related to the interpretation of requirements and the design and implementation of Data Analytics applications. This individual will apply proven industry and technology experience, as well as communication skills, problem-solving skills, and knowledge of best practices, to issues in the design, development, and deployment of mission-critical systems, with a focus on quality application development and delivery.

This role is key to the success of the Data Engineering capability at Telstra and will be responsible and accountable for the following: 

  • Design, develop, and maintain data pipelines using Spark, Python, Scala, and related technologies on Hadoop/Cloudera Platform. 
  • Work with high volume data and ensure data quality and accuracy. 
  • Implement data security and privacy best practices to protect sensitive data.
  • Develop and maintain documentation on data pipeline architecture, data models, and data workflows.
  • Monitor and troubleshoot data pipelines to ensure they are performing optimally. 
  • Stay up to date with the latest developments in Azure, AWS, Spark, Python, Scala, and related technologies, and apply them to solve business problems.
  • Optimize data pipelines for cost and performance.
  • Automate data processing tasks and workflows to reduce manual intervention. 
  • Work effectively in Agile feature teams.
  • Provide training and educate other team members on core capabilities, helping them deliver high-quality solutions, deliverables, and documentation.
  • Work with self-motivation to design and develop against user requirements, then test and deploy changes into production.

Technical Skills (Essential) 

  • Hands-on experience with Spark Core, Spark SQL, and SQL/Hive/Impala.
  • Exposure to the Hadoop ecosystem (HDP/Cloudera/MapR/EMR, etc.).
  • Experience working with file formats (Parquet/ORC/Avro/Delta/Hudi, etc.).
  • Experience with high-volume data processing and data streaming technologies.
  • Experience using orchestration tools like Control-M.
  • Strong experience in data modelling, schema design, and ETL development using SQL and related technologies.
  • Familiarity with data security and privacy best practices.
  • Good exposure to test-driven development (TDD).
  • Exposure to CI tools such as Git, Bitbucket, GitHub, GitLab, and Azure DevOps.
  • Exposure to CD tools such as Jenkins, Bamboo, and Azure DevOps.
  • Cloud exposure (Hadoop).
  • Experience with and knowledge of Azure data offerings (ADF, ADLS Gen2, Azure Databricks, Azure Synapse, Event Hubs, Cosmos DB, etc.) and Presto/Athena.
  • Exposure to Power BI.
  • Prior experience building, or working in a team that builds, reusable frameworks.
  • Good understanding of data architecture and design principles (Delta/Kappa/Lambda architectures).
  • Exposure to code quality practices: static and dynamic code scans.
  • Good knowledge of NoSQL databases: HBase, MongoDB, Cassandra, Cosmos DB.
  • Good knowledge of graph databases (e.g., Neo4j).
  • Experience with enterprise data management, data warehousing, data modelling, business intelligence, and data integration.
  • Expertise in SQL and stored procedures.
  • Experience designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architectures as well as high-scale or distributed RDBMSs and/or NoSQL platforms.
  • Experience working with Azure SQL Data Warehouse (Synapse) and Azure Analysis Services, or with Redshift.
  • Propose best practices and standards.
  • Translate, load, and present disparate datasets from multiple formats and sources, including JSON, XML, etc.
  • Able to propose scalable and robust solution architectures to meet business needs.
  • Able to compare tools and technologies and recommend the most suitable option.
  • Well versed in the overall IT landscape and able to analyse how different technologies integrate with each other.
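To make the SQL-based ETL work listed above concrete, here is a minimal, self-contained sketch of an extract–transform–load step with basic data-quality handling (type casting, null filtering, deduplication). It uses Python's built-in SQLite module purely for illustration; the table and column names are invented and imply nothing about Telstra's actual stack.

```python
import sqlite3

# Minimal ETL sketch: extract raw rows, transform (cast, clean,
# deduplicate), and load into a target table.
# All names here are illustrative, not from any real system.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a raw staging table with messy data
cur.execute("CREATE TABLE raw_events (id INTEGER, amount TEXT)")
cur.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [(1, "10.5"), (1, "10.5"), (2, None), (3, "7.25")],
)

# Transform + load: cast text to numeric, drop nulls, deduplicate
cur.execute("""
    CREATE TABLE clean_events AS
    SELECT DISTINCT id, CAST(amount AS REAL) AS amount
    FROM raw_events
    WHERE amount IS NOT NULL
""")

rows = cur.execute("SELECT id, amount FROM clean_events ORDER BY id").fetchall()
print(rows)  # [(1, 10.5), (3, 7.25)]
```

In production this pattern would typically run in Spark SQL or a warehouse engine rather than SQLite, with the same shape: stage raw data, apply quality rules in SQL, and write a clean target table.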

Programming / Databases (Tools) - Java, Python, Scala, SQL stored procedures, multi-tenanted databases, Spark

Frameworks & Methodologies - Scaled Agile, DevOps, High Performance Data Engineering 

Organizational skills 

Proven high level of initiative, drive, and enthusiasm (essential).

Time management and the ability to adhere to set schedules.

Communication Skills

Demonstrated high level of written and oral communication skills.

Excellence in customer service and consulting skills. 

Strong verbal communication skills. 

Strong listening and questioning skills.
