Senior Data Engineer

  • Canberra
  • Estimated start date 3/3/2025
  • 4-month contract with a 12-month extension option
  • Baseline security clearance is required for this role
  • This role is Canberra-based and candidates must be in the office a minimum of three days per week

The Department is looking for a Senior Data Engineer to join the Digital Transformation Program, working across several data and analytics platforms. The successful candidate will develop and optimise data pipelines in Azure Databricks, with a strong focus on Python and SQL, and will bring expertise in Azure Data Factory, Azure DevOps, CI/CD, and Git version control, as well as a deep understanding of Kimball dimensional modelling and the Medallion architecture. This role requires strong collaboration skills to translate business requirements into effective technical solutions.

Key Responsibilities:

  • Develop, optimise, and maintain data pipelines using Python and SQL within Azure Databricks Notebooks.
  • Design and implement ETL/ELT workflows in Azure Data Factory, ensuring efficient data transformation and loading.
  • Apply Kimball dimensional modelling and Medallion architecture best practices for scalable and structured data solutions.
  • Collaborate with team members and business stakeholders to understand data requirements and translate them into technical solutions.
  • Implement and maintain CI/CD pipelines using Azure DevOps, ensuring automated deployments and version control with Git.
  • Monitor, troubleshoot, and optimise Databricks jobs and queries for performance and efficiency.
  • Work closely with data analysts and business intelligence teams to provide well-structured, high-quality datasets for reporting and analytics.
  • Ensure compliance with data governance, security, and privacy best practices.
  • Contribute to code quality improvement through peer reviews, best practices, and knowledge sharing.

Preferred Skills & Experience:

  • Strong proficiency in Python for data transformation, automation, and pipeline development.
  • Advanced SQL skills for query optimisation and performance tuning in Databricks Notebooks.
  • Hands-on experience with Azure Databricks for large-scale data processing.
  • Expertise in Azure Data Factory for orchestrating and automating data workflows.
  • Experience with Azure DevOps, including setting up CI/CD pipelines and managing code repositories with Git.
  • Strong understanding of Kimball dimensional modelling (fact and dimension tables, star/snowflake schemas) for enterprise data warehousing.
  • Knowledge of Medallion architecture for structuring data lakes with bronze, silver, and gold layers.
  • Familiarity with data modelling best practices for analytics and business intelligence.
  • Strong analytical and problem-solving skills with a proactive approach to identifying and resolving issues.
  • Excellent collaboration and communication skills, with the ability to engage both technical and business stakeholders effectively.
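For candidates unfamiliar with the term, the Medallion architecture referenced above can be sketched in plain Python. This is a minimal illustration only; the data and field names are hypothetical, and in practice each layer would be a Delta table processed in Databricks rather than an in-memory list.

```python
# Minimal illustration of the Medallion pattern:
# bronze (raw) -> silver (cleaned/deduplicated) -> gold (aggregated).
# All data and field names here are hypothetical.

# Bronze layer: raw ingested records, possibly with duplicates and bad rows
bronze = [
    {"order_id": 1, "region": "ACT", "amount": "120.50"},
    {"order_id": 1, "region": "ACT", "amount": "120.50"},  # duplicate
    {"order_id": 2, "region": "NSW", "amount": "80.00"},
    {"order_id": 3, "region": "ACT", "amount": None},      # invalid row
]

# Silver layer: drop invalid rows, deduplicate on order_id, cast types
seen = set()
silver = []
for row in bronze:
    if row["amount"] is None or row["order_id"] in seen:
        continue
    seen.add(row["order_id"])
    silver.append({**row, "amount": float(row["amount"])})

# Gold layer: business-level aggregate, e.g. total sales per region,
# ready for consumption by reporting and analytics teams
gold = {}
for row in silver:
    gold[row["region"]] = gold.get(row["region"], 0.0) + row["amount"]

print(gold)  # {'ACT': 120.5, 'NSW': 80.0}
```

The same progression of refinement applies whether the layers are built with Databricks Notebooks, Azure Data Factory pipelines, or a mix of both.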

Apply Now
