Location: Remote, United Kingdom
Duration: 6 months
Rate: Up to £550 per day (DOE)
IR35 Status: Outside IR35
We are looking for a highly skilled Senior Data Engineer to join our client's team on an initial 6-month contract, Outside IR35. You will build and optimise data pipelines and architecture, supporting data transformation, integration, and delivery in a consultancy environment. The role suits someone with a strong analytical mindset, a deep understanding of data engineering practices, and the ability to adapt to complex client needs.
Key Responsibilities:
- Design, develop and maintain scalable and high-performance data pipelines for structured and unstructured data.
- Implement data integration, extraction, transformation, and loading processes using Apache Spark and Python.
- Develop and maintain dataset documentation and data modelling standards.
- Work with stakeholders to understand business requirements and translate them into technical data solutions.
- Ensure system performance through query optimisation, partitioning, and indexing strategies.
- Contribute to the development and deployment of Power BI dashboards and reports, ensuring appropriate data access and Row-Level Security.
- Follow DevOps and CI/CD practices, maintaining source control using Git and implementing pull-request workflows.
Key Skills and Experience:
- Strong proficiency in SQL, with deep knowledge of indexing, data partitioning, and performance tuning for large datasets.
- Proven recent experience with MS Fabric is essential.
- Proven expertise in Python with a focus on data libraries such as Pandas, PySpark, and PyArrow.
- Comprehensive experience working with Apache Spark, including structured streaming, batch processing, and Delta Lake architecture.
- Advanced understanding of Power BI visualisation tools, including DAX, data modelling best practices, and implementation of Row-Level Security.
- Hands-on experience with cloud platforms, preferably Azure, including Azure Data Factory, Data Lake Storage Gen2, Synapse Analytics, and Databricks. Knowledge of AWS or GCP is also acceptable.
- Experience using version control systems such as Git and applying CI/CD pipelines in data engineering projects.