Senior Data Engineer

Türkiye Radyo Televizyon Kurumu / İstanbul / Turkey
Publish Date: 7.4.2022

TRT Network

As TRT Network, we believe in reaching our audiences on their preferred platforms through innovative formats that make us relatable to our community. Whether on our websites, on social channels like Facebook and Twitter, or on our very own digital entertainment platforms, we’re looking for a candidate who understands and shares our goal of creating a unique user/audience experience for each platform.

Data & Analytics Team

The Digital Data & Analytics Team consists of Data Science, Data Engineering and Machine Learning teams, which work very closely with each other as well as with external business units from the content and product teams. Our role is to collect, transfer, transform, aggregate and use data to provide proactive services to business units with both holistic and detail-oriented approaches. We visualise, model and analyse data to extract useful insights that are extremely valuable to our partners in strategic decision-making.

This Role: Senior Data Engineer

The Senior Data Engineer reports to the Data Engineering Team Lead. He/She is self-motivated and responsible for the development and maintenance of our ETL pipelines. A large volume of data streaming in real time from our users constitutes the main source, but we also handle many different kinds of third-party data integration tasks. A fast and robust data pipeline, with storage available for real-time and bulk analyses, must be maintained for the rest of the organisation. The successful candidate must have demonstrated experience in complex ETL tasks with distributed computing technologies across a mix of on-prem and cloud platforms like AWS and GCP. We also expect excellence in microservice architecture patterns, as you will design and build robust, high-performing REST APIs that serve queries for batch and real-time operations.

KEY RESPONSIBILITIES

He/She will be responsible for:

●     Mentoring young talent towards their specialities

●     Designing, building and maintaining our data architecture to address demanding requirements

●     Delivering well-designed, robust and efficient implementations of data pipelines for complex ETL requirements

●     Monitoring the quality of deliverables

●     Creating and updating documentation for all services

●     Continuously monitoring services and responding to alerts on any interruption

●     Ensuring the availability of real-time streaming data as well as historical bulk data, in both structured and unstructured forms

●     Aggregating data from public and private APIs as well as scraping unstructured data sources

●     Contributing to continuous improvement within the organisation by sharing your knowledge and experience

KEY SKILLS, EXPERIENCE AND EDUCATION

Although an academic background in a related subject would be appreciated, what is more important is a passion for creating robust workflows that work flawlessly under complex requirements. We believe all team members must have a problem-solving personality and an open mind. Familiarity with Data Analytics and Machine Learning objectives is a nice-to-have for this position.

Required Qualifications:

●     Excellent analytical skills, strategic thinking and creative problem-solving capabilities

●     Strong intellectual curiosity and proactive thinking

●     A critical thinker with very strong attention to detail

●     Degree in a computational field such as mathematics, computer science or another related data-centric area

●     More than 5 years of hands-on experience in data engineering tasks

●     Excellent command of Python and appreciation of clean coding practices

●     Strong knowledge of software architecture

●     Coding experience in Java is a plus

●     Excellent programming skills, computational complexity knowledge and computer architecture awareness

●     Working experience with big data using distributed systems such as Spark, EMR, Hadoop

●     Application experience with real-time streaming data using technologies like Redis, Kafka and Elasticsearch

●     Expertise in implementing data infrastructures using GCP (Pub/Sub, BigQuery, Dataflow, Dataproc, etc.), AWS (Kinesis, Glue, Redshift, Lambda, Athena, ECS, EC2, etc.) or Microsoft Azure equivalents (with an automation tool, preferably Terraform)

●     Experience deploying scalable, production-grade REST APIs on Kubernetes

●     Data warehousing experience using Redshift, BigQuery, etc.

●     Experience working in Agile teams (Scrum, Kanban)

●     Proven experience with development processes (Git, Kanban, DevOps, CI/CD)

●     Proficiency in Linux environment

●     Strong interpersonal credibility, reliability, and service mentality with high ethical standards

●     Good written and oral communication skills in English

