As TRT Network, we believe in reaching our audiences on their preferred platforms through innovative formats that make us relatable to our community. Whether on our websites, on social channels like Facebook and Twitter, or on our own digital entertainment platforms, we're looking for a candidate who understands and shares our goal of creating a unique user/audience experience for each platform.
Data & Analytics Team
The Digital Data & Analytics Team consists of Data Science, Data Engineering and Machine Learning teams, which work closely with each other as well as with external business units from the content and product teams. Our role is to collect, transfer, transform, aggregate and use data to provide proactive services to business units with both holistic and detail-oriented approaches. We visualise, model and analyse data to extract insights that are extremely valuable in strategic decision-making.
This Role: Data Engineer - Team Lead
The Data Engineering Team Lead reports to the Data & Analytics Manager. He/She is self-motivated and responsible for developing and managing our data aggregation and transformation pipelines, and for creating a robust environment in which the rest of the organisation can address the questions driven by our digital content and product strategy teams. The successful candidate must have demonstrated experience in managing and leading end-to-end ETL tasks using distributed computing technologies across a mix of on-prem and cloud platforms such as AWS and GCP. We expect excellence in microservice architecture patterns through robust, high-performing REST APIs, and experience in bulk, batch and real-time streaming operations on structured and unstructured data. He/She has preferably held a leadership position in the industry before.
He/She will be responsible for:
● Team building and being a motivational leader for a growing team
● Mentoring young talents towards their speciality
● Designing, building and maintaining our data architecture to address demanding requirements
● Delivering well-designed, robust and efficient implementation of data pipelines for complex ETL requirements
● Monitoring the quality of deliverables
● Creating and updating documentation for all services
● Continuously monitoring services and responding to alerts on any interruption
● Ensuring the availability of real-time streaming data as well as historical bulk data, in both structured and unstructured forms
● Aggregating data from public and private APIs as well as scraping unstructured data sources
● Contributing to continuous improvement within the organisation by sharing your knowledge and experience
KEY SKILLS, EXPERIENCE AND EDUCATION
Although an academic background in a related subject would be appreciated, what is more important is a passion for creating robust workflows that handle complex requirements flawlessly. As this is a leadership role, we also look for a problem-solving personality with motivational leadership and mentoring capabilities. Familiarity with Data Analytics and Machine Learning objectives is a nice-to-have for this position.
● Outstanding interpersonal and communication skills — you’ll be meeting with stakeholders to address their needs using your data science skills
● Previous experience in technical team management is preferred
● Excellent analytical skills, strategic thinking and creative problem-solving capabilities
● Strong intellectual curiosity and proactive thinking
● A critical thinker with very strong attention to detail
● Degree in a computational field related to mathematics, computer science or other related data-centric areas
● More than 7 years of hands-on experience in data engineering tasks
● Excellent command of Python and appreciation of clean coding practices
● Good command of software architecture knowledge is required
● Coding experience in Java is a plus
● Excellent programming skills, computational complexity knowledge and computer architecture awareness
● Working experience with big data using distributed systems such as Spark, EMR, Hadoop
● Application experience with real-time streaming data using technologies like Redis, Kafka and Elasticsearch
● Expertise in implementing data infrastructures using GCP (Pub/Sub, BigQuery, Dataflow, Dataproc, etc.), AWS (Kinesis, Glue, Redshift, Lambda, Athena, ECS, EC2, etc.) or Microsoft Azure equivalents (with an automation tool, preferably Terraform)
● Production-level experience deploying scalable REST APIs in containers orchestrated with Kubernetes
● Data warehousing experience using Redshift, BigQuery, etc.
● Experience working in Agile teams (Scrum, Kanban)
● Proven experience with development processes (Git, Kanban, DevOps, CI/CD)
● Proficiency in Linux environments
● Strong interpersonal credibility, reliability, and service mentality with high ethical standards
● Good written and oral communication skills in English