Are you looking for a challenge that will give you experience in agile/scrum environments? Are you eager to work with cutting-edge frontend and backend technologies? Do you want to learn about the latest technologies in the OTT video streaming ecosystem? Do you want to work in an international environment? STARZ PLAY offers you all of these challenges and more!
STARZ PLAY is a subscription video on demand service headquartered in Dubai and available in 19 countries in the MENA region. Our service streams thousands of blockbuster Hollywood movies, TV shows, documentaries, kids’ entertainment and dedicated Arabic content to subscribers across the region, making us the fastest-growing SVOD service in MENA.
Following the huge success of our expansion into new markets, we are opening an R&D Center in Lahore, Pakistan. Come join us and help us build great new technology in Lahore!
We are looking for a savvy Data Engineer responsible for the maintenance, improvement, cleaning, and manipulation of data in the business’s operational and analytics databases. The Data Engineer works with software engineers, data analysts, data scientists, and data warehouse engineers to understand and help implement database requirements, analyze performance, and troubleshoot any existing issues. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.
The Data Engineer must be an expert in SQL development, supporting the Data and Analytics team in database design, data flow, and analysis activities. The position also plays a key role in the development and deployment of innovative big data platforms for advanced analytics and data processing. The candidate should be flexible and adaptable to a dynamic environment, willing to learn new technologies and build PoCs on their own if necessary.
Data Engineer responsibilities
· Delivery: The Data Engineer is tasked with designing and developing scalable ETL packages from the business’s source systems, and with building ETL routines that populate databases from those sources and create aggregates.
· The Data Engineer will be responsible for enabling and running historical data migrations across different databases and servers, for example migrating data from MySQL servers to Amazon Redshift.
· The Data Engineer will proactively implement data quality controls and validations to ensure the accuracy of the data.
· Support/Collaborative Role: The Data Engineer develops and implements scripts for database maintenance, monitoring, and performance tuning to be applied across the business, and plays a supporting role to various departments across the business and technology teams.
· Analytics: The Data Engineer performs ad-hoc analyses of data stored in the Redshift, MySQL, and Cassandra databases and writes SQL scripts, stored procedures, functions, and views. In this position, the Data Engineer troubleshoots data issues within and across the business and presents solutions, proactively analyzing and evaluating the business’s databases to identify and recommend improvements and optimizations.
· Knowledge: It is also the Data Engineer’s duty to keep up with industry trends and best practices, advising on new and improved data engineering strategies that will drive departmental performance in data governance.
For all candidates
· 3+ years of experience developing ETL processes. Experience with Talend Open Studio is a plus.
· 3+ years with database/data warehouse technologies and strong SQL knowledge. Experience with MySQL/Redshift preferred.
· 3+ years developing Java applications. Experience with microservices and message queues is a bonus.
· Experience with Linux/UNIX shell scripting and with configuring cron jobs or similar job-scheduling tools is desirable.
· Experience with the Amazon Web Services stack (S3, EC2, Glacier, Redshift…) is desired.