Primary Responsibilities:

Candidates should have strong knowledge of and interest in big data technologies, along with a solid background in data engineering.

  • Build data pipeline frameworks to automate high-volume batch and real-time data delivery
  • Continuously integrate and ship code into our cloud production environments
  • Work directly with Product Owners and customers to deliver data products in a collaborative and agile environment
  • Assist in creating architectures using cloud-native technologies 
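To illustrate the "data pipeline framework" idea above, here is a minimal sketch in pure Python of composable pipeline steps. All names (`pipeline`, `drop_nulls`, `tag_source`) are hypothetical; a production framework would target engines such as Spark or AWS Glue rather than plain generators.

```python
# Illustrative only: a tiny composable-pipeline pattern in pure Python.
# Step and field names here are hypothetical, not from the job posting.
from typing import Callable, Iterable

Step = Callable[[Iterable[dict]], Iterable[dict]]

def pipeline(*steps: Step) -> Step:
    """Compose steps left-to-right into one callable pipeline."""
    def run(records: Iterable[dict]) -> Iterable[dict]:
        for step in steps:
            records = step(records)
        return records
    return run

def drop_nulls(records: Iterable[dict]) -> Iterable[dict]:
    # Filter out records containing any null field value.
    return (r for r in records if all(v is not None for v in r.values()))

def tag_source(records: Iterable[dict]) -> Iterable[dict]:
    # Annotate each record with its delivery mode.
    return ({**r, "source": "batch"} for r in records)

etl = pipeline(drop_nulls, tag_source)
out = list(etl([{"id": 1}, {"id": None}]))
# out == [{"id": 1, "source": "batch"}]
```

The same composition pattern applies whether the steps run as a scheduled batch job or against a streaming micro-batch.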

Your responsibilities will include developing sustainable, data-driven solutions with current and next-generation data technologies to drive our business and technology strategies:

  • Design AWS data ingestion frameworks and pipelines based on the specific needs defined by Product Owners and user stories
  • Experience building data lakes on AWS; hands-on experience with S3, EKS, ECS, AWS Glue, AWS KMS, AWS Firehose, and EMR
  • Build robust, scalable, production-ready data pipelines
  • Unit test pipelines to ensure high quality
  • Leverage Databricks Delta Lake functionality as needed
  • Leverage Databricks Lakehouse functionality as needed to build common/conformed layers within the data lake
  • Build data APIs and data delivery services to support critical operational and analytical applications
  • Contribute to the design of robust systems with an eye on long-term maintenance and support of the application
  • Leverage reusable code modules to solve problems across the team and organization
  • Handle multiple functions and roles across projects and Agile teams
  • Define, execute, and continuously improve our internal software architecture processes
  • Experience with NoSQL databases such as Cassandra, HBase, and Elasticsearch
  • Hands-on experience leveraging CI/CD to rapidly build and test application code
  • Expertise in data governance and data quality
  • Experience working with PCI data and with data scientists is a plus
  • Hands-on experience with any of the following: PySpark, Python, R, Scala
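The "unit test pipelines" responsibility above can be made concrete with a small sketch: a record-cleaning transformation and a test asserting its behavior. The function and field names are illustrative assumptions, not part of the posting.

```python
# Illustrative only: unit-testing a single pipeline transformation.
# clean_record and its fields are hypothetical examples.

def clean_record(record: dict) -> dict:
    """Normalize a raw ingestion record: trim strings, coerce amount to float."""
    return {
        "id": str(record["id"]).strip(),
        "amount": float(record.get("amount") or 0),
        "country": str(record.get("country", "")).strip().upper(),
    }

def test_clean_record():
    raw = {"id": " 42 ", "amount": "19.99", "country": " pk "}
    cleaned = clean_record(raw)
    assert cleaned == {"id": "42", "amount": 19.99, "country": "PK"}

if __name__ == "__main__":
    test_clean_record()
```

Testing each transformation in isolation like this keeps failures localized before the pipeline runs against full-volume data in the cloud environment.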

Job Details

Total Positions:
5 Posts
Job Shift:
Second Shift (Afternoon)
Job Type:
Job Location:
Gender:
No Preference
Minimum Education:
Bachelors
Career Level:
Experienced Professional
Minimum Experience:
5 Years
Apply Before:
Oct 25, 2021
Posting Date:
Sep 25, 2021

AirCod Technologies

Information Technology · 51-100 employees - Lahore

AirCod Technologies is a leading multinational software house working with prominent market technologies to serve our valued clients with optimal solutions and products built to meet their business needs. At AirCod we develop efficient and responsive business models through proactive, multi-dimensional solutions. We empower enterprises with our automated framework approaches and business intelligence. AirCod is a state-of-the-art, technology-oriented software house focused on quality of service, automation, efficiency, maintainability, and reliability of the product. Specialties: Web Development, OS Development, App Development, Graphic Solutions, Technical Consultancy, Data Scientist.

