This internship offers practical experience in data engineering through the development, testing, and maintenance of data processing pipelines and workflows. You will work closely with senior engineers, and collaborate with data scientists and analysts to understand data requirements and assist in designing data models and schemas. The role focuses on optimizing data pipelines, storage systems, and analytical workflows, and provides exposure to big data tools and cloud platforms, particularly the Azure and Hadoop ecosystems. It is ideal for candidates eager to build foundational skills in a dynamic, collaborative environment.

Key Responsibilities

- Assist in developing, testing, and maintaining data processing pipelines and workflows under the guidance of senior engineers.
- Collaborate with data scientists and analysts to understand data needs and support the design of data models and schemas.
- Help optimize and tune existing data pipelines, storage systems, and analytical workflows to improve efficiency.
- Monitor data processing jobs, identify issues, and assist in troubleshooting performance and data quality problems.
- Continuously learn and stay updated on big data tools, technologies, and cloud platforms, with a focus on Azure and Hadoop components.
- Work with cross-functional teams to gather technical requirements and contribute to documentation efforts.
- Prepare and maintain documentation such as design notes, data flow diagrams, and standard operating procedures.
- Follow best practices, development standards, and data engineering processes under the direction of senior engineers.

Required Qualifications

- Currently pursuing or recently completed a Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
- Basic understanding of Azure services including Azure Data Lake, Azure Data Factory, and Azure Databricks, or a strong willingness to learn these technologies.
- Familiarity with Hadoop ecosystem concepts such as HDFS, Hive, Spark, or MapReduce through academic coursework or project experience.
- Knowledge of programming languages like Python, Java, or Scala at an academic or project level.
- Basic understanding of SQL; familiarity with NoSQL concepts is a plus.
- Understanding of ETL processes, data modeling, or data warehousing concepts is preferred.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to work effectively in a fast-paced, collaborative team environment.
- Good communication skills, with the ability to explain ideas clearly to both technical and non-technical team members.

Preferred Qualifications and Benefits

- A strong desire to learn and work with big data frameworks such as Kafka, Spark, and Azure Synapse/HDInsight.
- This is a paid internship with the potential to transition into a full-time role upon successful completion.
- The position is full-time and requires in-person attendance.
- A Bachelor’s degree is the preferred educational qualification.

This internship serves as a solid foundation for a career in data engineering, combining hands-on experience with mentorship and opportunities for professional growth.

Job Details

Total Positions: 1 Post
Job Shift: First Shift (Day)
Job Type:
Job Location:
Gender: No Preference
Age: 18 - 65 Years
Minimum Education: Bachelors
Career Level: Entry Level
Maximum Experience: 2 Years
Apply Before: Dec 27, 2025
Posting Date: Nov 26, 2025

Metis International Pvt

11-50 employees · Islamabad
