108 Reporting Engineer jobs in Pakistan

Data Engineer

Lahore, Punjab Dubizzle Labs

Posted 3 days ago


Job Description

You will be participating in exciting projects covering the end-to-end data lifecycle – from raw data integrations with primary and third-party systems, through advanced data modeling, to state-of-the-art data visualization and development of innovative data products.

You will have the opportunity to learn how to build and work with both batch and real-time data processing pipelines. You will work in a modern cloud-based data warehousing environment alongside a team of diverse, intense, and interesting co-workers. You will liaise with other departments – such as product & tech, the core business verticals, trust & safety, finance, and others – to enable them to be successful.

Your responsibilities

  • Design, implement, and support data warehousing
  • Raw data integrations with primary and third-party systems
  • Data warehouse modeling for operational & application data layers
  • Development in Amazon Redshift cluster
  • SQL development as part of agile team workflow
  • ETL design and implementation in Matillion ETL
  • Real-time data pipelines and applications using serverless and managed AWS services such as Lambda, Kinesis, API Gateway, etc.
  • Design and implementation of data products enabling data-driven features or business solutions
  • Building data dashboards and advanced visualizations in Sisense for Cloud Data Teams (formerly Periscope Data), with a focus on UX, simplicity, and usability
  • Working with other departments on data products – e.g. product & technology, marketing & growth, finance, core business, advertising, and others
  • Being part of, and contributing to, a strong team culture and an ambition to be on the cutting edge of big data
  • Evaluate and improve data quality by implementing test cases, alerts, and data quality safeguards
  • Living the team values: Simpler. Better. Faster.
  • Strong desire to learn
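A real-time pipeline of the kind described above often begins with a Kinesis-triggered Lambda function. A minimal sketch of such a handler, assuming JSON events with an illustrative `event_type` field (the event shape and names are not from the posting):

```python
import base64
import json

def handler(event, context):
    """Decode Kinesis records delivered to Lambda and keep the events we care about.

    `event` follows the standard Kinesis-to-Lambda payload shape, where each
    record's data arrives base64-encoded. The `page_view` filter is illustrative.
    """
    processed = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        doc = json.loads(payload)
        if doc.get("event_type") == "page_view":  # keep only relevant events
            processed.append(doc)
    # In a real pipeline these would be forwarded (e.g. to Firehose or Redshift);
    # here we just report how many passed the filter.
    return {"processed": len(processed)}
```

In production the handler would forward the filtered events downstream rather than return a count; the skeleton above only shows the decode-parse-filter shape.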

Required minimum experience (must)

  • 1–2 years of experience in data processing, analysis, and problem-solving with large amounts of data
  • Good SQL skills across a variety of relational data warehousing technologies especially in cloud data warehousing (e.g. Amazon Redshift, Google BigQuery, Snowflake, Vertica, etc.)
  • 1+ years of experience with one or more programming languages, especially Python
  • Ability to communicate insights and findings to a non-technical audience
  • Written and verbal proficiency in English
  • Entrepreneurial spirit and the ability to think creatively; highly driven and self-motivated; strong curiosity and a drive for continuous learning
  • Top-of-class university technical degree, such as computer science, engineering, math, or physics

Additional experience (strong plus)

  • Experience working with customer-centric data at big-data scale, preferably in an online/e-commerce context
  • Experience with modern big data ETL tools (e.g. Matillion)
  • Experience with AWS data ecosystem (or other cloud providers)
  • Track record in business intelligence solutions, building and scaling data warehouses, and data modeling
  • Tagging, tracking, and reporting with Google Analytics 360
  • Knowledge of modern real-time data pipelines (e.g. serverless frameworks, Lambda, Kinesis, etc.)
  • Experience with modern data visualization platforms such as Periscope, Looker, Tableau, Google Data Studio, etc.
  • Linux, bash scripting, JavaScript, HTML, XML
  • Docker containers and Kubernetes



Data Engineer

Sindh, Sindh NorthBay - Pakistan

Posted 3 days ago


Job Description

Job Title: Data Engineer


Location: Karachi, Lahore, Islamabad (Hybrid)

Experience: 5+ Years

Job Type: Full-Time

Job Overview

We are looking for a highly skilled and experienced Data Engineer with a strong foundation in Big Data, distributed computing, and cloud-based data solutions. This role demands a solid understanding of end-to-end data pipelines, data modeling, and advanced data engineering practices across diverse data sources and environments. You will play a pivotal role in building, deploying, and optimizing data infrastructure and pipelines in a scalable cloud-based architecture.

Key Responsibilities

  • Design, develop, and maintain large-scale data pipelines using modern big data technologies and cloud-native tools.
  • Build scalable and efficient distributed data processing systems using Hadoop, Spark, Hive, and Kafka.
  • Work extensively with cloud platforms (preferably AWS) and services like EMR, Glue, Lambda, Athena, S3.
  • Design and implement data integration solutions pulling from multiple sources into a centralized data warehouse or data lake.
  • Develop pipelines using DBT (Data Build Tool) and manage workflows with Apache Airflow or Step Functions.
  • Write clean, maintainable, and efficient code using Python, PySpark, or Scala for data transformation and processing.
  • Build and manage relational and columnar data stores such as PostgreSQL, MySQL, Redshift, Snowflake, HBase, ClickHouse.
  • Implement CI/CD pipelines using Docker, Jenkins, and other DevOps tools.
  • Collaborate with data scientists, analysts, and other engineering teams to deploy data models into production.
  • Drive data quality, integrity, and consistency across systems.
  • Participate in Agile/Scrum ceremonies and utilize JIRA for task management.
  • Provide mentorship and technical guidance to junior team members.
  • Contribute to continuous improvement by making recommendations to enhance data engineering processes and architecture.
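Pipelines like these are usually written to be idempotent, so replaying a batch updates rows instead of duplicating them. A small upsert sketch, using sqlite3 as a stand-in for Redshift or Snowflake (the `orders` table and its columns are illustrative, not from the posting):

```python
import sqlite3

def upsert_orders(conn, rows):
    """Idempotently load (order_id, status, amount) rows: re-running the same
    batch updates rows in place rather than inserting duplicates."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS orders (
            order_id INTEGER PRIMARY KEY,
            status   TEXT,
            amount   REAL
        )""")
    conn.executemany("""
        INSERT INTO orders (order_id, status, amount)
        VALUES (?, ?, ?)
        ON CONFLICT(order_id) DO UPDATE SET
            status = excluded.status,
            amount = excluded.amount
        """, rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
upsert_orders(conn, [(1, "new", 10.0), (2, "new", 5.0)])
upsert_orders(conn, [(1, "shipped", 10.0)])  # replay updates row 1, no duplicate
```

Redshift has no native `ON CONFLICT`, so the same idea is typically expressed there as a staging-table `MERGE` or delete-then-insert; the sketch only shows the idempotency principle.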

Required Skills & Experience

  • 5+ years of hands-on experience as a Data Engineer
  • Deep knowledge of Big Data technologies – Hadoop, Spark, Hive, Kafka.
  • Expertise in Python, PySpark and/or Scala.
  • Proficient with data modeling, SQL scripting, and working with large-scale datasets.
  • Experience with distributed storage like HDFS and cloud storage (e.g., AWS S3).
  • Hands-on experience with data orchestration tools like Apache Airflow or AWS Step Functions.
  • Experience working in AWS environments with services such as EMR, Glue, Lambda, Athena.
  • Familiarity with data warehousing concepts and experience with tools like Redshift, Snowflake (preferred).
  • Exposure to tools like Informatica, Ab Initio, and Apache Iceberg is a plus.
  • Knowledge of Docker, Jenkins, and other CI/CD tools.
  • Strong problem-solving skills, initiative, and a continuous learning mindset.

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
  • Experience with open table formats such as Apache Iceberg.
  • Hands-on experience with Ab Initio (GDE, Conduct>It) or Informatica tools.
  • Knowledge of Agile methodology, working experience in JIRA.

Soft Skills

  • Self-driven, proactive, and a strong team player.
  • Excellent communication and interpersonal skills.
  • Passion for data and technology innovation.
  • Ability to work independently and manage multiple priorities in a fast-paced environment.

Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: IT Services and IT Consulting


Data Engineer

Lahore, Punjab ACE Money Transfer

Posted 13 days ago


Job Description

Job Title: Data Engineer

Location: Lahore / Kharian

Position Type: Full-Time

Job Overview:

This role focuses on developing and implementing data warehouse solutions across the organization while managing large sets of structured and unstructured data. The Data Engineer will analyse complex customer requirements, define data transformation rules, and oversee their implementation. The ideal candidate should have a solid understanding of data acquisition, integration, and transformation, with the ability to evaluate and recommend optimal architecture and approaches.

Key Responsibilities:

Design Data Pipelines: Develop robust data pipelines capable of handling both structured and unstructured data effectively.

Build Data Ingestion and ETL Processes: Create efficient data ingestion pipelines and ETL processes, including low-latency data acquisition and stream processing using tools like Kafka or Glue.

Develop Integration Procedures: Design and implement processes to integrate data warehouse solutions into operational IT environments.

Data Lake Management: Manage and optimize the data lake on AWS, ensuring efficient data storage, retrieval, and transformation. Implement best practices for organizing and managing raw, processed, and curated data, with scalability and future growth in mind.

Optimize SQL and Shell Scripts: Write, optimize, and maintain complex SQL queries and shell scripts to ensure efficient data processing.

Monitor and Optimize Performance: Continuously monitor system performance and recommend necessary configurations or infrastructure improvements.

Document and Present Workflows: Prepare detailed documentation, collaborate with cross-functional teams, present complete data workflows to teams, and maintain an up-to-date knowledge base.

Governance & Quality: Develop and maintain data quality checks to ensure data in the lake and warehouse remains accurate, consistent, and reliable.

Collaboration with Stakeholders: Work closely with the CTO, PMOs, and business and data analysts to gather requirements and ensure alignment with project goals.

Scope and Manage Projects: Collaborate with project managers to scope projects, create detailed work breakdown structures, and conduct risk assessments.

Research & Development (R&D): Keep up with the latest technological trends and identify innovative solutions to address customer challenges and company priorities.
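The data quality checks described under Governance & Quality often start as simple rule functions run against each batch before it lands in the warehouse. A minimal sketch; the fields and rules (transaction ID, positive amount, known currency) are illustrative assumptions, not from the posting:

```python
def check_batch(rows):
    """Return a list of (row_index, problem) pairs for rows that violate
    basic completeness and range rules."""
    problems = []
    for i, row in enumerate(rows):
        if not row.get("transaction_id"):
            problems.append((i, "missing transaction_id"))
        if row.get("amount") is None or row["amount"] <= 0:
            problems.append((i, "non-positive amount"))
        if row.get("currency") not in {"GBP", "EUR", "PKR"}:
            problems.append((i, "unknown currency"))
    return problems

batch = [
    {"transaction_id": "t1", "amount": 120.0, "currency": "GBP"},
    {"transaction_id": "",   "amount": -5.0,  "currency": "XXX"},
]
issues = check_batch(batch)  # only the second row fails, on all three rules
```

In practice these checks would feed alerts or quarantine tables rather than a returned list, but the rule-per-row shape is the common starting point.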

Skills and Qualifications:

• Bachelor’s or Master’s degree in Engineering, Computer Science, or equivalent experience.

• 4+ years of relevant experience as a Data Engineer.

• Hands-on experience with cloud platforms such as AWS, Azure, or GCP, and familiarity with their respective cloud services.

• Hands-on experience with one or more ETL tools such as Glue, Spark, Kafka, Informatica, DataStage, Talend, or Azure Data Factory (ADF).

• Strong understanding of dimensional modeling techniques, including star and snowflake schemas.

• Experience in creating semantic models and reporting mapping documents.

• Solid concepts and experience in designing and developing ETL architectures.

• Strong understanding of RDBMS concepts and proficiency in SQL development.

• Proficiency in data modeling and mapping techniques.

• Experience integrating data from multiple sources.

• Experience working in distributed environments, including clustering and sharding.

• Knowledge of big data tools like Pig, Hive, or NiFi is a plus.

• Experience with Hadoop distributions like Cloudera or Hortonworks is a plus.

• Excellent communication and presentation skills, both verbal and written.

• Ability to solve problems using a creative and logical approach.

• Self-motivated, analytical, detail-oriented, and organized, with a commitment to excellence.

• Experience in the financial services sector is a plus.
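A star schema of the kind asked for above centers a fact table on foreign keys into dimension tables, with measures aggregated by dimension attributes. A tiny illustration in sqlite3; the remittance-corridor tables and figures are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimensions: descriptive attributes, one row per entity.
    CREATE TABLE dim_corridor (corridor_id INTEGER PRIMARY KEY,
                               send_country TEXT, receive_country TEXT);
    CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, day TEXT);

    -- Fact: one row per event, foreign keys into each dimension plus measures.
    CREATE TABLE fact_transfer (
        transfer_id INTEGER PRIMARY KEY,
        corridor_id INTEGER REFERENCES dim_corridor(corridor_id),
        date_id     INTEGER REFERENCES dim_date(date_id),
        amount      REAL);

    INSERT INTO dim_corridor VALUES (1, 'UK', 'PK'), (2, 'EU', 'PK');
    INSERT INTO dim_date VALUES (1, '2024-01-01');
    INSERT INTO fact_transfer VALUES (10, 1, 1, 100.0), (11, 1, 1, 50.0), (12, 2, 1, 70.0);
""")

# Typical star-schema query: aggregate the fact, grouped by a dimension attribute.
totals = conn.execute("""
    SELECT c.send_country, SUM(f.amount)
    FROM fact_transfer f
    JOIN dim_corridor c ON c.corridor_id = f.corridor_id
    GROUP BY c.send_country
    ORDER BY c.send_country
""").fetchall()
```

A snowflake schema further normalizes the dimensions (e.g. splitting country out of `dim_corridor` into its own table); the fact table stays the same.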

ACE Money Transfer Profile:

Data Engineer

Lahore, Punjab Burq, Inc.

Posted 13 days ago


Job Description

About Burq

Burq started with an ambitious mission: turning the complex process of offering delivery into a simple turnkey solution.

We started by building the largest network of delivery networks, partnering with some of the biggest delivery companies. We then made it extremely easy for businesses to plug into our network and start offering delivery to their customers. Now, we’re powering deliveries for some of the fastest-growing companies, from retailers to startups.

It’s a big mission and now we want you to join us to make it even bigger!

We’re already backed by some of the Valley's leading venture capitalists, including Village Global, the fund whose investors include Bill Gates, Jeff Bezos, Mark Zuckerberg, Reid Hoffman, and Sara Blakely. We have assembled a world-class team all over the U.S.

We operate at scale, but we're still a small team relative to the opportunity. We have a staggering amount of work ahead. That means you have an unprecedented opportunity to grow while doing the most important work of your career.

We want people who are unafraid to be wrong and support decisions with numbers and narrative.

Responsibilities
  • Design, build, and maintain efficient and scalable data pipelines using tools like Airbyte, Airflow, and dbt, with a focus on integrating with Snowflake.
  • Manage and optimize data warehousing solutions using Snowflake, ensuring data is organized, secure, and accessible for analysis.
  • Develop and implement automations and workflows to streamline data processing and integration, ensuring seamless data flow across systems.
  • Collaborate with cross-functional teams to set up and maintain data infrastructure that supports both engineering and analytical needs.
  • Utilize Databricks and Spark for big data processing, ensuring data is processed, stored, and analyzed efficiently.
  • Monitor data streams and processes using Kafka and Monte Carlo, ensuring data quality and integrity.
  • Work closely with data analysts and other stakeholders to create, maintain, and optimize visualizations and reports using Tableau, ensuring data-driven decision-making.
  • Ensure the security and compliance of data systems, implementing best practices and leveraging tools like Terraform for infrastructure management.
  • Continuously evaluate and improve data processes, staying current with industry best practices and emerging technologies, with a strong emphasis on data analytics and visualization.
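The monitoring responsibility above often starts with a freshness check: compare the newest event timestamp a pipeline has landed against an allowed lag, and alert when it falls behind. A minimal sketch; the 30-minute threshold and names are illustrative assumptions:

```python
from datetime import datetime, timedelta

def is_fresh(latest_event_time, now, max_lag=timedelta(minutes=30)):
    """True if the newest record landed within the allowed lag window."""
    return (now - latest_event_time) <= max_lag

now = datetime(2024, 1, 1, 12, 0)
ok = is_fresh(datetime(2024, 1, 1, 11, 45), now)    # 15 minutes behind: fresh
stale = is_fresh(datetime(2024, 1, 1, 10, 0), now)  # 2 hours behind: stale
```

Observability platforms automate this kind of check (plus volume and schema checks) across many tables, but the underlying comparison is this simple.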
Requirements
  • Proficiency in SQL and experience with data visualization tools such as Tableau.
  • Hands-on experience with data engineering tools and platforms including Snowflake, Airbyte, Airflow, and Terraform.
  • Strong programming skills in Python and experience with data transformation tools like dbt.
  • Familiarity with big data processing frameworks such as Databricks and Apache Spark.
  • Knowledge of data streaming platforms like Kafka.
  • Experience with data observability and quality tools like Monte Carlo.
  • Solid understanding of data warehousing, data pipelines, and database management, with specific experience in Snowflake.
  • Ability to design and implement automation, workflows, and data infrastructure.
  • Strong analytical skills with the ability to translate complex data into actionable insights, particularly using Tableau.
  • Excellent problem-solving abilities and attention to detail.
Benefits

Investing in you

  • Competitive salary
  • Medical
  • Educational courses

Generous Time Off

At Burq, we value diversity. We are an equal opportunity employer: we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.



Data Engineer

Lahore, Punjab Ili.Digital Ag

Posted 13 days ago


Job Description

Data Engineer with 2+ years of experience in AI, Python, SQL, Spark, Azure, and Databricks. Skilled in scalable data infrastructure and pipeline design. English required.

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Minimum of 2 years of experience in data engineering, specifically in large-scale AI projects and production applications.
  • Strong proficiency in Python, SQL, Spark, APIs, Databricks, and Azure, along with cloud, data, and solution architecture skills.
  • Deep understanding of designing and building scalable and robust data infrastructure.
  • Experience with at least one public cloud provider.
  • Strong understanding of system architecture and data pipeline construction.
  • Familiarity with machine learning models and data processing requirements.
  • Team player with analytical and problem-solving skills.
  • Good communication skills in English.
  • Optional:
  • Expertise in distributed systems like Kafka and orchestration systems like Kubernetes.
  • Basic knowledge in Data Lake/Data Warehousing/Big Data tools, Apache Spark, RDBMS, NoSQL, Knowledge Graph.
Responsibilities
  • Design, build, and manage data pipelines, integrating multiple data sources to support company goals.
  • Develop and maintain large-scale data processing systems and machine learning pipelines, ensuring data availability and quality.
  • Implement systems for data quality and consistency, collaborating with Data Science and IT teams.
  • Ensure compliance with security guidelines and SDLC processes.
  • Collaborate with Data Science team to provide necessary data infrastructure.
  • Lead and manage large-scale AI projects, working with cloud platforms for deployment.
  • Maintain and optimize databases, design schemas, and improve data flow across the organization.
About ILI Digital

“To be ILI” means traveling far to reinvigorate innovation. It symbolizes vocation, commitment, and passion. We are a dedicated taskforce focused on sensing and initiating innovation.

Like a lion in the wild, constantly hunting and alert, we stay focused and agile, unlike a lion in captivity which loses its hunting instinct due to guaranteed food supply.

We offer a steep learning curve, development opportunities, and a flexible, positive work environment.

Modern office with design furniture, rooftop terrace with city view.

International Team

Supportive, open-minded teammates. Enjoy after-work events, barbecues, excursions, and team gatherings.

Fresh fruits, professional coffee, smoothies, on-site gym, and yoga room for relaxation.




Data Engineer

ieng Group

Posted 13 days ago


Job Description


Job Title:

Reports To:

Role and Responsibilities

  • Design, develop, and manage scalable, reliable, and efficient data pipelines for ingesting and transforming structured and unstructured data.
  • Work with diverse data formats such as Parquet, JSON, and CSV for data ingestion, processing, and transformation.
  • Clean, validate, and transform data from multiple sources to support business intelligence, analytics, and machine learning use cases.
  • Ensure data quality, consistency, and security across pipelines through validation rules, logging, and monitoring mechanisms.
  • Collaborate with cross-functional teams to understand business requirements and deliver high-quality datasets for reporting and analytics.
  • Develop and optimize ETL/ELT workflows using industry-standard tools and languages such as Python and SQL.
  • Monitor, debug, and enhance the performance, reliability, and scalability of data pipelines and infrastructure.
  • Participate in code reviews, adhere to best practices, and contribute to internal documentation and process automation.
  • Take full ownership of assigned tasks, ensuring timely and accurate delivery with accountability.
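The clean/validate/transform responsibility above can be sketched with a minimal, stdlib-only Python function. The column names and validation rules here are hypothetical, chosen only to illustrate the pattern of splitting records into valid and rejected sets for downstream logging:

```python
import csv
import io

# Hypothetical ingestion-time validation: every record needs a non-empty id
# and a numeric amount; failing rows are collected rather than silently dropped.
def clean_rows(raw_csv: str):
    valid, rejected = [], []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if row.get("id") and row.get("amount", "").replace(".", "", 1).isdigit():
            row["amount"] = float(row["amount"])  # cast only after validation
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

sample = "id,amount\na1,10.5\n,3\na2,abc\n"
good, bad = clean_rows(sample)
print(len(good), len(bad))  # 1 valid row, 2 rejected
```

In a production pipeline the rejected list would feed the logging and monitoring mechanisms the posting mentions, rather than being discarded.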

OH&S/QMS Role and Responsibilities

  • Reports any hazards or risks, as well as accidents/incidents, to the QHSE department.
  • Is aware of and complies with i engineering’s IMS Policy.
  • Abides by i engineering’s local legal and client requirements.
  • Attends and engages in IMS awareness sessions.
  • Ensures that all IMS procedures are regularly followed and raises an issue when they are not.

Qualifications and Requirements

  • 4–5 years of hands-on experience in data engineering or backend software development.
  • Strong proficiency in SQL and experience optimizing queries in Amazon Redshift, PostgreSQL, or similar.
  • Proficiency in Python (or another scripting language) for data processing and workflow automation.
  • Experience building and scheduling workflows using orchestration tools such as Apache Airflow.
  • Experience integrating with REST APIs or streaming data sources (e.g., Kafka) is a strong advantage.
  • Familiarity with data lake architectures, S3, and working with distributed file formats like Parquet or ORC.
  • Experience with version control (e.g., Git) and CI/CD practices.
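The SQL query-optimization skill listed above follows a workflow that can be sketched with the stdlib sqlite3 module. Redshift tunes with distribution and sort keys rather than b-tree indexes, but the loop is the same: inspect the plan, add an access path, re-check. Table and index names here are illustrative:

```python
import sqlite3

# In-memory table with a skewed filter column to demonstrate plan inspection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, kind TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i % 100, "click") for i in range(1000)])

query = "SELECT count(*) FROM events WHERE user_id = 7"
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[0][-1])  # full table scan
print(plan_after[0][-1])   # index search
```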

Education Requirements

  • Bachelor's degree in Computer Science, Software Engineering, or a related technical field.
Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • Telecommunications



Data Engineer

Islamabad, Islamabad NorthBay Solutions LLC

Posted 13 days ago


Job Description

Job Title: Data Engineer

Location: Karachi, Lahore, Islamabad (Hybrid)
Experience: 5+ Years
Job Type: Full-Time

Job Overview:

We are looking for a highly skilled and experienced Data Engineer with a strong foundation in Big Data, distributed computing, and cloud-based data solutions. This role demands a strong understanding of end-to-end data pipelines, data modeling, and advanced data engineering practices across diverse data sources and environments. You will play a pivotal role in building, deploying, and optimizing data infrastructure and pipelines in a scalable, cloud-based architecture.


Key Responsibilities:
  • Design, develop, and maintain large-scale data pipelines using modern big data technologies and cloud-native tools.

  • Build scalable and efficient distributed data processing systems using Hadoop, Spark, Hive, and Kafka.

  • Work extensively with cloud platforms (preferably AWS) and services like EMR, Glue, Lambda, Athena, and S3.

  • Design and implement data integration solutions pulling from multiple sources into a centralized data warehouse or data lake.

  • Develop pipelines using DBT (Data Build Tool) and manage workflows with Apache Airflow or Step Functions.

  • Write clean, maintainable, and efficient code using Python, PySpark, or Scala for data transformation and processing.

  • Build and manage relational and columnar data stores such as PostgreSQL, MySQL, Redshift, Snowflake, HBase, and ClickHouse.

  • Implement CI/CD pipelines using Docker, Jenkins, and other DevOps tools.

  • Collaborate with data scientists, analysts, and other engineering teams to deploy data models into production.

  • Drive data quality, integrity, and consistency across systems.

  • Participate in Agile/Scrum ceremonies and utilize JIRA for task management.

  • Provide mentorship and technical guidance to junior team members.

  • Contribute to continuous improvement by making recommendations to enhance data engineering processes and architecture.
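The pipeline-design responsibilities above can be sketched, minus any real orchestration, as a chain of Python generator stages (extract, transform, load). In production, tools like Airflow or DBT would schedule equivalent stages; the field names and the PKR-to-USD conversion rate here are purely illustrative:

```python
# Library-free sketch of a staged pipeline using Python generators.
def extract(records):
    # Pretend source: an iterable of dicts (a real stage would read S3, Kafka, etc.)
    yield from records

def transform(rows):
    for r in rows:
        if r.get("price") is not None:  # drop incomplete rows
            # Hypothetical PKR -> USD enrichment
            yield {**r, "price_usd": round(r["price"] * 0.0036, 2)}

def load(rows, sink):
    sink.extend(rows)  # stand-in for a warehouse write
    return sink

warehouse = []
source = [{"sku": "a", "price": 1000}, {"sku": "b", "price": None}]
load(transform(extract(source)), warehouse)
print(warehouse)  # one enriched row; the null-price row was filtered out
```

Because the stages are lazy generators, rows stream through one at a time, which is the same shape a Spark or PySpark job gives you at cluster scale.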


Required Skills & Experience:
  • 5+ years of hands-on experience as a Data Engineer.

  • Deep knowledge of Big Data technologies – Hadoop, Spark, Hive, Kafka.

  • Expertise in Python, PySpark, and/or Scala.

  • Proficient with data modeling, SQL scripting, and working with large-scale datasets.

  • Experience with distributed storage like HDFS and cloud storage (e.g., AWS S3).

  • Hands-on with data orchestration tools like Apache Airflow or AWS Step Functions.

  • Experience working in AWS environments with services such as EMR, Glue, Lambda, Athena.

  • Familiarity with data warehousing concepts and experience with tools like Redshift or Snowflake (preferred).

  • Exposure to tools like Informatica, Ab Initio, or Apache Iceberg is a plus.

  • Knowledge of Docker, Jenkins, and other CI/CD tools.

  • Strong problem-solving skills, initiative, and a continuous learning mindset.


Preferred Qualifications:
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.

  • Experience with open table formats such as Apache Iceberg.

  • Hands-on experience with Ab Initio (GDE, Conduct>It) or Informatica tools.

  • Knowledge of Agile methodology and working experience with JIRA.


Soft Skills:
  • Self-driven, proactive, and a strong team player.

  • Excellent communication and interpersonal skills.

  • Passion for data and technology innovation.

  • Ability to work independently and manage multiple priorities in a fast-paced environment.

