906 Data Engineer jobs in Pakistan
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Job Code:
Section: O&M
Grade:
Reports To: AI/ML Team Lead
Role and Responsibilities
· Design, develop, and manage scalable, reliable, and efficient data pipelines for ingesting and transforming structured and unstructured data.
· Work with diverse data formats such as Parquet, JSON, and CSV for data ingestion, processing, and transformation.
· Clean, validate, and transform data from multiple sources to support business intelligence, analytics, and machine learning use cases.
· Ensure data quality, consistency, and security across pipelines through validation rules, logging, and monitoring mechanisms.
· Collaborate with cross-functional teams to understand business requirements and deliver high-quality datasets for reporting and analytics.
· Develop and optimize ETL/ELT workflows using industry-standard tools and languages such as Python and SQL.
· Monitor, debug, and enhance the performance, reliability, and scalability of data pipelines and infrastructure.
· Participate in code reviews, adhere to best practices, and contribute to internal documentation and process automation.
· Take full ownership of assigned tasks, ensuring timely and accurate delivery with accountability.
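For context on the pipeline work described above, here is a minimal, illustrative Python sketch of ingesting and validating the formats named in this posting (Parquet, JSON, CSV); the file paths and column names are hypothetical, not taken from the role:

import pandas as pd

def load_frame(path: str) -> pd.DataFrame:
    # Pick a reader based on the file extension (Parquet needs pyarrow or fastparquet installed).
    if path.endswith(".parquet"):
        return pd.read_parquet(path)
    if path.endswith(".json"):
        return pd.read_json(path, lines=True)  # newline-delimited JSON
    return pd.read_csv(path)

def validate(df: pd.DataFrame, required_cols: list[str]) -> pd.DataFrame:
    # Minimal validation: required columns present, fully empty rows dropped.
    missing = [c for c in required_cols if c not in df.columns]
    if missing:
        raise ValueError(f"missing required columns: {missing}")
    return df.dropna(how="all")

events = validate(load_frame("events.parquet"), ["event_id", "event_time"])
print(len(events), "validated rows")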
OH&S/QMS Role and Responsibilities
· Reports any hazards or risks, as well as accidents/incidents, to the QHSE department.
· Is aware of and complies with i engineering's IMS Policy.
· Abides by i engineering's Local Legal and Client Requirements.
· Attends and engages in IMS Awareness Sessions.
· Ensures that all IMS procedures are regularly followed and raises the issue when they are not.
Qualifications Requirements
· 4–5 years of hands-on experience in data engineering or backend software development.
· Strong proficiency in SQL and experience optimizing queries in Amazon Redshift, PostgreSQL, or similar.
· Proficiency in Python (or another scripting language) for data processing and workflow automation.
· Experience building and scheduling workflows using orchestration tools such as Apache Airflow.
· Experience integrating with REST APIs or streaming data sources (e.g., Kafka) is a strong advantage.
· Familiarity with data lake architectures, S3, and working with distributed file formats like Parquet or ORC.
· Experience with version control (e.g., Git) and CI/CD practices.
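Since Apache Airflow is named in the qualifications above, a hedged sketch of a minimal daily DAG follows; the dag_id and the extract/load callables are placeholders, not part of this posting:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")  # placeholder step

def load():
    print("write transformed data to the warehouse")  # placeholder step

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # older Airflow 2.x releases use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task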
Education Requirements
· Bachelor's degree in Computer Science, Software Engineering, or a related technical field.
Data Engineer
Posted today
Job Description
Comprehensive Rehab Consultants focuses on partnering with skilled nursing facilities to meet their individual needs. We are passionate about healthcare technology, innovation, and delivery. CRC is widely regarded as a visionary in the post-acute space, designing new care models to improve efficiency, decrease hospitalizations, and improve clinical outcomes.
We are searching for a Data Engineer to join our team. Your primary responsibility will be to design, build, and maintain the data pipelines that power CRC's enterprise Lakehouse and self-service analytics platform. You will work closely with our Databricks engineer, data scientists, and analysts to ensure high-quality, reliable, and auditable data flows.
General Requirements
- Strong communication and documentation skills
- Native/Fluent English, spoken and written
- Able to work independently and collaboratively in a fast-moving environment
- Available M–F 9:00 AM to 7:00 PM (Central Time) and some weekends
Roles and Responsibilities
- Design, build, and maintain scalable pipelines to load, transform, and validate healthcare data into the Fabric Lakehouse
- Develop connectors for APIs and external systems (e.g., EMR, AirTable, CMS)
- Partner with data governance teams to enforce provenance, UID, and lineage standards
- Support nightly/near-real-time refreshes for clinical, financial, and operational datasets
- Collaborate with data scientists on feature engineering and model-ready datasets
- Participate in rotational QA of pipelines, ensuring reliability and accuracy
- Document workflows, schemas, and dependencies
- Provide redundancy with the Databricks Engineer to ensure no single point of failure
Education & Experience Requirements
- Bachelor's degree in Computer Science, Engineering, or Information Technology preferred but not required
- Equivalent certifications and proven project work accepted in lieu of a formal degree
- Relevant certifications strongly considered:
Microsoft Certified: Azure Data Engineer Associate/Fabric Associate
Databricks Certified Data Engineer (Associate or Professional)
Azure Fundamentals (AZ-900) or equivalent cloud certifications
- 3–6 years of hands-on experience in data engineering, with at least 2+ years in cloud ETL/ELT development (Databricks, Fabric, Synapse, or equivalent)
- Demonstrated ability to build and maintain production-grade pipelines with error handling, logging, and lineage
- Experience integrating multiple data sources (EHR, APIs, CMS, operational systems) preferred
Preferred Skills:
- Expertise with ETL/ELT pipeline development using Databricks, Fabric Lakehouse; certification in Databricks and/or Fabric Engineering preferred
- Strong SQL and familiarity with data modeling for analytics
- Experience with Python and PySpark for pipeline development
- Proficiency in integrating diverse sources (EHR, AirTable, CMS, APIs)
- Familiarity with data governance practices: lineage, UID enforcement, provenance tracking
- Basic understanding of healthcare data structures (claims, EHR, FHIR)
- CI/CD for data pipelines (GitHub, DevOps, or similar)
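As a hedged illustration of the lakehouse pipeline work listed above (assuming a Databricks or Fabric notebook where a `spark` session is already provided; the paths, key column, and table names below are invented):

from pyspark.sql import functions as F

# Read JSON landed by an API/EMR connector (placeholder path).
raw = spark.read.option("multiline", "true").json("/lakehouse/landing/emr_extract/")

clean = (
    raw.dropDuplicates(["record_id"])                      # assumed unique key
       .withColumn("ingested_at", F.current_timestamp())   # simple provenance/lineage column
)

# Persist as a Delta table so analysts can query it from the Lakehouse.
clean.write.mode("append").format("delta").saveAsTable("silver.emr_records")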
Job Type: Full-time
Pay: Rs5,000,000.00 – Rs8,400,000.00 per year
Education:
- Bachelor's (Required)
Experience:
- data engineering: 3 years (Required)
- cloud ETL/ELT development: 2 years (Required)
Work Location: Remote
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Location: Onsite – Green Forts 2, Lahore
Working Hours: 5:00 PM – 2:00 AM
Experience Required: 2–3 years
Must-Have Skills:
- Proficiency in Google Cloud Platform (GCP)
- Strong hands-on experience with DBT (Data Build Tool)
- Solid understanding of SQL and database management
- Experience building and maintaining data pipelines
Nice-to-Have Skills:
- Familiarity with ETL/ELT processes
- Experience with data warehousing and analytics solutions
- Knowledge of BigQuery or other distributed query engines
Responsibilities:
- Design, build, and optimize scalable data pipelines.
- Develop and maintain data models using DBT.
- Work with stakeholders to understand data requirements and deliver insights.
- Ensure data quality, reliability, and performance in GCP environments.
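To illustrate the GCP/SQL side of this role, a minimal sketch using the google-cloud-bigquery client follows; the project, dataset, table, and column names are placeholders:

from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project

sql = """
    SELECT order_date, SUM(amount) AS daily_revenue
    FROM `my-gcp-project.analytics.orders`
    WHERE order_date >= @start_date
    GROUP BY order_date
    ORDER BY order_date
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01")]
)

for row in client.query(sql, job_config=job_config).result():
    print(row.order_date, row.daily_revenue)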
Drop cv at or
Data Engineer
Posted today
Job Description
Role: Data Engineer
Location: Egypt, Uzbekistan, and Pakistan (Remote)
Work Week: Sunday – Thursday
Work Timings: 9:00 AM – 6:00 PM (Saudi Arabian Time Zone)
Overview:
We're seeking a Data Engineer to design, build, and maintain the data infrastructure that underpins our analytics, ML models, and decision-making processes. You'll be responsible for building scalable data pipelines, integrating diverse data sources, and ensuring data quality, reliability, and accessibility across the organization. Working closely with data scientists, analysts, and product teams, you'll enable data-driven insights while optimizing for performance and scalability. This is a great opportunity to have a direct impact on how data is leveraged across a fast-growing company.
Role & Responsibilities:- Data Pipeline Development & Optimization:
- Design, build, and maintain scalable and reliable data pipelines to support analytics, ML models, and business reporting.
- Collaborate with data scientists and analysts to ensure data is available, clean, and optimized for downstream use.
- Implement data quality checks, monitoring, and validation processes.
- Data Architecture & Integration:
- Work with cross-functional teams to design efficient ETL/ELT workflows using modern data tools.
- Integrate data from multiple sources (databases, APIs, third-party tools) into centralized storage solutions (data lakes/warehouses).
- Support cloud-based infrastructure for data storage and retrieval.
- Performance & Scalability:
- Monitor, troubleshoot, and optimize existing data pipelines to handle large-scale, real-time data flows.
- Implement best practices for query optimization and cost-efficient data storage.
- Ensure data is available and accessible for business-critical operations.
- Collaboration & Documentation:
- Partner with product, engineering, and business stakeholders to understand data requirements.
- Document data workflows, schemas, and best practices.
- Support a culture of data reliability, governance, and security.
Requirements:
- Strong understanding of ETL/ELT processes, data warehousing, and data modeling.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and data storage solutions (BigQuery, Redshift, Snowflake, etc.).
- Familiarity with data orchestration tools (Airflow, dbt, Prefect, or similar).
- Experience with containerization & deployment tools (Docker, Kubernetes) is a plus.
- Knowledge of data governance, security, and best practices for handling sensitive data.
- 2+ years in data engineering, building and maintaining data pipelines.
- 2+ years in SQL and Python development for production environments.
- Experience working in fast-growing startup environments is a plus.
- Exposure to real-time data processing frameworks (Kafka, Spark, Flink) is a plus.
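The data quality checks mentioned in the responsibilities above can be as simple as a post-load assertion step; a hedged Python sketch follows, with the column names and thresholds purely illustrative:

import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    # Return human-readable failures; an empty list means all checks passed.
    failures = []
    if df.empty:
        failures.append("table is empty")
    if df["id"].duplicated().any():
        failures.append("duplicate primary keys found")
    null_ratio = df["amount"].isna().mean()
    if null_ratio > 0.05:
        failures.append(f"amount column is {null_ratio:.1%} null (threshold 5%)")
    return failures

sample = pd.DataFrame({"id": [1, 2], "amount": [10.0, None]})
print(run_quality_checks(sample) or "all checks passed")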
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Schedule: On-site, Full-time
Work Location: I-10/3 Islamabad, Pakistan
Are you passionate about building scalable data pipelines, managing large datasets, and enabling advanced analytics? Do you enjoy working with cutting-edge technologies to support AI, ML, and IoT-driven solutions? If so, then PLC Group wants you!
As a Data Engineer, you will play a critical role in designing, developing, and maintaining data infrastructure that powers PLC Group's AI/ML and IoT platforms. You will work closely with data scientists, software engineers, and product managers to ensure reliable, efficient, and secure data flow for mission-critical applications.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL processes for structured and unstructured data.
- Develop and optimize data models, databases, and data warehouses to support analytics and reporting.
- Integrate data from multiple sources, including IoT devices, APIs, and external systems.
- Ensure data quality, consistency, and security across all platforms.
- Collaborate with AI/ML teams to prepare data for model training and real-time processing.
- Implement data governance, monitoring, and validation frameworks.
- Optimize query performance and storage for large-scale datasets.
- Stay updated on emerging data engineering tools and best practices to continuously improve PLC solutions.
- Document processes, workflows, and data architectures for knowledge sharing and future reference.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Minimum 1–4 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and database technologies (MySQL, PostgreSQL, MongoDB, etc.).
- Hands-on experience with big data tools and frameworks (Apache Spark, Hadoop, Kafka).
- Proficiency in Python or Java for data engineering tasks.
- Experience with ETL tools and workflow orchestration (Airflow, Luigi, etc.).
- Knowledge of cloud platforms (AWS, Azure, or GCP) and data services.
- Familiarity with IoT data, time-series data, and real-time data streaming is a plus.
- Strong problem-solving and analytical skills.
- Excellent collaboration and communication abilities.
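Since Kafka and IoT/time-series streaming are mentioned above, here is a hedged sketch of a consumer using the kafka-python client; the topic name, broker address, and message schema are assumptions, not details from the posting:

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "iot.sensor.readings",                      # hypothetical topic
    bootstrap_servers="localhost:9092",         # placeholder broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    reading = message.value                     # e.g. {"sensor_id": ..., "ts": ..., "temp_c": ...}
    if reading.get("temp_c", 0) > 40:
        print("high temperature on sensor", reading.get("sensor_id"))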
Benefits:
- Competitive salary.
- Opportunity to work on data-driven projects powering advanced AI/ML and IoT solutions.
- Collaborative, innovative, and supportive work environment.
- Professional growth and learning opportunities in data engineering, big data, and cloud technologies.
About PLC Group
PLC Group is a Canadian company that provides a portfolio of Artificial Intelligence based Remote Monitoring & Control IoT solutions and Facilities Modular Planning & Capacity Management Solutions to Mission Critical Facilities, Cable Landing Stations, Cable Operators, Data Centers, Telecom Companies, and Telecom Tower Sharing companies.
The PLC portfolio is designed with the objective of reducing facilities CAPEX & OPEX through Predictive Maintenance, "right-sizing" cooling and energy capacity in facilities, and faster turnaround on facilities design & capacity planning using real-time sensor data and machine learning.
Data Engineer
Posted today
Job Description
You will be participating in exciting projects covering the end-to-end data lifecycle – from raw data integrations with primary and third-party systems, through advanced data modeling, to state-of-the-art data visualization and development of innovative data products.
You will have the opportunity to learn how to build and work with both batch and real-time data processing pipelines. You will work in a modern cloud-based data warehousing environment alongside a team of diverse, intense, and interesting co-workers. You will liaise with other departments – such as product & tech, the core business verticals, trust & safety, finance, and others – to enable them to be successful.
Your Responsibilities
- Design, implement and support data warehousing;
- Raw data integrations with primary and third-party systems
- Data warehouse modeling for operational & application data layers
- Development in Amazon Redshift cluster
- SQL development as part of agile team workflow
- ETL design and implementation in Matillion ETL
- Real-time data pipelines and applications using serverless and managed AWS services such as Lambda, Kinesis, API Gateway, etc.
- Design and implementation of data products enabling data-driven features or business solutions
- Building data dashboards and advanced visualizations in Sisense for Cloud Data Teams (formerly Periscope Data) with a focus on UX, simplicity, and usability
- Working with other departments on data products – i.e. product & technology, marketing & growth, finance, core business, advertising, and others
- Being part and contributing towards a strong team culture and ambition to be on the cutting edge of big data
- Evaluate and improve data quality by implementing test cases, alerts, and data quality safeguards
- Living the team values: Simpler. Better. Faster.
- Strong desire to learn
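The real-time pipelines mentioned above (serverless AWS services such as Lambda and Kinesis) often reduce to a small handler; a hedged sketch follows, with the record payload shape assumed for illustration only:

import base64
import json

def lambda_handler(event, context):
    # A Kinesis trigger delivers base64-encoded record payloads.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Placeholder processing: enrich, aggregate, or forward downstream here.
        print(payload.get("event_type"), payload.get("user_id"))
    return {"processed": len(event["Records"])}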
Required Minimum Experience (must)
- 1–3 years of experience in data processing, analysis, and problem-solving with large amounts of data;
- Good SQL skills across a variety of relational data warehousing technologies especially in cloud data warehousing (e.g. Amazon Redshift, Google BigQuery, Snowflake, Vertica, etc.)
- 1+ year of experience with one or more programming languages, especially Python
- Ability to communicate insights and findings to a non-technical audience
- Written and verbal proficiency in English
- Entrepreneurial spirit and ability to think creatively; highly driven and self-motivated; strong curiosity and a drive for continuous learning
- Top-of-class university technical degree, such as computer science, engineering, math, or physics.
Additional Experience (strong Plus)
- Experience working with customer-centric data at big-data scale, preferably in an online / e-commerce context
- Experience with modern big data ETL tools (e.g. Matillion)
- Experience with AWS data ecosystem (or other cloud providers)
- Track record in business intelligence solutions, building and scaling data warehouses, and data modeling
- Tagging, Tracking, and reporting with Google Analytics 360
- Knowledge of modern real-time data pipelines (e.g. serverless framework, lambda, kinesis, etc.)
- Experience with modern data visualization platforms such as Periscope, Looker, Tableau, Google Data Studio, etc.
- Linux, bash scripting, JavaScript, HTML, XML
- Docker containers and Kubernetes
Data Engineer
Posted today
Job Description
Position: Data Engineer (Azure Platform)
Location: Lahore (on-site)
Job Description
• Design and develop scalable data warehouse solutions to support analytics and reporting.
• Build and maintain efficient Azure Data Pipelines using Azure Data Factory for ETL processes.
• Develop and optimize Spark jobs using Databricks and PySpark for large-scale data transformations.
• Coordinate with onsite and offshore teams to ensure seamless project execution and business continuity.
• Develop and maintain dashboards and reports using BI tools such as Power BI, Tableau, MicroStrategy, or Qlik.
• Manage the presentation and access layers to ensure secure and efficient data access.
• Work directly with clients and stakeholders to understand requirements and deliver actionable data solutions.
Qualifications
• BS in Computer Science or a related field with 4–5 years of experience in BI and data management projects.
• Strong expertise in database and data warehouse design, support, and maintenance.
• Proficiency in writing complex SQL and SparkSQL queries.
• Experience in Python and its data-related frameworks.
• Hands-on experience with Databricks and PySpark for data engineering tasks.
• Strong proficiency in Power BI; experience with other BI tools is a plus.
• Experience with the Azure Data Platform (Data Factory, Azure SQL, Synapse, Data Lake) is preferred.
• Ability to generate and maintain comprehensive technical documentation.
• Excellent communication and interpersonal skills.
• Strong analytical and problem-solving abilities.
• Willingness to learn and adapt to new technologies and project requirements.
• Experience in data analysis is a plus.
• Ability to work independently and handle critical situations confidently.
• A collaborative, team-oriented mindset.
Data Engineer
Posted today
Job Description
Customer Data Platform Data Engineer
We are looking for a Customer Data Platform Data Engineer to join our data team for a major European retail company. The ideal candidate has experience working as a Data Engineer and Customer Data Platform Engineer (ideally Tealium) and is passionate about leveraging data to drive personalization, segmentation, and customer insights. You will work closely with data engineers, data scientists, and marketing teams to ensure smooth data integration, transformation, and activation across platforms.
Key Qualifications
- 2+ years of experience coding in Python, with solid computer science fundamentals including data structures and algorithms.
- Experience working with Customer Data Platforms (CDPs) preferably Tealium.
- 2+ years of experience with Apache Spark or PySpark; if not, strong proficiency in Pandas for data manipulation and analysis.
- Experience working with cloud data platforms, ideally Microsoft Azure (Azure Data Lake, Event Hub, or similar).
- Experience working in notebook environments, ideally Azure Databricks.
- Strong understanding of data pipelines, ETL/ELT processes, and data integration between systems.
- Ability to collaborate with cross-functional teams to translate business requirements into scalable technical solutions.
Nice to Have
- 2+ years of hands-on experience in PySpark & Azure cloud data services (e.g., Data Factory, Data Lake Gen2, Databricks).
- Experience in deployments automation using CI/CD pipelines and Azure DevOps.
- Familiarity with DevOps principles and best practices for continuous integration and delivery.
- Practical understanding of data governance, and privacy compliance (e.g., GDPR).
- Hands-on experience with Databricks workflows for data processing and orchestration.
- Knowledge of data quality testing, monitoring, and data lineage tracking.
- Experience working with event-based data.
What's in it for you
- Competitive salary
- Employees' Provident Fund, medical and other incentives
- Unique working environment where you communicate and work directly with international clients
- Self-development opportunities
Data Engineer
Posted today
Job Description
Requirements:
- Minimum 5 years of hands-on experience in EHR data engineering.
- Experience with Athena (required); Epic and eClinicalWorks experience is a plus.
- Deep knowledge of FHIR (priority), HL7, and HIPAA compliance.
- Strong proficiency in SQL, including designing and optimizing queries.
- Experience with lakehouse concepts and building normalized data models.
- Hands-on experience managing Airbyte configurations (bulk and incremental ingestion).
- Ability to articulate and implement quality attributes ("-ilities") such as observability, maintainability, availability, and scalability.
- Familiarity with ETL/ELT pipelines and performance tuning.
- Strong problem-solving and documentation skills with the ability to work cross-functionally.
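Because FHIR is the priority standard above, a hedged sketch of flattening a FHIR Patient resource into a tabular row follows; the field selection is illustrative only, not a full FHIR mapping:

import json

patient_json = """
{
  "resourceType": "Patient",
  "id": "example-1",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1980-02-14",
  "gender": "female"
}
"""

def flatten_patient(resource: dict) -> dict:
    # Take the first recorded name; real pipelines would handle repeats and use periods.
    name = (resource.get("name") or [{}])[0]
    return {
        "patient_id": resource.get("id"),
        "family_name": name.get("family"),
        "given_name": " ".join(name.get("given", [])),
        "birth_date": resource.get("birthDate"),
        "gender": resource.get("gender"),
    }

print(flatten_patient(json.loads(patient_json)))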
Responsibilities:
- Act as the data backbone owner for the Proof of Value (PoV) project.
- Collaborate with the Integration Engineer on ingestion, Business Analyst on requirements, and Data Analyst on analytics.
- Design, develop, and maintain robust data pipelines for ingestion, transformation, and delivery.
- Ensure data quality, security, and compliance with HIPAA and other regulatory standards.
- Continuously optimize data workflows for performance, scalability, and reliability.
Data Engineer
Posted today
Job Description
This is a full-time remote contract position for a Data Engineer based in Pakistan. The role involves working on data engineering, data modeling, ETL processes, data warehousing, and data analytics. The Senior Data Engineer will play a key role in designing and managing reliable data systems to support business decisions on a daily basis.
Key Responsibilities:
- Build and maintain ETL pipelines to collect and process large volumes of data.
- Design and optimize data models for efficient storage and retrieval.
- Develop and manage data warehouses for reporting and analytics.
- Work closely with teams to provide clean and reliable data for decision-making.
- Ensure high data quality, performance, and security.
- Troubleshoot and solve issues related to data processes.
Required Qualifications:
- 5 to 10 years of experience as a Data Engineer.
- Strong skills in data engineering, data modeling, ETL, and data warehousing.
- Hands-on experience with Snowflake and Databricks (must-have).
- Good understanding of data analytics concepts.
- Strong problem-solving and analytical skills.
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
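Given that Snowflake is listed as a must-have above, a hedged sketch of merging staged data with the Snowflake Python connector follows; the account, credentials, and table names are placeholders:

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder account locator
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Upsert freshly loaded rows into the warehouse table (hypothetical tables).
    cur.execute("""
        MERGE INTO staging.orders AS t
        USING staging.orders_load AS s
        ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.amount = s.amount
        WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """)
    print("rows merged:", cur.rowcount)
finally:
    conn.close()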
Job Type: Full-time
Application Question(s):
- Are you comfortable working remotely?
Work Location: Remote