32 Data Engineering jobs in Islamabad
Data Engineer
Posted 9 days ago
Job Description
At Edge
We’re on a mission to eliminate geographic borders as barriers to full-time employment and fair wages. We’re creating a global HR platform ecosystem that seamlessly connects exceptional talent worldwide with North American businesses. By making global hiring easier than local hiring, we provide businesses access to a broader talent pool and accelerate their hiring process. Spread across four continents, we’re a global team disrupting how people work together.
Role Overview
We are looking for a skilled Data Engineer to join our team. The ideal candidate will have strong expertise in designing, building, and maintaining scalable data pipelines and warehouses, ensuring data accuracy, quality, and accessibility across the organization.
Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines for efficient data processing.
- Build and optimize data models and warehouses to support analytics and reporting.
- Implement data governance and quality frameworks to ensure reliable insights.
- Integrate data from various sources (APIs, databases, third-party systems).
- Collaborate with cross-functional teams (engineering, product, analytics) to enable data-driven decision-making.
Required Skills
- Proven experience in ETL/ELT pipeline development (a brief illustrative sketch follows this list).
- Strong knowledge of data modeling and data warehousing.
- Proficiency in BigQuery, SQL, and NoSQL databases.
- Experience working with APIs for data integration.
- Strong programming skills in Python for data engineering tasks and automation.
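The required skills above combine Python, API integration, and BigQuery. As a rough illustration of how those pieces can fit together in a single ELT step, the sketch below pulls records from a hypothetical REST endpoint and appends them to a hypothetical BigQuery table using the google-cloud-bigquery client; the endpoint, project, and table names are assumptions, not part of the posting.

```python
"""Minimal ELT sketch: pull records from a REST API and append them to BigQuery.

Illustrative only -- the endpoint, project, and table names are hypothetical,
and a production pipeline would add retries, schema management, and data
quality checks before loading.
"""
import requests
from google.cloud import bigquery

API_URL = "https://api.example.com/v1/payments"   # hypothetical source API
TABLE_ID = "my-project.analytics.payments_raw"    # hypothetical BigQuery table


def extract(url: str) -> list[dict]:
    """Fetch one page of records from the source API."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()["results"]  # "results" key is assumed for this sketch


def load(rows: list[dict], table_id: str) -> None:
    """Append JSON rows to a BigQuery table, letting BigQuery infer the schema."""
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        autodetect=True,
    )
    load_job = client.load_table_from_json(rows, table_id, job_config=job_config)
    load_job.result()  # wait for the load job and surface any errors


if __name__ == "__main__":
    load(extract(API_URL), TABLE_ID)
```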
Preferred Skills
- Experience with data governance and data quality frameworks.
- Familiarity with cloud-based data platforms and modern data stack tools.
Qualifications
- 3+ years of experience as a Data Engineer or in a related role.
- Strong problem-solving and analytical skills.
Why Join Edge?
Edge is at a pivotal growth point, offering you the rare opportunity to shape the future of global employment. Your work will directly impact business growth, enable global opportunities, and transform how people work across borders.
We’re not just offering a job — we’re inviting you to be part of a revolution.
Ready to leave a global footprint and change lives? Edge is where your vision becomes reality.
Data Engineer
Posted 7 days ago
Job Description
We are looking for a skilled Data Engineer to join our team. The ideal candidate will have strong expertise in designing, building, and maintaining scalable data pipelines and warehouses, ensuring data accuracy, quality, and accessibility across the organization.
Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines for efficient data processing.
- Build and optimize data models and warehouses to support analytics and reporting.
- Implement data governance and quality frameworks to ensure reliable insights.
- Integrate data from various sources (APIs, databases, third-party systems).
- Collaborate with cross-functional teams (engineering, product, analytics) to enable data-driven decision-making.
Required Skills
- Proven experience in ETL/ELT pipeline development.
- Strong knowledge of data modeling and data warehousing.
- Proficiency in BigQuery, SQL, and NoSQL databases.
- Experience working with APIs for data integration.
- Strong programming skills in Python for data engineering tasks and automation.
Preferred Skills
- Experience with data governance and data quality frameworks.
- Familiarity with cloud-based data platforms and modern data stack tools.
Qualifications
- 3+ years of experience as a Data Engineer or in a related role.
- Strong problem-solving and analytical skills.
Why Join Edge?
Edge is at a pivotal growth point, offering you the rare opportunity to shape the future of global employment. Your work will directly impact business growth, enable global opportunities, and transform how people work across borders.
We’re not just offering a job — we’re inviting you to be part of a revolution.
Ready to leave a global footprint and change lives? Edge is where your vision becomes reality.
Principal Data Engineer
Posted today
Job Description
Experience Range: Minimum 7+ years of experience required in the desired technologies.
Data Engineering (Mandatory): Strong SQL skills; experience with NoSQL databases; proven ability to design and build robust data pipelines using orchestration tools (e.g., Airflow, cloud-native services); experience with data modeling, ETL/ELT processes; experience handling large volumes of structured and unstructured data.
Programming (Mandatory): Proficiency in Python.
Cloud (Mandatory): Experience with cloud data warehousing, data lakes, and relevant cloud data services (storage, databases, processing).
Desired: Experience specifically preparing data for ML/GenAI use cases; familiarity with data governance and quality frameworks.
Senior Data Engineer
Posted today
Job Description
Job Location: Islamabad I-9/3 (Onsite)
Experience: Over 5 years
Responsibilities:
- Able to work on complex data intensive projects.
- Good understanding of Python and Spark best practices and of commonly used modules, based on work experience, and the ability to create self-contained, reusable, and testable modules and components (a brief illustrative sketch follows this list).
- Responsible for prototyping, developing, and troubleshooting software in the user interface or service layers.
- Participate in collaborative technical discussions that focus on software user experience, design, architecture, and development.
- Keep up to date with technology and apply new knowledge.
- Manage Github PRs and act as release manager for some of the Data Engineering projects.
- Experience with technical project documentation.
- Work with onsite/offshore teams and help the team in clearing blockers.
- Ability to follow established coding standards.
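The emphasis above on self-contained, reusable, and testable Python/Spark modules can be pictured with a small PySpark sketch: a pure transformation function plus a local unit test. The column names and business rule are invented for illustration and are not taken from the posting.

```python
"""Sketch of a self-contained, testable PySpark transformation module.

Hypothetical example: the columns and the "large order" rule are placeholders
showing the pattern of keeping transformations as pure functions that can be
unit-tested against tiny DataFrames on a local SparkSession.
"""
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def flag_large_orders(orders: DataFrame, threshold: float = 1000.0) -> DataFrame:
    """Pure transformation: drop malformed rows and add an is_large flag."""
    return (
        orders
        .where(F.col("amount").isNotNull())
        .withColumn("is_large", F.col("amount") >= F.lit(threshold))
    )


def test_flag_large_orders() -> None:
    """Tiny unit test that runs entirely on a local SparkSession."""
    spark = SparkSession.builder.master("local[1]").appName("test").getOrCreate()
    df = spark.createDataFrame([(1, 50.0), (2, 2500.0), (3, None)], ["id", "amount"])
    result = flag_large_orders(df).orderBy("id").collect()
    assert [r["is_large"] for r in result] == [False, True]
    spark.stop()
```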
Requirements
- Bachelor’s degree in computer science or software engineering
- Relational database experience is required.
- Reporting experience is required (Jet, Power BI or Tableau).
- ETL experience and programming skills in at least one of the following languages:
  - Java
  - Python
- The individual should come from a development background in data engineering.
- Experience working with onsite and offshore teams.
- Ability to work under minimal supervision, relying on experience, research, and judgment to plan and accomplish assigned goals.
- Understanding of data archival strategy.
- Strong complex problem solving and troubleshooting skills.
- Ability to learn quickly and manage time effectively.
- Proven written and oral communication skills.
Benefits
- Employee stock option plan (ESOP)
- Medical insurance
- Annual Increments
- Company gadgets
- Competitive salary and benefits package.
- Opportunities for professional development and growth.
- Collaborative and innovative work environment.
- Chance to work on cutting-edge cloud projects.
- Supportive and inclusive company culture.
Lead Data Engineer
Posted 2 days ago
Job Description
The Role: Lead Data Engineer
The Location: Islamabad, Pakistan
The Team: Our team is responsible for the design, architecture, and development of our Content applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.
The Impact: We are in search of a highly motivated and skilled Software Engineer who is ready to take their career to the next level in a fast-growing company. Do you love working on Technical Projects as well as getting in the trenches and working with the team to get the work done?
What’s in it for you:
- Build a career with a global company
- Work on applications that fuel the global financial markets of the future
- Grow and improve your skills by working alongside highly motivated individuals on enterprise-level products and cutting-edge technologies
- Be at the forefront of innovation in the FinTech industry
- Competitive employee benefits
- Open and dynamic work culture
Responsibilities:
- Architect, design, and develop solutions within a multi-functional Agile team to support key business needs.
- Design and implement software components for content systems.
- Perform analysis and articulate solutions. Design underlying engineering for use in multiple product offerings supporting a large volume of end-users.
- Manage and improve existing solutions. Solve a variety of complex problems and figure out possible solutions, weighing the costs and benefits.
- Engineer components, and common services based on standard corporate development models, languages, and tools.
- Apply software engineering best practices while also leveraging automation across all elements of solution delivery.
- Collaborate effectively with technical and non-technical stakeholders. Must be able to document and demonstrate technical solutions by developing documentation, diagrams, code comments, etc.
What We’re Looking For:
Basic Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Minimum 8+ years of strong backend development experience.
- Advanced SQL programming skills with experience in database performance tuning for large datasets (a brief illustrative sketch follows this list).
- Proficiency in relational database management systems (MS SQL, PostgreSQL, or similar).
- Exposure to Big Data technologies such as Databricks, Spark/Scala, EMR, Kafka or ETL processes.
- Fluency in English both written and spoken is required to effectively communicate with global team members.
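As a rough illustration of the kind of tuning implied by "database performance tuning for large datasets", the sketch below uses keyset (seek) pagination instead of OFFSET when scanning a large table. Table and column names are hypothetical, and placeholder and LIMIT syntax vary by database, so treat it as a pattern rather than a drop-in query.

```python
"""Sketch of keyset (seek) pagination for scanning a large table.

Illustrative only: the table and columns are hypothetical, and the connection
is assumed to be any DB-API compatible connection (e.g. psycopg2-style
placeholders are shown; adjust placeholder and LIMIT syntax per database).
"""
from typing import Any, Iterator, Sequence


def scan_trades(conn: Any, batch_size: int = 10_000) -> Iterator[Sequence[tuple]]:
    """Yield batches ordered by primary key, seeking past the last key seen."""
    last_id = 0
    query = (
        "SELECT trade_id, instrument_id, price "
        "FROM trades WHERE trade_id > %s "
        "ORDER BY trade_id LIMIT %s"
    )
    cur = conn.cursor()
    try:
        while True:
            cur.execute(query, (last_id, batch_size))
            rows = cur.fetchall()
            if not rows:
                break
            yield rows
            last_id = rows[-1][0]  # seek key for the next batch
    finally:
        cur.close()
```

Seeking on an indexed key keeps each batch roughly constant-cost, whereas a growing OFFSET forces the database to skip over all earlier rows on every page.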
Additional Preferred Qualifications:
- Strong understanding of cloud computing environments such as AWS, Azure, or GCP.
- Hands-on experience with Docker and containerized deployments is a plus.
- Understanding of financial industry fundamentals is highly preferred.
- Proficiency in at least one programming language, such as Java or C#.
About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction.
What’s In It For You?
Our Purpose:
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world.
Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People:
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values:
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits:
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global.
Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
---
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to: and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster describes discrimination protections under federal law, as does the Pay Transparency Nondiscrimination Provision.
---
Lead Data Engineer
Posted 2 days ago
Job Description
Fusemachines is a leading AI strategy, talent, and education services provider. Founded by Sameer Maskey Ph.D., Adjunct Associate Professor at Columbia University, Fusemachines has a core mission of democratizing AI. With a presence in 4 countries (Nepal, United States, Canada, and Dominican Republic) and more than 450 employees, Fusemachines seeks to bring its global expertise in AI to transform companies around the world.
Location: Remote (Full-time)
Role Overview
This is a remote full-time position, responsible for designing, building, testing, optimizing and maintaining the infrastructure and code required for data integration, storage, processing, pipelines and analytics (BI, visualization and Advanced Analytics) from ingestion to consumption, implementing data flow controls, and ensuring high data quality and accessibility for analytics and business intelligence purposes. This role requires a strong foundation in programming, and a keen understanding of how to integrate and manage data effectively across various storage systems and technologies.
We're looking for someone who can quickly ramp up, contribute right away and lead the work in Data & Analytics, helping from backlog definition, to architecture decisions, and lead technical the rest of the team with minimal oversight.
Responsibilities
- Design, implement, deploy, test and maintain highly scalable and efficient data architectures, defining and maintaining standards and best practices for data management independently with minimal guidance
- Ensuring the scalability, reliability, quality and performance of data systems
- Mentoring and guiding junior/mid-level data engineers
- Collaborating with Product, Engineering, Data Scientists and Analysts to understand data requirements and develop data solutions, including reusable components
- Evaluating and implementing new technologies and tools to improve data integration, data processing and analysis
- Design architecture, observability and testing strategies, and building reliable infrastructure and data pipelines
- Take ownership of the storage layer and data management tasks, including schema design, indexing, and performance tuning
- Swiftly address and resolve complex data engineering issues and incidents, and resolve bottlenecks in SQL queries and database operations
- Conduct Discovery on existing Data Infrastructure and Proposed Architecture
- Evaluate and implement cutting-edge technologies and methodologies and continue learning and expanding skills in data engineering and cloud platforms, to improve and modernize existing data systems
- Evaluate, design, and implement data governance solutions: cataloging, lineage, quality and data governance frameworks that are suitable for a modern analytics solution, considering industry-standard best practices and patterns.
- Define and document data engineering architectures, processes and data flows
- Assess best practices and design schemas that match business needs for delivering a modern analytics solution (descriptive, diagnostic, predictive, prescriptive)
- Be an active member of our Agile team, participating in all ceremonies and continuous improvement activities
Qualifications / Skill Set
- Must have a full-time Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field
- 5+ years of real-world data engineering development experience in AWS and GCP (certifications preferred). Strong expertise in Python, SQL, PySpark and AWS in an Agile environment, with a proven track record of building and optimizing data pipelines, architectures, and datasets, and proven experience in data storage, modeling, management, lake, warehousing, processing/transformation, integration, cleansing, validation and analytics
- Senior person who can understand requirements and design end to end solutions with minimal oversight
- Strong programming skills in one or more languages such as Python or Scala, and proficiency in writing efficient and optimized code for data integration, storage, processing and manipulation
- Strong knowledge of SDLC tools and technologies, including project management software (Jira or similar), source code management (GitHub or similar), CI/CD system (GitHub actions, AWS CodeBuild or similar) and binary repository manager (AWS CodeArtifact or similar)
- Good understanding of Data Modeling and Database Design Principles. Being able to design and implement efficient database schemas that meet the requirements of the data architecture to support data solutions
- Strong SQL skills and experience working with complex data sets, Enterprise Data Warehouse and writing advanced SQL queries. Proficient with Relational Databases (RDS, MySQL, Postgres, or similar) and NoSQL Databases (Cassandra, MongoDB, Neo4j, etc.)
- Skilled in Data Integration from different sources such as APIs, databases, flat files, event streaming.
- Strong experience in implementing data pipelines and efficient ELT/ETL processes, batch and real-time, in AWS and using open source solutions, being able to develop custom integration solutions as needed, including Data Integration from different sources such as APIs (PoS integrations is a plus), ERP (Oracle and Allegra are a plus), databases, flat files, Apache Parquet, event streaming, including cleansing, transformation and validation of the data
- Strong experience with scalable and distributed Data Technologies such as Spark/PySpark, DBT and Kafka, to be able to handle large volumes of data
- Experience with stream-processing systems: Storm, Spark-Streaming, etc. is a plus
- Strong experience in designing and implementing Data Warehousing solutions in AWS with Redshift. Demonstrated experience in designing and implementing efficient ELT/ETL processes that extract data from source systems, transform it (DBT), and load it into the data warehouse
- Strong experience in Orchestration using Apache Airflow (see the sketch following this list)
- Expert in Cloud Computing in AWS, including deep knowledge of a variety of AWS services like Lambda, Kinesis, S3, Lake Formation, EC2, EMR, ECS/ECR, IAM, CloudWatch, etc
- Good understanding of Data Quality and Governance, including implementation of data quality checks and monitoring processes to ensure that data is accurate, complete, and consistent
- Good understanding of BI solutions including Looker and LookML (Looker Modeling Language)
- Strong knowledge and hands-on experience of DevOps principles, tools and technologies (GitHub and AWS DevOps) including continuous integration, continuous delivery (CI/CD), infrastructure as code (IaC – Terraform), configuration management, automated testing, performance tuning and cost management and optimization
- Good Problem-Solving skills: being able to troubleshoot data processing pipelines and identify performance bottlenecks and other issues
- Possesses strong leadership skills with a willingness to lead, create Ideas, and be assertive
- Strong project management and organizational skills
- Excellent communication skills to collaborate with cross-functional teams, including business users, data architects, DevOps/DataOps/MLOps engineers, data analyst, data scientists, developers, and operations teams. Essential to convey complex technical concepts and insights to non-technical stakeholders effectively
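Several items above mention Airflow orchestration alongside ELT and DBT. A minimal, hypothetical Airflow DAG showing that shape (an extract task followed by a dbt run) is sketched below; the DAG id, paths, bucket, and schedule are assumptions, and connections, retries, and alerting are omitted.

```python
"""Minimal Airflow DAG sketch: a daily extract step followed by a dbt run.

Illustrative only -- the dag_id, S3 location, and dbt project path are
hypothetical placeholders, not taken from this posting.
"""
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_orders(**context) -> None:
    """Placeholder extract step: pull source data and land it in object storage."""
    # In a real pipeline this would call an API or database and write Parquet to
    # something like s3://example-data-lake/raw/orders/, partitioned by run date.
    pass


with DAG(
    dag_id="orders_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    extract >> transform  # run the dbt transformation only after the extract lands
```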
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.
Lead Data Engineer
Posted 2 days ago
Job Description
About Fusemachines
Fusemachines is a leading AI strategy, talent, and education services provider. Founded by Sameer Maskey Ph.D., Adjunct Associate Professor at Columbia University, Fusemachines has a core mission of democratizing AI. With a presence in 4 countries (Nepal, United States, Canada, and Dominican Republic) and more than 450 employees, Fusemachines seeks to bring its global expertise in AI to transform companies around the world.
Location: Remote (Full-time)
About the role
This is a remote full-time position, responsible for designing, building, testing, optimizing and maintaining the infrastructure and code required for data integration, storage, processing, pipelines and analytics (BI, visualization and Advanced Analytics) from ingestion to consumption, implementing data flow controls, and ensuring high data quality and accessibility for analytics and business intelligence purposes. This role requires a strong foundation in programming, and a keen understanding of how to integrate and manage data effectively across various storage systems and technologies.
We're looking for someone who can quickly ramp up, contribute right away and lead the work in Data & Analytics, helping from backlog definition, to architecture decisions, and lead technical the rest of the team with minimal oversight.
Qualifications / Skill Set
- Must have a full-time Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
- 5+ years of real-world data engineering development experience in AWS and GCP (certifications preferred).
- Strong expertise in Python, SQL, PySpark and AWS in an Agile environment, with a proven track record of building and optimizing data pipelines, architectures, and datasets, and proven experience in data storage, modeling, management, lake, warehousing, processing/transformation, integration, cleansing, validation and analytics.
- Senior person who can understand requirements and design end to end solutions with minimal oversight.
- Strong programming skills in Python and Scala, with proficiency in writing efficient and optimized code for data integration, storage, processing and manipulation.
- Strong knowledge of SDLC tools and technologies, including project management software (Jira or similar), source code management (GitHub or similar), CI/CD system (GitHub actions, AWS CodeBuild or similar) and binary repository manager (AWS CodeArtifact or similar).
- Good understanding of Data Modeling and Database Design Principles. Able to design and implement efficient database schemas that meet the requirements of the data architecture to support data solutions.
- Strong SQL skills and experience working with complex data sets, Enterprise Data Warehouse and writing advanced SQL queries. Proficient with Relational Databases (RDS, MySQL, Postgres, or similar) and NoSQL Databases (Cassandra, MongoDB, Neo4j, etc.).
- Skilled in Data Integration from different sources such as APIs, databases, flat files, event streaming.
- Strong experience in implementing data pipelines and efficient ELT/ETL processes, batch and real-time, in AWS and using open source solutions, being able to develop custom integration solutions as needed, including Data Integration from different sources such as APIs (PoS integrations is a plus), ERP (Oracle and Allegra are a plus), databases, flat files, Apache Parquet, event streaming, including cleansing, transformation and validation of the data.
- Strong experience with scalable and distributed Data Technologies such as Spark/PySpark, DBT and Kafka, to be able to handle large volumes of data.
- Experience with stream-processing systems: Storm, Spark-Streaming, etc. is a plus.
- Strong experience in designing and implementing Data Warehousing solutions in AWS with Redshift. Demonstrated experience in designing and implementing efficient ELT/ETL processes that extract data from source systems, transform it (DBT), and load it into the data warehouse.
- Strong experience in Orchestration using Apache Airflow.
- Expert in Cloud Computing in AWS, including deep knowledge of a variety of AWS services like Lambda, Kinesis, S3, Lake Formation, EC2, EMR, ECS/ECR, IAM, CloudWatch, etc.
- Good understanding of Data Quality and Governance, including implementation of data quality checks and monitoring processes to ensure that data is accurate, complete, and consistent.
- Good understanding of BI solutions including Looker and LookML (Looker Modeling Language).
- Strong knowledge and hands-on experience of DevOps principles, tools and technologies (GitHub and AWS DevOps) including continuous integration, continuous delivery (CI/CD), infrastructure as code (IaC – Terraform), configuration management, automated testing, performance tuning and cost management and optimization.
- Good problem-solving skills: able to troubleshoot data processing pipelines and identify performance bottlenecks and other issues.
- Possesses strong leadership skills with a willingness to lead, create ideas, and be assertive.
- Strong project management and organizational skills.
- Excellent communication skills to collaborate with cross-functional teams, including business users, data architects, DevOps/DataOps/MLOps engineers, data analysts, data scientists, developers, and operations teams. Essential to convey complex technical concepts and insights to non-technical stakeholders effectively.
- Ability to document processes, procedures, and deployment configurations.
Responsibilities
- Design, implement, deploy, test and maintain highly scalable and efficient data architectures, defining and maintaining standards and best practices for data management independently with minimal guidance.
- Ensuring the scalability, reliability, quality and performance of data systems.
- Mentoring and guiding junior/mid-level data engineers.
- Collaborating with Product, Engineering, Data Scientists and Analysts to understand data requirements and develop data solutions, including reusable components.
- Evaluating and implementing new technologies and tools to improve data integration, data processing and analysis.
- Design architecture, observability and testing strategies, and building reliable infrastructure and data pipelines.
- Take ownership of the storage layer and data management tasks, including schema design, indexing, and performance tuning.
- Swiftly address and resolve complex data engineering issues and incidents, and resolve bottlenecks in SQL queries and database operations.
- Conduct Discovery on existing Data Infrastructure and Proposed Architecture.
- Evaluate and implement cutting-edge technologies and methodologies and continue learning and expanding skills in data engineering and cloud platforms, to improve and modernize existing data systems.
- Evaluate, design, and implement data governance solutions: cataloging, lineage, quality and data governance frameworks that are suitable for a modern analytics solution, considering industry-standard best practices and patterns.
- Define and document data engineering architectures, processes and data flows.
- Assess best practices and design schemas that match business needs for delivering a modern analytics solution (descriptive, diagnostic, predictive, prescriptive).
- Be an active member of our Agile team, participating in all ceremonies and continuous improvement activities.
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.
Data Engineer - Informatica
Posted 2 days ago
Job Description
Our Company
At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers—and our customers’ customers—to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.
What You’ll Do
In this role you will be working on different migration projects. You will be accountable for code quality and functional validations. Work with Technical Leads to ensure the solution is implemented as per the design specifications. Provide inputs to the Technical Leads on alternate solutions in case of performance or technical feasibility issues.
You will also design and develop migration strategies for Teradata Vantage migration projects. You will understand the customer’s existing ecosystem, including all technology stack components involved in batch and real-time data pipelines, and the current implementation of the different technology components. You will understand the proposed target architecture leveraging modern data stack components, including the Teradata Vantage ecosystem, object stores such as S3 and Azure Blob, and other cloud-native services. You will leverage existing migration tools and methodology to efficiently convert the technical stack to the Teradata Vantage ecosystem, work closely with customer SMEs and Technical Leads to understand the migration strategy and solution, and report any issues or risks to the Technical Leads and Project Manager.
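One small, concrete piece of such a migration is reconciling row counts between the source system and Teradata Vantage after data has been moved. The sketch below shows that idea under the assumption that both systems are reachable through DB-API compatible connections; the table list, connection setup, and function names are hypothetical and not part of this posting.

```python
"""Sketch of a simple post-migration reconciliation check.

Illustrative only: both connections are assumed to be DB-API compatible
(for example, the source via an existing driver and Teradata Vantage via the
teradatasql driver), and the table names are placeholders.
"""
from typing import Any


def count_rows(conn: Any, table: str) -> int:
    """Return the row count of a table on either system."""
    cur = conn.cursor()
    try:
        cur.execute(f"SELECT COUNT(*) FROM {table}")  # table names from a trusted list
        return cur.fetchone()[0]
    finally:
        cur.close()


def reconcile(source_conn: Any, target_conn: Any, tables: list[str]) -> dict[str, bool]:
    """Compare row counts table by table and report mismatches."""
    results = {}
    for table in tables:
        src, tgt = count_rows(source_conn, table), count_rows(target_conn, table)
        results[table] = (src == tgt)
        print(f"{table}: source={src} target={tgt} {'OK' if src == tgt else 'MISMATCH'}")
    return results
```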
Who You’ll Work With
You will report to, and work under the direction of, the Project Manager and Technical Leads to ensure smooth delivery of the projects. You will also work with customer SMEs and other stakeholders to ensure high-quality deliverables that meet requirements.
Minimum Requirements
Minimum 2+ years of IT work experience
Minimum 2 years of Data Analytics related experience.
Hands-on experience with at least one ETL tool such as Informatica, DataStage, or Talend.
Strong RDBMS concepts and SQL writing skills
Knowledge of Hadoop components such as HDFS, Hive, Spark etc.
Knowledge of Teradata product is a major plus.
What You’ll Bring
ETL tool and SQL expertise
Ability to encapsulate knowledge for sharing across multiple projects.
Principal Data Engineer
Posted 14 days ago
Job Description
Islamabad, Pakistan | Posted on 07/22/2025
CloudPSO is an Information Technology Outsourcing (ITO) company that assists in the acquisition of qualified staff to address complex digital problems in order to increase efficiency, reduce costs, and maintain compliance.
CloudPSO was founded in 2017 with an aim to provide businesses with a competent and skilled workforce at any given point in time and from any geographic region.
We are a US-based company with headquarters in Dallas (Texas) and a center of excellence in Pakistan. We have over 200 facility seats with an additional Work-From-Home facility. CloudPSO has skillful in-house software development teams with state-of-the-art tools, the latest VOIP technology platform, and secure infrastructure.
Our core values consist of client satisfaction, commitment, quality, and transparency.
We, at CloudPSO, hunt, analyze, recruit, train, and retain top-notch talent for you to help achieve your business goals. Optimizing mission-critical and day-to-day enterprise IT operations, CloudPSO enables businesses to transform, innovate and scale.
Job timings: Mon - Fri 6 PM - 3 AM Pakistan time
Job Location: Islamabad I-9/3 (Onsite)
Experience: Over 5 years
Responsibilities:
- Able to work on complex data intensive projects.
- Good understanding of Python and Spark best practices and of commonly used modules, based on work experience, and the ability to create self-contained, reusable, and testable modules and components.
- Responsible for prototyping, developing, and troubleshooting software in the user interface or service layers.
- Participate in collaborative technical discussions that focus on software user experience, design, architecture, and development.
- Keep up to date with technology and apply new knowledge.
- Manage Github PRs and act as release manager for some of the Data Engineering projects.
- Experience with technical project documentation.
- Work with onsite/offshore teams and help the team in clearing blockers.
- Ability to follow established coding standards.
Requirements
- Bachelor’s degree in computer science or software engineering
- Relational database experience is required.
- Reporting experience is required (Jet, Power BI or Tableau).
- ETL experience and programming skills in at least one of the following languages:
  - Java
  - Python
- The individual should come from a development background in data engineering.
- Experience working with onsite and offshore teams.
- Ability to work under minimal supervision, relying on experience, research, and judgment to plan and accomplish assigned goals.
- Understanding of data archival strategy.
- Strong complex problem solving and troubleshooting skills.
- Ability to learn quickly and manage time effectively.
- Proven written and oral communication skills.
Benefits
- Employee stock option plan (ESOP)
- Medical insurance
- Annual Increments
- Company gadgets
- Competitive salary and benefits package.
- Opportunities for professional development and growth.
- Collaborative and innovative work environment.
- Chance to work on cutting-edge cloud projects.
- Supportive and inclusive company culture.
Senior Data Engineer
Posted 25 days ago
Job Description
Islamabad, Pakistan | Posted on 05/20/2025
DPL is one of the leading software development and IT companies worldwide. Established in 2003, DPL serves clients across major regions, with a focus on Europe, the Middle East, and the Americas. The company is headquartered in Islamabad, Pakistan, with regional offices in the USA and Sweden.
DPL pioneered Agile practices and fosters an innovation-driven culture in Pakistan. Recognized globally for its workplace environment, the company promotes a flat organizational culture and holacratic approach to encourage employee engagement and innovation.
Our diverse client portfolio includes industries such as Healthcare, Fintech, Automotive, Mobility, Telco, Education, Media, and E-commerce. Our services encompass Digital Transformation, Product Engineering, IT Strategy & Consulting, and Custom Software Development.
Job Description
We are seeking a Data Engineer who recognizes that each data point represents human decision-making. Your role involves building and maintaining data infrastructure that supports our lending models and operational processes, primarily on AWS. You will also assist in the machine learning lifecycle by helping data scientists deploy, monitor, and retrain models with appropriate data at optimal times.
This position offers a unique opportunity to contribute to building the financial backbone of one of Africa’s most ambitious digital banks.
Key Responsibilities
- Build & Maintain Pipelines: Develop and operate ETL/ELT pipelines using AWS Glue, Lambda, Athena, and Step Functions (a brief illustrative sketch follows this list). Your work will support reporting and real-time decision-making.
- Curate a Robust Data Lakehouse: Structure and maintain our data lake with proper partitioning, schema evolution, metadata tagging, and access control across multiple jurisdictions.
- Support MLOps Lifecycle: Collaborate with data scientists to deploy models using SageMaker Pipelines, update the Feature Store, and set up triggers for model retraining.
- Ensure Precision & Integrity: Monitor pipeline outputs and data quality dashboards to ensure all data is accurate, traceable, and reproducible.
- Automate, Audit & Secure: Use infrastructure-as-code tools like Pulumi or Terraform to build reproducible infrastructure, implementing logging, versioning, and KMS-encrypted security practices.
- Collaborate with Impact: Work across analytics, engineering, and credit teams to understand operational needs and develop technical pipelines and products.
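As a rough sketch of how the Glue/Lambda/Step Functions stack mentioned above can hang together, the example below shows a small Lambda handler that starts a hypothetical Glue job via boto3. The job name, event fields, and argument names are assumptions; in practice the same trigger could instead be a state in a Step Functions workflow.

```python
"""Sketch of an AWS Lambda handler that kicks off a Glue ETL job.

Illustrative only -- the Glue job name, event payload, and job arguments are
hypothetical placeholders, not taken from this posting.
"""
import boto3

glue = boto3.client("glue")

GLUE_JOB_NAME = "curate-loan-applications"  # hypothetical Glue job


def handler(event: dict, context) -> dict:
    """Triggered (e.g., by S3 or EventBridge); starts the curation job run."""
    run = glue.start_job_run(
        JobName=GLUE_JOB_NAME,
        Arguments={
            # hypothetical job argument forwarded from the triggering event
            "--source_prefix": event.get("source_prefix", "raw/loans/"),
        },
    )
    return {"job_run_id": run["JobRunId"]}
```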
Required Skills & Experience
- Ownership mindset (willing to take responsibility)
- Willingness to learn
- 4+ years in data engineering or cloud data architecture roles
- Solid experience with AWS data stack: S3, Glue, Athena, Lambda, Step Functions
- Proficiency in SQL, Python, and PySpark
- Experience with data lake architecture and handling semi-structured data (Parquet, JSON)
- Experience with MLOps, model deployment pipelines, and monitoring
- Exposure to infrastructure-as-code (Pulumi preferred, Terraform acceptable)
- Knowledge of secure data handling and anonymization best practices
Nice to Have
- Experience with event-driven data flows (e.g., Kafka, EventBridge)
- Familiarity with SageMaker Feature Store and SageMaker Pipelines
- Background in financial services, credit scoring, or mobile money ecosystems
- Passion for building ethical, inclusive financial systems