54 ETL Processes Jobs in Pakistan

Data Engineering Lead

Karachi, Sindh - Techlogix

Posted today

Job Description

Core Skills:

  • Data engineer with a solid technical background and banking experience in data-intensive systems (>7 years)
  • Strong experience in Big Data, the Hadoop ecosystem, Spark Streaming, Kafka, Python, SQL, Hive, NiFi, and Airflow (see the streaming sketch after this list)
  • Proficient with Azure Cloud services such as Azure Data Factory (ADF), Databricks, ADLS, Azure Synapse, Logic Apps, and Azure Functions, or similar data-stack knowledge within Google/AWS cloud services
  • Proficiency in relational SQL, graph, and NoSQL databases
  • Proficiency in Elasticsearch and Couchbase databases
  • In-depth skills in developing and maintaining ETL/ELT data pipelines
  • Experience in data modelling techniques such as Kimball star schema, 3NF, and Data Vault modelling
  • Experience with workflow management tools such as Airflow and Oozie, and with CI/CD tools
  • Experience building data streaming solutions on Kafka or Confluent Kafka
  • Hands-on experience with Google BigQuery, Google Analytics, and clickstream data models
  • Reporting knowledge in Power BI, Tableau, Qlik, etc.
  • Sound grasp of data management fundamentals, data architecture, modelling, and governance
  • Strong domain knowledge of banking/finance: the industry's processes, products, and services, plus the core functions, regulations, and operational aspects of banking institutions
  • Hands-on knowledge of the Hadoop and Azure/AWS cloud ecosystems and of migrating ETL jobs
  • Knowledge of advanced analytics and AI tools
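
Given the Kafka and Spark Streaming skills called for above, here is a minimal sketch of the kind of streaming ingestion job such a role builds. It is illustrative only: the broker address, topic name, event schema, and storage paths are assumptions, and it presumes the spark-sql-kafka connector is available at submit time.

    # Minimal Kafka -> Spark Structured Streaming ingestion sketch (assumptions
    # throughout: broker, topic, schema, and paths are placeholders).
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col
    from pyspark.sql.types import (StructType, StructField, StringType,
                                   DoubleType, TimestampType)

    spark = SparkSession.builder.appName("txn-stream-demo").getOrCreate()

    # Assumed event shape, for illustration only.
    schema = StructType([
        StructField("txn_id", StringType()),
        StructField("account", StringType()),
        StructField("amount", DoubleType()),
        StructField("ts", TimestampType()),
    ])

    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
           .option("subscribe", "transactions")               # placeholder topic
           .load())

    # Kafka delivers raw bytes; cast and parse the JSON payload into typed columns.
    events = (raw.selectExpr("CAST(value AS STRING) AS json")
              .select(from_json(col("json"), schema).alias("e"))
              .select("e.*"))

    # Append micro-batches to storage; the checkpoint lets the job recover on restart.
    (events.writeStream
     .format("parquet")
     .option("path", "/data/bronze/transactions")
     .option("checkpointLocation", "/chk/transactions")
     .outputMode("append")
     .start()
     .awaitTermination())
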
Requirements

Minimum BS Degree in CS, IT or Engineering

Banking Domain Knowledge will be a Plus.

Benefits

Competitive Salary, Medical OPD & IPD, Life Insurance, EOBI

Lead Data Engineering

Hyderabad, Punjab - UST

Posted today

Job Description

5 - 7 Years

3 Openings

Bengaluru, Chennai, Hyderabad, Kochi, Trivandrum

Role description

AI/ML Engineer - Required Skills

  • Hands-on experience with agentic-layer A2A (agent-to-agent) frameworks and the MCP (Model Context Protocol).

  • Expertise in AI/ML engineering, specifically vector embeddings, prompt engineering, and context engineering (a toy retrieval sketch follows this list).

  • Strong programming skills in at least two of the following: Python, Java, Go.

  • Proficiency in deploying solutions on Azure Cloud.

  • Experience with databases such as Azure AI Search, Redis, and Cosmos DB (Blob Storage and Iceberg are plus).

  • Proven ability to design and manage Azure Functions and Azure Container Apps.

  • Strong understanding of cloud-native architecture, scalability, and performance optimization.
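
As flagged in the skills list, here is a toy, self-contained sketch of the retrieval step behind vector embeddings: ranking stored vectors by cosine similarity to a query vector. In production the vectors would come from an embedding model and live in a store such as Azure AI Search; the 3-dimensional vectors here are made-up data.

    # Toy cosine-similarity retrieval over made-up 3-d "embeddings".
    import numpy as np

    docs = {
        "refund policy": np.array([0.9, 0.1, 0.0]),
        "api reference": np.array([0.1, 0.8, 0.3]),
        "onboarding":    np.array([0.2, 0.3, 0.9]),
    }
    query = np.array([0.85, 0.15, 0.05])

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity: dot product normalized by vector lengths.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # The top-ranked document is the best context candidate for the prompt.
    ranked = sorted(docs, key=lambda name: cosine(docs[name], query), reverse=True)
    print(ranked[0])  # -> refund policy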

Skills

PySpark, AWS Cloud, Machine Learning

About UST

UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.

Managing Consultant Data Engineering

Karachi, Sindh - Systems Limited

Posted today

Job Description

Job Summary:

We are looking to hire an experienced data engineer with a strong SQL and SSIS background and familiarity with the Azure environment.

Job Requirements:

  • Expertise in SQL, SQL Server, and SSIS.
  • Strong Azure data engineering experience: ADF, Data Fabric, Databricks, PySpark.
  • Experience with Power BI is a plus.
  • Ability to understand complex SQL Server-based data warehouse environments.
  • Strong analytical and problem-solving skills.
  • Experience with a finance data warehouse is a big plus.
  • Excellent communication skills.
  • Ability to work independently and proactively.

Managing Consultant Data Engineering

Karachi, Sindh - Systems Limited

Posted today

Job Description

Job Role: Data Ops Support Engineer

Job Responsibilities:

  • Implement and manage continuous integration and deployment pipelines for various applications and services.
  • Proactively monitor data pipelines and system performance, and troubleshoot issues to maintain high availability and reliability of the infrastructure (see the health-check sketch after this list).
  • Collaborate with development and business teams to design and implement scalable and resilient solutions.
  • Automate routine tasks and processes to streamline operations and improve efficiency.
  • Conduct in-depth reviews of code and debug issues in data pipelines across multiple applications in production environments, with the ability to perform detailed code analysis and troubleshoot complex issues effectively.
  • Implement security best practices and ensure compliance with relevant regulations.
  • Participate in on-call rotation and respond to incidents promptly to minimize downtime.
  • Document processes and procedures to maintain an up-to-date knowledge base.
  • Stay updated with emerging technologies and industry trends to drive innovation and continuous improvement.
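
As referenced above, a minimal sketch of one such automated health check, assuming boto3 access to Athena and CloudWatch; the database, table, and results bucket are placeholders.

    # Pipeline health-check sketch: count rows in an Athena table and publish
    # the result as a CloudWatch metric an alarm can watch. Names are placeholders.
    import time
    import boto3

    athena = boto3.client("athena")
    cloudwatch = boto3.client("cloudwatch")

    def row_count(database: str, table: str, output_s3: str) -> int:
        qid = athena.start_query_execution(
            QueryString=f"SELECT COUNT(*) FROM {table}",
            QueryExecutionContext={"Database": database},
            ResultConfiguration={"OutputLocation": output_s3},
        )["QueryExecutionId"]
        while True:  # poll until the query reaches a terminal state
            state = athena.get_query_execution(
                QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(2)
        if state != "SUCCEEDED":
            raise RuntimeError(f"Athena query ended in state {state}")
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        return int(rows[1]["Data"][0]["VarCharValue"])  # rows[0] is the header

    count = row_count("analytics", "daily_loads", "s3://example-bucket/athena-results/")
    cloudwatch.put_metric_data(
        Namespace="DataOps",
        MetricData=[{"MetricName": "DailyLoadRowCount", "Value": count}],
    )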

Job Requirements:

  • Minimum of one year's development experience with big data tools, plus at least two years' experience in a DevOps or managed-services operations role.
  • Knowledge and understanding of big data tools and AWS cloud services such as MWAA, S3, Athena, and Lambda is a must.
  • Experience with Apache NiFi is a must.
  • Proficiency in scripting languages like Python, Bash, or PowerShell is preferred.
  • Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes.
  • Familiarity with monitoring and logging tools such as YARN and CloudWatch.
  • Experience with version control systems like Git/Bitbucket.
  • Excellent communication and collaboration skills.
  • Ability to work independently and prioritize tasks effectively in a fast-paced environment.
  • Flexibility to operate across various time zones and schedules, including weekend shifts, is required.

Big Data - Engineering Manager

Islamabad, Islamabad - Confiz

Posted 10 days ago

Job Description

We are looking for an experienced and driven Big Data Platform Owner (Engineering Manager) to lead our Big Data engineering team. This role is perfect for a technical leader with a strong background in Azure Big Data technologies who can drive strategy, foster team growth, and ensure the successful delivery of high-quality data solutions. You will be responsible for the end-to-end management of our Big Data platform on Azure, encompassing strategic planning, team leadership, project execution, and stakeholder communication.

Key Responsibilities

  • Define and execute the Big Data platform's vision and roadmap, aligning with business goals and leveraging Azure Big Data services. Lead capacity planning and contribute to PI planning for effective resource allocation.
  • Lead, mentor, and develop a high-performing Big Data engineering team. Facilitate all Agile ceremonies (Scrum, Planning, Retrospective, Review) and ensure efficient task assignment and blocker resolution. Oversee onboarding and offboarding processes.
  • Ensure the reliable operation and continuous improvement of our Azure Big Data pipelines and platform. Drive project execution from backlog grooming to release management, including daily pipeline monitoring and P2 issue resolution.
  • Act as a key liaison for business and technical stakeholders, providing clear communication on project progress and performance and addressing queries effectively.
  • Champion data quality, governance, and security best practices within the Azure Big Data environment. Ensure adherence to code review guidelines and contribute to robust release processes.

Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 8+ years of overall experience, with a mix of hands-on Big Data engineering and leadership or management roles.
  • Proven hands-on experience and architectural expertise with Azure Big Data services (e.g., Data Lake Storage, Data Factory, Databricks, Synapse Analytics).
  • Strong understanding of Agile methodologies and DevOps practices.
  • Excellent communication, leadership, and problem-solving skills.

We have an amazing team of 700+ individuals working on highly innovative enterprise projects & products. Our customer base includes Fortune 5 retail and CPG companies, leading store chains, fast-growth fintech, and multiple Silicon Valley startups.

What makes Confiz stand out is our focus on processes and culture. Confiz is ISO 9001:2015 (QMS), ISO 27001:2022 (ISMS), ISO 20000-1:2018 (ITSM), and ISO 14001:2015 (EMS) certified. We have a vibrant culture of learning via collaboration and of making the workplace fun.

People who work with us use cutting-edge technologies while contributing to the company's success as well as their own.

To know more about Confiz Limited, visit:

Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Other
Industries
  • Utilities

Lead Software Engineer - Data Engineering (AWS Databricks)

Sindh, Sindh - JP Morgan Chase

Posted 23 days ago

Job Description

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.


As a Lead Software Engineer at JPMorgan Chase within Corporate Technology, you play a vital role in an agile team dedicated to enhancing, building, and delivering reliable, market-leading technology products in a secure, stable, and scalable manner. As a key technical contributor, you are tasked with implementing essential technology solutions across diverse technical domains, supporting various business functions to achieve the firm's strategic goals.


Job responsibilities


  • Develop appropriate level designs and ensure consensus from peers where necessary.
  • Collaborate with software engineers and cross-functional teams to design and implement deployment strategies using AWS Cloud and Databricks pipelines.
  • Work with software engineers and teams to design, develop, test, and implement solutions within applications.
  • Engage with technical experts, key stakeholders, and team members to resolve complex problems effectively.
  • Understand leadership objectives and proactively address issues before they impact customers.
  • Design, develop, and maintain robust data pipelines to ingest, process, and store large volumes of data from various sources.
  • Implement ETL (Extract, Transform, Load) processes to ensure data quality and integrity using tools like Apache Spark and PySpark (a minimal sketch follows this list).
  • Monitor and optimize the performance of data systems and pipelines.
  • Implement best practices for data storage, retrieval, and processing.
  • Maintain comprehensive documentation of data systems, processes, and workflows.
  • Ensure compliance with data governance and security policies.
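
As noted above, a minimal PySpark sketch of an extract-transform-load pass with a simple data-quality gate. The bucket, paths, and column names are assumptions for illustration, not the firm's actual pipelines.

    # Minimal batch ETL sketch in PySpark: extract -> transform -> quality gate -> load.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-demo").getOrCreate()

    # Extract: read raw CSV drops from a hypothetical landing area.
    orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

    # Transform: enforce types and keep only well-formed records.
    clean = (orders
             .withColumn("amount", F.col("amount").cast("double"))
             .withColumn("order_date", F.to_date("order_date"))
             .filter(F.col("order_id").isNotNull()))

    # Quality gate: fail fast rather than load bad data downstream.
    bad = clean.filter(F.col("amount").isNull()).count()
    if bad > 0:
        raise ValueError(f"{bad} rows have a null amount; aborting load")

    # Load: write partitioned Parquet for downstream consumers.
    (clean.write
     .mode("overwrite")
     .partitionBy("order_date")
     .parquet("s3://example-bucket/curated/orders/"))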

Required qualifications, capabilities, and skills


  • Formal training or certification on software engineering concepts and 5+ years applied experience
  • Formal training or certification in AWS/Databricks with 10+ years of applied experience.
  • Expertise in programming languages such as Python and PySpark.
  • 10+ years of professional experience in designing and implementing data pipelines in a cloud environment.
  • Proficient in design, architecture, and development using AWS Services, Databricks, Spark, Snowflake, etc.
  • Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform.
  • Familiarity with container and container orchestration technologies such as ECS, Kubernetes, and Docker.
  • Ability to troubleshoot common Big Data and Cloud technologies and issues.
  • Practical cloud-native experience.

Preferred qualifications, capabilities, and skills


  • 5+ years of experience in leading and developing data solutions in the AWS cloud.
  • 10+ years of experience in building, implementing, and managing data pipelines using Databricks on Spark or similar cloud technologies.

ETL Developer

OnStak

Posted today

Job Description

Company Description

OnStak is a strategic solution advisory and enterprise AI services implementation partner with a focus on digital transformation using Machine Learning, Deep Learning, Blockchain, and IoT architecture solutions. We support customers in Retail, Healthcare, and Social networking domains by enabling their practice verticals through custom training and service offerings.

Job Summary:

We are looking for a motivated and detail-oriented ETL Developer with 2–3 years of hands-on experience in managing end-to-end ETL processes. The ideal candidate will be responsible for the design, development, deployment, and maintenance of scalable ETL pipelines to support our data integration and warehousing needs.

Responsibilities:

  • Design, develop, and optimize ETL workflows and data pipelines to extract, transform, and load data from various sources into data warehouses or data lakes.
  • Collaborate with data architects, analysts, and business stakeholders to understand data requirements and deliver robust ETL solutions.
  • Monitor ETL jobs and troubleshoot performance issues, data quality problems, and pipeline failures.
  • Ensure data integrity, consistency, and availability across all ETL processes.
  • Implement logging, error handling, and alerting mechanisms within ETL workflows (a sketch follows this list).
  • Maintain documentation of data flows, ETL processes, and job schedules.
  • Support migration of legacy ETL processes to modern platforms/cloud-based tools (optional based on organization).
  • Contribute to process improvements and automation initiatives.
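
As referenced in the responsibilities, one common shape for the logging, error-handling, and alerting mechanics is a step wrapper like the sketch below; the alert transport and the step name are placeholders.

    # Sketch of a reusable ETL step wrapper that logs, alerts, and re-raises on failure.
    import logging
    from functools import wraps

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("etl")

    def send_alert(message: str) -> None:
        # Placeholder transport: real pipelines might post to Slack, email, or PagerDuty.
        log.error("ALERT: %s", message)

    def etl_step(name):
        """Wrap an ETL step so failures are logged and surfaced as alerts."""
        def decorator(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                log.info("step %s: started", name)
                try:
                    result = fn(*args, **kwargs)
                    log.info("step %s: finished", name)
                    return result
                except Exception:
                    log.exception("step %s: failed", name)
                    send_alert(f"ETL step {name} failed")
                    raise  # re-raise so the scheduler still marks the run failed
            return wrapper
        return decorator

    @etl_step("load_customers")
    def load_customers():
        pass  # extract/transform/load logic would go here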

Qualifications:

  • Bachelor's degree in Computer Science, Information Systems, Engineering, or related field.
  • 2–3 years of professional experience in ETL development and solutions management.
  • Hands-on experience with ETL tools such as Informatica, Talend, Apache NiFi, SSIS, or similar.
  • Strong SQL skills and familiarity with relational databases (e.g., SQL Server, Oracle, MySQL, PostgreSQL).
  • Understanding of data warehousing concepts and data modelling.
  • Familiarity with scripting languages such as Python, Shell, or Bash is a plus.
  • Experience in performance tuning of ETL processes and query optimization.
  • Ability to work independently and in a team-oriented, collaborative environment.
  • Exposure to cloud platforms such as AWS (Glue, Redshift), Azure (Data Factory), or GCP (Dataflow).
  • Basic understanding of data governance, security, and compliance.
  • Experience with version control tools (e.g., Git) and CI/CD pipelines.

What We Offer:

  • Competitive salary and benefits package.
  • Opportunity to work on cutting-edge technologies and impactful projects.
  • Collaborative and innovative work environment.
  • Professional development and growth opportunities.

ETL Developer – Informatica BDM/DEI

Calimala

Posted today

Job Description

About the Role

Calimala is seeking an experienced ETL Developer with expertise in Informatica BDM/DEI to join our innovative team in the telecom sector. In this capacity, you will be at the forefront of designing, developing, and optimizing scalable ETL pipelines that integrate data from diverse sources. This role requires a deep technical foundation in ETL development and a passion for turning complex data challenges into reliable, high-performance solutions.

Responsibilities

As an ETL Developer, you will work closely with data architects, analysts, and other team members to understand mapping specifications and implement efficient data workflows. Key responsibilities include:

  • Design and develop complex ETL pipelines using Informatica BDM/DEI.
  • Integrate data from various sources, including big data environments like Hive and Spark.
  • Optimize mappings and workflows to ensure high performance and reliability (a Spark-side tuning sketch follows this list).
  • Collaborate with cross-functional teams to align data integration strategies with business needs.
  • Document and maintain best practices in ETL and data governance.
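
Informatica BDM/DEI mappings in this stack ultimately execute on Hive/Spark, so Spark-side tuning matters. Below is a sketch of one common optimization, broadcasting a small dimension table to avoid a shuffle join; the table names are invented for illustration.

    # Broadcast-join tuning sketch: join a large fact table to a small dimension
    # without shuffling the large side. Table names are illustrative.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = (SparkSession.builder
             .appName("join-tuning-demo")
             .enableHiveSupport()  # read the Hive metastore tables the pipelines use
             .getOrCreate())

    calls = spark.table("telecom.call_detail_records")  # large fact table (assumed)
    cells = spark.table("telecom.cell_sites")           # small dimension (assumed)

    # Broadcasting the small side lets each executor join locally, skipping the shuffle.
    enriched = calls.join(broadcast(cells), "cell_id")
    enriched.write.mode("overwrite").saveAsTable("telecom.enriched_cdrs")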

Requirements

Candidates must bring at least five years of hands-on experience in ETL development and demonstrable expertise with Informatica BDM/DEI. A strong technical background in SQL, Hive, and Spark is essential, along with proven experience in performance tuning and data integration. Additionally, familiarity with the telecom industry will serve as a significant advantage.

What We Offer

At Calimala, we provide a dynamic work environment where innovation meets real-world data challenges. In addition to a competitive salary, we offer opportunities for professional growth and learning. Join our team to play a pivotal role in transforming data integration processes and driving business success.

Azure Data Integration Engineer (DP600/700)

ITC Worldwide

Posted 3 days ago

Job Viewed

Tap Again To Close

Job Description

Overview

The role involves building and managing data pipelines, troubleshooting issues, and ensuring data accuracy across platforms such as Azure Synapse Analytics, Azure Data Lake Gen2, and SQL environments. This position requires extensive SQL experience and a strong background in PySpark development.

Responsibilities
  • Data Engineering: Work with Azure Synapse Pipelines and PySpark for data transformation and pipeline management.
  • Perform data integration and schema updates in Delta Lake environments, ensuring smooth data flow and accurate reporting (see the merge sketch after this list).
  • Collaborate with our Azure DevOps team on CI/CD processes for deployment of Infrastructure as Code (IaC) and Workspace artifacts.
  • Develop custom solutions for our customers defined by our Data Architect and assist in improving our data solution patterns over time.
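
As referenced above, a minimal sketch of a Delta Lake upsert with schema evolution enabled, of the kind these integration and schema-update duties involve. The storage paths and join key are assumptions, and the session is presumed to be configured with the delta-spark package.

    # Delta Lake MERGE (upsert) sketch with automatic schema evolution.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-merge-demo").getOrCreate()

    # Let MERGE add new columns that appear in the source data.
    spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

    # Placeholder ADLS Gen2 paths.
    updates = spark.read.parquet(
        "abfss://landing@account.dfs.core.windows.net/customers/")
    target = DeltaTable.forPath(
        spark, "abfss://curated@account.dfs.core.windows.net/customers/")

    # Upsert: update rows that match on the key, insert the rest.
    (target.alias("t")
     .merge(updates.alias("s"), "t.customer_id = s.customer_id")
     .whenMatchedUpdateAll()
     .whenNotMatchedInsertAll()
     .execute())
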
Documentation
  • Document ticket resolutions, testing protocols, and data validation processes.
  • Collaborate with other stakeholders to provide specifications and quotations for enhancements requested by customers.
Ticket Management
  • Monitor the Jira ticket queue and respond to tickets as they are raised.
  • Understand ticket issues, utilizing extensive SQL, Synapse Analytics, and other tools to troubleshoot them.
  • Communicate effectively with customer users who raised the tickets and collaborate with other teams (e.g., FinOps, Databricks) as needed to resolve issues.
Troubleshooting And Support
  • Handle issues related to ETL pipeline failures, Delta Lake processing, or data inconsistencies in Synapse Analytics.
  • Provide prompt resolution to data pipeline and validation issues, ensuring data integrity and performance.
Qualifications

Seeking a candidate with 5+ years of Dynamics 365 ecosystem experience and a strong PySpark development background. While various profiles may apply, we highly value a strong person-organization fit.

  • Extensive experience with SQL, including query writing and troubleshooting in Azure SQL, Synapse Analytics, and Delta Lake environments.
  • Strong understanding and experience in implementing and supporting ETL processes, Data Lakes, and data engineering solutions.
  • Proficiency in using Azure Synapse Analytics, including workspace management, pipeline creation, and data flow management.
  • Hands-on experience with PySpark for data processing and automation.
  • Ability to use VPNs, MFA, RDP, jump boxes/jump hosts, etc., to operate within customers' secure environments.
  • Some experience with Azure DevOps CI/CD, IaC, and release pipelines.
  • Ability to communicate effectively both verbally and in writing, with strong problem-solving and analytical skills.
  • Understanding of the operation and underlying data structure of D365 Finance and Operations, Business Central, and Customer Engagement.
  • Experience with data engineering in Microsoft Fabric.
  • Experience with Delta Lake and Azure data engineering concepts (ADLS, ADF, Synapse, AAD, Databricks).
  • Certifications in Microsoft Fabric: DP-600 / DP-700.
Why Join Us?
  • Opportunity to work with innovative technologies in a dynamic environment: a progressive work culture with a global perspective where your ideas truly matter and growth opportunities are endless.
  • Work with the latest Microsoft technologies alongside Dynamics professionals committed to driving customer success.
  • Enjoy the flexibility to work from anywhere.
  • Work-life balance that suits your lifestyle.
  • Competitive salary and comprehensive benefits package.
  • Career growth and professional development opportunities.
  • A collaborative and inclusive work culture.
Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • IT Services and IT Consulting