6 ETL Architect jobs in Pakistan

Azure Data Integration Engineer (DP-600/DP-700)

Lahore, Punjab ITC WORLDWIDE

Posted 14 days ago


Job Description

The role involves building and managing data pipelines, troubleshooting issues, and ensuring data accuracy across various platforms such as Azure Synapse Analytics, Azure Data Lake Gen2, and SQL environments.

This position requires extensive SQL experience and a strong background in PySpark development.

  • Certifications in Microsoft Fabric: DP-600 / DP-700

Responsibilities

Data Engineering:

  • Work with Azure Synapse Pipelines and PySpark for data transformation and pipeline management.
  • Perform data integration and schema updates in Delta Lake environments, ensuring smooth data flow and accurate reporting (see the sketch after this list).
  • Work with our Azure DevOps team on CI/CD processes for deployment of Infrastructure as Code (IaC) and Workspace artifacts.
  • Develop custom solutions for our customers defined by our Data Architect and assist in improving our data solution patterns over time.
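For orientation, here is a minimal PySpark sketch of the kind of Delta Lake work described above: merging a batch of source records into a Delta table with schema evolution enabled so that new source columns are added automatically. The storage paths and the order_id key are hypothetical placeholders, not details taken from this posting.

    # Hypothetical lake paths and key column; adjust to the actual layout.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("delta-upsert-sketch")
        # Let new columns arriving from the source be added to the target schema.
        .config("spark.databricks.delta.schema.autoMerge.enabled", "true")
        .getOrCreate()
    )

    updates = spark.read.parquet("abfss://raw@lake.dfs.core.windows.net/sales_orders/")
    target = DeltaTable.forPath(spark, "abfss://curated@lake.dfs.core.windows.net/sales_orders/")

    (
        target.alias("t")
        .merge(updates.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )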

Documentation:

  • Document ticket resolutions, testing protocols, and data validation processes.
  • Collaborate with other stakeholders to provide specifications and quotations for enhancements requested by customers.

Ticket Management:

  • Monitor the Jira ticket queue and respond to tickets as they are raised.
  • Understand ticket issues, utilizing extensive SQL, Synapse Analytics, and other tools to troubleshoot them.
  • Communicate effectively with customer users who raised the tickets and collaborate with other teams (e.g., FinOps, Databricks) as needed to resolve issues.

Troubleshooting and Support:

  • Handle issues related to ETL pipeline failures, Delta Lake processing, or data inconsistencies in Synapse Analytics (a quick triage sketch follows this list).
  • Provide prompt resolution to data pipeline and validation issues, ensuring data integrity and performance.
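A simple triage pattern for such tickets, sketched below in PySpark under the assumption that staging and curated copies of a table exist (the table and column names are hypothetical): compare row counts and list keys that were extracted but never landed in the target.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("consistency-check").getOrCreate()

    # Hypothetical source/target tables used only for illustration.
    source = spark.table("staging.sales_orders")
    target = spark.table("curated.sales_orders")

    print("source rows:", source.count(), "target rows:", target.count())

    # Keys present upstream but missing from the curated Delta table.
    missing = source.select("order_id").subtract(target.select("order_id"))
    missing.show(20, truncate=False)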

Desired Skills & Requirements

Seeking a candidate with 5+ years of Dynamics 365 ecosystem experience and a strong PySpark development background. While various profiles may apply, we highly value a strong person-organization fit.

Our ideal candidate possesses the following attributes and qualifications:

  • Extensive experience with SQL, including query writing and troubleshooting in Azure SQL, Synapse Analytics, and Delta Lake environments.
  • Strong understanding and experience in implementing and supporting ETL processes, Data Lakes, and data engineering solutions.
  • Proficiency in using Azure Synapse Analytics, including workspace management, pipeline creation, and data flow management.
  • Hands-on experience with PySpark for data processing and automation.
  • Ability to use VPNs, MFA, RDP, jump boxes/jump hosts, etc., to operate within the customers' secure environments.
  • Some experience with Azure DevOps CI/CD, IaC, and release pipelines.
  • Ability to communicate effectively both verbally and in writing, with strong problem-solving and analytical skills.
  • Understanding of the operation and underlying data structure of D365 Finance and Operations, Business Central, and Customer Engagement.
  • Experience with data engineering in Microsoft Fabric.
  • Experience with Delta Lake and Azure data engineering concepts (e.g., ADLS, ADF, Synapse, AAD, Databricks).
  • Certifications in Microsoft Fabric: DP-600 / DP-700.

Why Join Us?

  • Opportunity to work with innovative technologies in a dynamic, progressive work culture with a global perspective, where your ideas truly matter and growth opportunities are endless.
  • Work with the latest Microsoft Technologies alongside Dynamics professionals committed to driving customer success.
  • Enjoy the flexibility to work from anywhere.
  • Work-life balance that suits your lifestyle.
  • Competitive salary and comprehensive benefits package.
  • Career growth and professional development opportunities.
  • A collaborative and inclusive work culture.

Customer Integration Data Transformation Specialist

NielsenIQ

Posted 8 days ago


Job Description

Company Description

NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population.

For more information, visit NIQ.com

Job Description

The Customer Success function is a cornerstone of NIQ Brandbank, playing a crucial role in customer onboarding, adoption, retention, development, and satisfaction for both Brandbank and Etilize solutions.

The Customer Integration function is a vital, client-facing component of NIQ Brandbank. It is responsible for managing the suite of solutions across both Brandbank and Etilize, including associated service levels. This function ensures the efficient ingestion and distribution of all relevant digital product content to meet the needs of our internal teams, stakeholders, customers, industry, and NIQ.

Job Purpose

The Client Integration Data Transformation Specialist is responsible for supporting the successful deployment of new and updated services. This includes managing standard and custom data feeds, creating custom Excel extracts, and integrating with GS1 data platforms. The role is pivotal in ensuring seamless data transformation and integration processes that meet client needs and project specifications.
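As a concrete (and purely illustrative) example of a custom extract, the Python sketch below maps a product feed onto client-specific column names and writes an Excel file with pandas; the file names, fields, and mapping are hypothetical and do not describe NIQ's actual tooling.

    import pandas as pd

    # Hypothetical mapping from internal feed fields to the columns a client expects.
    COLUMN_MAP = {
        "gtin": "GTIN",
        "product_name": "Product Description",
        "brand": "Brand",
        "net_content": "Pack Size",
    }

    feed = pd.read_csv("product_feed.csv", dtype={"gtin": str})

    extract = (
        feed[list(COLUMN_MAP)]           # keep only the mapped fields
        .rename(columns=COLUMN_MAP)      # apply the client-facing column names
        .drop_duplicates(subset="GTIN")  # one row per product
    )

    extract.to_excel("client_extract.xlsx", index=False, sheet_name="Products")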

Job Responsibilities

Bespoke Integration Delivery:

  • Participate in the delivery of bespoke integrations, whether client-specific or GDSN (Global Data Synchronization Network).
  • Support data mapping, testing, and deployment activities to ensure accurate and efficient data integration.
  • Work with detailed data specifications and customer requirements.

Project Management:

  • Meet project milestones and adhere to client specifications for both inbound and outbound projects.
  • Collaborate with project managers to ensure timely and successful project delivery.

Knowledge Maintenance:

  • Maintain up-to-date knowledge of Nielsen Brandbank and local GDSN data models and capture rules.
  • Continuously update skills and knowledge to stay current with industry standards and best practices.
  • Understand complex data mapping and transformation procedures.

Collaboration with Consultants and Developers:

  • Work closely with consultants and developers to create patches and changes that support bespoke data outputs.
  • Ensure that all changes are thoroughly tested and meet quality standards before deployment.

Data Transformation Logic:

  • Build out data transformation logic to ensure data is accurately transformed and integrated.
  • Identify test scenarios and create test data to validate the transformation processes.
  • Bring experience with complex data transformation and mapping exercises.

Data Quality Assurance:

  • Ensure data quality is maintained throughout the transformation process to the NBB Product Library, GDSN Connector, and retailer sites.
  • Implement quality control measures to detect and correct data issues.

Supplier/Retailer Consultancy:

  • Provide consultancy services to suppliers and retailers to maintain the relevance and quality of data coverage.
  • Advise on best practices for data integration and transformation to meet bespoke and standard integration requirements.

Stakeholder Communication:

  • Maintain regular and effective communication with key stakeholders, including project managers, clients, and internal teams.
  • Provide updates on project progress, address any issues, and ensure alignment with project goals.
  • Communicate effectively, verbally and in writing, in both the local language and English.

Additional Information

Our Benefits

  • Flexible working environment
  • Volunteer time off
  • LinkedIn Learning
  • Employee Assistance Program (EAP)


Our commitment to Diversity, Equity, and Inclusion

NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center:


Data Architect

Confiz Limited

Posted 13 days ago


Job Description

We are looking for an experienced Data Architect with 12+ years of progressive experience in data engineering, architecture design, and technical leadership. You will lead the architectural strategy and execution for our real-time and batch data platforms, ensuring scalability, reliability, and performance across a modern cloud-native ecosystem. This role is suited for a visionary technologist who can drive enterprise-grade solutions, establish best practices, and mentor high-performing engineering teams.

Key Responsibilities:

  • Define and evolve the data architecture roadmap, ensuring alignment with business goals and scalability of data platforms.
  • Architect real-time streaming and batch data pipelines using Kafka, Apache Flink, and Spark Structured Streaming (see the sketch after this list).
  • Lead the design of lakehouse architectures leveraging technologies such as Apache Spark (PySpark), Delta Lake, Iceberg, and Parquet.
  • Establish governance, security, and compliance frameworks across data platforms and pipelines.
  • Set standards for data orchestration using tools like Apache Airflow and Azure Data Factory.
  • Guide the containerization strategy with Docker, Kubernetes, and Helm for scalable deployment of data applications.
  • Architect and optimize analytics engines such as Trino or Presto for high-performance query execution.
  • Define and implement observability frameworks to monitor SLAs, lineage, data quality, and pipeline health.
  • Collaborate with DevOps on infrastructure-as-code (IaC) using Terraform and CI/CD automation through Azure DevOps.
  • Establish and advocate best practices for coding standards, modular development, and testing in Python and SQL.
  • Collaborate cross-functionally with product owners, data scientists, platform teams, and engineering leads.
  • Lead architecture reviews, POCs, and technology evaluations to guide technology adoption and innovation.
  • Mentor and guide senior and mid-level data engineers across solutioning, technical decisions, and design patterns.
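To make the streaming item above concrete, here is a hedged Spark Structured Streaming sketch that reads order events from Kafka and appends them to a Delta table; the broker address, topic, schema, and paths are hypothetical placeholders.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("orders-stream-sketch").getOrCreate()

    # Hypothetical event schema for illustration only.
    event_schema = StructType([
        StructField("order_id", StringType()),
        StructField("status", StringType()),
        StructField("event_time", TimestampType()),
    ])

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "orders")
        .load()
    )

    # Kafka delivers the payload as bytes; parse it into typed columns.
    events = (
        raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    (
        events.writeStream.format("delta")
        .option("checkpointLocation", "/lake/_checkpoints/orders")
        .outputMode("append")
        .start("/lake/bronze/orders")
    )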

Required Skills & Experience:

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related discipline.
  • 12+ years of experience in data engineering and architecture, with at least 3 years in a lead/architect role.
  • Ability to drive work to completion despite challenges such as unavailable stakeholders or incomplete information.
  • Expert-level knowledge in Python, SQL, and scalable distributed systems.
  • Deep expertise in streaming architectures with Kafka, Flink, and Spark Streaming.
  • Strong experience designing and implementing batch data workflows using Apache Spark (PySpark).
  • In-depth knowledge of cloud data platforms, particularly Azure (preferred).
  • Extensive experience with data orchestration tools like Airflow and Azure Data Factory (a minimal DAG sketch follows this list).
  • Proven ability to design and implement data lakehouse architectures with formats like Parquet, Delta Lake, and Iceberg.
  • Proficiency in containerization (Docker), orchestration (Kubernetes), and deployment via Helm.
  • Solid experience implementing observability, lineage, and data quality frameworks, such as Great Expectations.
  • Strong background in infrastructure as code (Terraform) and CI/CD integrations.
  • Deep understanding of data governance, metadata management, and data privacy standards.
  • Excellent communication skills with the ability to influence stakeholders and present architectural strategies to leadership.
  • Experience working in agile development environments and fostering team-wide engineering excellence.
  • Experience with large-scale API architecture and scalable data access layers is a plus.
  • Familiarity with AI/ML-integrated data systems or tools (e.g., text-to-SQL) is a bonus.
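For the orchestration item above, a minimal Airflow sketch is shown below: a single-task DAG that would trigger a daily bronze-to-silver transform. The DAG id, schedule, and task body are hypothetical placeholders, not part of this posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def transform_bronze_to_silver(**context):
        # Placeholder: in practice this might submit a Spark job or trigger a notebook.
        print("running transform for", context["ds"])


    with DAG(
        dag_id="orders_bronze_to_silver",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="transform",
            python_callable=transform_bronze_to_silver,
        )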

We have an amazing team of 700+ individuals working on highly innovative enterprise projects & products. Our customer base includes Fortune 100 retail and CPG companies, leading store chains, fast-growth fintech, and multiple Silicon Valley startups.

What makes Confiz stand out is our focus on processes and culture. Confiz is ISO 9001:2015, 27001:2013 & 20000-1:2018 certified. We have a vibrant culture of learning via collaboration and making the workplace fun.

People who work with us work with cutting-edge technologies while contributing success to the company as well as to themselves.

To know more about Confiz Limited, visit:


Data Architect

Sindh, Sindh Confiz

Posted 13 days ago


Job Description


We are looking for an experienced Data Architect with 12+ years of progressive experience in data engineering, architecture design, and technical leadership. You will lead the architectural strategy and execution for our real-time and batch data platforms, ensuring scalability, reliability, and performance across a modern cloud-native ecosystem. This role is suited for a visionary technologist who can drive enterprise-grade solutions, establish best practices, and mentor high-performing engineering teams.

Key Responsibilities

  • Define and evolve the data architecture roadmap, ensuring alignment with business goals and scalability of data platforms.
  • Architect real-time streaming and batch data pipelines using Kafka, Apache Flink, and Spark Structured Streaming.
  • Lead the design of lakehouse architectures leveraging technologies such as Apache Spark (PySpark), Delta Lake, Iceberg, and Parquet.
  • Establish governance, security, and compliance frameworks across data platforms and pipelines.
  • Set standards for data orchestration using tools like Apache Airflow and Azure Data Factory.
  • Guide the containerization strategy with Docker, Kubernetes, and Helm for scalable deployment of data applications.
  • Architect and optimize analytics engines such as Trino or Presto for high-performance query execution.
  • Define and implement observability frameworks to monitor SLAs, lineage, data quality, and pipeline health.
  • Collaborate with DevOps on infrastructure-as-code (IaC) using Terraform and CI/CD automation through Azure DevOps.
  • Establish and advocate best practices for coding standards, modular development, and testing in Python and SQL.
  • Collaborate cross-functionally with product owners, data scientists, platform teams, and engineering leads.
  • Lead architecture reviews, POCs, and technology evaluations to guide technology adoption and innovation.
  • Mentor and guide senior and mid-level data engineers across solutioning, technical decisions, and design patterns.

Required Skills & Experience

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related discipline.
  • 12+ years of experience in data engineering and architecture, with at least 3 years in a lead/architect role.
  • Ability to drive work to completion despite challenges such as unavailable stakeholders or incomplete information.
  • Expert-level knowledge in Python, SQL, and scalable distributed systems.
  • Deep expertise in streaming architectures with Kafka, Flink, and Spark Streaming.
  • Strong experience designing and implementing batch data workflows using Apache Spark (PySpark).
  • In-depth knowledge of cloud data platforms, particularly Azure (preferred).
  • Extensive experience with data orchestration tools like Airflow and Azure Data Factory.
  • Proven ability to design and implement data lakehouse architectures with formats like Parquet, Delta Lake, and Iceberg.
  • Proficiency in containerization (Docker), orchestration (Kubernetes), and deployment via Helm.
  • Solid experience implementing observability, lineage, and data quality frameworks, such as Great Expectations.
  • Strong background in infrastructure as code (Terraform) and CI/CD integrations.
  • Deep understanding of data governance, metadata management, and data privacy standards.
  • Excellent communication skills with the ability to influence stakeholders and present architectural strategies to leadership.
  • Experience working in agile development environments and fostering team-wide engineering excellence.
  • Experience with large-scale API architecture and scalable data access layers is a plus.
  • Familiarity with AI/ML-integrated data systems or tools (e.g., text-to-SQL) is a bonus.

We have an amazing team of 700+ individuals working on highly innovative enterprise projects & products. Our customer base includes Fortune 100 retail and CPG companies, leading store chains, fast-growth fintech, and multiple Silicon Valley startups.

What makes Confiz stand out is our focus on processes and culture. Confiz is ISO 9001:2015, 27001:2013 & 20000-1:2018 certified. We have a vibrant culture of learning via collaboration and making the workplace fun.

People who work with us work with cutting-edge technologies while contributing success to the company as well as to themselves.

To know more about Confiz Limited, visit:

  • Seniority level: Mid-Senior level
  • Employment type: Full-time
  • Job function: Other
  • Industries: Utilities



Data Architect

Karachi, Sindh Confiz

Posted 15 days ago


Job Description

We are looking for an experienced Data Architect with 12+ years of progressive experience in data engineering, architecture design, and technical leadership. You will lead the architectural strategy and execution for our real-time and batch data platforms, ensuring scalability, reliability, and performance across a modern cloud-native ecosystem. This role is suited for a visionary technologist who can drive enterprise-grade solutions, establish best practices, and mentor high-performing engineering teams.

Key Responsibilities

  • Define and evolve the data architecture roadmap, ensuring alignment with business goals and scalability of data platforms.
  • Architect real-time streaming and batch data pipelines using Kafka, Apache Flink, and Spark Structured Streaming.
  • Lead the design of lakehouse architectures leveraging technologies such as Apache Spark (PySpark), Delta Lake, Iceberg, and Parquet.
  • Establish governance, security, and compliance frameworks across data platforms and pipelines.
  • Set standards for data orchestration using tools like Apache Airflow and Azure Data Factory.
  • Guide the containerization strategy with Docker, Kubernetes, and Helm for scalable deployment of data applications.
  • Architect and optimize analytics engines such as Trino or Presto for high-performance query execution.
  • Define and implement observability frameworks to monitor SLAs, lineage, data quality, and pipeline health.
  • Collaborate with DevOps on infrastructure-as-code (IaC) using Terraform and CI/CD automation through Azure DevOps.
  • Establish and advocate best practices for coding standards, modular development, and testing in Python and SQL.
  • Collaborate cross-functionally with product owners, data scientists, platform teams, and engineering leads.
  • Lead architecture reviews, POCs, and technology evaluations to guide technology adoption and innovation.
  • Mentor and guide senior and mid-level data engineers across solutioning, technical decisions, and design patterns.

Required Skills & Experience

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related discipline.
  • 12+ years of experience in data engineering and architecture, with at least 3 years in a lead/architect role.
  • Ability to drive work to completion despite challenges such as unavailable stakeholders or incomplete information.
  • Expert-level knowledge in Python, SQL, and scalable distributed systems.
  • Deep expertise in streaming architectures with Kafka, Flink, and Spark Streaming.
  • Strong experience designing and implementing batch data workflows using Apache Spark (PySpark).
  • In-depth knowledge of cloud data platforms, particularly Azure (preferred).
  • Extensive experience with data orchestration tools like Airflow and Azure Data Factory.
  • Proven ability to design and implement data lakehouse architectures with formats like Parquet, Delta Lake, and Iceberg.
  • Proficiency in containerization (Docker), orchestration (Kubernetes), and deployment via Helm.
  • Solid experience implementing observability, lineage, and data quality frameworks, such as Great Expectations.
  • Strong background in infrastructure as code (Terraform) and CI/CD integrations.
  • Deep understanding of data governance, metadata management, and data privacy standards.
  • Excellent communication skills with the ability to influence stakeholders and present architectural strategies to leadership.
  • Experience working in agile development environments and fostering team-wide engineering excellence.
  • Experience with large-scale API architecture and scalable data access layers is a plus.
  • Familiarity with AI/ML-integrated data systems or tools (e.g., text-to-SQL) is a bonus.

We have an amazing team of 700+ individuals working on highly innovative enterprise projects & products. Our customer base includes Fortune 100 retail and CPG companies, leading store chains, fast-growth fintech, and multiple Silicon Valley startups.

What makes Confiz stand out is our focus on processes and culture. Confiz is ISO 9001:2015, 27001:2013 & 20000-1:2018 certified. We have a vibrant culture of learning via collaboration and making the workplace fun.

People who work with us work with cutting-edge technologies while contributing success to the company as well as to themselves.

To know more about Confiz Limited, visit:

  • Seniority level: Mid-Senior level
  • Employment type: Full-time
  • Job function: Other
  • Industries: Utilities


Data Architect

Karachi, Sindh Confiz Limited

Posted 18 days ago


Job Description

We are looking for an experienced Data Architect with 12+ years of progressive experience in data engineering, architecture design, and technical leadership. You will lead the architectural strategy and execution for our real-time and batch data platforms, ensuring scalability, reliability, and performance across a modern cloud-native ecosystem. This role is suited for a visionary technologist who can drive enterprise-grade solutions, establish best practices, and mentor high-performing engineering teams.

Key Responsibilities:

  • Define and evolve the data architecture roadmap, ensuring alignment with business goals and scalability of data platforms.
  • Architect real-time streaming and batch data pipelines using Kafka, Apache Flink, and Spark Structured Streaming.
  • Lead the design of lakehouse architectures leveraging technologies such as Apache Spark (PySpark), Delta Lake, Iceberg, and Parquet.
  • Establish governance, security, and compliance frameworks across data platforms and pipelines.
  • Set standards for data orchestration using tools like Apache Airflow and Azure Data Factory.
  • Guide the containerization strategy with Docker, Kubernetes, and Helm for scalable deployment of data applications.
  • Architect and optimize analytics engines such as Trino or Presto for high-performance query execution.
  • Define and implement observability frameworks to monitor SLAs, lineage, data quality, and pipeline health.
  • Collaborate with DevOps on infrastructure-as-code (IaC) using Terraform and CI/CD automation through Azure DevOps.
  • Establish and advocate best practices for coding standards, modular development, and testing in Python and SQL.
  • Collaborate cross-functionally with product owners, data scientists, platform teams, and engineering leads.
  • Lead architecture reviews, POCs, and technology evaluations to guide technology adoption and innovation.
  • Mentor and guide senior and mid-level data engineers across solutioning, technical decisions, and design patterns.

Required Skills & Experience:

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related discipline.
  • 12+ years of experience in data engineering and architecture, with at least 3 years in a lead/architect role.
  • Ability to drive work to completion despite challenges such as unavailable stakeholders or incomplete information.
  • Expert-level knowledge in Python, SQL, and scalable distributed systems.
  • Deep expertise in streaming architectures with Kafka, Flink, and Spark Streaming.
  • Strong experience designing and implementing batch data workflows using Apache Spark (PySpark).
  • In-depth knowledge of cloud data platforms, particularly Azure (preferred).
  • Extensive experience with data orchestration tools like Airflow and Azure Data Factory.
  • Proven ability to design and implement data lakehouse architectures with formats like Parquet, Delta Lake, and Iceberg.
  • Proficiency in containerization (Docker), orchestration (Kubernetes), and deployment via Helm.
  • Solid experience implementing observability, lineage, and data quality frameworks, such as Great Expectations.
  • Strong background in infrastructure as code (Terraform) and CI/CD integrations.
  • Deep understanding of data governance, metadata management, and data privacy standards.
  • Excellent communication skills with the ability to influence stakeholders and present architectural strategies to leadership.
  • Experience working in agile development environments and fostering team-wide engineering excellence.
  • Experience with large-scale API architecture and scalable data access layers is a plus.
  • Familiarity with AI/ML-integrated data systems or tools (e.g., text-to-SQL) is a bonus.

We have an amazing team of 700+ individuals working on highly innovative enterprise projects & products. Our customer base includes Fortune 100 retail and CPG companies, leading store chains, fast-growth fintech, and multiple Silicon Valley startups.

What makes Confiz stand out is our focus on processes and culture. Confiz is ISO 9001:2015, 27001:2013 & 20000-1:2018 certified. We have a vibrant culture of learning via collaboration and making the workplace fun.

People who work with us work with cutting-edge technologies while contributing success to the company as well as to themselves.

To know more about Confiz Limited, visit: