57 Data Pipelines jobs in Pakistan

Data Architecture

Hyderabad, Punjab Opella

Posted today

Job Description

  • Job title: Data Architect
  • Location: Hyderabad

Opella is the self-care challenger with the purest and third-largest portfolio in the Over-The-Counter (OTC) & Vitamins, Minerals & Supplements (VMS) market globally.

Our mission is to bring health in people's hands by making self-care as simple as it should be. For half a billion consumers worldwide – and counting.

At the core of this mission is our 100 loved brands, our 11,000-strong global team, our 13 best-in-class manufacturing sites and 4 specialized science and innovation development centers. Headquartered in France, Opella is the proud maker of many of the world's most loved brands, including Allegra, Buscopan, Doliprane, Dulcolax, Enterogermina, Essentiale and Mucosolvan.

B Corp certified in multiple markets, we are active players in the journey towards healthier people and planet. Find out more about our mission at .

About the job:

We are seeking a strategic and detail-oriented Retail Data Architect / Data Modeler to drive the design and governance of scalable, high-performing data models that power retail insights across our global business. This role is critical to enabling data-driven decision-making across domains such as sales, pricing, inventory, POS, eCommerce, promotions, and supply chain.

You will collaborate cross-functionally with data engineers, analysts, and global business stakeholders to ensure our data assets are well-structured, governed, and aligned with corporate objectives. You will also be a key contributor to Opella's enterprise data governance strategy, maintaining strong documentation, lineage, and stewardship standards.

Main responsibilities:

  • Lead the design and standardization of enterprise data models across core retail and CPG domains, including pricing, sales, inventory, promotions, loyalty, POS, and eCommerce, ensuring models are scalable, governed, and aligned with global analytics strategies.

  • Define and evolve dimensional, star, and Data Vault schema architectures to support self-service analytics, Retail/CPG KPI tracking, and cross-functional reporting across business units and markets.
  • Serve as the data modeling authority for retail and CPG analytics, promoting semantic consistency, harmonized definitions, and reusability across Opella's global functions, affiliates, and syndicated data partners.
  • Collaborate with global commercial, supply chain, and finance teams to translate complex business KPIs and performance metrics into formalized data structures, canonical views, and shared metadata (an illustrative sketch of this pattern follows this list).
  • Establish and maintain logical and physical data models, reference architectures, and enterprise-wide semantic layers to support consumption by BI tools, data products, and AI/ML applications.
  • Contribute to and enforce data governance best practices, including data dictionary maintenance, lineage tracking, model documentation, and coordination with data stewards across regions.
  • Oversee model lifecycle management, version control, and change impact analysis for high-priority subject areas within the retail and CPG data landscape.
  • Partner with data platform and engineering teams to ensure architectural patterns are implemented with performance, scale, and maintainability in mind — without directly owning pipeline development.
  • Act as a strategic advisor on data architecture decisions, guiding alignment across Opella's evolving ecosystem of data products, global platforms, and business initiatives.
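
As a purely illustrative aside (every table and column name below is invented for this example and is not drawn from Opella's actual model), the sketch shows the basic shape of a retail star schema and how a KPI such as net sales by brand and channel reduces to a fact-to-dimension join plus an aggregation:

```python
# Toy star schema: one fact table with surrogate keys into two dimensions.
# All names here are illustrative placeholders.
import pandas as pd

dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "brand": ["BrandA", "BrandB"],
    "category": ["Allergy", "Digestive"],
})
dim_store = pd.DataFrame({
    "store_key": [10, 20],
    "market": ["DE", "FR"],
    "channel": ["Pharmacy", "eCommerce"],
})
fact_sales = pd.DataFrame({
    "product_key": [1, 1, 2],
    "store_key": [10, 20, 10],
    "units": [120, 80, 200],
    "net_sales": [600.0, 400.0, 900.0],
})

# A KPI such as "net sales by brand and channel" becomes a join of the fact
# table to its dimensions followed by a group-by aggregation.
kpi = (
    fact_sales
    .merge(dim_product, on="product_key")
    .merge(dim_store, on="store_key")
    .groupby(["brand", "channel"], as_index=False)[["units", "net_sales"]]
    .sum()
)
print(kpi)
```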

About you:

  • 6-10 years of experience in data architecture, data modeling, or analytics engineering, ideally in the retail or CPG sector.
  • Expertise in building ELT pipelines and semantic layers using dbt and Snowflake.
  • Strong foundation in dimensional modeling, data warehousing, and schema design for large-scale analytics.
  • Proven experience in implementing and enforcing enterprise data governance frameworks.
  • Strong SQL and experience with retail datasets such as sales, pricing, POS, inventory, and loyalty.
  • Ability to work with global, cross-functional teams in a matrixed environment.
  • Exposure to data mesh, data product thinking, or domain-oriented data architectures.
  • Understanding of retailer-syndicated data (IRI, Nielsen, Circana, etc.).

Preferred Education

  • Bachelor's degree in Computer Science, Information Technology, Data Engineering, or a related field.
  • A Master's degree or a cloud certification (e.g., SnowPro, AWS Data Analytics) is a plus.

Why us?

At Opella, you will enjoy doing challenging, purposeful work, empowered to develop consumer brands with passion and creativity. This is your chance to grow new skills and be part of a bold, collaborative, and inclusive culture where people can thrive and be at their best every day.

We Are Challengers.

We are dedicated to making self-care as simple as it should be. That starts with our culture. We are challengers by nature, and this is how we do things:

All In Together: We keep each other honest and have each other's backs.

Courageous: We break boundaries and take thoughtful risks with creativity.

Outcome-Obsessed: We are personally accountable, driving sustainable impact and results with integrity.

Radically Simple: We strive to make things simple for us and simple for consumers, as it should be.

Join us on our mission. Health. In your hands.

Cloud & Data Architecture Engineer

Rapid Labs

Posted today

Job Description

Role Purpose:

As a Cloud & Data Architecture Engineer, you will design, optimize, and maintain GAIA's Azure-hosted architecture. Your goal will be to ensure scalability, reliability, and seamless integration of geospatial and machine learning workflows across the organization.

Key Responsibilities:

  • Conduct a full audit of GAIA's current data pipelines to identify inefficiencies and performance bottlenecks.
  • Architect and implement scalable, cloud-native solutions for satellite image ingestion, storage, and processing (a minimal ingestion sketch follows this list).
  • Ensure high availability, cost efficiency, and performance optimization of cloud systems.
  • Research and integrate emerging big data and geospatial architecture technologies into GAIA's workflows.
  • Work closely with DevOps and engineering teams to implement CI/CD pipelines and infrastructure-as-code practices.
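
As a rough sketch of the ingestion step only (the container name, environment variable, and file paths below are assumptions for illustration, not GAIA's actual setup), satellite scenes could be landed in Azure Blob Storage / ADLS Gen2 like this:

```python
# Minimal, hypothetical ingestion sketch using the azure-storage-blob SDK.
import os
from azure.storage.blob import BlobServiceClient

def upload_scene(local_path: str, scene_id: str) -> str:
    # Connection string is read from the environment; never hard-code credentials.
    service = BlobServiceClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"]
    )
    # "satellite-raw" is a placeholder container name for raw imagery.
    container = service.get_container_client("satellite-raw")
    blob_name = f"scenes/{scene_id}/{os.path.basename(local_path)}"
    with open(local_path, "rb") as data:
        container.upload_blob(name=blob_name, data=data, overwrite=True)
    return blob_name

if __name__ == "__main__":
    print(upload_scene("example_scene.tif", "scene-0001"))
```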

Required Skills & Experience:

  • Expertise with Azure (preferred), with additional experience in AWS or GCP.
  • Strong understanding of distributed computing frameworks and scalable data pipelines.
  • Experience in building architectures for geospatial systems and ML/AI integration.
  • Solid knowledge of DevOps practices, including CI/CD, containerization (Docker, Kubernetes), and monitoring tools.
  • Ability to evaluate and implement cutting-edge cloud and data architecture strategies.

Computer Vision & Machine Learning Engineer

Role Purpose:

As a CV/ML Engineer, you will develop and deploy state-of-the-art machine learning models for detecting whales in satellite imagery. Leveraging transfer learning and custom computer vision architectures, you will transform prototypes into production-ready systems that advance GAIA's environmental monitoring mission.

Key Responsibilities:

  • Evaluate and benchmark pretrained models for object detection on satellite imagery datasets.
  • Apply transfer learning, semi-supervised learning, and advanced data augmentation to improve accuracy on limited datasets (a minimal transfer-learning sketch follows this list).
  • Design and train custom computer vision architectures in PyTorch or TensorFlow.
  • Generate and integrate synthetic datasets to enhance model robustness.
  • Build end-to-end workflows from model development to deployment and monitoring in production.
  • Collaborate with geospatial and atmospheric correction teams to ensure seamless integration of ML outputs.
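
As a minimal sketch of the transfer-learning step (assuming torchvision >= 0.13 is available; the class count, image size, and function name are placeholders, and this is not GAIA's actual model), a COCO-pretrained detector can be re-headed for a single "whale" class like this:

```python
# Hypothetical sketch: fine-tune a pretrained torchvision detector so its
# box-prediction head outputs one foreground class ("whale") plus background.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_whale_detector(num_classes: int = 2):  # 1 class ("whale") + background
    # Start from COCO-pretrained weights and replace only the prediction head.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_whale_detector()
    model.eval()
    # One random 512x512 RGB chip stands in for a satellite image tile.
    dummy_tile = [torch.rand(3, 512, 512)]
    with torch.no_grad():
        predictions = model(dummy_tile)
    print(predictions[0]["boxes"].shape, predictions[0]["scores"].shape)
```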

Required Skills & Experience:

  • Strong expertise in computer vision, deep learning, and object detection techniques.
  • Hands-on experience with PyTorch and/or TensorFlow for model training and deployment.
  • Familiarity with satellite imagery analysis and geospatial data formats.
  • Experience in synthetic data generation and augmentation strategies.
  • Ability to transition ML models from research prototypes into production-ready systems.

Compensation & Benefits

This is an independent contractor role. We offer:

  • Competitive, market-based compensation aligned with expertise and experience.
  • Remote and flexible working arrangements.
  • Opportunities for long-term collaboration on advanced geospatial and AI-driven projects.
  • Exposure to cutting-edge cloud and big data technologies in Azure, AWS, and GCP environments.
  • A chance to shape the future of GAIA's cloud and data architecture strategy.

Job Type: Contract

Work Location: Remote

Lead Data Architect - ARCHITECTURE SERVICES

Sindh, Sindh JP Morgan Chase

Posted today

Job Description

Overview

Your goal is to become a key player among other imaginative thinkers who share a common commitment to continuous improvement and meaningful impact. Don’t miss this chance to collaborate with brilliant minds and deliver premier solutions that set a new standard.

As a Lead Data Architect at JPMorganChase within the Infrastructure Platforms team, you are an integral part of a team that works to develop high-quality data architecture solutions for various software applications on modern cloud-based technologies. As a core technical contributor, you are responsible for carrying out critical data architecture solutions across multiple technical areas within various business functions in support of project goals.

Responsibilities
  • Engage technical teams and business stakeholders to discuss and propose data architecture approaches that meet current and future needs
  • Define the data architecture target state of your product and drive achievement of the strategy
  • Implement processes and develop tools to enhance and automate data access, extraction, and analysis efficiency
  • Develop insights, methods, or tools using analytic techniques such as causal-model approaches, predictive modeling, regression, machine learning, and time series analysis
  • Participate in data architecture governance bodies
  • Evaluate recommendations and provide feedback on new technologies
  • Execute creative data architecture solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions and break down technical problems
  • Develop secure, high-quality production code, and review and debug code written by others
  • Identify opportunities to eliminate or automate remediation of recurring issues to improve the overall operational stability of software applications and systems
  • Facilitate evaluation sessions with external vendors, startups, and internal teams to drive outcomes through probing of data architectural designs, technical credentials, and applicability for use within existing systems and information architecture
  • Lead data architecture communities of practice to drive awareness and use of modern data architecture technologies
Required qualifications, capabilities, and skills
  • Formal training or certification on software engineering concepts and 5+ years applied experience
  • Hands-on practical experience delivering system design, application development, testing, and operational stability
  • Advanced knowledge of architecture and one or more programming languages
  • Proficiency in automation and continuous delivery methods
  • Proficiency in all aspects of the Software Development Life Cycle
  • Demonstrated proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)
  • In-depth knowledge of the financial services industry and its IT systems
  • Practical cloud native experience
  • Advanced knowledge of one or more software, application, and architecture disciplines
  • Ability to evaluate current and emerging technologies to recommend the best data architecture solutions for the future state architecture
Preferred qualifications, capabilities, and skills
  • Ability to initiate and implement ideas to solve business problems
  • Passion for learning new technologies and driving innovative solutions.

Azure Data Integration Engineer (DP600/700)

ITC Worldwide

Posted 3 days ago

Job Description

Overview

The role involves building and managing data pipelines, troubleshooting issues, and ensuring data accuracy across platforms such as Azure Synapse Analytics, Azure Data Lake Gen2, and SQL environments. This position requires extensive SQL experience and a strong background in PySpark development.

Responsibilities
  • Data Engineering: Work with Azure Synapse Pipelines and PySpark for data transformation and pipeline management.
  • Perform data integration and schema updates in Delta Lake environments, ensuring smooth data flow and accurate reporting (a minimal Delta upsert sketch follows this list).
  • Collaborate with our Azure DevOps team on CI/CD processes for deployment of Infrastructure as Code (IaC) and Workspace artifacts.
  • Develop custom solutions for our customers defined by our Data Architect and assist in improving our data solution patterns over time.
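
As a minimal sketch of this Delta Lake integration pattern (the storage path, table, and column names below are placeholders, not a customer's actual schema; it assumes a Synapse or Databricks-style Spark session with the Delta libraries available), an upsert with optional schema evolution might look like this in PySpark:

```python
# Hypothetical Delta Lake upsert sketch in PySpark.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # in Synapse/Databricks notebooks, `spark` already exists

# Example incoming batch; columns are placeholders.
incoming = spark.createDataFrame(
    [(1, "2024-01-01", 120.0), (2, "2024-01-01", 75.5)],
    ["order_id", "order_date", "amount"],
)

target_path = "abfss://curated@<storage-account>.dfs.core.windows.net/sales_orders"

if DeltaTable.isDeltaTable(spark, target_path):
    target = DeltaTable.forPath(spark, target_path)
    # Upsert: update rows with matching order_ids, insert new ones.
    (target.alias("t")
        .merge(incoming.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    # First load: create the Delta table at the target path.
    incoming.write.format("delta").mode("overwrite").save(target_path)

# Later schema changes (e.g. a new column in the source) can be appended with
# schema evolution enabled on that write:
# new_df.write.format("delta").mode("append").option("mergeSchema", "true").save(target_path)
```
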
Documentation
  • Document ticket resolutions, testing protocols, and data validation processes.
  • Collaborate with other stakeholders to provide specifications and quotations for enhancements requested by customers.
Ticket Management
  • Monitor the Jira ticket queue and respond to tickets as they are raised.
  • Investigate ticket issues, using SQL, Synapse Analytics, and other tools to troubleshoot them.
  • Communicate effectively with customer users who raised the tickets and collaborate with other teams (e.g., FinOps, Databricks) as needed to resolve issues.
Troubleshooting And Support
  • Handle issues related to ETL pipeline failures, Delta Lake processing, or data inconsistencies in Synapse Analytics.
  • Provide prompt resolution to data pipeline and validation issues, ensuring data integrity and performance.
Qualifications

Seeking a candidate with 5+ years of experience in the Dynamics 365 ecosystem and a strong PySpark development background. While various profiles may apply, we highly value a strong person-organization fit.

  • Extensive experience with SQL, including query writing and troubleshooting in Azure SQL, Synapse Analytics, and Delta Lake environments.
  • Strong understanding and experience in implementing and supporting ETL processes, Data Lakes, and data engineering solutions.
  • Proficiency in using Azure Synapse Analytics, including workspace management, pipeline creation, and data flow management.
  • Hands-on experience with PySpark for data processing and automation.
  • Ability to use VPNs, MFA, RDP, jump boxes/jump hosts, etc., to operate within customers' secure environments.
  • Some experience with Azure DevOps CI/CD, IaC, and release pipelines.
  • Ability to communicate effectively both verbally and in writing, with strong problem-solving and analytical skills.
  • Understanding of the operation and underlying data structure of D365 Finance and Operations, Business Central, and Customer Engagement.
  • Experience with Data Engineering in Microsoft Fabric.
  • Experience with Delta Lake and Azure data engineering concepts (ADLS, ADF, Synapse, AAD, Databricks).
  • Microsoft Fabric certifications: DP-600 / DP-700.
Why Join Us?
  • Opportunity to work with innovative technologies in a dynamic environment, with a progressive, global work culture where your ideas truly matter and growth opportunities are endless.
  • Work with the latest Microsoft Technologies alongside Dynamics professionals committed to driving customer success.
  • Enjoy the flexibility to work from anywhere.
  • Work-life balance that suits your lifestyle.
  • Competitive salary and comprehensive benefits package.
  • Career growth and professional development opportunities.
  • A collaborative and inclusive work culture.
Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • IT Services and IT Consulting

Azure Data Integration Engineer (DP600/700)

Punjab, Punjab ITC Worldwide

Posted 9 days ago


Azure Data Integration Engineer (DP600/700)

Punjab, Punjab ITC Worldwide

Posted 10 days ago


Azure Data Integration Engineer (DP600/700)

Punjab, Punjab ITC Worldwide

Posted 15 days ago


Azure Data Integration Engineer (DP600/700)

Punjab, Punjab ITC Worldwide

Posted 15 days ago


Azure Data Integration Engineer (DP600/700)

Sindh, Sindh ITC Worldwide

Posted 15 days ago


Azure Data Integration Engineer (DP600/700)

Sahiwal, Punjab ITC Worldwide

Posted 15 days ago
