122 Big Data Technologies jobs in Pakistan
Data Engineering Lead
Posted today
Job Description
Core Skills:
- Data engineer with a solid technical background and banking experience in data-intensive systems (>7 years)
- Strong experience in Big Data, the Hadoop ecosystem, Spark Streaming, Kafka, Python, SQL, Hive, NiFi, Airflow
- Proficient with Azure Cloud services such as Azure Data Factory (ADF), Databricks, ADLS, Azure Synapse, Logic Apps, and Azure Functions, or equivalent data-stack knowledge of Google Cloud or AWS services
- Proficiency in relational SQL, Graph and NoSQL databases.
- Proficiency in Elasticsearch and Couchbase databases
- In-depth skills in developing and maintaining ETL/ELT data pipelines.
- Experience in data modelling techniques such as Kimball star schema, 3NF, vault modelling etc.
- Experience in workflow management tools such as Airflow, Oozie and CI/CD tools
- Experience building data streaming solutions on Kafka or Confluent Kafka (see the streaming sketch at the end of this posting)
- Hands-on experience with Google BigQuery, Google Analytics, and clickstream data models
- Reporting knowledge in Power BI, Tableau, Qlik, etc.
- Sound grounding in data management fundamentals, data architecture, modelling, and governance
- Strong domain knowledge of banking/finance, including the industry's processes, products, and services, its core functions, regulations, and the operational aspects of banking institutions
- Hands-on knowledge of the Hadoop and Azure/AWS cloud ecosystems and of migrating ETL jobs
- Knowledge of advanced analytics and AI tools
Minimum BS Degree in CS, IT or Engineering
Banking Domain Knowledge will be a Plus.
Competitive Salary, Medical OPD & IPD, Life Insurance, EOBI
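For illustration only, a minimal sketch of the kind of Kafka-to-data-lake streaming pipeline this role describes, written with PySpark Structured Streaming. The broker address, topic, schema, and paths are hypothetical placeholders, and the spark-sql-kafka connector package is assumed to be available on the cluster.

```python
# Minimal Spark Structured Streaming sketch: read events from Kafka and
# append them as parquet files. Broker, topic, schema, and paths below are
# hypothetical; the spark-sql-kafka connector must be on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("txn-stream").getOrCreate()

schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
       .option("subscribe", "transactions")                # hypothetical topic
       .load())

# Kafka delivers the payload as bytes; cast to string and parse the JSON body.
events = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/warehouse/transactions")         # hypothetical sink path
         .option("checkpointLocation", "/checkpoints/transactions")
         .outputMode("append")
         .start())
query.awaitTermination()
```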
Lead Data Engineering
Posted today
Job Description
5 - 7 Years
3 Openings
Bengaluru, Chennai, Hyderabad, Kochi, Trivandrum
Role description: AI/ML Engineer - Required Skills
Hands-on experience with agentic A2A (agent-to-agent) frameworks and the MCP protocol.
Expertise in AI/ML engineering, specifically vector embeddings, prompt engineering, and context engineering (see the retrieval sketch below).
Strong programming skills in at least two of the following: Python, Java, Go.
Proficiency in deploying solutions on Azure Cloud.
Experience with databases such as Azure AI Search, Redis, and Cosmos DB (Blob Storage and Iceberg are plus).
Proven ability to design and manage Azure Functions and Azure Container Apps.
Strong understanding of cloud-native architecture, scalability, and performance optimization.
PySpark, AWS Cloud, Machine Learning
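As a rough illustration of the vector-embedding work mentioned above, here is a minimal retrieval sketch that ranks documents by cosine similarity to a query vector. The embed() function is a stand-in for whatever embedding model or service would actually be used (for example, one hosted on Azure); every name below is hypothetical.

```python
# Minimal vector-retrieval sketch: rank documents by cosine similarity to a
# query embedding. embed() is a placeholder for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: deterministic pseudo-random vector seeded by the text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = ["refund policy", "card activation steps", "loan eligibility rules"]
doc_vecs = {d: embed(d) for d in docs}

query_vec = embed("how do I activate my card")
ranked = sorted(docs, key=lambda d: cosine(query_vec, doc_vecs[d]), reverse=True)
for doc in ranked:
    print(f"{cosine(query_vec, doc_vecs[doc]):+.3f}  {doc}")
```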
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
Big Data - Engineering Manager
Posted 10 days ago
Job Description
We are looking for an experienced and driven Big Data Platform Owner (Engineering Manager) to lead our Big Data engineering team. This role is perfect for a technical leader with a strong background in Azure Big Data technologies who can drive strategy, foster team growth, and ensure the successful delivery of high-quality data solutions. You will be responsible for the end-to-end management of our Big Data platform on Azure, encompassing strategic planning, team leadership, project execution, and stakeholder communication.
Key Responsibilities
- Define and execute the Big Data platform's vision and roadmap, aligning with business goals and leveraging Azure Big Data services. Lead capacity planning and contribute to PI planning for effective resource allocation.
- Lead, mentor, and develop a high-performing Big Data engineering team. Facilitate all Agile ceremonies (Scrum, Planning, Retrospective, Review) and ensure efficient task assignment and blocker resolution. Oversee onboarding and offboarding processes.
- Ensure the reliable operation and continuous improvement of our Azure Big Data pipelines and platform. Drive project execution from backlog grooming to release management, including daily pipeline monitoring and P2 issue resolution.
- Act as a key liaison for business and technical stakeholders, providing clear communication on project progress and performance and addressing queries effectively.
- Champion data quality, governance, and security best practices within the Azure Big Data environment. Ensure adherence to code review guidelines and contribute to robust release processes.
Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 8+ years of overall experience, combining hands-on Big Data engineering with a leadership or management role.
- Proven hands-on experience and architectural expertise with Azure Big Data services (e.g., Data Lake Storage, Data Factory, Databricks, Synapse Analytics); see the illustrative batch sketch after this list.
- Strong understanding of Agile methodologies and DevOps practices.
- Excellent communication, leadership, and problem-solving skills.
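For context, a minimal sketch of the kind of daily Databricks batch step such a platform runs: read a raw drop from ADLS and append it to a Delta table. The paths and table name are hypothetical, and it assumes a Databricks runtime where spark is predefined and the Delta format is available.

```python
# Minimal daily batch sketch (Databricks): ingest a raw parquet drop from ADLS
# and append it to a Delta table. Paths and table below are hypothetical;
# `spark` is assumed to be the session provided by the Databricks runtime.
from pyspark.sql.functions import current_timestamp

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/2024-01-01/"  # hypothetical
target_table = "analytics.sales_daily"                                        # hypothetical

df = spark.read.parquet(raw_path).withColumn("_ingested_at", current_timestamp())
df.write.format("delta").mode("append").saveAsTable(target_table)
```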
We have an amazing team of 700+ individuals working on highly innovative enterprise projects and products. Our customer base includes Fortune 5 retail and CPG companies, leading store chains, fast-growth fintechs, and multiple Silicon Valley startups.
What makes Confiz stand out is our focus on processes and culture. Confiz is ISO 9001:2015 (QMS), ISO 27001:2022 (ISMS), ISO 20000-1:2018 (ITSM), and ISO 14001:2015 (EMS) certified. We have a vibrant culture of learning through collaboration and making the workplace fun.
People who work with us use cutting-edge technologies while contributing to the company's success as well as their own.
To know more about Confiz Limited, visit:
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Other
Industries: Utilities
Managing Consultant Data Engineering
Posted today
Job Description
Job Summary:
We are looking to hire an experienced data engineer with a strong SQL and SSIS background and familiarity with the Azure environment.
Job Requirements:
- Expertise in SQL, SQL Server, and SSIS.
- Strong Azure data engineering experience: ADF, Data Fabric, Databricks, PySpark (see the extraction sketch after this list).
- Experience with Power BI is a plus.
- Ability to understand complex SQL Server-based data warehouse environments.
- Strong analytical and problem-solving skills.
- Experience with a finance data warehouse is a big plus.
- Excellent communication skills.
- Ability to work independently and be proactive.
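As an illustration of the SQL Server plus Databricks/PySpark mix described above, a minimal sketch that pulls an incremental slice from a warehouse table over JDBC and lands it in a staging area. The connection details, table, watermark, and paths are hypothetical, and a SQL Server JDBC driver is assumed to be installed on the cluster.

```python
# Minimal extraction sketch: incremental pull from a SQL Server warehouse table
# via JDBC into Spark, then land it as parquet for downstream processing.
# URL, credentials, table, watermark, and staging path are all hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dw-extract").getOrCreate()

incremental = (spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://dwhost:1433;databaseName=FinanceDW")   # hypothetical
    .option("dbtable", "(SELECT * FROM dbo.FactGL WHERE LoadDate >= '2024-01-01') src")
    .option("user", "etl_user")                                             # hypothetical
    .option("password", "***")
    .load())

incremental.write.mode("overwrite").parquet("/staging/fact_gl")             # hypothetical
```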
Managing Consultant Data Engineering
Posted today
Job Description
Job Role: Data Ops Support Engineer
Job Responsibilities:
- Implement and manage continuous integration and deployment pipelines for various applications and services.
- Proactively monitor data pipelines, system performance and troubleshoot any issues to maintain high availability and reliability of the infrastructure.
- Collaborate with development and business teams to design and implement scalable and resilient solutions.
- Automate routine tasks and processes to streamline operations and improve efficiency.
- Conduct in-depth reviews of code and debug issues in data pipelines across multiple applications in production environments, with the ability to perform detailed code analysis and troubleshoot complex issues effectively.
- Implement security best practices and ensure compliance with relevant regulations.
- Participate in on-call rotation and respond to incidents promptly to minimize downtime.
- Document processes and procedures to maintain an up-to-date knowledge base.
- Stay updated with emerging technologies and industry trends to drive innovation and continuous improvement.
Job Requirements:
- Minimum of one year of development experience with big data tools, plus at least two years' experience in a DevOps or managed-services operations role.
- Knowledge and understanding of big data tools and AWS cloud services such as MWAA, S3, Athena, and Lambda is a must (see the Athena sketch after this list).
- Experience with Apache NiFi is a must.
- Proficiency in scripting languages like Python, Bash, or PowerShell is preferable.
- Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes.
- Familiarity with monitoring and logging tools such as YARN, CloudWatch, etc.
- Experience with version control systems like Git/Bitbucket.
- Excellent communication and collaboration skills.
- Ability to work independently and prioritize tasks effectively in a fast-paced environment.
- Flexibility to operate across various time zones and schedules, including weekend shifts, is required.
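For illustration, a minimal sketch of the kind of ad-hoc data check a Data Ops engineer might run against the AWS stack above: start an Athena query with boto3 and poll for the result. The region, database, table, and S3 output location are hypothetical placeholders.

```python
# Minimal Athena check with boto3: launch a query, poll its state, print rows.
# Region, database, table, and output bucket are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM events WHERE dt = '2024-01-01'",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
qid = resp["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print(row)
else:
    print(f"Query ended in state {state}")
```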
Lead Software Engineer Data Engineering AWS Databricks
Posted 23 days ago
Job Description
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.
As a Lead Software Engineer at JPMorgan Chase within Corporate Technology, you play a vital role in an agile team dedicated to enhancing, building, and delivering reliable, market-leading technology products in a secure, stable, and scalable manner. As a key technical contributor, you are tasked with implementing essential technology solutions across diverse technical domains, supporting various business functions to achieve the firm's strategic goals.
Job responsibilities
- Develop appropriate level designs and ensure consensus from peers where necessary.
- Collaborate with software engineers and cross-functional teams to design and implement deployment strategies using AWS Cloud and Databricks pipelines.
- Work with software engineers and teams to design, develop, test, and implement solutions within applications.
- Engage with technical experts, key stakeholders, and team members to resolve complex problems effectively.
- Understand leadership objectives and proactively address issues before they impact customers.
- Design, develop, and maintain robust data pipelines to ingest, process, and store large volumes of data from various sources.
- Implement ETL (Extract, Transform, Load) processes to ensure data quality and integrity using tools like Apache Spark and PySpark (a minimal sketch follows this list).
- Monitor and optimize the performance of data systems and pipelines.
- Implement best practices for data storage, retrieval, and processing
- Maintain comprehensive documentation of data systems, processes, and workflows.
- Ensure compliance with data governance and security policies
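To make the data-quality point concrete, a minimal PySpark sketch of an ETL step with a simple quality gate: drop records missing a key, deduplicate, and fail the job if too many rows are rejected. The paths and the 5% tolerance are hypothetical.

```python
# Minimal ETL sketch with a data-quality gate in PySpark. Landing/curated
# paths and the rejection threshold are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-quality-gate").getOrCreate()

raw = spark.read.json("/landing/orders/")
clean = raw.dropna(subset=["order_id"]).dropDuplicates(["order_id"])

raw_count, clean_count = raw.count(), clean.count()
rejected_ratio = (raw_count - clean_count) / max(raw_count, 1)
if rejected_ratio > 0.05:  # hypothetical tolerance
    raise ValueError(f"Data quality gate failed: {rejected_ratio:.1%} of rows rejected")

clean.write.mode("overwrite").parquet("/curated/orders/")
```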
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years applied experience
- Formal training or certification in AWS/Databricks with 10+ years of applied experience.
- Expertise in programming languages such as Python and PySpark.
- 10+ years of professional experience in designing and implementing data pipelines in a cloud environment.
- Proficient in design, architecture, and development using AWS Services, Databricks, Spark, Snowflake, etc.
- Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform.
- Familiarity with container and container orchestration technologies such as ECS, Kubernetes, and Docker.
- Ability to troubleshoot common Big Data and Cloud technologies and issues.
- Practical cloud native experience
Preferred qualifications, capabilities, and skills
- 5+ years of experience in leading and developing data solutions in the AWS cloud.
- 10+ years of experience in building, implementing, and managing data pipelines using Databricks on Spark or similar cloud technologies
Data Science
Posted today
Job Description
We're looking for a Data Science & Analytics Consultant to work for a US-based client on a 3-month project engagement. This is a remote position, aligned with Pakistan working hours, with twice-weekly meetings on US Pacific time.
About the Role
This is a high-impact role where you will set up the foundation of Data Science & Analytics for the client. You'll design the architecture, establish best practices, and deliver predictive models that will guide the client's decision-making for years to come.
What You'll Do
- Lay down the data science & analytics framework as the client's first dedicated initiative in this space.
- Build and scale cloud-based data pipelines and machine learning models for demand/supply and financial reporting (a baseline forecasting sketch appears at the end of this posting).
- Define data architecture, standards, and governance practices to ensure scalability and reliability.
- Move beyond traditional Excel-based analysis to deliver superior forecasting accuracy.
- Design interactive dashboards (Tableau, Power BI) to empower stakeholders with real-time insights.
- Partner with client leadership to shape a long-term data strategy, ensuring business and technical alignment.
- Translate complex data concepts into clear, actionable strategies for non-technical stakeholders.
What We're Looking For
- Master's or PhD in Data Science, Computer Science, Statistics, or a related field.
- 10+ years of proven experience in data science & predictive modeling for commercial applications.
- Industry experience in supply chain, logistics, or commodity trading (highly desirable).
- Strong expertise in Python/R, SQL, and cloud platforms (Azure & GCP).
- Proficiency with visualization tools (Tableau).
- Excellent communication skills, especially for client-facing discussions.
Contract Details
- Duration: 3 months
- Remote role (Pakistan working hours)
- Twice-weekly meetings with client on US Pacific time
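As a baseline illustration of the forecasting work described in this posting, a minimal scikit-learn sketch that fits a gradient-boosted regressor on lagged demand features and scores it on a held-out tail. The synthetic weekly series stands in for the client's actual demand history.

```python
# Minimal demand-forecasting baseline: lag features + gradient boosting,
# evaluated on the last 12 weeks. The synthetic series is a placeholder.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
weeks = 156
demand = 100 + 10 * np.sin(np.arange(weeks) * 2 * np.pi / 52) + rng.normal(0, 5, weeks)
df = pd.DataFrame({"demand": demand})
for lag in (1, 2, 52):
    df[f"lag_{lag}"] = df["demand"].shift(lag)
df = df.dropna()

train, test = df.iloc[:-12], df.iloc[-12:]
features = [c for c in df.columns if c.startswith("lag_")]

model = GradientBoostingRegressor().fit(train[features], train["demand"])
pred = model.predict(test[features])
print(f"MAE over last 12 weeks: {mean_absolute_error(test['demand'], pred):.2f}")
```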
Data Science
Posted today
Job Description
Devsinc is seeking motivated Data Scientists with 1-3 years of experience, particularly in Artificial Intelligence (AI) or Machine Learning (ML). This role is ideal for individuals who have built a strong foundation in ML methodologies and are eager to apply their skills to real-world business challenges. You will work closely with cross-functional teams to design, deploy, and optimize ML-driven solutions that support data-driven decision-making and innovation.
Key Responsibilities:
Model Development: Design, develop, and deploy ML models for business use cases, including data preprocessing, feature engineering, model training, evaluation, and deployment (a minimal cross-validation sketch follows the Requirements section below).
Data Exploration: Conduct exploratory data analysis (EDA) to uncover patterns, correlations, and insights that inform model refinement and business strategies.
Collaboration: Work with data scientists, engineers, and business stakeholders to translate business needs into ML-driven solutions.
Visualization & Reporting: Build clear, compelling visualizations and reports to communicate ML outcomes and insights to both technical and non-technical audiences.
Continuous Learning: Stay updated with the latest advancements in ML algorithms, tools, and best practices, and incorporate them into projects where applicable.
Prototyping: Develop and test prototypes for predictive and analytical models in real-world scenarios.
Communication: Maintain clear, structured communication to articulate data needs, methodologies, and outcomes effectively.
Cross-Functional Impact: Identify opportunities to reuse datasets, code, or models across multiple business areas.
Requirements
Education: Bachelor's degree in Data Science, Computer Science, Engineering, Mathematics, Statistics, or a related field (with significant ML coursework/projects).
Experience: 1-3 years of hands-on experience in ML or data science, supported by a portfolio of relevant projects (model development, feature engineering, data analysis).
Technical Skills:
- Proficiency in Python and ML frameworks/libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Experience with SQL for data manipulation.
- Familiarity with data visualization tools/libraries (e.g., Matplotlib, Seaborn, ggplot2).
Analytical Skills: Strong ability to analyze large and complex datasets and derive meaningful insights.
Communication: Ability to explain technical concepts and ML results to non-technical stakeholders.
Teamwork: Demonstrated ability to work collaboratively, adapt to feedback, and contribute effectively in a team environment.
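For a concrete picture of the model-development loop above, a minimal scikit-learn sketch that preprocesses, trains, and cross-validates a classifier on a bundled dataset, which stands in for a real business dataset.

```python
# Minimal model-development sketch: scale features, fit a logistic regression,
# and cross-validate on a bundled dataset (placeholder for real business data).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"5-fold ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```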
Data Science Intern
Posted today
Job Description
About Devfum
Devfum leverages a specialized blend of web technologies, artificial intelligence (AI), and extended reality (XR) solutions to automate business processes, enhance customer experiences, and drive sustainable growth for businesses worldwide.
Our Vision
To be the world's pioneering digital experience provider.
Why Join Devfum?
At Devfum, we foster an environment of growth, learning, and work-life balance. Our interns are not just learners—they are contributors to real-world innovation.
Learning & Development – Hands-on exposure to AI, ML, Generative AI, Agentic AI, and Computer Vision.
Mentorship & Guidance – Work alongside experienced professionals to sharpen your skills.
Career Growth Roadmap – High-performing interns have the opportunity for full-time employment.
Daily Lunch & Tea – Complimentary meals and refreshments.
Monthly Sports Activities – Engage in team-building activities and stay active.
Key Responsibilities
- Research, develop, and implement AI/ML models with a focus on Generative AI, Agentic AI, Large Language Models (LLMs), NLP, and Computer Vision.
- Design, train, and fine-tune AI models using Python and frameworks such as PyTorch and Hugging Face.
- Collaborate with engineers to integrate models into web applications using FastAPI, Flask, and Django (see the serving sketch after this list).
- Build and maintain CI/CD pipelines, including writing Bash scripts and working with Docker for model deployment.
- Analyze complex datasets to derive actionable insights and optimize AI models for performance and scalability.
- Document and present findings, and communicate progress to team members and stakeholders
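As an example of the FastAPI integration mentioned above, a minimal sketch that serves a Hugging Face pipeline behind an HTTP endpoint. The default sentiment model is only a stand-in for whatever model the project actually fine-tunes, and running it assumes fastapi, uvicorn, and transformers are installed.

```python
# Minimal model-serving sketch: a Hugging Face pipeline behind a FastAPI route.
# The default sentiment model is a placeholder for the project's own model.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline("sentiment-analysis")  # downloads a default model on first run

class TextIn(BaseModel):
    text: str

@app.post("/classify")
def classify(payload: TextIn):
    result = classifier(payload.text)[0]
    return {"label": result["label"], "score": float(result["score"])}

# Run locally (assuming uvicorn is installed):
#   uvicorn app:app --reload
```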
Required Qualifications
Education
- Bachelor's degree in Computer Science, Data Science, AI/ML, or a related field (Graduation is required).
Technical Skills
- Strong proficiency in Python and hands-on experience with PyTorch.
- Experience with Generative AI technologies: LLMs, Hugging Face NLP, and image generation tools.
- Knowledge of Agentic AI frameworks, Computer Vision techniques, and NLP applications.
- Familiarity with web frameworks: FastAPI, Flask, and Django.
- Proficiency in writing Bash scripts, developing CI/CD pipelines, and working with Docker.
Soft Skills
- Strong analytical and problem-solving abilities.
- Ability to work independently as well as collaboratively in cross-functional teams.
- Strong communication and presentation skills.
Preferred Qualifications
- Prior experience in projects involving Generative AI, Agentic AI, NLP, or Computer Vision.
- Exposure to deploying AI/ML models in production environments.
- Familiarity with cloud platforms such as AWS, GCP, or Azure.
- Candidates with self-implemented AI/ML projects or open-source contributions will have a clear advantage.
Work Schedule
- Timings: 2:00 PM – 11:00 PM
- Days: Monday to Friday
- Stipend: Rs. 25,000 per month
Join Us
If you are a recent graduate eager to gain real-world experience in Generative AI, Agentic AI, NLP, and Computer Vision, and excited about building the future of intelligent applications, we encourage you to apply and become part of the innovative team at Devfum.
Job Types: Full-time, Internship
Pay: Rs 25,000.00 - Rs 30,000.00 per month
Work Location: In person