1,074 Kafka jobs in Pakistan
Data Engineer
Posted today
Job Description
Job Responsibilities:
1. Design and development of data warehouses.
2. Design, development, and maintenance of Azure data pipelines using Azure Data Factory.
3. Design and development of Spark jobs using Databricks.
4. Team management and coordination with onsite teams to ensure the smooth running of the business.
5. Knowledge of or experience with one or more BI tools (Power BI, Business Objects, Tableau, MicroStrategy, Qlik, etc.).
6. Maintenance of the presentation layer and access layer.
Qualifications Needed:
1. BS in Software Engineering with 2-5 years of experience working on BI and Data Management projects.
2. Expertise in DB and DWH design, support, and maintenance.
3. Experience writing complex SQL as well as Spark SQL.
4. Proven experience with Python and related frameworks.
5. Proven experience with Databricks using PySpark.
6. Expertise with Power BI.
7. Expertise with Azure Data Platform is a plus.
8. Generating detailed documentation.
9. Excellent communication skills.
10. Ability and willingness to learn.
11. Very strong analytical skills.
12. Experience in data analysis is a plus.
13. Ability to independently manage critical situations.
14. Should be a team player.
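For candidates gauging the "complex SQL as well as Spark SQL" bar above, here is a hedged sketch of the kind of windowed query typically involved, run against SQLite purely so it is self-contained; the `ROW_NUMBER()` pattern carries over to Spark SQL unchanged. The table and column names are invented for illustration.

```python
import sqlite3

# Hypothetical "orders" table: keep only the latest order per customer.
# SQLite >= 3.25 (bundled with modern Python) supports window functions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, order_id INTEGER,
                         amount REAL, ordered_at TEXT);
    INSERT INTO orders VALUES
        (1, 10, 50.0, '2024-01-01'),
        (1, 11, 75.0, '2024-02-01'),
        (2, 12, 20.0, '2024-01-15');
""")

latest = conn.execute("""
    SELECT customer_id, order_id, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY ordered_at DESC
               ) AS rn
        FROM orders
    )
    WHERE rn = 1
    ORDER BY customer_id
""").fetchall()

print(latest)  # [(1, 11, 75.0), (2, 12, 20.0)]
```

The same dedup-by-window pattern is a common interview staple for both warehouse SQL and `spark.sql(...)` jobs.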
Data Engineer
Posted 2 days ago
Job Description
Join to apply for the Data Engineer role at TenX.
JOB SUMMARY:
Data Engineers are responsible for designing and creating the data warehouse and all related extraction, transformation, and load of data functions. They should be able to analyze and understand business data needs and design effective ETL/BI processes accordingly.
DESCRIPTION:
Data Warehouse /ETL
- Develop data models and data marts from various source systems.
- Develop and maintain ETL scripts to load data into staging areas, data warehouses, and data marts.
- Identify opportunities to utilize objects and code to reduce development effort and ensure consistent business rules.
- Identify opportunities for analysis and reporting solutions.
- Evaluate and select data modeling, BI, and data warehouse tools.
- Translate business requirements into technical data models and data marts.
- Understand key business drivers.
- Contribute to the development of the company's BI initiatives.
- Provide input on the overall BI strategy and communicate concepts to users.
Reporting & Drafting
- Document all development activities according to company standards.
- Support team members in a collaborative environment.
- Help establish standards and processes for BI data modeling.
- Communicate task status and issues effectively to project leads.
Requirements:
- Bachelor’s degree in computer science, BIT, BBIT, or related field; master’s preferred.
- 1-2 years of experience in data engineering or related fields with exposure to Big Data methodologies.
- Strong knowledge of RDBMS and SQL.
- Familiarity with ETL tools like Talend, Informatica, SSIS, or IBM Infosphere DataStage.
- Experience with Big Data technologies such as Hadoop, Cloudera, and Talend for Big Data is preferred.
- Associate
- Full-time
- Consulting, Engineering, and Information Technology
- IT Services and Consulting, Data Infrastructure and Analytics, IT System Data Services
Data Engineer
Posted 3 days ago
Job Description
- Develop data sets, tools, and processes used to maintain a common data standard across the Bank.
- Translate business requirements into technical requirements.
- Analyse processes and technologies, contributing to the integration of new solutions.
- Define and document functional and non-functional user requirements and specifications.
- Develop source mapping matrix and other key artifacts and documents.
- Build physical data model from logical data model into data warehouse and data lake.
- Build data pipelines, ETL/ELT, data control & validation processes using ETL frameworks and data integration tools.
- Write and optimize complex SQL, Python, Shell script code for data pipelines that leverage relational and non-relational data.
- Perform unit testing, system integration testing and user acceptance testing.
- Deploy machine learning models developed by data scientists into production environment.
- Provide training and technical support to operations team and end users.
- Perform performance optimization and maintenance on data warehouse and data lake.
- Create and maintain technical documentation that is required in supporting solutions.
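The "data control & validation processes" responsibility above can be sketched in plain Python as a gate between extraction and load, routing failures to a reject set for reconciliation. The rules and field names below are invented for illustration, not taken from any actual bank schema.

```python
# Minimal sketch of a data-control step: each rule is a predicate per column;
# rows failing any rule are diverted to a reject list with the failed columns.
RULES = {
    "account_id": lambda v: isinstance(v, int) and v > 0,
    "balance": lambda v: isinstance(v, (int, float)),
}

def validate(rows):
    good, rejects = [], []
    for row in rows:
        failed = [col for col, check in RULES.items()
                  if col not in row or not check(row[col])]
        (rejects if failed else good).append((row, failed))
    return [r for r, _ in good], rejects

rows = [
    {"account_id": 1, "balance": 100.0},
    {"account_id": -5, "balance": 20.0},   # fails the account_id rule
    {"balance": "n/a"},                    # missing id, non-numeric balance
]
loaded, rejected = validate(rows)
print(len(loaded), len(rejected))  # 1 2
```

In a production pipeline the reject list would typically land in a quarantine table with the failure reasons for the operations team to review.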
Data Engineer
Posted 5 days ago
Job Description
Data Engineer with 2+ years of experience in AI, Python, SQL, Spark, Azure, and Databricks. Skilled in scalable data infrastructure and pipeline design. English required.
- Bachelor’s degree in computer science, Engineering, or a related field.
- Minimum of 2 years of experience in data engineering, specifically in large-scale AI projects and production applications.
- Strong proficiency in Python, SQL, Spark, APIs, Databricks, and Azure, along with cloud, data, and solution architecture.
- Deep understanding of designing and building scalable and robust data infrastructure.
- Experience with at least one public cloud provider.
- Strong understanding of system architecture and data pipeline construction.
- Familiarity with machine learning models and data processing requirements.
- Team player with analytical and problem-solving skills.
- Good communication skills in English.
- Optional:
- Expertise in distributed systems like Kafka and orchestration systems like Kubernetes.
- Basic knowledge in Data Lake/Data Warehousing/Big Data tools, Apache Spark, RDBMS, NoSQL, Knowledge Graph.
- Design, build, and manage data pipelines, integrating multiple data sources to support company goals.
- Develop and maintain large-scale data processing systems and machine learning pipelines, ensuring data availability and quality.
- Implement systems for data quality and consistency, collaborating with Data Science and IT teams.
- Ensure compliance with security guidelines and SDLC processes.
- Collaborate with Data Science team to provide necessary data infrastructure.
- Lead and manage large-scale AI projects, working with cloud platforms for deployment.
- Maintain and optimize databases, design schemas, and improve data flow across the organization.
“To be ILI” means traveling far to reinvigorate innovation. It symbolizes vocation, commitment, and passion. We are a dedicated taskforce focused on sensing and initiating innovation.
Like a lion in the wild, constantly hunting and alert, we stay focused and agile, unlike a lion in captivity which loses its hunting instinct due to guaranteed food supply.
We offer a steep learning curve, development opportunities, and a flexible, positive work environment.
Modern office with design furniture, rooftop terrace with city view.
International team: supportive, open-minded teammates. Enjoy after-work events, barbecues, excursions, and team gatherings.
Fresh fruits, professional coffee, smoothies, on-site gym, and yoga room for relaxation.
Data Engineer
Posted 5 days ago
Job Description
At Dot Labs, we are seeking a talented Associate Data Engineer to join our growing team.
Forward your resume to
Location: DHA Phase 2
Timing: 11:00 AM -8:00 PM (Onsite)
As an Associate Data Engineer, you will be responsible for designing and implementing data pipelines, maintaining our data infrastructure, and optimizing data storage and retrieval. You will work closely with other data engineers, data scientists, and stakeholders to ensure our data is reliable, accurate, and accessible.
Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines using Airflow and Python
- Implement and maintain data storage solutions on AWS, such as S3 and Redshift
- Develop and maintain data infrastructure, including ETL processes, data warehousing, and data lakes
- Collaborate with data scientists to ensure data is clean, accurate, and readily available for analysis
- Optimize data retrieval and storage processes to ensure high performance and reliability
- Monitor and troubleshoot data pipelines and infrastructure issues
- Work with stakeholders to understand data requirements and implement solutions that meet their needs
- Stay up-to-date with the latest technologies and trends in data engineering
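As a rough illustration of the Airflow-and-Python pipeline work described above, here is the task-level shape a DAG might schedule, reduced to plain Python so it is self-contained. The S3 key layout, function names, and sample data are all assumptions; a real implementation would use Airflow operators, boto3, and a Redshift load step.

```python
from datetime import date

# Sketch of extract -> transform -> load, with the load writing to a
# date-partitioned key (an S3-style prefix, represented here as a dict).
def extract():
    return [{"user": "a", "clicks": 3}, {"user": "b", "clicks": 5}]

def transform(records):
    return [r for r in records if r["clicks"] > 0]

def load(store, records, run_date):
    key = f"s3://bucket/events/dt={run_date.isoformat()}/part-0000.json"
    store[key] = records  # idempotent: a rerun overwrites the same key
    return key

store = {}
key = load(store, transform(extract()), date(2024, 1, 1))
print(key)  # s3://bucket/events/dt=2024-01-01/part-0000.json
```

Partitioning output by run date and overwriting the same key on rerun is what makes backfills safe to repeat, which is the usual reason each Airflow task is kept idempotent.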
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or related field
- 1+ year of experience in data engineering, with a focus on designing and implementing data pipelines and infrastructure
- Strong skills in Airflow and Python, with experience in data manipulation, data integration, and data analysis
- Experience with AWS services such as S3, Redshift, EMR, and Lambda
- Experience with data warehousing and data modeling concepts
- Strong problem-solving skills and attention to detail
- Excellent communication and collaboration skills
If you meet these requirements and are passionate about data engineering, we would love to hear from you. Please apply with your resume and cover letter, and let us know why you would be a great fit for our team.
#dotlabs #data #dataanalysis #design #experience #python #dataengineering #datascientists #aws #lahorejobs
- Seniority level Entry level
- Employment type Full-time
- Job function Information Technology
- Industries IT Services and IT Consulting
Data Engineer
Posted 7 days ago
Job Description
About Burq
Burq started with an ambitious mission: to turn the complex process of offering delivery into a simple turnkey solution.
We started with building the largest network of delivery networks, partnering with some of the biggest delivery companies. We then made it extremely easy for businesses to plug into our network and start offering delivery to their customers. Now, we’re powering deliveries for some of the fastest-growing companies from retailers to startups.
It’s a big mission and now we want you to join us to make it even bigger!
We’re already backed by some of the Valley's leading venture capitalists, including Village Global, the fund whose investors include Bill Gates, Jeff Bezos, Mark Zuckerberg, Reid Hoffman, and Sara Blakely. We have assembled a world-class team all over the U.S.
We operate at scale, but we're still a small team relative to the opportunity. We have a staggering amount of work ahead. That means you have an unprecedented opportunity to grow while doing the most important work of your career.
We want people who are unafraid to be wrong and support decisions with numbers and narrative.
Responsibilities
- Design, build, and maintain efficient and scalable data pipelines using tools like Airbyte, Airflow, and dbt, with a focus on integrating with Snowflake.
- Manage and optimize data warehousing solutions using Snowflake, ensuring data is organized, secure, and accessible for analysis.
- Develop and implement automations and workflows to streamline data processing and integration, ensuring seamless data flow across systems.
- Collaborate with cross-functional teams to set up and maintain data infrastructure that supports both engineering and analytical needs.
- Utilize Databricks and Spark for big data processing, ensuring data is processed, stored, and analyzed efficiently.
- Monitor data streams and processes using Kafka and Monte Carlo, ensuring data quality and integrity.
- Work closely with data analysts and other stakeholders to create, maintain, and optimize visualizations and reports using Tableau, ensuring data-driven decision-making.
- Ensure the security and compliance of data systems, implementing best practices and leveraging tools like Terraform for infrastructure management.
- Continuously evaluate and improve data processes, staying current with industry best practices and emerging technologies, with a strong emphasis on data analytics and visualization.
Requirements
- Proficiency in SQL and experience with data visualization tools such as Tableau.
- Hands-on experience with data engineering tools and platforms including Snowflake, Airbyte, Airflow, and Terraform.
- Strong programming skills in Python and experience with data transformation tools like dbt.
- Familiarity with big data processing frameworks such as Databricks and Apache Spark.
- Knowledge of data streaming platforms like Kafka.
- Experience with data observability and quality tools like Monte Carlo.
- Solid understanding of data warehousing, data pipelines, and database management, with specific experience in Snowflake.
- Ability to design and implement automation, workflows, and data infrastructure.
- Strong analytical skills with the ability to translate complex data into actionable insights, particularly using Tableau.
- Excellent problem-solving abilities and attention to detail.
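One way to picture the dbt-and-Snowflake work listed above: an incremental dbt model ultimately compiles down to a merge/upsert against the warehouse table. The sketch below expresses that pattern with SQLite's `ON CONFLICT` clause purely for illustration; table and column names are invented, and Snowflake would use a `MERGE` statement instead.

```python
import sqlite3

# Upsert pattern: existing keys are updated in place, new keys are inserted.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_user (user_id INTEGER PRIMARY KEY, plan TEXT)")
conn.execute("INSERT INTO dim_user VALUES (1, 'free'), (2, 'free')")

incoming = [(2, "pro"), (3, "free")]  # one changed row, one brand-new row
conn.executemany("""
    INSERT INTO dim_user (user_id, plan) VALUES (?, ?)
    ON CONFLICT(user_id) DO UPDATE SET plan = excluded.plan
""", incoming)

rows = conn.execute("SELECT * FROM dim_user ORDER BY user_id").fetchall()
print(rows)  # [(1, 'free'), (2, 'pro'), (3, 'free')]
```

Incremental materialization matters at warehouse scale because it lets each run touch only the changed slice instead of rebuilding the whole table.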
Investing in you
- Competitive salary
- Medical
- Educational courses
- Generous time off
At Burq, we value diversity. We are an equal opportunity employer: we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Data Engineer
Posted 7 days ago
Job Description
About Burq
Burq started with an ambitious mission: to simplify the complex process of offering delivery through a turnkey solution.
We built the largest network of delivery partners, making it easy for businesses to plug into our network and start offering delivery services. Currently, we power deliveries for some of the fastest-growing companies, from retailers to startups.
We are backed by leading venture capitalists, including Village Global, with investors like Bill Gates, Jeff Bezos, Mark Zuckerberg, Reid Hoffman, and Sara Blakely. Our team is composed of world-class professionals across the U.S.
While we operate at scale, we are still a small team with a significant growth opportunity. This presents an excellent chance for you to develop your career while contributing to impactful work.
We seek individuals unafraid of being wrong, who support decisions with data and narrative. Here’s an overview of your responsibilities:
Some of the responsibilities
- Design, build, and maintain scalable data pipelines using tools like Airbyte, Airflow, and dbt, with a focus on Snowflake integration.
- Manage and optimize data warehousing solutions with Snowflake, ensuring data security, organization, and accessibility.
- Develop automations and workflows to streamline data processing and integration for seamless data flow.
- Collaborate with cross-functional teams to establish and maintain data infrastructure supporting engineering and analytics.
- Utilize Databricks and Spark for big data processing, ensuring efficient data storage, processing, and analysis.
- Monitor data streams and processes with Kafka and Monte Carlo to ensure data quality and integrity.
- Work with data analysts and stakeholders to create, maintain, and optimize visualizations and reports using Tableau for data-driven decisions.
- Ensure data system security and compliance, applying best practices and tools like Terraform for infrastructure management.
- Continuously evaluate and improve data processes, staying updated with industry best practices and emerging technologies, especially in data analytics and visualization.
Data Engineer
Posted 7 days ago
Job Description
About us:
VisionX works with world-leading brands and Fortune 1000 companies as their innovation partner, providing product strategy and custom application development leveraging agile methodologies, technology accelerators, and the creation of intellectual property.
VisionX has been listed in the Top 10 Most Innovative Companies of 2020 by Fast Company – ranked alongside the likes of Microsoft & Snap Inc.
We develop cutting-edge software products integrating computer vision, 3D modeling, AR, VR, decision sciences, and IoT addressing a wide variety of use cases across different industries.
Your role
We are seeking talented and experienced data engineers for our expanding Data & Analytics team to aid in the development of high-quality Data Lakehouse products and solutions for our company. As a Data Engineer, you will leverage your solid data engineering, ETL/ELT, and data integration skills, along with strong analytical abilities, to deliver valuable business benefits. You will play a pivotal role within the Data Engineering team, interfacing closely with the Data Science, AI, and Reporting teams.
Responsibilities
- Design, build, and maintain scalable data pipelines and systems.
- Collaborate with cross-functional teams to understand data requirements and develop appropriate solutions.
- Implement and automate ETL processes to extract, transform, and load data from various sources, building Data Lakehouse solutions.
- Develop real-time data pipelines and applications using serverless and managed AWS services such as Kafka, Lambda, Kinesis, API Gateway, REST APIs, etc.
- Develop scalable ETL pipelines for structured, semi-structured, and unstructured data, including both batch and streaming processes.
- Standardize and cleanse data for AI, Data Science, and Analytics use cases to support business needs.
- Create and optimize databases and data models for efficiency and performance.
- Ensure data quality and integrity by performing data validations and implementing quality checks.
- Develop and maintain documentation related to data infrastructure and processes.
- Troubleshoot and resolve data-related issues.
- Stay up to date with the latest trends and technologies in data engineering and data management.
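The "standardize and cleanse data" responsibility above can be sketched with stdlib Python: normalize field names, coerce types, and default missing keys before records land in a lakehouse table. The schema and field aliases below are invented for illustration only.

```python
import json

# Hedged sketch: raw events arrive as semi-structured JSON with inconsistent
# casing and string-typed numbers; standardize() emits a fixed, typed schema.
def standardize(raw):
    rec = json.loads(raw)
    return {
        "id": int(rec.get("id", 0)),
        "email": str(rec.get("Email") or rec.get("email") or "").lower(),
        "score": float(rec.get("score") or 0.0),
    }

raw_events = [
    '{"id": "7", "Email": "A@X.COM", "score": "3.5"}',
    '{"id": 8, "email": "b@x.com"}',
]
clean = [standardize(r) for r in raw_events]
print(clean[0])  # {'id': 7, 'email': 'a@x.com', 'score': 3.5}
```

At scale the same per-record function would typically run inside a PySpark UDF or a Glue job rather than a plain list comprehension.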
What you need
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Possess a deep passion for data and a demonstrated ability to create high-quality data products.
- 4+ years of experience working with customer-centric data on a big data scale, preferably in a banking, online, or e-commerce context.
- 4+ years of hands-on experience building production data pipelines and working with structured, semi-structured, and unstructured data.
- 3+ years of experience in engineering data solutions within both cloud and hybrid infrastructures, with strong expertise in AWS Cloud.
- 2+ years of experience with one or more programming languages, especially Python.
- 2+ years of experience with data integration/ETL technologies such as Python, Spark/PySpark, Kafka/Kinesis/EMR, Lambda, API Gateway, REST API, AWS Glue, etc.
- Proficient in database technologies like AWS S3, MongoDB, DynamoDB, AWS Redshift, Glacier, and other SQL and NoSQL databases.
- Experienced with CI/CD pipelines and tools such as Jenkins, GitLab CI, or Azure DevOps.
- Experience with modern real-time data pipelines (e.g., serverless framework, Lambda, Kinesis, etc.).
- Experience designing ETL/ELT process flows for Data Lakehouse implementation.
- Experience in AWS data ecosystem and various other cloud services.
- Experienced in different APIs and API integration, along with microservice architecture.
- Experience integrating customer contact center data with data Lakehouse is a plus.
- Strong analytical, logical, problem-solving and numerical skills.
- A self-starter and creative thinker who collaborates well and communicates confidently.
- Effective in communicating with both technical and non-technical audiences.
- Proven ability to take initiative and proactively learn new technologies.
- Demonstrates a solid work ethic and provides timely, high-quality support.
- Embraces a mindset of Continuous Improvement.
Why choose us
Our global network of industry experts and mentors helps shape your growth and future. We believe in delivering client value through our work. We build products that are not merely good or great, but outstanding.
You deliver! We will make your stay and journey with us worthwhile.
We are an equal opportunity employer, and we value diversity. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or any other legally protected status.
Data Engineer
Posted 7 days ago
Job Description
Role Summary
As a Big Data Engineer, you will be responsible for data-related implementation tasks that include provisioning data storage services, ingesting streaming and batch data, transforming data, implementing security requirements, implementing data retention policies, identifying performance bottlenecks, and accessing external data. You will develop and implement data lake solutions for company-wide applications and manage large sets of structured, semi-structured, and unstructured data. You will work in a modern cloud-based data lakehouse environment alongside a team of diverse, intense, and interesting co-workers.
Job Responsibilities
- Perform analysis of large data stores and derive insights using Big Data querying tools
- Design and develop code, scripts, and data pipelines that leverage structured, semi-structured, and unstructured data
- Manage data ingestion pipelines and stream processing
- Perform computations using Azure Functions
- Document, design, and develop Hadoop applications
- Perform day-to-day tasks accurately and with minimal direction while meeting deadlines
Qualification & Skillset
- Bachelor's or master's degree in Computer Science
- At least 2-5 years of relevant working experience with cloud computing
- Hands-on experience on Azure/AWS
- Experience working with Apache Spark, Apache Hive, Apache HBase
- Hands-on experience on:
- Azure/AWS Architectures
- SQL, Scala, Node.js, JS, Java
- Python with Pandas, NumPy, TensorFlow
- ETL
Preferred Requirement
- Implementing Lambda Architecture with stream processing and batch processing
- Implement security standards and practices in the Architecture of Cloud Solution
- Training/Certification on Data Lake HDFS
- Training/Certification on Azure Data Lake Gen 2
- Training/Certification on Data Bricks
- Experience with Azure Data Explorer and Kusto query language would be a plus.
- Experience with Azure Data Factory would be a plus.
Data Engineer
Posted 7 days ago
Job Description
Join AlGooru as our next Data Engineer!
Who Are We
AlGooru is the leading private tutoring platform in Saudi. We're the first licensed by the National E-Learning Center (NELC), and we're renowned for providing tech-enabled tailored educational support to students from all ages and levels (K-12, university, and professionals).
We've been backed by various local, regional, and international investors, including Constructor Capital, Plug & Play Ventures, Techstars, KAUST, Hub71, family offices, and others.
A Fun Fact about AlGooru is that its name derives from "Guru," a Sanskrit word meaning a mentor, guide, expert, or master.
Your Role
At AlGooru, we're not just building a platform; we're building a future where personalized learning is accessible to everyone. As our Data Engineer , you'll be at the heart of the Engineering tribe, driving impact by:
- Build and deploy 3 new automated data pipelines across Product, Learning, and Business domains within the first 90 days
- Improve data delivery efficiency by 40% by optimizing queries, storage layers, and warehouse logic within 6 months
- Power 5+ real-time business dashboards by Q4 2025 through scalable, structured data availability
- Design, build, and maintain robust and scalable data pipelines using modern data stack tools
- Partner with Product, Operations, and Learning teams to gather and define analytics and reporting needs
- Ensure data pipeline reliability, performance, and integrity across systems
- Optimize SQL queries and ETL workflows for speed, scalability, and accuracy
- Own and enforce data quality, validation checks, and documentation standards
- Structure and deliver clean data models to support reporting tools and dashboards
- Continuously propose and implement improvements in data architecture and analytics engineering workflows
You're the Data Engineer we're looking for if you have:
- 2+ years of experience in data engineering, analytics engineering, or similar roles
- Advanced proficiency in SQL, DBT, and data modeling best practices
- Hands-on experience with ETL tools and data warehouses like BigQuery, Snowflake, or Redshift
- Experience with pipeline orchestration tools (e.g., Airflow, Prefect, Dagster)
- Familiarity with Python or scripting languages
- Strong understanding of data governance, observability, and version control
- Comfortable working in a fast-paced, agile, remote team environment
- Results-Driven
- Proactive & Takes Initiative
- Balances Speed & Quality
- End-User Obsessed
- Strong Communicator
- Super Organized
- Independent & Team Player
- Eager to Learn
Our hiring process:
- Screening & Intro Call (5-10 mins)
- Chemistry meeting (15-30 minutes)
- Technical interview (30-60 minutes)
- Technical Assessment
- Final interview: Career and Company Alignment (30-60 minutes)
- Offer extended to successful applicants
Why AlGooru?
At AlGooru, you'll have an exceptional opportunity to push the boundaries of education. You'll be challenged but never alone, joining a diverse team of innovators committed to redefining learning.
Here's what we offer:
Impact: Help transform education in Saudi Arabia and beyond.
Growth: Join a rapidly scaling startup with career development opportunities.
Flexibility: Work from anywhere + unlimited PTO.
Rewards: Competitive salary, ESOP shares & quarterly bonuses.
Culture: Vibrant team, monthly Pizza Fridays & a supportive environment.
Perks: Free hardware/software, learning budget, AlGooru Library & more!
We foster a culture of care, one that promotes loyalty, commitment, and a true sense of belonging. Be valued. Be part of it.
- Seniority level Mid-Senior level
- Employment type Full-time
- Industries IT Services and IT Consulting