106 Data Acquisition jobs in Pakistan
Software Engineer, Data Infrastructure & Acquisition - Asia
Posted 4 days ago
Job Description
Speechify is a text-to-speech app that makes it easy for the world to access information. 20+ million people use our Google Chrome extension, web app, iOS app, and Android app. Our mission is to make sure that reading is never a barrier to learning.
Our amazing users are students, professionals, and productivity lovers. Many of them have learning differences like dyslexia and ADHD, while many just want to read faster and listen on the go. With Speechify you can turn any book, document, or website into audio, and listen while you’re in the car, doing laundry, walking your dog, making dinner, working out, skydiving—whatever your daily routine is! Speechify also powers Medium, the Star Tribune, The Direct, and more. Easily add text-to-speech to your website.
Cliff Weitzman, our fearless CEO, founded Speechify in 2017 in a dorm room at Brown University so he could share with others the incredible text-to-speech software he’d been working on. Cliff has dyslexia, and he was frustrated with how much time and energy it took for him to read. Advanced TTS technology was a total game-changer: it allowed him to finish his readings 3x faster than a typical reader and to better comprehend and retain information.
At Speechify our goal is for reading to never be a barrier to learning for anyone. Nothing should hold you back from learning information quickly and effectively.
Speechify has grown to employ over 100 team members spread across the globe in just a few short years. We're proud of our incredible team, whose members were previously leaders and senior engineers at companies like Snapchat, Apple, Spotify, Amazon, and Uber. We all love and prioritise ownership, delivering value with speed, learning as much as we can, and making our users feel empowered.
The Role
Mission
The mission of Speechify is to make sure that reading is never a barrier to learning.
Over 50 million people use Speechify’s text-to-speech products to turn whatever they’re reading – PDFs, books, Google Docs, news articles, websites – into audio, so they can read faster, read more, and remember more. Speechify’s text-to-speech reading products include its iOS app, Android App, Mac App, Chrome Extension, and Web App. Google recently named Speechify the Chrome Extension of the Year and Apple named Speechify its 2025 Design Award winner for Inclusivity.
Today, nearly 200 people around the globe work on Speechify in a 100% distributed setting – Speechify has no office. These include frontend and backend engineers, AI research scientists, and others from Amazon, Microsoft, and Google, leading PhD programs like Stanford, high growth startups like Stripe, Vercel, Bolt, and many founders of their own companies.
Overview
The responsibilities of our Platform team include building and maintaining all backend services, including, but not limited to, payments, analytics, subscriptions, new products, text-to-speech, and external APIs.
This is a key role and ideal for someone who thinks strategically, enjoys fast-paced environments, is passionate about making product decisions, and has experience building great user experiences that delight users.
We are a flat organization that allows anyone to become a leader by showing excellent technical skills and delivering results consistently and fast. Work ethic, solid communication skills, and obsession with winning are paramount.
Our interview process involves several technical interviews and we aim to complete them within 1 week.
Overview
We're hiring for the Data side of our AI team at Speechify. This role is responsible for all aspects of data collection in support of our model-training operations. Through tight integration of infrastructure, engineering, and research work, we build high-quality datasets at petabyte scale and low cost. We are looking for a skilled Software Engineer to join us.
What You’ll Do
- Be scrappy in finding new sources of audio data and bringing them into our ingestion pipeline.
- Operate and extend the cloud infrastructure for our ingestion pipeline, currently running on GCP and managed with Terraform.
- Collaborate closely with our Scientists to shift the cost/throughput/quality frontier, delivering richer data at bigger scale and lower cost to power our next-generation models.
- Collaborate with others on the AI Team and Speechify Leadership to craft the AI Team’s dataset roadmap to power Speechify’s next-generation consumer and enterprise products.
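As a rough illustration of the validate-and-dedupe step an ingestion pipeline like the one described above needs, here is a stdlib-only sketch; `AudioRecord`, its field names, and the thresholds are all hypothetical, not Speechify's actual code:

```python
import hashlib
from dataclasses import dataclass


@dataclass
class AudioRecord:
    """One item flowing through the (hypothetical) ingestion pipeline."""
    source: str
    uri: str
    duration_s: float


def validate(record: AudioRecord, min_duration_s: float = 1.0) -> bool:
    """Reject clips that are too short or missing a source URI."""
    return bool(record.uri) and record.duration_s >= min_duration_s


def dedupe_key(record: AudioRecord) -> str:
    """Stable key so re-crawled items are ingested only once."""
    return hashlib.sha256(record.uri.encode("utf-8")).hexdigest()


def ingest(records, seen=None):
    """Filter invalid and duplicate records; return those worth storing."""
    seen = set() if seen is None else seen
    accepted = []
    for rec in records:
        key = dedupe_key(rec)
        if validate(rec) and key not in seen:
            seen.add(key)
            accepted.append(rec)
    return accepted
```

In a real pipeline the `seen` set would live in a persistent store rather than memory, but the shape of the filtering pass is the same.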
What We Offer
- A fast-growing environment where you can help shape the company and product.
- An entrepreneurial-minded team that supports risk, intuition, and hustle.
- A hands-off management approach so you can focus and do your best work.
- An opportunity to make a big impact in a transformative industry.
- Competitive salaries, a friendly and laid-back atmosphere, and a commitment to building a great asynchronous culture.
- Opportunity to work on a life-changing product that millions of people use.
- Build products that directly impact and support people with learning differences like dyslexia, ADD, low vision, concussions, autism, and more.
- Work in one of the fastest-growing sectors of tech, the intersection of artificial intelligence and audio.
Tell us more about yourself and why you're interested in the role when you apply.
And don’t forget to include links to your portfolio and LinkedIn.
Not looking but know someone who would make a great fit?
Refer them!
Speechify is committed to a diverse and inclusive workplace.
Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
Ideal Profile
An Ideal Candidate Should Have
- BS/MS/PhD in Computer Science or a related field.
- 5+ years of industry experience in software development.
- Proficiency with bash/Python scripting in Linux environments.
- Proficiency with Docker and Infrastructure-as-Code concepts, and professional experience with at least one major cloud provider (we use GCP).
- Experience with web crawlers and large-scale data processing workflows is a plus.
- Ability to handle multiple tasks and adapt to changing priorities.
- Strong communication skills, both written and verbal.
What's on Offer?
- Excellent career development opportunities
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Engineering and Information Technology
- Industries: Software Development
Data Analysis & Machine Learning Expert
Posted 13 days ago
Job Description
Bestow99 is a dynamic and forward-thinking company dedicated to empowering local talent through high-quality training programs while also offering professional services to both local and international companies. We are seeking a Data Analyst with 2 years of experience to join our E-Learning team in Gilgit.
Responsibilities:
- Analyze and interpret complex data sets to support decision-making.
- Collaborate with cross-functional teams to identify business opportunities.
- Prepare reports and visualizations to communicate findings to stakeholders.
- Ensure data accuracy and integrity by conducting regular audits.
Requirements:
- Proficiency in data analysis tools (e.g., Excel, SQL, Python).
- Strong analytical and problem-solving skills.
- Experience with data visualization software (e.g., Tableau, Power BI).
- Excellent communication and teamwork skills.
Data Engineer
Posted 3 days ago
Job Description
You will be participating in exciting projects covering the end-to-end data lifecycle – from raw data integrations with primary and third-party systems, through advanced data modeling, to state-of-the-art data visualization and development of innovative data products.
You will have the opportunity to learn how to build and work with both batch and real-time data processing pipelines. You will work in a modern cloud-based data warehousing environment alongside a team of diverse, intense, and interesting co-workers. You will liaise with other departments – such as product & tech, the core business verticals, trust & safety, finance, and others – to enable them to be successful.
Your responsibilities
- Design, implement and support data warehousing;
- Raw data integrations with primary and third-party systems
- Data warehouse modeling for operational & application data layers
- Development in Amazon Redshift cluster
- SQL development as part of agile team workflow
- ETL design and implementation in Matillion ETL
- Real-time data pipelines and applications using serverless and managed AWS services such as Lambda, Kinesis, API Gateway, etc.
- Design and implementation of data products enabling data-driven features or business solutions
- Building data dashboards and advanced visualizations in Sisense for Cloud Data Teams (formerly Periscope Data), with a focus on UX, simplicity, and usability
- Working with other departments on data products – i.e. product & technology, marketing & growth, finance, core business, advertising, and others
- Being part and contributing towards a strong team culture and ambition to be on the cutting edge of big data
- Evaluate and improve data quality by implementing test cases, alerts, and data quality safeguards
- Living the team values: Simpler. Better. Faster.
- Strong desire to learn
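The data-quality safeguards mentioned above (test cases, alerts) often start as simple batch checks before an ETL step commits results. A minimal, stdlib-only sketch; the `order_id` key and the 1% threshold are illustrative assumptions, not the team's actual rules:

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)


def check_batch(rows, key="order_id", max_null_rate=0.01):
    """Return a list of alert strings; an empty list means the batch passed."""
    alerts = []
    rate = null_rate(rows, key)
    if rate > max_null_rate:
        alerts.append(f"{key}: null rate {rate:.2%} exceeds {max_null_rate:.2%}")
    keys = [r.get(key) for r in rows if r.get(key) is not None]
    if len(keys) != len(set(keys)):
        alerts.append(f"{key}: duplicate values detected")
    return alerts
```

In practice checks like these run inside the ETL workflow and feed an alerting channel; the point is that each rule is small, explicit, and testable.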
Required minimum experience (must)
- 1-2 years of experience in data processing, analysis, and problem-solving with large amounts of data;
- Good SQL skills across a variety of relational data warehousing technologies, especially cloud data warehouses (e.g., Amazon Redshift, Google BigQuery, Snowflake, Vertica)
- 1+ years of experience with one or more programming languages, especially Python
- Ability to communicate insights and findings to a non-technical audience
- Written and verbal proficiency in English
- Entrepreneurial spirit and ability to think creatively; highly driven and self-motivated; strong curiosity and drive for continuous learning
- Top-of-class university degree in a technical field such as computer science, engineering, math, or physics.
Additional experience (strong plus)
- Experience working with customer-centric data at big data-scale, preferably in an online / e-commerce context
- Experience with modern big data ETL tools (e.g. Matillion)
- Experience with AWS data ecosystem (or other cloud providers)
- Track record in business intelligence solutions, building and scaling data warehouses, and data modeling
- Tagging, Tracking, and reporting with Google Analytics 360
- Knowledge of modern real-time data pipelines (e.g. serverless framework, lambda, kinesis, etc.)
- Experience with modern data visualization platforms such as Periscope, Looker, Tableau, Google Data Studio, etc.
- Linux, bash scripting, JavaScript, HTML, XML
- Docker containers and Kubernetes
Data Engineer
Posted 3 days ago
Job Description
Job Title: Data Engineer
Location: Karachi, Lahore, Islamabad (Hybrid)
Experience: 5+ Years
Job Type: Full-Time
Job Overview
We are looking for a highly skilled and experienced Data Engineer with a strong foundation in Big Data, distributed computing, and cloud-based data solutions . This role demands a strong understanding of end-to-end Data pipelines, data modeling, and advanced data engineering practices across diverse data sources and environments. You will play a pivotal role in building, deploying, and optimizing data infrastructure and pipelines in a scalable cloud-based architecture.
Key Responsibilities
- Design, develop, and maintain large-scale Data pipelines using modern big data technologies and cloud-native tools.
- Build scalable and efficient distributed data processing systems using Hadoop, Spark, Hive, and Kafka.
- Work extensively with cloud platforms (preferably AWS) and services like EMR, Glue, Lambda, Athena, S3.
- Design and implement data integration solutions pulling from multiple sources into a centralized data warehouse or data lake.
- Develop pipelines using DBT (Data Build Tool) and manage workflows with Apache Airflow or Step Functions.
- Write clean, maintainable, and efficient code using Python, PySpark, or Scala for data transformation and processing.
- Build and manage relational and columnar data stores such as PostgreSQL, MySQL, Redshift, Snowflake, HBase, ClickHouse.
- Implement CI/CD pipelines using Docker, Jenkins, and other DevOps tools.
- Collaborate with data scientists, analysts, and other engineering teams to deploy data models into production.
- Drive data quality, integrity, and consistency across systems.
- Participate in Agile/Scrum ceremonies and utilize JIRA for task management.
- Provide mentorship and technical guidance to junior team members.
- Contribute to continuous improvement by making recommendations to enhance data engineering processes and architecture.
Requirements
- 5+ years of hands-on experience as a Data Engineer.
- Deep knowledge of Big Data technologies – Hadoop, Spark, Hive, Kafka.
- Expertise in Python, PySpark and/or Scala.
- Proficient with data modeling, SQL scripting, and working with large-scale datasets.
- Experience with distributed storage like HDFS and cloud storage (e.g., AWS S3).
- Hands-on experience with data orchestration tools like Apache Airflow or Step Functions.
- Experience working in AWS environments with services such as EMR, Glue, Lambda, Athena.
- Familiarity with data warehousing concepts and experience with tools like Redshift, Snowflake (preferred).
- Exposure to tools like Informatica, Ab Initio, or Apache Iceberg is a plus.
- Knowledge of Docker, Jenkins, and other CI/CD tools.
- Strong problem-solving skills, initiative, and a continuous learning mindset.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
- Experience with open table formats such as Apache Iceberg.
- Hands-on with Ab Initio (GDE, Conduct>It) or Informatica tools.
- Knowledge of Agile methodology, working experience in JIRA.
- Self-driven, proactive, and a strong team player.
- Excellent communication and interpersonal skills.
- Passion for data and technology innovation.
- Ability to work independently and manage multiple priorities in a fast-paced environment.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: IT Services and IT Consulting
Data Engineer
Posted 13 days ago
Job Description
Location: Lahore / Kharian
Position Type: Full-Time
Job Overview:
This role focuses on developing and implementing data warehouse solutions across the organization while managing large sets of structured and unstructured data. The Data Engineer will analyse complex customer requirements, define data transformation rules, and oversee their implementation. The ideal candidate should have a solid understanding of data acquisition, integration, and transformation, with the ability to evaluate and recommend optimal architecture and approaches.
Key Responsibilities:
• Design Data Pipelines: Develop robust data pipelines capable of handling both structured and unstructured data effectively.
• Build Data Ingestion and ETL Processes: Create efficient data ingestion pipelines and ETL processes, including low-latency data acquisition and stream processing, using tools like Kafka or Glue.
• Develop Integration Procedures: Design and implement processes to integrate data warehouse solutions into operational IT environments.
• Data Lake Management: Manage and optimize the data lake on AWS, ensuring efficient data storage, retrieval, and transformation. Implement best practices for organizing and managing raw, processed, and curated data, with scalability and future growth in mind.
• Optimize SQL and Shell Scripts: Write, optimize, and maintain complex SQL queries and shell scripts to ensure efficient data processing.
• Monitor and Optimize Performance: Continuously monitor system performance and recommend necessary configuration or infrastructure improvements.
• Document and Present Workflows: Prepare detailed documentation, collaborate with cross-functional teams, present complete data workflows, and maintain an up-to-date knowledge base.
• Governance & Quality: Develop and maintain data quality checks to ensure data in the lake and warehouse remains accurate, consistent, and reliable.
• Collaboration with Stakeholders: Work closely with the CTO, PMOs, and business and data analysts to gather requirements and ensure alignment with project goals.
• Scope and Manage Projects: Collaborate with project managers to scope projects, create detailed work breakdown structures, and conduct risk assessments.
• Research & Development (R&D): Keep up with the latest technological trends and identify innovative solutions to address customer challenges and company priorities.
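The low-latency stream-processing responsibility above (with tools like Kafka) usually comes down to micro-batching: buffer events, then flush when the batch is full or the oldest buffered event has waited too long. A stdlib-only sketch of that idea, with all parameter values hypothetical:

```python
import time
from collections import deque


def micro_batch(stream, max_batch=100, max_wait_s=0.5, clock=time.monotonic):
    """Group an event stream into micro-batches.

    Flushes when `max_batch` events are buffered or the first buffered
    event has waited `max_wait_s` seconds, whichever comes first.
    """
    buf = deque()
    deadline = None
    for event in stream:
        if deadline is None:
            # Start the wait clock when the batch gets its first event.
            deadline = clock() + max_wait_s
        buf.append(event)
        if len(buf) >= max_batch or clock() >= deadline:
            yield list(buf)
            buf.clear()
            deadline = None
    if buf:
        # Flush whatever remains when the stream ends.
        yield list(buf)
```

A Kafka consumer loop applies the same pattern with `poll()` in place of the `for` loop; the size/time trade-off is what controls latency versus per-batch overhead.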
Skills and Qualifications:
• Bachelor’s or Master’s degree in Engineering, Computer Science, or equivalent experience.
• At least 4 years of relevant experience as a Data Engineer.
• Hands-on experience with cloud platforms such as AWS, Azure, or GCP, and familiarity with their respective services.
• Hands-on experience with one or more ETL tools such as Glue, Spark, Kafka, Informatica, DataStage, Talend, or Azure Data Factory (ADF).
• Strong understanding of dimensional modeling techniques, including Star and Snowflake schemas.
• Experience in creating semantic models and reporting mapping documents.
• Solid concepts and experience in designing and developing ETL architectures.
• Strong understanding of RDBMS concepts and proficiency in SQL development.
• Proficiency in data modeling and mapping techniques.
• Experience integrating data from multiple sources.
• Experience working in distributed environments, including clustering and sharding.
• Knowledge of Big Data tools like Pig, Hive, or NiFi is a plus.
• Experience with Hadoop distributions like Cloudera or Hortonworks is a plus.
• Excellent communication and presentation skills, both verbal and written.
• Ability to solve problems using a creative and logical approach.
• Self-motivated, analytical, detail-oriented, and organized, with a commitment to excellence.
• Experience in the financial services sector is a plus.
ACE Money Transfer Profile:
Data Engineer
Posted 13 days ago
Job Description
About Burq
Burq started with an ambitious mission: to turn the complex process of offering delivery into a simple turnkey solution.
We started by building the largest network of delivery networks, partnering with some of the biggest delivery companies. We then made it extremely easy for businesses to plug into our network and start offering delivery to their customers. Now, we’re powering deliveries for some of the fastest-growing companies, from retailers to startups.
It’s a big mission and now we want you to join us to make it even bigger!
We’re already backed by some of the Valley's leading venture capitalists, including Village Global, the fund whose investors include Bill Gates, Jeff Bezos, Mark Zuckerberg, Reid Hoffman, and Sara Blakely. We have assembled a world-class team all over the U.S.
We operate at scale, but we're still a small team relative to the opportunity. We have a staggering amount of work ahead. That means you have an unprecedented opportunity to grow while doing the most important work of your career.
We want people who are unafraid to be wrong and support decisions with numbers and narrative.
Responsibilities
- Design, build, and maintain efficient and scalable data pipelines using tools like Airbyte, Airflow, and dbt, with a focus on integrating with Snowflake.
- Manage and optimize data warehousing solutions using Snowflake, ensuring data is organized, secure, and accessible for analysis.
- Develop and implement automations and workflows to streamline data processing and integration, ensuring seamless data flow across systems.
- Collaborate with cross-functional teams to set up and maintain data infrastructure that supports both engineering and analytical needs.
- Utilize Databricks and Spark for big data processing, ensuring data is processed, stored, and analyzed efficiently.
- Monitor data streams and processes using Kafka and Monte Carlo, ensuring data quality and integrity.
- Work closely with data analysts and other stakeholders to create, maintain, and optimize visualizations and reports using Tableau, ensuring data-driven decision-making.
- Ensure the security and compliance of data systems, implementing best practices and leveraging tools like Terraform for infrastructure management.
- Continuously evaluate and improve data processes, staying current with industry best practices and emerging technologies, with a strong emphasis on data analytics and visualization.
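One concrete form of the monitoring responsibilities above is a table-freshness check: the kind of rule data-observability tooling automates, but which is easy to express directly. A minimal stdlib sketch (the table names and the age threshold are illustrative assumptions, not Burq's actual configuration):

```python
from datetime import datetime, timedelta, timezone


def freshness_alerts(tables, max_age, now=None):
    """Return the names of tables whose last successful load is older
    than `max_age`. `tables` maps table name -> last-load timestamp."""
    now = now or datetime.now(timezone.utc)
    return [name for name, loaded_at in tables.items()
            if now - loaded_at > max_age]
```

A scheduler (e.g. an Airflow DAG) would run this against warehouse load metadata and page the on-call when the returned list is non-empty.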
Requirements
- Proficiency in SQL and experience with data visualization tools such as Tableau.
- Hands-on experience with data engineering tools and platforms including Snowflake, Airbyte, Airflow, and Terraform.
- Strong programming skills in Python and experience with data transformation tools like dbt.
- Familiarity with big data processing frameworks such as Databricks and Apache Spark.
- Knowledge of data streaming platforms like Kafka.
- Experience with data observability and quality tools like Monte Carlo.
- Solid understanding of data warehousing, data pipelines, and database management, with specific experience in Snowflake.
- Ability to design and implement automation, workflows, and data infrastructure.
- Strong analytical skills with the ability to translate complex data into actionable insights, particularly using Tableau.
- Excellent problem-solving abilities and attention to detail.
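The monitoring and data-quality duties above can be illustrated with a small sketch. Everything here is invented for illustration (the row shape, the 24-hour freshness window, the `revenue` column); real observability tooling such as Monte Carlo works against warehouse metadata, but the underlying checks look like this:

```python
from datetime import datetime, timedelta

# Hypothetical rows as they might come back from a warehouse query;
# each row carries a load timestamp and a value column.
rows = [
    {"loaded_at": datetime.now() - timedelta(hours=2), "revenue": 120.5},
    {"loaded_at": datetime.now() - timedelta(hours=3), "revenue": None},
]

def freshness_ok(rows, max_age_hours=24):
    """Fail if the newest row is older than the allowed window."""
    newest = max(r["loaded_at"] for r in rows)
    return datetime.now() - newest <= timedelta(hours=max_age_hours)

def null_rate(rows, column):
    """Fraction of rows where the column is missing."""
    missing = sum(1 for r in rows if r[column] is None)
    return missing / len(rows)

print(freshness_ok(rows))          # True: newest row is 2 hours old
print(null_rate(rows, "revenue"))  # 0.5: one of two rows is null
```

In practice such checks run on a schedule (e.g. an Airflow task) and alert rather than print, but the freshness and null-rate primitives are the same.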
Investing in you
- Competitive salary
- Medical
- Educational courses
- Generous time off
At Burq, we value diversity. We are an equal opportunity employer: we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Data Engineer
Posted 13 days ago
Job Description
Location: Karachi, Lahore, Islamabad (Hybrid)
Experience: 5+ Years
Job Type: Full-Time
We are looking for a highly skilled and experienced Data Engineer with a strong foundation in big data, distributed computing, and cloud-based data solutions. This role demands a strong understanding of end-to-end data pipelines, data modeling, and advanced data engineering practices across diverse data sources and environments. You will play a pivotal role in building, deploying, and optimizing data infrastructure and pipelines in a scalable cloud-based architecture.
Key Responsibilities:
Design, develop, and maintain large-scale data pipelines using modern big data technologies and cloud-native tools.
Build scalable and efficient distributed data processing systems using Hadoop, Spark, Hive, and Kafka.
Work extensively with cloud platforms (preferably AWS) and services like EMR, Glue, Lambda, Athena, and S3.
Design and implement data integration solutions pulling from multiple sources into a centralized data warehouse or data lake.
Develop pipelines using dbt (Data Build Tool) and manage workflows with Apache Airflow or AWS Step Functions.
Write clean, maintainable, and efficient code using Python, PySpark, or Scala for data transformation and processing.
Build and manage relational and columnar data stores such as PostgreSQL, MySQL, Redshift, Snowflake, HBase, and ClickHouse.
Implement CI/CD pipelines using Docker, Jenkins, and other DevOps tools.
Collaborate with data scientists, analysts, and other engineering teams to deploy data models into production.
Drive data quality, integrity, and consistency across systems.
Participate in Agile/Scrum ceremonies and utilize JIRA for task management.
Provide mentorship and technical guidance to junior team members.
Contribute to continuous improvement by making recommendations to enhance data engineering processes and architecture.
Required Skills & Experience:
5+ years of hands-on experience as a Data Engineer.
Deep knowledge of Big Data technologies – Hadoop, Spark, Hive, Kafka.
Expertise in Python, PySpark, and/or Scala.
Proficient with data modeling, SQL scripting, and working with large-scale datasets.
Experience with distributed storage like HDFS and cloud storage (e.g., AWS S3).
Hands-on experience with data orchestration tools like Apache Airflow or AWS Step Functions.
Experience working in AWS environments with services such as EMR, Glue, Lambda, and Athena.
Familiarity with data warehousing concepts and experience with tools like Redshift and Snowflake (preferred).
Exposure to tools like Informatica, Ab Initio, or Apache Iceberg is a plus.
Knowledge of Docker, Jenkins, and other CI/CD tools.
Strong problem-solving skills, initiative, and a continuous learning mindset.
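As a small illustration of the SQL-centric transformation work described above: a dbt staging model is, at heart, a versioned SELECT statement layered over raw tables. The sketch below imitates that pattern with Python's built-in sqlite3 module; the table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 10.0, "paid"), (2, 5.0, "refunded"), (3, 7.5, "paid")],
)

# A staging model filters and renames the raw source; in dbt this SELECT
# would live in its own stg_orders.sql file, materialized as a view or table.
conn.execute("""
    CREATE VIEW stg_orders AS
    SELECT id AS order_id, amount AS order_amount
    FROM raw_orders
    WHERE status = 'paid'
""")

total = conn.execute("SELECT SUM(order_amount) FROM stg_orders").fetchone()[0]
print(total)  # 17.5 — refunded orders are excluded by the staging layer
```

Downstream models then select from `stg_orders` rather than the raw table, which is what keeps a dbt project's transformation logic layered and testable.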
Preferred Qualifications:
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
Experience with open table formats such as Apache Iceberg.
Hands-on with Ab Initio (GDE, Conduct>It) or Informatica tools.
Knowledge of Agile methodology and working experience in JIRA.
Soft Skills:
Self-driven, proactive, and a strong team player.
Excellent communication and interpersonal skills.
Passion for data and technology innovation.
Ability to work independently and manage multiple priorities in a fast-paced environment.
Data Engineer
Posted 13 days ago
Job Description
Data Engineer with 2+ years of experience in AI, Python, SQL, Spark, Azure, and Databricks. Skilled in scalable data infrastructure and pipeline design. English required.
- Bachelor’s degree in computer science, Engineering, or a related field.
- Minimum of 2 years of experience in data engineering, specifically in large-scale AI projects and production applications.
- Strong proficiency in Python, SQL, Spark, Databricks, and Azure, with experience in cloud, data, and solution architecture and API development.
- Deep understanding of designing and building scalable and robust data infrastructure.
- Experience with at least one public cloud provider.
- Strong understanding of system architecture and data pipeline construction.
- Familiarity with machine learning models and data processing requirements.
- Team player with analytical and problem-solving skills.
- Good communication skills in English.
- Optional:
- Expertise in distributed systems like Kafka and orchestration systems like Kubernetes.
- Basic knowledge in Data Lake/Data Warehousing/Big Data tools, Apache Spark, RDBMS, NoSQL, Knowledge Graph.
Responsibilities:
- Design, build, and manage data pipelines, integrating multiple data sources to support company goals.
- Develop and maintain large-scale data processing systems and machine learning pipelines, ensuring data availability and quality.
- Implement systems for data quality and consistency, collaborating with Data Science and IT teams.
- Ensure compliance with security guidelines and SDLC processes.
- Collaborate with Data Science team to provide necessary data infrastructure.
- Lead and manage large-scale AI projects, working with cloud platforms for deployment.
- Maintain and optimize databases, design schemas, and improve data flow across the organization.
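The pipeline responsibilities listed above follow a common ingest → validate → load shape. The following minimal sketch (with invented in-memory sources standing in for real JSON and CSV feeds) shows that structure in plain Python:

```python
import csv
import io
import json

# Two hypothetical sources in different formats, merged into one record stream.
json_source = '[{"user": "a", "clicks": 3}]'
csv_source = "user,clicks\nb,5\n"

def ingest():
    """Yield normalized records from every source."""
    yield from json.loads(json_source)
    for row in csv.DictReader(io.StringIO(csv_source)):
        yield {"user": row["user"], "clicks": int(row["clicks"])}

def validate(records):
    """Drop obviously bad rows instead of failing the whole run."""
    for r in records:
        if r["clicks"] >= 0:
            yield r

# "Load" step: here just a list; in production, a warehouse write.
store = list(validate(ingest()))
print(store)  # [{'user': 'a', 'clicks': 3}, {'user': 'b', 'clicks': 5}]
```

Real pipelines swap the in-memory sources for APIs, queues, or files and the list for a warehouse sink, but keeping the three stages as separate, composable steps is what makes the pipeline testable and extensible.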
“To be ILI” means traveling far to reinvigorate innovation. It symbolizes vocation, commitment, and passion. We are a dedicated taskforce focused on sensing and initiating innovation.
Like a lion in the wild, constantly hunting and alert, we stay focused and agile, unlike a lion in captivity which loses its hunting instinct due to guaranteed food supply.
We offer a steep learning curve, development opportunities, and a flexible, positive work environment.
Modern office with designer furniture and a rooftop terrace with a city view.
International Team: Supportive, open-minded teammates. Enjoy after-work events, barbecues, excursions, and team gatherings.
Fresh fruits, professional coffee, smoothies, on-site gym, and yoga room for relaxation.
Data Engineer
Posted 13 days ago
Job Description
Location: Lahore / Kharian
Position Type: Full-Time
Job Overview:
This role focuses on developing and implementing data warehouse solutions across the organization while managing large sets of structured and unstructured data. The Data Engineer will analyse complex customer requirements, define data transformation rules, and oversee their implementation. The ideal candidate should have a solid understanding of data acquisition, integration, and transformation, with the ability to evaluate and recommend optimal architecture and approaches.
Key Responsibilities:
• Design Data Pipelines: Develop robust data pipelines capable of handling both structured and unstructured data effectively.
• Build Data Ingestion and ETL Processes: Create efficient data ingestion pipelines and ETL processes, including low-latency data acquisition and stream processing using tools like Kafka or Glue.
• Develop Integration Procedures: Design and implement processes to integrate data warehouse solutions into operational IT environments.
• Data Lake Management: Manage and optimize the data lake on AWS, ensuring efficient data storage, retrieval, and transformation. Implement best practices for organizing and managing raw, processed, and curated data, with scalability and future growth in mind.
• Optimize SQL and Shell Scripts: Write, optimize, and maintain complex SQL queries and shell scripts to ensure efficient data processing.
• Monitor and Optimize Performance: Continuously monitor system performance and recommend necessary configurations or infrastructure improvements.
• Document and Present Workflows: Prepare detailed documentation, collaborate with cross-functional teams, present complete data workflows to teams, and maintain an up-to-date knowledge base.
• Governance & Quality: Develop and maintain data quality checks to ensure data in the lake and warehouse remains accurate, consistent, and reliable.
• Collaboration with Stakeholders: Work closely with the CTO, PMOs, and business and data analysts to gather requirements and ensure alignment with project goals.
• Scope and Manage Projects: Collaborate with project managers to scope projects, create detailed work breakdown structures, and conduct risk assessments.
• Research & Development (R&D): Keep up with the latest technological trends and identify innovative solutions to address customer challenges and company priorities.
Skills and Qualifications:
• Bachelor’s or Master’s degree in Engineering, Computer Science, or equivalent experience.
• At least 4+ years of relevant experience as a Data Engineer.
• Hands-on experience with cloud platforms such as AWS, Azure, or GCP, and familiarity with respective cloud services.
• Hands-on experience with one or more ETL tools such as Glue, Spark, Kafka, Informatica, DataStage, Talend, Azure Data Factory (ADF).
• Strong understanding of dimensional modeling techniques, including Star and Snowflake schemas.
• Experience in creating semantic models and reporting mapping documents.
• Solid concepts and experience in designing and developing ETL architectures.
• Strong understanding of RDBMS concepts and proficiency in SQL development.
• Proficiency in data modeling and mapping techniques.
• Experience integrating data from multiple sources.
• Experience working in distributed environments, including clustering and sharding.
• Knowledge of Big Data tools like Pig, Hive, or NiFi would be a plus.
• Experience with Hadoop distributions like Cloudera or Hortonworks would be a plus.
• Excellent communication and presentation skills, both verbal and written.
• Ability to solve problems using a creative and logical approach.
• Self-motivated, analytical, detail-oriented, and organized, with a commitment to excellence.
• Experience in the financial services sector is a plus.
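To make the dimensional-modeling requirement above concrete: a star schema keeps measures in a central fact table and descriptive attributes in surrounding dimension tables, and reporting queries join outward from the fact. A toy example using Python's built-in sqlite3 (all table names and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One fact table keyed to two dimension tables: the basic star shape.
conn.executescript("""
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, day TEXT);
    CREATE TABLE fact_sales (
        customer_key INTEGER REFERENCES dim_customer,
        date_key INTEGER REFERENCES dim_date,
        amount REAL
    );
    INSERT INTO dim_customer VALUES (1, 'Ayesha');
    INSERT INTO dim_date VALUES (20240101, '2024-01-01');
    INSERT INTO fact_sales VALUES (1, 20240101, 99.0), (1, 20240101, 1.0);
""")

# Reporting queries join the fact table out to the dimensions it references.
row = conn.execute("""
    SELECT c.name, d.day, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_key = f.customer_key
    JOIN dim_date d ON d.date_key = f.date_key
    GROUP BY c.name, d.day
""").fetchone()
print(row)  # ('Ayesha', '2024-01-01', 100.0)
```

A snowflake schema differs only in that the dimensions themselves are further normalized into sub-dimensions; the fact table and the join-outward query pattern stay the same.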
ACE Money Transfer Profile: