13 ETL Engineer jobs in Pakistan
Data Engineering
Posted 13 days ago
Job Description
Karachi
Full time
Remote
We are seeking an experienced Senior Data Engineer with strong expertise in Informatica Cloud to join
our data engineering team. In this role, you will be responsible for designing, developing, and optimizing
high-performance data pipelines and ensuring seamless data integration across cloud platforms. Your
technical expertise will help improve data flow and processing capabilities while supporting critical data
initiatives for the business.
Description
- Develop and Maintain Data Pipelines: Design, develop, and maintain efficient data pipelines using Informatica Cloud, ensuring high performance, scalability, and data integrity.
- Implement data integration solutions across multiple cloud platforms (AWS, Azure, GCP), ensuring seamless data movement and transformation.
- Focus on building robust ETL pipelines to extract, transform, and load data from various sources into cloud-based storage and data warehouses.
- Optimize the performance of data pipelines and integration processes to ensure speed, accuracy, and cost-efficiency.
- Work closely with teams to implement data quality checks and ensure data integrity across the data pipelines.
- Identify and resolve issues in the data pipeline, and work to continuously improve processes for efficient data management.
- Collaborate with data analysts, data scientists, and other stakeholders to ensure the data pipelines support business analytics and decision-making.
- Document all processes, data pipeline configurations, and troubleshooting steps for future reference and transparency.
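The data quality checks described above can be sketched in plain Python. Informatica Cloud is a proprietary platform, so this is only an illustrative stand-in for the pattern; the field names and rules are hypothetical, not part of any Informatica API.

```python
# Minimal data-quality gate: validate records before loading them downstream.
# The schema and rules here are hypothetical examples for illustration.

REQUIRED_FIELDS = {"customer_id", "order_total"}

def validate_record(record: dict) -> list[str]:
    """Return a list of rule violations for one record (empty = clean)."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    total = record.get("order_total")
    if isinstance(total, (int, float)) and total < 0:
        errors.append("order_total must be non-negative")
    return errors

def split_clean_and_rejects(records: list[dict]):
    """Partition records into loadable rows and rejects with reasons."""
    clean, rejects = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            rejects.append((rec, errs))
        else:
            clean.append(rec)
    return clean, rejects

batch = [
    {"customer_id": 1, "order_total": 99.5},
    {"customer_id": 2, "order_total": -5},  # fails the non-negative rule
    {"order_total": 10},                    # missing customer_id
]
clean, rejects = split_clean_and_rejects(batch)
print(len(clean), len(rejects))  # 1 2
```

Routing rejects to a separate table with their reasons, rather than silently dropping them, is what keeps the "data integrity" requirement auditable.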
Requirements
- 5+ years of experience in data engineering, with strong hands-on experience in Informatica Cloud (Data Integration, Data Quality, etc.).
- Extensive experience in designing and developing ETL pipelines and data integration workflows.
- Strong knowledge of cloud platforms such as AWS, Azure, or Google Cloud, and their integration with data engineering solutions.
Senior ETL Data Engineer (Ab Initio / Informatica)
Posted 10 days ago
Job Description
Job Title: Senior ETL Data Engineer (Ab Initio / Informatica)
Experience: 4+ Years
Location: Lahore / Karachi / Islamabad (Hybrid)
Job Type: Full-Time
About The Role
We are looking for a highly skilled Senior ETL Data Engineer (Ab Initio / Informatica) with strong experience in building robust data pipelines, working with large-scale datasets, and leveraging modern Big Data and cloud technologies. The ideal candidate should have hands-on expertise in ETL frameworks, distributed data processing, and data modeling within cloud environments such as AWS. If you have a passion for working with data and enjoy designing scalable systems, we’d like to meet you.
Key Responsibilities
- Design and develop complex ETL pipelines and data solutions using Big Data and cloud-native technologies.
- Leverage tools such as Ab Initio, Informatica, DBT, and Apache Spark to build scalable data workflows.
- Implement distributed data processing using Hadoop, Hive, Kafka, and Spark.
- Build and optimize data pipelines in AWS using services like EMR, Glue, Lambda, Athena, and S3.
- Work with various structured and unstructured data sources to perform efficient data ingestion and transformation.
- Write optimized SQL queries and manage stored procedures for complex data processing tasks.
- Orchestrate workflows using Airflow, AWS Step Functions, or similar schedulers.
- Collaborate with cross-functional teams to understand data needs and deliver high-quality datasets for analytics and reporting.
- Deploy data models into production environments and ensure robust monitoring and resource management.
- Mentor junior engineers and contribute to the team’s knowledge sharing and continuous improvement efforts.
- Identify and recommend process and technology improvements to enhance data pipeline performance and reliability.
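The orchestration responsibility above (Airflow, AWS Step Functions) comes down to running tasks in dependency order. A minimal pure-Python sketch of that pattern, using the stdlib `graphlib` module; the task names are hypothetical and this is not the Airflow API:

```python
# Minimal dependency-ordered task runner, sketching what schedulers like
# Airflow or AWS Step Functions do for ETL workflows. A real DAG would
# use the scheduler's own operator API, plus retries and monitoring.
from graphlib import TopologicalSorter

def run_pipeline(tasks: dict, actions: dict) -> list:
    """Execute `actions` in an order that respects `tasks` dependencies.

    `tasks` maps task name -> set of upstream task names it depends on.
    Returns the execution order for inspection.
    """
    order = list(TopologicalSorter(tasks).static_order())
    for name in order:
        actions[name]()
    return order

log = []
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}
actions = {name: (lambda n=name: log.append(n)) for name in dag}

order = run_pipeline(dag, actions)
print(order)  # ['extract', 'transform', 'load']
```

The topological sort guarantees every upstream task finishes before its dependents run, which is the core contract any of the schedulers named in the posting provide.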
Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 4+ years of hands-on experience in ETL development, data engineering, and data pipeline orchestration.
- Strong working knowledge of Ab Initio, Informatica, or similar ETL tools.
- Expertise in Python, PySpark, or Scala for data processing.
- Proven experience in Big Data technologies (Hadoop, Hive, Spark, Kafka).
- Proficient with AWS services related to data engineering (EMR, Glue, Lambda, Athena, S3).
- In-depth understanding of data modeling, ETL cycle, data warehousing, and data management principles.
- Hands-on experience with relational (PostgreSQL, MySQL) and columnar databases (Redshift, HBase, Snowflake).
- Familiarity with containerization (Docker), CI/CD pipelines (Jenkins), and Agile tools (Jira).
- Ability to troubleshoot complex data issues and propose scalable solutions.
- Excellent communication and collaboration skills.
- Experience with open table formats such as Apache Iceberg.
- Working knowledge of Snowflake and its data warehousing capabilities.
- Familiarity with GDE, Conduct>It, or other components of Ab Initio.
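The SQL requirement above can be illustrated with Python's stdlib `sqlite3`; the table and column names are made up for the example, and a real warehouse would be Redshift or Snowflake rather than SQLite:

```python
# Tiny warehouse-style aggregation with stdlib sqlite3: load raw rows,
# then answer an analytics question with a grouped query. The schema is
# a hypothetical example, not taken from any specific warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 75.0)],
)

# Grouped aggregate: total revenue per region, largest first.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC
    """
).fetchall()
print(rows)  # [('north', 150.0), ('south', 75.0)]
conn.close()
```

The same GROUP BY/ORDER BY shape scales directly to the columnar engines listed in the requirements, where pushing aggregation into the database is usually cheaper than pulling raw rows out.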
What We Offer
- Hybrid work model with flexibility to work from home and office.
- Exposure to cutting-edge technologies and high-impact projects.
- Collaborative team environment with opportunities for growth and innovation.
- Culture that values ownership, continuous learning, and mutual respect.
- Seniority level Mid-Senior level
- Employment type Full-time
- Job function Information Technology
- Industries IT Services and IT Consulting
Referrals increase your chances of interviewing at NorthBay - Pakistan by 2x
Data Science & Engineering Lead
Posted 13 days ago
Job Description
Overview:
We’re looking for a hands-on Data Science & Engineering Lead to lead our data strategy and help scale a lean, high-impact team. This role blends leadership, architecture, and deep technical work - from building predictive models to designing the infrastructure that powers real-time decision-making. You’ll partner closely with cross-functional teams (Product, Business, Finance, Tech) and take full ownership of analytics delivery from raw data to actionable insight.
This is a builder’s role - ideal for someone who wants deep ownership, startup pace, and the chance to grow as we scale.
Responsibilities:
- Define and deliver our data strategy - from core infrastructure to insights delivery
- Build and mentor a team of 2–5 data scientists and engineers
- Design and deploy predictive models, recommendation systems, and performance analytics
- Architect, deploy, and maintain scalable data pipelines and analytics tooling
- Own and scale robust data pipelines and ensure data integrity across business verticals
- Collaborate closely with stakeholders across Product, Business, Finance, and Tech teams to integrate data into daily operations and product decisions
- Act as the go-to person for data strategy, experimentation, and insights
Requirements:
- 5–6 years of relevant experience in data science, engineering, or analytics
- At least 1–2 years leading or mentoring small teams (2–5 people) within an agile, high-tech environment
- Strong command of Python or R; strong SQL skills required
- Deep expertise in data analysis, predictive modeling, designing scalable pipelines, and maintaining analytics infrastructure
- Familiarity with modern BI tools (e.g., Looker, Metabase, Power BI, Tableau)
- Experience working cross-functionally with business and product teams
- Startup mindset; strong bias for action, autonomy and ownership; able to apply agile principles and own delivery end to end
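The predictive-modeling expertise listed above can be sketched with a one-variable least-squares fit in pure Python; real work would use scikit-learn or statsmodels, and the data here is made up for illustration:

```python
# One-variable least-squares fit, a minimal sketch of the "predictive
# models" responsibility. The closed-form solution: slope = cov(x, y) / var(x),
# intercept chosen so the line passes through the means.

def fit_line(xs: list, ys: list) -> tuple:
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model: tuple, x: float) -> float:
    slope, intercept = model
    return slope * x + intercept

# Perfectly linear toy data: y = 2x + 1
model = fit_line([1.0, 2.0, 3.0], [3.0, 5.0, 7.0])
print(model)               # (2.0, 1.0)
print(predict(model, 10))  # 21.0
```

Even this toy version shows the lead's day-to-day split: fitting a model is one line of math, while validating it and wiring it into pipelines is the bulk of the engineering.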
- Seniority level Mid-Senior level
- Employment type Full-time
- Job function Engineering and Information Technology
- Industries Hospitality, Food and Beverage Services, and Retail
Referrals increase your chances of interviewing at Soum by 2x