38 Data Scientist jobs in Islamabad
Data Scientist
Posted today
Job Description
About Fusemachines
Fusemachines is a 10+ year old AI company, dedicated to delivering state-of-the-art AI products and solutions to a diverse range of industries. Founded by Sameer Maskey, Ph.D., an Adjunct Associate Professor at Columbia University, our company is on a steadfast mission to democratize AI and harness the power of global AI talent from underserved communities. With a robust presence in four countries and a dedicated team of over 400 full-time employees, we are committed to fostering AI transformation journeys for businesses worldwide. At Fusemachines, we not only bridge the gap between AI advancement and its global impact but also strive to deliver the most advanced technology solutions to the world.
About the Role:
Location: Remote | Full-time
We are seeking an experienced and motivated Data Scientist to spearhead our data-driven initiatives. The ideal candidate will be responsible for architecting scalable solutions and applying advanced analytical techniques to solve complex business problems. You will be instrumental in transforming raw data into actionable insights that drive strategy and operational improvements.
Key Responsibilities
Drive the design, development, and testing of big data applications to ensure the timely delivery of product goals.
Proactively identify and implement code and design optimizations.
Collaborate closely with data engineers, analysts, and business teams to analyze requirements and ensure data-driven solutions are effectively implemented.
Develop end-to-end data solutions, from data collection and cleaning to building and deploying predictive machine learning models.
Conduct exploratory data analysis using statistical methods to uncover trends, patterns, and actionable insights (a brief illustrative sketch follows this list).
Create compelling data visualizations, dashboards, and reports to communicate complex findings to both technical and non-technical stakeholders.
Provide data-backed recommendations to support key business decisions and improve strategies.
Learn and integrate with a wide variety of internal and external systems, APIs, and platforms.
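By way of illustration only, a minimal sketch of the kind of exploratory analysis described above, in Python with pandas; the file name and columns (transactions.csv, date, amount) are hypothetical:

```python
# Illustrative sketch: a first EDA pass over a hypothetical transactions file.
import pandas as pd

df = pd.read_csv("transactions.csv")  # hypothetical input file
print(df.shape)                       # rows x columns
print(df.dtypes)                      # column types
# Share of missing values per column, worst offenders first
print(df.isna().mean().sort_values(ascending=False).head())

# A simple trend: monthly totals of a hypothetical "amount" column
df["date"] = pd.to_datetime(df["date"])
monthly = df.set_index("date")["amount"].resample("MS").sum()
print(monthly.describe())
```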
Required Skills & Qualifications
At least 3 years of hands-on experience in data science or big data development.
Proven track record of successfully guiding development projects.
Expertise in Python and PySpark, including tools like Jupyter Notebooks and environment managers (e.g., Poetry, Pipenv).
Hands-on experience with the Databricks platform and Apache Spark.
Proficiency with relational databases (e.g., PostgreSQL, SQL Server, Oracle) and SQL.
Strong practical knowledge of data cleansing, transformation, and validation techniques.
Experience with code versioning tools like Git (GitHub, Azure DevOps, Bitbucket).
Excellent written and verbal communication skills, with the ability to articulate complex technical concepts clearly.
Fusemachines is an Equal Opportunities Employer, committed to diversity and inclusion. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristic protected by applicable federal, state, or local laws.
Data Scientist
Posted 2 days ago
Job Description
Overview
We are seeking a Data Scientist with hands-on Python experience and proven abilities to support software activities in an Agile software development lifecycle. The role calls for a well-rounded developer to lead a cloud-based big data application using a variety of technologies. The ideal candidate will possess strong technical, analytical, and interpersonal skills. In addition, the candidate will lead developers on the team to achieve architecture and design objectives as agreed with stakeholders.
Responsibilities
- Work with developers on the team to meet product deliverables
- Work independently and collaboratively on a multi-disciplined project team in an Agile development environment
- Contribute to detailed design and architecture discussions, as well as customer requirements sessions, to support the implementation of code and procedures for our big data product
- Design and develop clear and maintainable code with automated open-source test functions
- Identify opportunities for code/design optimization and implement them
- Learn and integrate with a variety of systems, APIs, and platforms
- Interact with a multi-disciplined team to clarify, analyze, and assess requirements
- Be actively involved in design, development, and testing activities in big data applications
Key Responsibilities
Data Engineering & Processing
- Develop scalable data pipelines using PySpark for processing large datasets (see the sketch after this section)
- Work extensively in Databricks for collaborative data science workflows and model deployment
- Handle messy, unstructured, and semi-structured data, performing thorough Exploratory Data Analysis (EDA)
- Apply appropriate statistical measures and hypothesis testing to derive insights and validate assumptions
Data Analysis & Modeling
- Write complex SQL queries for data extraction, transformation, and analysis
- Build and validate predictive models using techniques such as GBMs (XGBoost, LightGBM) and GLMs (logistic/Poisson)
- Apply unsupervised learning techniques like clustering (K-Means, DBSCAN), PCA, and anomaly detection
Automation & Optimization
- Automate data workflows and model training pipelines using scheduling tools (e.g., Airflow, Databricks Jobs)
- Optimize model performance and data processing efficiency
Cloud & Deployment
- Basic experience with Azure or other cloud platforms (AWS, GCP) for data storage, compute, and model deployment
- Familiarity with cloud-native tools like Azure Data Lake, Azure ML, or equivalent
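As an illustration of the pipeline work this section describes, a minimal PySpark sketch; the storage paths, dataset, and column names are all hypothetical:

```python
# Illustrative sketch: ingest, clean, and aggregate events with PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/events/")   # hypothetical path
clean = (
    raw.dropDuplicates(["event_id"])                      # de-duplicate records
       .filter(F.col("amount").isNotNull())               # drop incomplete rows
       .withColumn("event_date", F.to_date("event_ts"))   # normalise timestamps
)
daily = clean.groupBy("customer_id", "event_date").agg(
    F.sum("amount").alias("daily_amount"),
    F.count("*").alias("daily_events"),
)
daily.write.mode("overwrite").parquet("s3://example-bucket/features/daily/")
```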
Required Skills
- Programming Languages: Python (with PySpark), SQL
- Tools & Platforms: Databricks, Azure (or other cloud platforms), Git
- Libraries & Frameworks: scikit-learn, pandas, numpy, matplotlib/seaborn, XGBoost/LightGBM (a brief illustrative sketch follows this list)
- Statistical Knowledge: Hypothesis testing, confidence intervals, correlation analysis
- Machine Learning: Supervised and Unsupervised learning, model evaluation metrics
- Data Handling: EDA, feature engineering, dealing with missing/outlier data
- Automation: Experience with job scheduling and pipeline automation
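A self-contained sketch of fitting and evaluating a gradient-boosted model with the libraries listed above; the data is synthetic so the snippet runs as-is:

```python
# Illustrative sketch: train an XGBoost classifier and report held-out AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                 # synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```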
Required Experience
- At least 5 years in data science or related fields
- Hands-on experience with Databricks
- Experience with data cleansing, transformation, and validation
- Proven technical leadership on prior development projects
- Hands-on experience with versioning tools such as GitHub, Azure DevOps, Bitbucket, etc.
- Hands-on experience building pipelines in GitHub (or Azure DevOps, etc.)
- Hands-on experience using Relational Databases, such as Oracle, SQL Server, MySQL, Postgres or similar
- Experience using Markdown to document code in repositories, or automated documentation tools like pydoc
- Strong written and verbal communication skills
Preferred Qualifications
- Experience with data visualization tools such as Power BI or Tableau
- Experience with MLOps and DevOps CI/CD tools and automation processes (e.g., Azure DevOps, GitHub, Bitbucket)
- Experience with containers and their environments (Docker, Podman, Docker Compose, Kubernetes, Minikube, Kind, etc.)
- Experience working in cross-functional teams and communicating insights to stakeholders
Education
Master of Science or B.Tech degree from an accredited university
Equal Opportunity
Fusemachines is an Equal Opportunities Employer, committed to diversity and inclusion. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristic protected by applicable federal, state, or local laws.
Job Details
- Seniority level: Mid-Senior level
- Employment type: Contract
- Job function: Engineering and Information Technology
- Industry: Internet Publishing
Data Scientist
Posted 25 days ago
Job Description
Join us to shape your future, unlock opportunities, and drive innovation. Apply now to get registered in our talent pool.
Job Openings
- Data Scientist - Experience: 5+ years - Location: Remote - Posted: 05/01/2025
- React Native Developer - Experience: 5+ years - Location: Islamabad, PK - Posted: 05/01/2025
- Data Annotator - Experience: 1 year - Location: Remote - Posted: 05/01/2025
Leave your CV, and our manager will contact you soon. We look forward to welcoming you to our team.
Senior Data Scientist
Posted today
Job Description
About the Role:
Grade Level (for internal use): 10
Team Overview
Our Data and AI Engineering Team consists of skilled professionals focused on creating innovative and scalable data solutions. We specialise in cloud technologies, especially AWS and Snowflake, and prioritise data security and governance. Collaboration is key, as we work closely with Data Scientists and ML Engineers to advance impactful data projects. We encourage continuous learning and are actively adopting new technologies like Generative AI models and advanced machine learning tools to improve our data platforms and achieve outstanding outcomes.
Role Impact
In this role, you will design and implement scalable data architectures aligned with strategic goals, especially those supporting all stages of the Machine Learning (ML) and Generative AI (GenAI) lifecycle. You will apply your expertise in Snowflake, Python, and cloud technologies to maintain strong data security and integrity. Your work will drive innovation by developing, optimising, and maintaining production-level data pipelines that support both analytical and predictive modelling, enhancing data processing efficiency and enabling data-driven decision-making across the organisation.
Main Contributions
Responsibilities: (Dual Emphasis on Data Engineering & Data Science Enablement)
- ML Data Pipeline Development: Create and uphold scalable, reliable, and secure data infrastructure essential for Machine Learning model training, validation, and inference in production settings.
- Feature Store Collaboration: Partner with Data Scientists to design, build, and maintain a centralized Feature Store, ensuring consistency, reusability, and low-latency access to features for model training and deployment.
- GenAI Data Handling: Develop and execute data ingestion and processing workflows for text and unstructured data, including methods for vectorisation and populating vector databases to support Retrieval-Augmented Generation (RAG) architectures (a brief illustrative sketch follows this list).
- Core Data Engineering: Build and maintain robust ELT/ETL processes within Snowflake and AWS to integrate diverse, large-scale data sources, ensuring excellent data governance, quality, and performance for general business intelligence.
- MLOps Partnership: Collaborate closely with ML Engineers to operationalise data flows to and from MLOps platforms (e.g., SageMaker, MLflow), focusing on automating pipelines and managing data versioning.
- Query and Compute Efficiency: Optimize data storage and computing resources across the platform to speed up data science experiments and model training workloads, while managing costs effectively.
- Data Security and Compliance: Implement advanced security measures to safeguard sensitive data used in AI datasets, maintaining data integrity and confidentiality in line with governance standards.
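As a rough, self-contained sketch of the vectorise-and-retrieve step behind RAG: the hashing embedder below is a toy stand-in for a real embedding model, and the in-memory array stands in for a vector database such as Pinecone or ChromaDB:

```python
# Illustrative sketch: embed short documents and retrieve the best match.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hashing embedder; a production system would call a real model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

docs = [
    "refund policy for enterprise accounts",
    "how to rotate database credentials",
    "quarterly revenue by region",
]
index = np.stack([embed(d) for d in docs])  # stand-in for a vector database

query = embed("reset my credentials")
scores = index @ query                      # cosine similarity on unit vectors
print(docs[int(np.argmax(scores))])         # chunk retrieved to augment a prompt
```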
Essential Skills: (Dual Focus)
- Data Engineering Expertise (5+ Years): Extensive experience in a core Data Engineering role, including 5+ years working with Snowflake and strong skills in SQL and Python for data manipulation and engineering.
- Data Science Enablement: Demonstrated experience building and optimizing data pipelines specifically for Machine Learning model training and deployment (e.g., feature engineering, time-series data handling, complex schema management).
- Cloud Computing: Solid practical experience with AWS services (S3, Lambda, EC2, ECS) for creating reliable, scalable cloud data solutions.
- Distributed Processing: Familiarity with distributed computing frameworks such as PySpark or Dask for large-scale data transformation.
- Data Security: Strong knowledge of RBAC security models and secure data connection protocols.
- Communication: Excellent communication and teamwork skills, with a proven history of effective collaboration with Data Science and Business teams.
- MLOps / MLOps Platforms: Hands-on experience integrating data systems with MLOps platforms (e.g., MLflow, Amazon SageMaker) for production model deployment.
- Generative AI Data: Experience with vector databases (e.g., Pinecone, ChromaDB) and familiarity with data preparation for LLMs and RAG architectures.
- Feature Store Expertise: Practical or theoretical knowledge of Feature Stores (e.g., Feast, Tecton) and their role in the ML lifecycle.
- Advanced Data Orchestration: Experience with modern orchestrators like Apache Airflow, Prefect, or Dagster for managing complex, data-science-focused workflows (see the sketch after this list).
- Experience with Cortex AI or comparable cloud AI tools.
- Certifications in Snowflake, AWS Data Analytics, or related areas.
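For the orchestration item above, a minimal DAG sketch assuming a recent Airflow 2.x install; the DAG id and callables are hypothetical stubs:

```python
# Illustrative sketch: a two-step Airflow DAG, extraction before training.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pulling features...")   # stand-in for real pipeline code

def train_model():
    print("training model...")     # stand-in for real training code

with DAG(
    dag_id="daily_model_refresh",  # hypothetical name
    schedule="@daily",             # Airflow 2.4+ argument
    start_date=datetime(2025, 1, 1),
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features",
                             python_callable=extract_features)
    train = PythonOperator(task_id="train_model",
                           python_callable=train_model)
    extract >> train               # train runs only after extraction succeeds
```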
What’s In It For You?
Our Purpose:
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world.
Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People:
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values:
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits:
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global.
Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert:
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
---
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to: and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision.
---
Location: Islamabad, Pakistan
Consultant Data Scientist
Posted 4 days ago
Job Description
Job Title: Expert in Predictive Analytics
Location: Information Technology and Services - Islamabad, Pakistan
Posted on: Jul 4, 2016
Last Date: Aug 8, 2016
Job Responsibilities
- This role will act as an expert in Predictive Analytics, exploiting new analytic and statistical tools to produce insights into our customers and undertaking predictive and behavioral analytics that drive evidence-based resource decisions.
- The post holder will be expected to act independently as the expert, driving analytical insight into department-wide problems with complex recommendations that have been rigorously tested.
- Decisions will affect all FBR enforcement and compliance work areas; the post holder will provide advice, direction, and support to enhance analytical capability in algorithm development to predict taxpayer outcomes.
- The post holder will have a pioneering role with responsibility to significantly enhance the FBR’s analytical capability sustainably, informing decision-making and compliance strategy using data-driven analytics.
- Work alongside existing technical staff to mentor, develop, and deepen predictive modeling/data science skills in the team.
ESSENTIAL KNOWLEDGE AND SKILLS
- Critically challenge analysis employed in their business and provide constructive, pioneering alternatives to drive significant improvements.
- Support innovation and improvement of risk, customer relationship management (CRM), and cost modeling to improve end-to-end processes.
- Extensive experience of analyzing large and complex datasets.
- Familiarity with algorithm development categorized as Artificial Intelligence, Predictive Analytics, Data Mining, or Machine Learning.
- Demonstrable expertise in predictive analytics using a variety of commercially available analytical IT tools, e.g., SAS, R, or Python.
- Extensive hands-on experience of credit/risk and/or behavioral data analytics in a large organization.
- Experience of delivering risk and analysis projects balancing cost, quality, and timing variables for successful delivery.
- Significant experience (ideally 5+ years) in a post with project lead responsibility for operational applications of propensity/risk analysis strategies (a brief illustrative sketch follows this list).
- A minimum of a 4-year Bachelor's or a Master's degree in a numerate discipline from a recognized local or international university.
- Strong quantitative and technical skills.
- Experience of mentoring and developing teams in a complex analytical environment.
- Ability to engage effectively with senior business users through clear and professional written communication.
- An appreciation of Big Data technologies such as Hadoop, Spark, and MapReduce.
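A toy sketch of the propensity/risk modelling this post describes, using scikit-learn on synthetic data; every feature and label here is fabricated for illustration:

```python
# Illustrative sketch: a logistic-regression propensity model with CV AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))             # synthetic stand-in for case features
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.8, size=500) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
print("CV AUC:", cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean())

model.fit(X, y)
propensity = model.predict_proba(X)[:, 1]  # risk score per case
print("highest-risk cases:", np.argsort(propensity)[-5:])
```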
Desired Knowledge and Skills
- Knowledge of performance indicators used to monitor risk portfolio performance.
- Experience of segmentation and marketing analytics approaches.
- Good general understanding of BI tools and techniques such as Cognos, Business Objects, and Tableau.
- The work is inherently complex and intellectually demanding - if you like ambiguous and challenging data work and logical problem-solving, this is for you.
Senior Data Scientist
Posted 7 days ago
Job Description
Grade Level (for internal use): 10
The Team: Our Data Engineering Team is a dynamic group of professionals dedicated to building innovative and scalable data solutions. We pride ourselves on our expertise in cloud technologies, particularly AWS and Snowflake, and our commitment to data security and governance. Our team thrives on collaboration, working closely with business and technical stakeholders to drive impactful data initiatives. We value continuous learning and are always exploring new technologies, such as AI-driven models and advanced data visualization tools, to enhance our data platforms and deliver exceptional results.
The Impact: At S&P Global, we don’t give you intelligence—we give you essential intelligence. The essential intelligence you need to make decisions with conviction. We’re the world’s foremost provider of credit ratings, benchmarks and analytics in the global capital and commodity markets. Our divisions include S&P Global Ratings, S&P Global Market Intelligence, S&P Dow Jones Indices and S&P Global Platts. For more information, visit .
As part of this role, you will design and implement scalable data architectures that align with strategic objectives, enhancing overall business performance. Utilize your expertise in Snowflake, Python, and cloud technologies to ensure robust data security and integrity. Drive innovation by integrating AI-driven models and optimizing data workflows, thereby improving data processing efficiency and supporting informed decision-making across the organization.
What’s in it for you: As a Data Engineer, you will be at the forefront of transforming our data landscape, playing a crucial role in shaping the future of our data-driven strategies. Engage in exciting projects that leverage cutting-edge technologies like AI and cloud platforms to optimize data operations. Benefit from a collaborative and innovative work environment that encourages professional growth and values your contributions to achieving strategic goals. Enjoy opportunities to expand your skills and make a significant impact on our organization's success.
Our Marketing & Data Technology department at S&P Global is dedicated to driving innovation and ensuring the seamless integration and management of data across our commercial and marketing platforms. Our team values collaboration, continuous learning, and a commitment to excellence. We pride ourselves on our ability to adapt quickly to technological advancements and our focus on delivering high-quality data solutions to our clients.
Responsibilities:
- Design and implement scalable and efficient data systems, ensuring robust data architecture.
- Develop innovative data products that align with business goals and enhance data accessibility.
- Ensure data integrity and confidentiality through expert application of data security and encryption protocols.
- Leverage cloud technologies, particularly AWS, to optimize data operations and enhance performance.
- Implement and manage RBAC security models within Snowflake to maintain secure data environments (see the sketch after this list).
- Collaborate across business and technical teams to ensure effective communication and project delivery.
- Manage large-scale data projects from inception to completion, ensuring timely and successful outcomes.
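As a sketch of the Snowflake RBAC work above, issuing grants through the Snowflake Python connector; the account, database, role, and user names are hypothetical:

```python
# Illustrative sketch: create a read-only role and grant it to a user.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",          # hypothetical credentials;
    user="admin_user",             # use a secrets manager in practice
    password="...",
)
cur = conn.cursor()
for stmt in [
    "CREATE ROLE IF NOT EXISTS analyst_ro",
    "GRANT USAGE ON DATABASE analytics TO ROLE analyst_ro",
    "GRANT USAGE ON SCHEMA analytics.public TO ROLE analyst_ro",
    "GRANT SELECT ON ALL TABLES IN SCHEMA analytics.public TO ROLE analyst_ro",
    "GRANT ROLE analyst_ro TO USER report_user",
]:
    cur.execute(stmt)              # each grant runs as its own statement
conn.close()
```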
Required Skills:
- Bachelor’s degree in Software Engineering, Computer Science, or a related field, or 5+ years of experience in data engineering.
- 5+ years of experience working with Snowflake in a data engineering capacity.
- Strong proficiency in data architecture and demonstrated experience in data product development.
- Advanced proficiency in Python and SQL for data transformation and analytics.
- Deep understanding of RBAC security models and secure data connection protocols (JDBC, ODBC, etc.).
- Excellent communication skills for effective collaboration across diverse teams.
- Exposure to systems integrations for seamless data interoperability.
- Proficiency in query optimization and algorithmic efficiency.
Preferred Qualities:
- Knowledge of machine learning for integrating basic models into data processes.
- Strong collaborative skills and the ability to work effectively in team-oriented environments.
- Experience with Cortex AI, LLM tools, and developing Streamlit applications.
- Familiarity with data governance, security, and compliance best practices.
- Certifications in Snowflake, data engineering, or related fields.
- Experience with data visualization tools like Tableau or Power BI, and ELT/ETL tools such as Informatica.
What’s In It For You?
Our Purpose:
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world.
Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People:
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values:
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits:
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global.
Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert:
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
---
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to: and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision.
---