1,131 Python Engineer jobs in Pakistan
Python Engineer
Posted 19 days ago
Job Description
We are seeking a Principal Software Engineer / Associate Architect to join our Delivery team as we accelerate innovation and support growth through technology solutions. This role supports mission-critical integrations between source data, financial SaaS platforms, and custom applications.
Responsibilities:
- Build automation tools for ML workflow orchestration and model deployment using CI/CD pipelines.
- Automate infrastructure deployments and tool configurations using Terraform or similar IaC tools.
- Write clean, maintainable code in Python, Go, and other object-oriented programming languages (e.g., Java).
- Troubleshoot and resolve issues related to ML workflow deployments.
- Collaborate with data science teams to improve developer workflows and ensure seamless integration with tools.
- Write CI/CD pipeline code; prior hands-on experience is a must.
- Develop internal tools and applications using Go.
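As a rough illustration of the workflow-orchestration work these responsibilities describe, here is a minimal, hypothetical sketch using only the Python standard library. The step names and dependency structure are invented for illustration; a real system would dispatch actual training and deployment jobs.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ML pipeline steps and their dependencies
# (names are illustrative, not from the posting).
pipeline = {
    "ingest": set(),
    "validate": {"ingest"},
    "train": {"validate"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
}

def run_order(dag: dict[str, set[str]]) -> list[str]:
    """Return a dependency-respecting execution order for the pipeline."""
    return list(TopologicalSorter(dag).static_order())

print(run_order(pipeline))  # ['ingest', 'validate', 'train', 'evaluate', 'deploy']
```

Tools like Airflow provide this same dependency-ordering idea, plus scheduling, retries, and monitoring on top.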
Requirements:
- Proficiency in Go is required; familiarity with Java is acceptable.
- Flexibility to work from 2 PM to 11 PM PKT for team collaboration.
- 5+ years of professional experience.
- Hands-on experience with MLOps workflow tools such as MLflow, Vertex AI, etc.
- Proficiency in Python, Go, and familiarity with another object-oriented language (e.g., Java).
- Experience with Terraform for automated deployments and IaC.
- Experience with Kubernetes or other container orchestration platforms.
- Basic knowledge of cloud technologies (AWS, GCP).
- Familiarity with security best practices in cloud and containerized environments.
- Ability to write unit tests, integration tests, and produce clean, reusable code.
- Passion for leveraging GenAI to automate manual processes.
We have a talented team of 700+ professionals working on innovative enterprise projects & products. Our clients include Fortune 100 retail and CPG companies, leading store chains, fast-growth fintech firms, and Silicon Valley startups.
What sets Confiz apart is our focus on processes and culture. We are ISO 9001:2015, ISO 27001:2022, ISO/IEC 20000-1:2018, and ISO 14001:2015 certified. We foster a vibrant culture of learning, collaboration, and fun at the workplace.
Join us to work with cutting-edge technologies and contribute to our success and your growth.
To learn more about Confiz Limited, visit:
Full Stack Python Engineer
Posted 14 days ago
Job Description
At Synares Systems, we transform ideas into impactful software solutions. Specializing in custom development, we empower businesses to streamline processes and enhance user experiences. Join our innovative team and help shape the future of software excellence!
Tasks:
- Develop and maintain APIs and back-end systems using Python frameworks (Django, Flask, or FastAPI).
- Build responsive front-end interfaces using React, Vue.js, or similar frameworks.
- Design and optimize databases for performance and scalability.
- Integrate third-party APIs and deploy applications using Docker and CI/CD pipelines.
- Collaborate with cross-functional teams to deliver high-quality features.
- Ensure system security, performance, and reliability.
- Stay updated on industry trends to drive innovation and improvements.
Requirements:
- Proficiency in Python (Django, Flask, or FastAPI) and front-end frameworks (React, Vue.js).
- Experience with databases (PostgreSQL, MySQL, MongoDB) and RESTful API design.
- Familiarity with Docker, Git, and cloud platforms (AWS, GCP, or Azure).
- Strong problem-solving, communication, and teamwork skills.
- 2+ years of full-stack development experience; Bachelor’s degree in a relevant field.
Benefits:
- Competitive salary and performance-based bonuses.
- Flexible working hours and remote work options.
- Opportunities for career growth and skill development.
- Collaborative and innovative work environment.
- Paid time off, including vacations and public holidays.
- Access to learning resources, workshops, and certifications.
If you’re passionate about building innovative software and want to be part of a dynamic, growing team, we’d love to hear from you. Join us at Synares Systems and make an impact on exciting projects that shape the future of technology. Apply today and let’s create something extraordinary together!
Data Engineer
Posted today
Job Description
Join to apply for the Data Engineer role at Contour Software
About Contour
Contour Software has grown from a dozen people to over 2,000 staff across 3 cities, in less than 14 years.
As a subsidiary of Constellation Software Inc., we are proud to be part of a global enterprise software conglomerate that has grown to become one of the top 10 software companies in the world, with employees and customers in 100+ countries. With a broad-based and ever-growing portfolio of market-leading, vertical-market enterprise solutions covering more than 100 industry domains in predominantly mature markets, CSI's recipe creates the perfect environment for professionals to build fulfilling, long-term careers.
What started as an R&D & Accounting back-office has progressed into a full-service Global Centre serving all functions and departments, at the divisional as well as operating group/corporate level. Today Contour employees, located in Karachi, Lahore & Islamabad, serve CSI divisions located in time zones spanning the globe, from Sydney to Vancouver. With the global growth of Constellation as the wind in our sails, we are only just getting started!
The Division
At CSIPay, a Jonas company, we are dedicated to empowering Constellation’s six operating groups in the USA and Canada with seamless payment processing solutions. Our goal is to give you greater control over your revenue, enhance your market potential, and improve customer satisfaction, fostering success for all within the Constellation family.
The Position
We are looking for Data Engineers to join our development team and participate in different projects. We are looking for proactive people, team players passionate about technology and big data. This is an excellent opportunity for those professionals looking to develop in one of the fastest growing areas of software development.
The selected individuals will work out of the Contour Software Lahore/Karachi/Islamabad resource center office, as an extension of the division-based R&D department.
Role and responsibilities include, but are not limited to:
- Write efficient queries to extract and analyze large datasets while ensuring seamless systems integration by developing robust data workflows.
- Design, develop, and maintain scalable ETL solutions for data pipelines, enhancing and optimizing existing processes to meet evolving business needs.
- Develop and expand the organization’s data technology stack to support advanced data processing and analytics needs, leveraging tools like Amazon EMR (Elastic MapReduce) and PySpark for big data handling.
- Apply machine learning techniques to clean and process data, supporting future initiatives to integrate machine learning into analytics workflows.
- Continuously evaluate and adopt the most efficient tools and technologies for data extraction, transformation, analysis, and integration.
- Build a strong understanding of customer business needs to deliver tailored and impactful data solutions, collaborating with stakeholders to translate business requirements into technical implementations.
- Utilize AWS services, including EMR and related tools, to enable scalable data processing and pipeline optimization while ensuring security, scalability, and cost-efficiency.
- Monitor and improve the performance of data systems and processes, staying updated on industry trends and emerging technologies in data engineering and analytics.
Requirements:
- Minimum 3 years of experience with AWS cloud data lake architectures, including services like S3, Glue, Athena, and Redshift.
- Hands-on experience with Apache Airflow for designing and managing complex data workflows.
- Deep understanding of data warehouse concepts, architectures, and structures.
- Expertise in cloud technologies, particularly AWS services, including AWS Glue, AWS Data Lake, and Amazon EMR for big data processing.
- Proven ability to design and build robust, scalable data pipelines for big data processing and transformation.
- Hands-on experience with big data tools and frameworks such as PySpark, Databricks, and related technologies.
- Proficiency in programming languages such as Python for data manipulation, transformation, and analysis. Advanced Python development skills, including experience with AWS SDKs (e.g. boto3) for interacting with cloud services.
- Proficiency in Apache Kafka for real-time data streaming and event-driven architectures.
- Strong SQL skills with a focus on complex query development and optimization. Familiarity with NoSQL databases such as DynamoDB or MongoDB.
- Experience integrating Python applications with RESTful APIs and external services.
- Deep understanding of data security best practices, including encryption at rest and in transit. Hands-on experience in implementing AWS KMS (Key Management Services) for managing encryption keys.
- Familiarity with IAM policies, VPC configurations, and security groups for securing data pipelines.
- Familiarity with machine learning concepts and their application to data cleaning and integration tasks.
- Advanced English proficiency, with excellent communication skills for collaborating with stakeholders.
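To illustrate the "complex query development" requirement above, here is a small, self-contained sketch of a window-function query, run against an in-memory SQLite database standing in for a warehouse. Table and column names are invented; the pattern (keep the latest row per key with `ROW_NUMBER()`) carries over to Redshift or Athena.

```python
import sqlite3

# Illustrative only: deduplicate events, keeping the latest row per user.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, event TEXT, ts INTEGER);
    INSERT INTO events VALUES (1, 'login', 10), (1, 'purchase', 20), (2, 'login', 5);
""")
rows = conn.execute("""
    SELECT user_id, event FROM (
        SELECT user_id, event,
               ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY ts DESC) AS rn
        FROM events
    ) WHERE rn = 1
    ORDER BY user_id
""").fetchall()
print(rows)  # [(1, 'purchase'), (2, 'login')]
```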
Benefits:
- Market-leading salary
- Medical Coverage – Self & Dependents
- Parents Medical Coverage
- Provident Fund
- Employee Performance-based bonuses
- Home Internet Subsidy
- Conveyance Allowance
- Profit Sharing Plan (Tenured Employees Only)
- Life Benefit
- Child Care Facility
- Company Provided Lunch/Dinner
- Professional Development Budget
- Recreational area for in-house games
- Sporadic On-shore training opportunities
- Friendly work environment
- Leave Encashment
In our continuous effort to promote inclusivity, we extend our commitment to individuals with special needs by providing reasonable accommodations. We actively encourage qualified individuals with special needs to apply for the various openings within our company. Should you require assistance in completing the application process or have any inquiries regarding special facilities, please do not hesitate to contact our HR team. Your unique talents and abilities are welcomed and valued here.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: IT Services and IT Consulting
Data Engineer
Posted today
Job Description
We’re on a mission to eliminate geographic borders as barriers to full-time employment and fair wages. We’re creating a global HR platform ecosystem that seamlessly connects exceptional talent worldwide with North American businesses. By making global hiring easier than local hiring, we provide businesses access to a broader talent pool and accelerate their hiring process. Spread across four continents, we’re a global team disrupting how people work together.
Role Overview
We are looking for a skilled Data Engineer to join our team. The ideal candidate will have strong expertise in designing, building, and maintaining scalable data pipelines and warehouses, ensuring data accuracy, quality, and accessibility across the organization.
Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines for efficient data processing.
- Build and optimize data models and warehouses to support analytics and reporting.
- Implement data governance and quality frameworks to ensure reliable insights.
- Integrate data from various sources (APIs, databases, third-party systems).
- Collaborate with cross-functional teams (engineering, product, analytics) to enable data-driven decision-making.
Required Skills
- Strong knowledge of data modeling and data warehousing.
- Proficiency in BigQuery, SQL, and NoSQL databases.
- Experience working with APIs for data integration.
- Strong programming skills in Python for data engineering tasks and automation.
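A toy end-to-end pass over the skills listed above: extract (a simulated API response, inlined because this is a sketch), transform in Python, and load into a warehouse table, with SQLite standing in for BigQuery. All names and values are hypothetical.

```python
import json
import sqlite3

# Extract: in a real pipeline this JSON would come over HTTP from an API.
raw = json.loads('[{"id": 1, "amount": "12.50"}, {"id": 2, "amount": "7.25"}]')

# Transform: normalize string amounts into numeric types.
records = [(r["id"], float(r["amount"])) for r in raw]

# Load: insert into a warehouse table and verify with an aggregate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", records)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)  # 19.75
```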
Preferred Skills
- Experience with data governance and data quality frameworks.
- Familiarity with cloud-based data platforms and modern data stack tools.
Qualifications
- 3+ years of experience as a Data Engineer or in a related role.
- Strong problem-solving and analytical skills.
Why Join Edge?
Edge is at a pivotal growth point, offering you the rare opportunity to shape the future of global employment. Your work will directly impact business growth, enable global opportunities, and transform how people work across borders.
We’re not just offering a job — we’re inviting you to be part of a revolution.
Ready to leave a global footprint and change lives? Edge is where your vision becomes reality.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Product Management, Analyst, and Quality Assurance
- Industries: IT Services and IT Consulting; Software Development
Data Engineer
Posted 4 days ago
Job Description
You will be participating in exciting projects covering the end-to-end data lifecycle – from raw data integrations with primary and third-party systems, through advanced data modeling, to state-of-the-art data visualization and development of innovative data products.
You will have the opportunity to learn how to build and work with both batch and real-time data processing pipelines. You will work in a modern cloud-based data warehousing environment alongside a team of diverse, intense, and interesting co-workers. You will liaise with other departments – such as product & tech, the core business verticals, trust & safety, finance, and others – to enable them to be successful.
Your responsibilities
- Design, implement, and support data warehousing:
- Raw data integrations with primary and third-party systems
- Data warehouse modeling for operational & application data layers
- Development in Amazon Redshift cluster
- SQL development as part of agile team workflow
- ETL design and implementation in Matillion ETL
- Real-time data pipelines and applications using serverless and managed AWS services such as Lambda, Kinesis, API Gateway, etc.
- Design and implementation of data products enabling data-driven features or business solutions
- Building data dashboards and advanced visualizations in Sisense for Cloud Data Teams (formerly Periscope Data) with a focus on UX, simplicity, and usability
- Working with other departments on data products – i.e. product & technology, marketing & growth, finance, core business, advertising, and others
- Being part and contributing towards a strong team culture and ambition to be on the cutting edge of big data
- Evaluate and improve data quality by implementing test cases, alerts, and data quality safeguards
- Living the team values: Simpler. Better. Faster.
- Strong desire to learn
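Since the responsibilities above mention real-time pipelines on Lambda and Kinesis, here is a minimal, hypothetical consumer sketch. Kinesis delivers record payloads base64-encoded under `record['kinesis']['data']`, which is what the handler decodes; the event below is simulated, and the payload fields are invented.

```python
import base64
import json

def handler(event, context=None):
    """Sketch of an AWS Lambda consumer for a Kinesis stream."""
    out = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded; decode then parse JSON.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        out.append(payload)
    return out

# Simulated Kinesis event (shape follows the AWS event structure).
event = {"Records": [
    {"kinesis": {"data": base64.b64encode(json.dumps({"order_id": 7}).encode()).decode()}}
]}
print(handler(event))  # [{'order_id': 7}]
```

In production the return value would feed a sink (Redshift, S3, another stream) rather than being collected into a list.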
Required minimum experience (must)
- 1-2 years of experience in data processing, analysis, and problem-solving with large amounts of data;
- Good SQL skills across a variety of relational data warehousing technologies especially in cloud data warehousing (e.g. Amazon Redshift, Google BigQuery, Snowflake, Vertica, etc.)
- 1+ years of experience with one or more programming languages, especially Python
- Ability to communicate insights and findings to a non-technical audience
- Written and verbal proficiency in English
- Entrepreneurial spirit and ability to think creatively; highly-driven and self-motivated; strong curiosity and strive for continuous learning
- Top-of-class university technical degree, such as computer science, engineering, math, or physics.
Additional experience (strong plus)
- Experience working with customer-centric data at big data-scale, preferably in an online / e-commerce context
- Experience with modern big data ETL tools (e.g. Matillion)
- Experience with AWS data ecosystem (or other cloud providers)
- Track record in business intelligence solutions, building and scaling data warehouses, and data modeling
- Tagging, Tracking, and reporting with Google Analytics 360
- Knowledge of modern real-time data pipelines (e.g. serverless framework, lambda, kinesis, etc.)
- Experience with modern data visualization platforms such as Periscope, Looker, Tableau, Google Data Studio, etc.
- Linux, bash scripting, JavaScript, HTML, XML
- Docker containers and Kubernetes
Data Engineer
Posted 4 days ago
Job Description
Job Title: Data Engineer
Location: Karachi, Lahore, Islamabad (Hybrid)
Experience: 5+ Years
Job Type: Full-Time
Job Overview
We are looking for a highly skilled and experienced Data Engineer with a strong foundation in big data, distributed computing, and cloud-based data solutions. This role demands a strong understanding of end-to-end data pipelines, data modeling, and advanced data engineering practices across diverse data sources and environments. You will play a pivotal role in building, deploying, and optimizing data infrastructure and pipelines in a scalable cloud-based architecture.
Key Responsibilities
- Design, develop, and maintain large-scale data pipelines using modern big data technologies and cloud-native tools.
- Build scalable and efficient distributed data processing systems using Hadoop, Spark, Hive, and Kafka.
- Work extensively with cloud platforms (preferably AWS) and services like EMR, Glue, Lambda, Athena, S3.
- Design and implement data integration solutions pulling from multiple sources into a centralized data warehouse or data lake.
- Develop pipelines using DBT (Data Build Tool) and manage workflows with Apache Airflow or Step Functions.
- Write clean, maintainable, and efficient code using Python, PySpark, or Scala for data transformation and processing.
- Build and manage relational and columnar data stores such as PostgreSQL, MySQL, Redshift, Snowflake, HBase, ClickHouse.
- Implement CI/CD pipelines using Docker, Jenkins, and other DevOps tools.
- Collaborate with data scientists, analysts, and other engineering teams to deploy data models into production.
- Drive data quality, integrity, and consistency across systems.
- Participate in Agile/Scrum ceremonies and utilize JIRA for task management.
- Provide mentorship and technical guidance to junior team members.
- Contribute to continuous improvement by making recommendations to enhance data engineering processes and architecture.
Requirements
- 5+ years of hands-on experience as a Data Engineer.
- Deep knowledge of Big Data technologies – Hadoop, Spark, Hive, Kafka.
- Expertise in Python, PySpark and/or Scala.
- Proficient with data modeling, SQL scripting, and working with large-scale datasets.
- Experience with distributed storage like HDFS and cloud storage (e.g., AWS S3).
- Hands-on experience with data orchestration tools like Apache Airflow or Step Functions.
- Experience working in AWS environments with services such as EMR, Glue, Lambda, Athena.
- Familiarity with data warehousing concepts and experience with tools like Redshift, Snowflake (preferred).
- Exposure to tools like Informatica, Ab Initio, or Apache Iceberg is a plus.
- Knowledge of Docker, Jenkins, and other CI/CD tools.
- Strong problem-solving skills, initiative, and a continuous learning mindset.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
- Experience with open table formats such as Apache Iceberg.
- Hands-on with Ab Initio (GDE, Collect > IT) or Informatica tools.
- Knowledge of Agile methodology, working experience in JIRA.
- Self-driven, proactive, and a strong team player.
- Excellent communication and interpersonal skills.
- Passion for data and technology innovation.
- Ability to work independently and manage multiple priorities in a fast-paced environment.
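The requirements above lean on Hive, Athena, and Glue, all of which share the Hive-style partition layout for data-lake storage. The sketch below builds such a partition prefix; the bucket and table names are placeholders, not anything from the posting.

```python
from datetime import date

def partition_key(table: str, day: date, bucket: str = "my-data-lake") -> str:
    """Build a Hive-style partition prefix of the kind query engines
    (Athena, Spark, Hive) can prune on when filtering by date."""
    return (f"s3://{bucket}/{table}/"
            f"year={day.year}/month={day.month:02d}/day={day.day:02d}/")

print(partition_key("orders", date(2024, 5, 3)))
# s3://my-data-lake/orders/year=2024/month=05/day=03/
```

Laying out files this way lets a `WHERE year = 2024 AND month = 5` filter skip every other partition's objects entirely.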
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: IT Services and IT Consulting
Data Engineer
Posted 14 days ago
Job Description
Location: Lahore / Kharian
Position Type: Full-Time
Job Overview:
This role focuses on developing and implementing data warehouse solutions across the organization while managing large sets of structured and unstructured data. The Data Engineer will analyse complex customer requirements, define data transformation rules, and oversee their implementation. The ideal candidate should have a solid understanding of data acquisition, integration, and transformation, with the ability to evaluate and recommend optimal architecture and approaches.
Key Responsibilities:
- Design data pipelines: Develop robust data pipelines capable of handling both structured and unstructured data effectively.
- Build data ingestion and ETL processes: Create efficient data ingestion pipelines and ETL processes, including low-latency data acquisition and stream processing using tools like Kafka or Glue.
- Develop integration procedures: Design and implement processes to integrate data warehouse solutions into operational IT environments.
- Data lake management: Manage and optimize the data lake on AWS, ensuring efficient data storage, retrieval, and transformation. Implement best practices for organizing and managing raw, processed, and curated data, with scalability and future growth in mind.
- Optimize SQL and shell scripts: Write, optimize, and maintain complex SQL queries and shell scripts to ensure efficient data processing.
- Monitor and optimize performance: Continuously monitor system performance and recommend necessary configurations or infrastructure improvements.
- Document and present workflows: Prepare detailed documentation, collaborate with cross-functional teams, present complete data workflows to teams, and maintain an up-to-date knowledge base.
- Governance & quality: Develop and maintain data quality checks to ensure data in the lake and warehouse remains accurate, consistent, and reliable.
- Collaboration with stakeholders: Work closely with the CTO, PMOs, and business and data analysts to gather requirements and ensure alignment with project goals.
- Scope and manage projects: Collaborate with project managers to scope projects, create detailed work breakdown structures, and conduct risk assessments.
- Research & development (R&D): Keep up with the latest technological trends and identify innovative solutions to address customer challenges and company priorities.
Skills and Qualifications:
- Bachelor’s or Master’s degree in Engineering, Computer Science, or equivalent experience.
- At least 4 years of relevant experience as a Data Engineer.
- Hands-on experience with cloud platforms such as AWS, Azure, or GCP, and familiarity with their respective cloud services.
- Hands-on experience with one or more ETL tools such as Glue, Spark, Kafka, Informatica, DataStage, Talend, or Azure Data Factory (ADF).
- Strong understanding of dimensional modeling techniques, including Star and Snowflake schemas.
- Experience in creating semantic models and reporting mapping documents.
- Solid concepts and experience in designing and developing ETL architectures.
- Strong understanding of RDBMS concepts and proficiency in SQL development.
- Proficiency in data modeling and mapping techniques.
- Experience integrating data from multiple sources.
- Experience working in distributed environments, including clustering and sharding.
- Knowledge of big data tools like Pig, Hive, or NiFi would be a plus.
- Experience with Hadoop distributions like Cloudera or Hortonworks would be a plus.
- Excellent communication and presentation skills, both verbal and written.
- Ability to solve problems using a creative and logical approach.
- Self-motivated, analytical, detail-oriented, and organized, with a commitment to excellence.
- Experience in the financial services sector is a plus.
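As a minimal illustration of the Star-schema modeling the qualifications mention: one fact table keyed to one dimension table, with a typical rollup query. SQLite stands in for the warehouse, and every table, column, and value is invented for this sketch.

```python
import sqlite3

# A tiny star schema: measures live in the fact table, labels in the dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, country TEXT);
    CREATE TABLE fact_transfer (
        transfer_id INTEGER PRIMARY KEY,
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        amount REAL
    );
    INSERT INTO dim_customer VALUES (1, 'PK'), (2, 'UK');
    INSERT INTO fact_transfer VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 75.0);
""")
# A typical star-schema rollup: aggregate the fact, group by a dimension attribute.
rows = conn.execute("""
    SELECT d.country, SUM(f.amount)
    FROM fact_transfer f JOIN dim_customer d USING (customer_key)
    GROUP BY d.country ORDER BY d.country
""").fetchall()
print(rows)  # [('PK', 150.0), ('UK', 75.0)]
```

A Snowflake schema differs only in that the dimensions themselves are normalized into further tables.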
ACE Money Transfer Profile:
Data Engineer
Posted 14 days ago
Job Description
About Burq
Burq started with an ambitious mission: turning the complex process of offering delivery into a simple turnkey solution.
We started with building the largest network of delivery networks, partnering with some of the biggest delivery companies. We then made it extremely easy for businesses to plug into our network and start offering delivery to their customers. Now, we’re powering deliveries for some of the fastest-growing companies from retailers to startups.
It’s a big mission and now we want you to join us to make it even bigger!
We’re already backed by some of the Valley's leading venture capitalists, including Village Global, the fund whose investors include Bill Gates, Jeff Bezos, Mark Zuckerberg, Reid Hoffman, and Sara Blakely. We have assembled a world-class team all over the U.S.
We operate at scale, but we're still a small team relative to the opportunity. We have a staggering amount of work ahead. That means you have an unprecedented opportunity to grow while doing the most important work of your career.
We want people who are unafraid to be wrong and support decisions with numbers and narrative.
Responsibilities:
- Design, build, and maintain efficient and scalable data pipelines using tools like Airbyte, Airflow, and dbt, with a focus on integrating with Snowflake.
- Manage and optimize data warehousing solutions using Snowflake, ensuring data is organized, secure, and accessible for analysis.
- Develop and implement automations and workflows to streamline data processing and integration, ensuring seamless data flow across systems.
- Collaborate with cross-functional teams to set up and maintain data infrastructure that supports both engineering and analytical needs.
- Utilize Databricks and Spark for big data processing, ensuring data is processed, stored, and analyzed efficiently.
- Monitor data streams and processes using Kafka and Monte Carlo, ensuring data quality and integrity.
- Work closely with data analysts and other stakeholders to create, maintain, and optimize visualizations and reports using Tableau, ensuring data-driven decision-making.
- Ensure the security and compliance of data systems, implementing best practices and leveraging tools like Terraform for infrastructure management.
- Continuously evaluate and improve data processes, staying current with industry best practices and emerging technologies, with a strong emphasis on data analytics and visualization.
Requirements:
- Proficiency in SQL and experience with data visualization tools such as Tableau.
- Hands-on experience with data engineering tools and platforms including Snowflake, Airbyte, Airflow, and Terraform.
- Strong programming skills in Python and experience with data transformation tools like dbt.
- Familiarity with big data processing frameworks such as Databricks and Apache Spark.
- Knowledge of data streaming platforms like Kafka.
- Experience with data observability and quality tools like Monte Carlo.
- Solid understanding of data warehousing, data pipelines, and database management, with specific experience in Snowflake.
- Ability to design and implement automation, workflows, and data infrastructure.
- Strong analytical skills with the ability to translate complex data into actionable insights, particularly using Tableau.
- Excellent problem-solving abilities and attention to detail.
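The data observability requirement above boils down to automated checks over pipeline output. Here is a toy version of such a check, with field names and thresholds invented for illustration; observability platforms run checks like these continuously and alert on regressions.

```python
def quality_report(rows, required=("id", "amount")):
    """Toy data-quality pass: count nulls in required fields and
    duplicate ids (field names here are illustrative)."""
    nulls = sum(1 for r in rows for f in required if r.get(f) is None)
    ids = [r.get("id") for r in rows]
    dupes = len(ids) - len(set(ids))
    return {"null_values": nulls, "duplicate_ids": dupes}

rows = [{"id": 1, "amount": 5.0}, {"id": 1, "amount": None}, {"id": 2, "amount": 3.0}]
print(quality_report(rows))  # {'null_values': 1, 'duplicate_ids': 1}
```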
Investing in you
- Competitive salary
- Medical
- Educational courses
- Generous time off
At Burq, we value diversity. We are an equal opportunity employer: we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.