
Data Engineer


  • Employment Type: Full-time
  • Experience: 3+ years
  • Salary Range: Competitive
  • Location: Remote

Job Description

iJKos & Partners builds enterprise data platforms for clients across industries. We are hiring a Data Engineer to design, build, and maintain the data infrastructure that powers analytics, reporting, and machine learning workloads. You will work directly with client teams to deliver production-grade pipelines on modern cloud data stacks.

Responsibilities

  • Design and implement batch and streaming data pipelines using Airflow, Dagster, Spark, and Kafka (a minimal Airflow sketch follows this list)
  • Model and transform data in cloud warehouses (Snowflake, BigQuery, Redshift) using dbt and SQL
  • Build and maintain data lake storage layers on S3 and GCS
  • Define and enforce data quality checks, schema contracts, and SLAs for pipeline reliability
  • Collaborate with analytics engineers and data scientists to deliver clean, well-documented datasets
  • Diagnose and resolve pipeline failures, data drift, and performance bottlenecks
  • Write infrastructure as code for data platform resources (Terraform, Pulumi, or equivalent); see the Pulumi sketch after this list
  • Participate in architecture reviews for new client engagements
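
As a flavor of the orchestration work described above, here is a minimal, hypothetical Airflow sketch: a daily batch job that extracts raw events, transforms them, and gates the run on a simple quality check. It assumes Airflow 2.x, and every name in it (daily_events, extract_events, the stubbed row count) is an illustrative assumption, not part of this role's actual stack.

```python
# Hypothetical sketch only: DAG, task, and table names are invented for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Pull the day's raw events from a source system (stubbed here).
    print(f"extracting events for {context['ds']}")


def transform_events(**context):
    # Clean and model the raw events (stubbed here).
    print(f"transforming events for {context['ds']}")


def check_quality(**context):
    # Simple quality gate: fail the run if the output is empty.
    row_count = 42  # in reality, query the warehouse for the day's row count
    if row_count == 0:
        raise ValueError("quality check failed: no rows produced")


with DAG(
    dag_id="daily_events",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    transform = PythonOperator(task_id="transform", python_callable=transform_events)
    quality = PythonOperator(task_id="quality_check", python_callable=check_quality)

    extract >> transform >> quality
```

In practice the stubs would call out to a warehouse or lake, and the quality gate would enforce the schema contracts and SLAs mentioned above rather than a hard-coded count.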
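
For the infrastructure-as-code bullet, a comparable sketch in Pulumi's Python SDK might provision a raw landing bucket for the data lake. The resource names are hypothetical, and the AWS choice is just one of the clouds listed in this posting.

```python
# Hypothetical sketch: resource names are illustrative, not a real client setup.
import pulumi
import pulumi_aws as aws

# A versioned S3 bucket to serve as the data lake's raw landing zone.
raw_bucket = aws.s3.Bucket(
    "raw-landing-zone",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Export the bucket name so pipelines can reference it.
pulumi.export("raw_bucket_name", raw_bucket.id)
```

Terraform would declare the same resource in HCL; either way, the point is that platform resources are versioned and reviewable rather than hand-built in a console.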

Qualifications

  • 3+ years of professional experience in data engineering or a related backend role
  • Strong proficiency in Python and SQL
  • Hands-on experience with at least one orchestration framework (Airflow, Dagster, Prefect)
  • Working knowledge of a major cloud data warehouse (Snowflake, BigQuery, or Redshift)
  • Experience with dbt or similar transformation frameworks
  • Familiarity with cloud services on AWS, GCP, or Azure
  • Understanding of data modeling patterns (dimensional, Data Vault, or similar)
  • Ability to work autonomously in a remote, async environment

Nice to Have

  • Experience with streaming platforms (Kafka, Kinesis, Pub/Sub)
  • Exposure to Spark or other distributed compute engines
  • Knowledge of containerization and CI/CD for data workloads (Docker, GitHub Actions)
  • Familiarity with data cataloging or governance tools (DataHub, Atlan, OpenMetadata)
  • Prior consulting or client-facing experience

Benefits

  • Fully remote position with flexible working hours
  • Async-first culture with minimal meetings
  • Annual learning budget for conferences, courses, and certifications
  • Access to modern tooling and cloud environments
  • Work on diverse projects across multiple industries
  • Performance-based compensation reviews

Apply for position
