We help companies turn raw data into trusted assets. Join us and work on real engineering challenges across industries and tech stacks.
What makes iJKos & Partners different
No busywork. You'll design pipelines, data models, and platforms for real clients with real data.
Every project brings a new domain, a new tech stack, and a chance to level up fast.
Work from anywhere. We value results over hours and trust over micromanagement.
Set your own hours. We're async-first and respect your time.
Get the equipment you need to do your best work.
Conferences, courses, certifications: we invest in your growth.
Generous vacation policy. Rest is part of the job.
Analytics Engineer

Job Description
iJKos & Partners helps enterprises design, build, and operate modern data platforms. As an Analytics Engineer, you will own the modeling layer between raw data and business intelligence: transforming ingested data into clean, tested, well-documented datasets that analysts and stakeholders can trust and query independently. You will work directly with client data teams and internal engineers to define metrics, enforce data quality, and reduce time-to-insight across the organization.

Responsibilities
Design and maintain dbt projects: staging, intermediate, and mart models following consistent naming and style conventions
Define and manage a metrics/semantic layer so business users get consistent numbers across tools
Write and maintain data tests (schema, custom, and source freshness) to catch issues before they reach dashboards
Document models, columns, and business logic in dbt docs and external knowledge bases
Build and optimize dashboards and self-serve datasets in Looker, Tableau, or Metabase, depending on the client stack
Collaborate with data engineers on source ingestion contracts and with analysts on reporting requirements
Review pull requests for SQL quality, performance, and adherence to project conventions
Profile query performance on Snowflake, BigQuery, or Redshift and recommend materializations or refactors

Qualifications
2+ years working with SQL in an analytics or data engineering role
Hands-on experience with dbt (dbt Core or dbt Cloud)
Solid understanding of dimensional modeling, slowly changing dimensions, and incremental materializations
Experience with at least one cloud data warehouse: Snowflake, BigQuery, or Redshift
Familiarity with Git-based workflows (branching, code review, CI/CD for dbt)
Ability to translate business questions into reliable data models
Strong written communication: you document as you build

Nice to Have
Experience with Looker (LookML), Tableau, or Metabase administration
Exposure to orchestration tools such as Airflow, Dagster, or dbt Cloud jobs
Knowledge of data contracts, data mesh concepts, or metadata management tools (Atlan, DataHub)
Python or Jinja proficiency for advanced dbt macros and scripting
Prior consulting or client-facing experience

Benefits
Fully remote position with async-first culture and flexible working hours
Work across diverse client engagements; no two projects are the same
Annual learning budget for conferences, courses, and certifications
Home office equipment stipend
Collaborative, low-hierarchy team that values clear thinking over meetings
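To give candidates a feel for the modeling-and-testing loop this role owns, here is a rough, plain-Python sketch. On a real engagement this logic would be expressed as dbt SQL models and schema tests rather than pandas; the table and column names (raw_orders, mart_daily_revenue, and so on) are hypothetical placeholders.

```python
# Illustrative only: in practice these transformations and checks are written
# as dbt models and schema tests, not hand-rolled pandas. All names here are
# hypothetical.
import pandas as pd

# "Staging": a small in-memory stand-in for an ingested raw_orders table.
raw_orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "ordered_at": pd.to_datetime(["2024-05-01", "2024-05-01", "2024-05-02", "2024-05-02"]),
    "amount_usd": [120.0, 35.5, 80.0, 14.25],
})

# "Mart": daily revenue, the kind of dataset analysts query directly.
mart_daily_revenue = (
    raw_orders
    .assign(order_date=raw_orders["ordered_at"].dt.date)
    .groupby("order_date", as_index=False)
    .agg(total_revenue_usd=("amount_usd", "sum"), order_count=("order_id", "count"))
)

# "Tests": the assertions a dbt schema test would encode declaratively.
assert raw_orders["order_id"].is_unique, "order_id must be unique"
assert raw_orders["amount_usd"].notna().all(), "amount_usd must not be null"
assert not mart_daily_revenue.empty, "mart must not be empty"

print(mart_daily_revenue)
```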
Data Engineer

Job Description
iJKos & Partners builds enterprise data platforms for clients across industries. We are hiring a Data Engineer to design, build, and maintain the data infrastructure that powers analytics, reporting, and machine learning workloads. You will work directly with client teams to deliver production-grade pipelines on modern cloud data stacks.

Responsibilities
Design and implement batch and streaming data pipelines using Airflow, Dagster, Spark, and Kafka
Model and transform data in cloud warehouses (Snowflake, BigQuery, Redshift) using dbt and SQL
Build and maintain data lake storage layers on S3 and GCS
Define and enforce data quality checks, schema contracts, and SLAs for pipeline reliability
Collaborate with analytics engineers and data scientists to deliver clean, well-documented datasets
Diagnose and resolve pipeline failures, data drift, and performance bottlenecks
Write infrastructure-as-code for data platform resources (Terraform, Pulumi, or equivalent)
Participate in architecture reviews for new client engagements

Qualifications
3+ years of professional experience in data engineering or a related backend role
Strong proficiency in Python and SQL
Hands-on experience with at least one orchestration framework (Airflow, Dagster, Prefect)
Working knowledge of a major cloud data warehouse (Snowflake, BigQuery, or Redshift)
Experience with dbt or similar transformation frameworks
Familiarity with cloud services on AWS, GCP, or Azure
Understanding of data modeling patterns (dimensional, Data Vault, or similar)
Ability to work autonomously in a remote, async environment

Nice to Have
Experience with streaming platforms (Kafka, Kinesis, Pub/Sub)
Exposure to Spark or other distributed compute engines
Knowledge of containerization and CI/CD for data workloads (Docker, GitHub Actions)
Familiarity with data cataloging or governance tools (DataHub, Atlan, OpenMetadata)
Prior consulting or client-facing experience

Benefits
Fully remote position with flexible working hours
Async-first culture with minimal meetings
Annual learning budget for conferences, courses, and certifications
Access to modern tooling and cloud environments
Work on diverse projects across multiple industries
Performance-based compensation reviews
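As a rough illustration of the batch pipelines this role builds, here is a minimal sketch using Airflow's TaskFlow API (assuming Airflow 2.4 or later). The DAG, task, and dataset names are hypothetical placeholders; a real client pipeline would read from actual sources and load into a warehouse rather than printing a summary.

```python
# Illustrative only: a minimal daily batch pipeline sketched with Airflow's
# TaskFlow API. All names and data are hypothetical placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["example"])
def daily_orders_pipeline():
    @task
    def extract() -> list[dict]:
        # In a real engagement this would pull from a source system or land files in S3/GCS.
        return [{"order_id": 1, "amount_usd": 120.0}, {"order_id": 2, "amount_usd": 35.5}]

    @task
    def transform(rows: list[dict]) -> dict:
        # Basic aggregation; with dbt in the stack this step often becomes a dbt run.
        return {"order_count": len(rows), "total_revenue_usd": sum(r["amount_usd"] for r in rows)}

    @task
    def load(summary: dict) -> None:
        # Stand-in for a warehouse load (Snowflake / BigQuery / Redshift).
        print(f"Loading summary into warehouse: {summary}")

    load(transform(extract()))


daily_orders_pipeline()
```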
Data Platform Developer

Job Description
iJKos & Partners helps enterprises build reliable, scalable data platforms. Our core product, DMP.AF, is the foundation we deploy and extend for every client engagement. As a Data Platform Developer, you will own the infrastructure, tooling, and automation that our data engineers and analytics engineers depend on daily. You will work across the full platform stack, from provisioning cloud resources to shipping internal developer tools, in a remote-first, async team.

Responsibilities
Design, build, and maintain cloud infrastructure for DMP.AF using Terraform or Pulumi on AWS and GCP
Develop and improve internal CLI tools, libraries, and services in Python and Go
Own CI/CD pipelines: keep builds fast, deployments safe, and rollbacks painless
Package and orchestrate services with Docker and Kubernetes
Manage and optimize PostgreSQL databases that back platform services
Automate repetitive operational work: environment provisioning, secret rotation, monitoring setup
Collaborate with data engineers to understand pain points and ship tooling that removes friction
Write clear documentation for the systems you build so others can operate them independently

Qualifications
3+ years of professional experience in platform engineering, infrastructure, or backend development
Strong working knowledge of at least one of Python or Go
Hands-on experience with infrastructure-as-code tools (Terraform, Pulumi, or equivalent)
Practical experience running workloads on Kubernetes in a production environment
Familiarity with AWS or GCP services (networking, IAM, managed databases, object storage)
Solid understanding of CI/CD principles and tools (GitHub Actions, GitLab CI, or similar)
Comfortable working with PostgreSQL: schema management, query tuning, backups
Clear written communication skills suited to async, distributed teams

Nice to Have
Experience building internal developer platforms or self-service tooling
Familiarity with data engineering workflows (dbt, Airflow, Spark)
Contributions to open-source infrastructure or developer tools
Experience with observability stacks (Prometheus, Grafana, OpenTelemetry)
Background in multi-tenant SaaS architectures

Benefits
Fully remote position with flexible working hours
Async-first culture: minimal meetings, maximum focus time
Direct impact on the product used by every client engagement
Professional development budget for conferences, courses, and certifications
Equipment and home office stipend
Competitive compensation reviewed annually
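To show the flavor of the internal tooling this role ships, here is a small, hypothetical CLI sketch in Python using only the standard library. The command, environment names, and checks are illustrative placeholders; a real tool would query cloud APIs, Terraform state, or Kubernetes rather than a hard-coded set.

```python
#!/usr/bin/env python3
# Illustrative only: the flavor of small internal CLI tooling this role builds.
# The environments and checks below are hypothetical placeholders.
import argparse
import sys

ENVIRONMENTS = {"dev", "staging", "prod"}


def check_environment(name: str) -> list[str]:
    """Return a list of (pretend) readiness problems for an environment."""
    problems = []
    if name not in ENVIRONMENTS:
        problems.append(f"unknown environment '{name}' (expected one of {sorted(ENVIRONMENTS)})")
    # In a real tool, checks here might call cloud APIs, inspect Terraform state,
    # or verify Kubernetes workloads and secrets.
    return problems


def main() -> int:
    parser = argparse.ArgumentParser(description="Check that a platform environment is ready.")
    parser.add_argument("environment", help="environment name, e.g. dev, staging, prod")
    args = parser.parse_args()

    problems = check_environment(args.environment)
    if problems:
        for problem in problems:
            print(f"ERROR: {problem}", file=sys.stderr)
        return 1

    print(f"{args.environment}: all checks passed")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```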
Join the team that companies trust iJKos & Partners to be: the one that builds reliable data infrastructure and turns complexity into clear, confident decisions.