Infrastructure and data

Data Platform Engineer

This role owns the data layer that the product depends on: ingestion, storage, transformation, and query performance. You will design systems that handle high-volume AI output reliably and give the product the data quality it needs to report with confidence.

Applications opening soon

Role summary

Build and maintain the data infrastructure that makes AI visibility analysis reliable, fast, and scalable as prompt volume and reporting complexity grow.

Why this role exists

Data volume is increasing faster than the current infrastructure was designed for. We need someone to build a platform that can grow with the product without becoming a constant maintenance burden.

What you'll work on

  • Design and operate data pipelines from AI prompt execution through to query-ready reporting tables.
  • Own storage, schema design, and query optimization for high-volume, time-series AI output data.
  • Build internal tooling that helps the research and product teams explore and validate data quality.
  • Establish data observability practices so problems are caught before they reach the product.

What a good fit looks like

  • Strong experience designing data pipelines and schemas for analytical workloads.
  • Comfort with time-series data, partitioning strategies, and query performance at volume.
  • Experience building and operating pipelines that handle partial failures, late data, and schema evolution.
  • A product mindset: you understand that bad data leads to bad product decisions.

What will excite you here

  • Building the data foundation for a product category that does not have an established playbook yet.
  • Owning the full data platform, not just one pipeline.
  • Working on infrastructure where quality directly determines whether the product can be trusted.

First 90 days

  1. Map the current data flow end to end and identify the top three reliability or performance gaps.
  2. Ship at least one pipeline improvement that reduces latency or error rate on a core data path.
  3. Establish basic data observability so the team has visibility into pipeline health.

Hiring process

The process is deliberately short, direct, and grounded in real work.

  1. Application

    Send us your background, your relevant work, and why this role fits you.

  2. Core conversation

    A discussion focused on your work, your judgment, and the role.

  3. Role deep dive

    A discussion or exercise that looks more like the real work than a generic interview loop.

  4. Founder conversation

    A final discussion about the bar, ambition, and what success here would look like.

  5. Decision

    We close the loop clearly and move fast when there is conviction.

Need context before applying? [email protected]

Data Platform Engineer

The role is visible on the site. Applications open as soon as the corresponding Dover listing is active; until then, you can write to [email protected].