Backend Engineer
About the Role
We're looking for a Backend Engineer to build the core infrastructure that powers Syntrek's autonomous AI development pipelines. You'll work on the systems that orchestrate multiple LLMs, route tasks through multi-stage review processes, and maintain chain-of-custody verification for every automated action.
This is deep infrastructure work. You'll design the pipeline execution engine that coordinates dozens of concurrent agent workflows, build the model orchestration layer that routes requests across Claude, GPT, and other LLMs with cost tracking and fallback logic, and implement the evidence logging systems that make every automated decision auditable.
The systems you build must be reliable, observable, and efficient at scale. Our pipelines process thousands of development tasks daily, and every stage — from ticket triage to code review to deployment verification — must execute with high reliability and full traceability. You'll own critical pieces of this infrastructure.
What You'll Do
- Design and build the pipeline execution engine for multi-stage autonomous development workflows
- Implement the multi-model orchestration layer — routing, cost tracking, fallback logic, and rate limiting across LLM providers
- Build chain-of-custody verification systems that log evidence for every automated action and decision
- Design and optimize data models for pipeline state, execution history, and audit trails
- Implement robust error handling, retry logic, and circuit breakers for distributed AI agent workflows
- Build monitoring and alerting systems for pipeline health, model performance, and cost anomalies
- Develop internal APIs consumed by the frontend team for pipeline visibility and configuration
Requirements
- 5+ years of backend engineering experience with distributed systems in production
- Expert-level Python skills: you write clean, well-tested, production-grade Python as a matter of course
- Experience designing and operating distributed systems: queues, workers, schedulers, and state machines
- Strong database expertise: PostgreSQL schema design, query optimization, and data modeling at scale
- Experience with API design (REST and/or gRPC) and service-to-service communication patterns
- Solid understanding of observability: structured logging, metrics, tracing, and alerting
- Experience with containerized deployments and infrastructure-as-code (Docker, Terraform, or similar)
Nice-to-Haves
- Experience integrating with LLM APIs (Claude, GPT, etc.) and managing model-specific behaviors
- Background in workflow orchestration engines (Temporal, Prefect, Airflow)
- Experience with event sourcing or CQRS patterns for audit-heavy systems
- Familiarity with cost optimization for cloud infrastructure and API usage
- Prior work on CI/CD systems, code review automation, or developer tooling
Benefits
- Fully remote with flexible hours — we care about output, not hours logged
- Competitive compensation with meaningful equity
- Latest AI tooling and infrastructure provided — we use what we build
- Direct impact on product direction in a small, senior team
- Annual team offsites and generous PTO
- Health, dental, and vision coverage
Apply for this Role