AI Engineer
Zoca
If you want to build real things for real businesses, keep reading.
We are at $2.5M+ ARR, with 1,000+ beauty and wellness businesses on the platform and a $6M Series A behind us. We are not just starting out. But we are also nowhere close to where we want to be.
Zoca is building the growth infrastructure that small beauty and wellness businesses have never had access to. Online visibility, bookings, payments, customer retention, voice AI. All of it, in one place, built to work without a full-time marketing or ops team running it.
We want people who will think, challenge, ship, and own outcomes.
About The Role
We're building an AI-native operating system for service businesses: agents that book appointments, manage clients, run insights, and handle leads through natural conversation. You'll own the agent layer end to end, from prompt engineering and tool design to streaming infrastructure and evals.
This isn't "wrap an LLM in a chat UI."
What You Should Know (Required)
* 3+ years building production software (any stack; TypeScript/Python preferred)
* Hands-on experience shipping LLM-powered features (OpenAI, Anthropic, etc.) — not just prototypes
* Strong intuition for prompt engineering: when to use system prompts vs few-shot vs tool descriptions
* Comfort reading and writing TypeScript, NestJS, and SQL (we use Drizzle ORM + Postgres)
* Understanding of streaming protocols (SSE, WebSockets) and async tool execution
* Ability to write evals and reason about agent failure modes
Bonus
* Built or contributed to an MCP server
* Experience with agent frameworks (OpenAI Agents SDK, LangGraph, Mastra, etc.)
* Worked on RAG, semantic memory, or conversation summarization
* Familiarity with eval frameworks (Braintrust, LangSmith, custom harnesses)
* Background in security — prompt injection, adversarial testing
* Voice/realtime AI experience (Retell, LiveKit, Pipecat)
How You Work
* You ship — you'd rather have a v1 in production learning from real usage than a v3 spec
* You read the source — when an LLM behaves weirdly, your first move is to read the prompt and tool schema, not Stack Overflow
* You instrument first — every agent run gets traces, every tool gets metrics
* You think about the user — the agent talking to a salon owner on SMS isn't the same agent talking to a developer in a CLI
Stack You'll Touch
* Languages: TypeScript, SQL
* Backend: NestJS, Fastify, Postgres + Drizzle ORM, Redis, BullMQ
* AI: OpenAI Agents SDK, MCP, custom orchestrators
* Frontend: React (Next.js + Vite), Tailwind, Radix
* Infra: AWS, Nx monorepo, OpenTelemetry + Tempo, Grafana
* Eval/Test: Custom eval harness, K6, Jest
What We Offer
* Direct ownership of an entire agent vertical from day one
* Tight feedback loop — your work ships to real salon owners every week
* No "AI hype" theater — we ship things that move metrics
Skills: TypeScript, React, NestJS, SQL