QA Engineer

Serva · Posted 2026-04-28

Manual & Automation QA Engineer

We're looking for a skilled QA Engineer who's equally comfortable crafting detailed test plans by hand and building robust automation frameworks. You'll ensure that every feature we ship meets the highest quality bar — including our growing suite of AI-powered capabilities.

Our Stack
- Test Automation: Playwright / Cypress
- Unit & Integration: Jest / Pytest
- API Testing: REST & GraphQL
- Database: PostgreSQL
- DevOps: CI/CD pipelines, GitHub Actions, AWS
- AI: AI agents / LLMs (non-deterministic testing)

What You'll Do
- Design, write, and maintain manual test plans and test cases for features across the full product
- Build and scale automation frameworks for regression, integration, and end-to-end testing
- Test AI-powered features — including LLM outputs, agent behavior, and model responses — for accuracy, safety, and consistency
- Collaborate closely with developers and product managers to shift QA left in the development lifecycle
- Define and track quality metrics, identify risk areas, and surface issues early, before they reach production
- Develop evaluation strategies for non-deterministic systems, including prompt variation testing and output validation pipelines
- Participate in sprint planning and retrospectives as the quality voice on the team

What We're Looking For
- 2+ years of professional experience in software quality assurance, covering both manual and automation testing
- Hands-on experience testing AI agents, LLM-powered features, or ML models — understanding their non-deterministic nature is a must
- Proficiency in at least one modern automation framework (Playwright, Cypress, Selenium, or equivalent)
- Strong API testing skills using tools like Postman, REST Assured, or similar
- Experience working in agile environments with CI/CD pipelines and integrated test gates
- A methodical, curious mindset — you find edge cases others miss, and you document everything clearly
- Solid communication skills to write clear bug reports, test documentation, and quality summaries

Bonus Points If You...
- Have experience evaluating agentic workflows, multi-step reasoning chains, or RAG pipelines
- Have worked with LLM evaluation frameworks like RAGAS, LangSmith, or Promptfoo
- Use AI coding tools like Cursor or Claude in your daily QA workflow — we actively encourage it
- Have built custom eval harnesses or synthetic test data generators using LLMs

AI-First Culture

We don't just build AI — we test it rigorously. Our QA engineers use Cursor and Claude to move faster, write better test cases, and automate the repetitive work. If you're already approaching quality assurance with AI tools in hand, you'll feel right at home.

Apply for this role