About The Role

What if your curiosity, creativity, and knack for thinking outside the box could make AI systems safer for everyone? We're looking for AI Red Team Testers to do exactly that: probe, challenge, and outwit cutting-edge AI models to expose their blind spots before they cause real-world harm.

This is a fully remote, flexible contract role open to anyone who loves puzzles, thinks adversarially, and enjoys finding the cracks in seemingly solid systems. No cybersecurity or technical background is required.

Organization: Alignerr
Type: Hourly Contract
Location: Remote
Commitment: 10–40 hours/week

What You'll Do

- Design inventive prompts, scenarios, and conversational strategies to probe AI model weaknesses
- Attempt to elicit incorrect, unsafe, biased, or otherwise problematic outputs from AI systems
- Document discovered failure modes with clear, reproducible steps so they can be fixed
- Assess and rate the severity and potential impact of each issue you uncover
- Collaborate asynchronously with AI safety and research teams to share findings
- Work across a variety of AI models and task types as new projects arise

Who You Are

- A creative, curious thinker who genuinely enjoys puzzles and finding unexpected solutions
- Comfortable adopting an adversarial mindset: you naturally look for what could go wrong
- Detail-oriented and systematic: you don't just find problems, you document them clearly
- A strong written communicator who can explain findings in structured, actionable formats
- Self-motivated and reliable: you can work independently without close supervision
- No background in hacking, cybersecurity, or AI required

Nice to Have

- Experience in creative writing, journalism, research, or critical analysis
- Familiarity with AI chatbots or language models as an end user
- Background in philosophy, ethics, psychology, or social sciences
- Prior experience in quality assurance or software testing

Why Join Us

- Work on meaningful, high-impact AI safety projects alongside leading research labs
- Fully remote and flexible: set your own schedule and work from anywhere
- Freelance autonomy with the satisfaction of structured, purpose-driven work
- Contribute directly to making AI safer, more reliable, and more trustworthy for people everywhere
- Potential for ongoing work and contract extensions as new projects launch