Offensive Security Analyst (Structured / Non-Exploit)

Alignerr · Cairo, Egypt · Posted 2026-05-04

About The Role

What if your red-team mindset and deep knowledge of how real attacks unfold could directly shape how the world's most advanced AI systems understand cybersecurity?

We're looking for Offensive Security Analysts to bring adversarial thinking to AI training — mapping attack paths, analyzing kill chains, and modeling how threats move through real environments. This isn't exploit development. It's structured adversarial reasoning: thinking clearly about how attackers operate, where defenses fail, and how risk propagates across modern systems.

This is a fully remote, flexible contract role built for security professionals who can think offensively and communicate precisely.

Organization: Alignerr
Type: Hourly Contract
Location: Remote
Commitment: 10–40 hours/week

What You'll Do

- Analyze attack paths, kill chains, and adversary strategies across realistic, real-world system scenarios
- Identify weaknesses, misconfigurations, and defensive gaps with clear, structured reasoning
- Review red-team-style scenarios and intrusion narratives for accuracy and depth
- Generate, label, and validate adversarial reasoning data used to train and evaluate frontier AI systems
- Articulate attack chains, potential impact, and security tradeoffs in ways that are clear and technically sound
- Work independently and asynchronously — fully on your own schedule

Who You Are

- 2+ years of hands-on experience in pentesting, red teaming, or a blue-team role with strong offensive knowledge
- You understand how real attacks unfold in production environments — not just in theory
- You can clearly explain complex attack scenarios, their impact, and the tradeoffs involved
- Detail-oriented and methodical — you think in systems and spot what others miss
- Strong written communicator who can document findings with precision
- No exploit development skills required — this role is about structured adversarial thinking, not code

Nice to Have

- Familiarity with MITRE ATT&CK, kill chain frameworks, or threat modeling methodologies
- Experience writing security reports, red team narratives, or threat assessments
- Background in security architecture, cloud security, or enterprise environments
- Prior experience working with AI tools or data labeling platforms

Why Join Us

- Work directly on frontier AI systems alongside leading AI research labs
- Fully remote and flexible — work when and where it suits you
- Freelance autonomy with the structure of meaningful, task-based work
- Make a tangible impact on how AI understands and reasons about real-world cybersecurity threats
- Potential for ongoing work and contract extension as new projects launch
