AI / Emerging Tech Security Analyst

Alignerr · Cairo, Egypt · Posted 2026-04-29

AI / Emerging Tech Security Analyst (AI Training)

About The Role

What if your security expertise could directly shape how the world's most powerful AI systems defend themselves against attack? We're looking for AI Security Analysts to probe, stress-test, and evaluate frontier AI models, identifying how they can be manipulated, misused, or pushed beyond their intended boundaries before those vulnerabilities cause real-world harm.

This is a fully remote, flexible contract role for security professionals who are curious about AI and want to work at the cutting edge of both fields. If you think in threat models, love finding what breaks, and want your work to matter, this is the role for you.

Organization: Alignerr
Type: Hourly Contract
Location: Remote
Commitment: 10–40 hours/week

What You'll Do

- Analyze AI and LLM security scenarios to understand how models behave under adversarial or unexpected conditions
- Review and evaluate cases involving prompt injection, data leakage, model abuse, and system misuse
- Classify security vulnerabilities and recommend appropriate mitigations based on real-world impact and likelihood
- Apply threat modeling frameworks to emerging AI technologies and deployment contexts
- Help evaluate and improve AI system behavior so it remains safe, reliable, and aligned with security best practices
- Work independently and asynchronously on task-based assignments at your own pace

Who You Are

- Background in cybersecurity, information security, or a closely related field
- Strong understanding of security threat modeling applied to modern software systems
- Genuinely curious about how AI systems are built, deployed, and potentially exploited
- Analytical and precise: you approach complex systems methodically and don't miss edge cases
- Clear written communicator who can document findings and reasoning with structure and confidence
- Self-motivated and reliable when working independently without supervision

Nice to Have

- Hands-on experience with penetration testing, red teaming, or vulnerability research
- Familiarity with large language models (LLMs), AI APIs, or prompt engineering concepts
- Background in application security, cloud security, or API security
- Experience with adversarial machine learning or AI safety concepts
- Certifications such as OSCP, CEH, CISSP, or equivalent practical experience

Why Join Us

- Work directly on frontier AI systems alongside the world's leading AI research labs
- Fully remote and flexible: work when and where it suits you
- Freelance autonomy with the structure of meaningful, high-stakes work
- Be at the forefront of an entirely new discipline: AI security is one of the most important emerging fields in tech
- Potential for ongoing work and contract extension as new projects launch

Apply for this role