About Our Client

Our client is an established management consulting firm working with government, financial services, and enterprise clients across the GCC and wider MENA region.

The Team You'd Be Joining

A rapidly growing, ambitious AI engineering team being built from the ground up to design and ship the AI products at the centre of the client's consulting work. As an early hire, you will help shape both the technical foundations and the engineering culture of the function.

The Role

This role builds and optimises retrieval-augmented generation (RAG) pipelines and LLM-based applications end-to-end: chunking, embeddings, retrieval logic, prompt orchestration, LLM integration, and the evaluation loops that keep production AI honest. You are the practitioner who turns a research-paper architecture into a production system that meets real targets for latency, cost, accuracy, and hallucination control.

This is a hands-on engineering role, not a research role. The team ships into live client engagements against measured outcomes: not demos, not proofs of concept.

Mandatory Requirements

Education. Bachelor's degree in Computer Science, or a closely related discipline, from a Tier 1 / Tier 2 university; a Master's degree is preferred. "Closely related" includes Software Engineering, Computer Engineering, Information Systems, Mechatronics, Applied Mathematics with a software focus, and similarly quantitative or CS-equivalent engineering disciplines.

Experience. 5–10 years in software engineering, including at least 2 years building RAG pipelines or LLM-based applications that ran in production.

Core technical skills. Strong Python; deep working knowledge of LLM APIs and embeddings; vector databases; prompt engineering; API integration.

Mandatory certifications. Both of the following are required: "Generative AI with LLMs" (DeepLearning.AI) and "LangChain for LLM Application Development" (DeepLearning.AI).

Engineering discipline. Production RAG without an evaluation loop is not engineering.
Evidence of formal evaluation pipelines (RAGAS, custom eval sets, regression tests) is expected at interview.

Languages. Proficiency in English is required.

Strong Plus

- LangChain or LlamaIndex framework experience at production scale.
- Reranking models in production use: Cohere Rerank, cross-encoders, ColBERT, or fine-tuned rerankers.
- Hybrid retrieval design (dense + sparse / BM25 / reciprocal rank fusion).
- Fine-tuned or domain-adapted embedding models.
- Agentic / tool-use orchestration in production.
- Hallucination evaluation tooling: RAGAS, TruLens, or custom eval pipelines.
- Cost / latency tuning experience: prompt caching, model routing, semantic caching.
- Optional certifications: NLP / Transformers (Hugging Face), Vector Database (Pinecone).
- A live GitHub profile with working RAG implementations.
- Arabic-language RAG experience.
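To make the "evaluation loop" expectation concrete, here is a minimal sketch of the kind of regression eval a candidate might show. Everything in it is hypothetical: `answer_question`, the eval set, and the substring groundedness check are toy stand-ins for what a framework such as RAGAS would score properly (faithfulness, answer relevancy).

```python
# Toy RAG regression-eval loop: a frozen eval set is replayed against the
# pipeline and the run fails if the aggregate score drops below a threshold.
# All names here are illustrative, not a reference implementation.

EVAL_SET = [
    {"question": "What regions does the firm serve?",
     "context": "The firm serves clients across the GCC and wider MENA region.",
     "must_contain": "GCC"},
    {"question": "What does the team ship?",
     "context": "The team ships production AI systems into live engagements.",
     "must_contain": "production"},
]

def answer_question(question: str, context: str) -> str:
    """Hypothetical RAG answerer; a real one would call an LLM with the context."""
    return f"Based on the retrieved context: {context}"

def run_regression(eval_set) -> float:
    """Score each case 1/0 on a crude groundedness check and return the mean."""
    scores = []
    for case in eval_set:
        answer = answer_question(case["question"], case["context"])
        scores.append(1.0 if case["must_contain"] in answer else 0.0)
    return sum(scores) / len(scores)

if __name__ == "__main__":
    score = run_regression(EVAL_SET)
    # Gate deploys on the regression score, as a CI check would.
    assert score >= 0.9, f"eval regression: score {score:.2f} below threshold"
    print(f"eval passed: {score:.2f}")
```

The point is the shape, not the metric: a versioned eval set, a scoring function, and a hard gate that makes answer-quality regressions fail loudly.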
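The hybrid-retrieval item above mentions reciprocal rank fusion (RRF); as context for candidates, a minimal sketch of the standard formula follows. The document IDs and ranked lists are invented examples; the constant `k = 60` is the value commonly used in the RRF literature.

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge ranked doc-id lists (e.g. one from a
    dense retriever, one from BM25). Each document accumulates
    1 / (k + rank) per list it appears in, with rank 1-based."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results: dense (embedding) ranking vs sparse (BM25) ranking.
dense = ["d2", "d1", "d3"]
sparse = ["d1", "d4", "d2"]
fused = rrf_fuse([dense, sparse])
```

Documents ranked well by both retrievers ("d1", "d2" here) float to the top, which is why RRF is a common low-cost baseline before reaching for a trained reranker.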