Key Responsibilities
- Design scalable and reliable solutions for data pipelines
- Manage day-to-day tasks in the queue according to priorities set in sprint planning meetings
- Work closely with different commercial teams to deliver personalized customer offers
- Ensure on-time, high-quality deliverables
- Plan releases and provide proper support for released packages

Requirements
- Bachelor's degree in Computer Science, Information Systems, Software Engineering, or a similar field
- Minimum 3 years of experience as a Big Data Engineer
- Strong programming skills in Python and PySpark
- Experience with SQL and HBase
- Knowledge of Kafka and Hadoop
- Knowledge of DataStage or any other ETL tool

Benefits
- Hybrid working model
- Social and medical insurance
- Transportation
- Flexible and friendly working environment