6+ years of experience with data integration technologies such as Python and Spark on Kubernetes.
Strong hands-on experience in designing, coding, and testing complex programs using PySpark and Kafka.
Strong analytical experience with databases: writing and optimizing SQL queries, debugging, and creating user-defined functions, views, and indexes.
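As a minimal sketch of the database skills listed above (user-defined function, view, and index), using Python's standard-library SQLite module with a hypothetical orders table:

```python
import sqlite3

# Hypothetical example: register a user-defined function, then create a
# view and an index, mirroring the skills the requirement describes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, region TEXT)")
conn.executemany("INSERT INTO orders (amount, region) VALUES (?, ?)",
                 [(120.0, "EU"), (80.0, "US"), (200.0, "EU")])

# User-defined function callable from SQL
conn.create_function("with_tax", 1, lambda amount: round(amount * 1.2, 2))

# A view over a subset of rows, and an index to speed up region filters
conn.execute("CREATE VIEW eu_orders AS SELECT * FROM orders WHERE region = 'EU'")
conn.execute("CREATE INDEX idx_orders_region ON orders (region)")

total = conn.execute("SELECT SUM(with_tax(amount)) FROM eu_orders").fetchone()[0]
print(total)  # 384.0
```

The same ideas carry over to production databases, where the UDF and index syntax differ by engine.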
Knowledge of source control and CI tools such as Git, Bitbucket, and Jenkins.
Good experience with Python and common Python libraries, including best coding practices and performance-tuning techniques.
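One common Python performance-tuning technique of the kind referred to above is memoization of pure functions, sketched here with the standard library's `functools.lru_cache`:

```python
from functools import lru_cache

# Memoize a pure recursive function so repeated subproblems are served
# from cache instead of being recomputed (exponential -> linear time).
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Without the cache, the naive recursion repeats work for every overlapping subproblem; with it, each `fib(n)` is computed once.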
Experience in creating reusable components and simplifying complex systems.
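A small illustration of the "reusable component" idea above: a hypothetical retry decorator that can wrap any flaky callable (such as a network or database call), so the retry logic is written once and reused everywhere:

```python
import time
from functools import wraps

def retry(attempts: int = 3, delay: float = 0.0):
    """Reusable decorator: re-invoke the wrapped function on exception."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # out of attempts: propagate the error
                    time.sleep(delay)
        return wrapper
    return decorator

calls = {"count": 0}

@retry(attempts=3)
def flaky():
    # Fails twice, then succeeds, to demonstrate the retry behavior.
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(flaky())  # ok
```

Factoring cross-cutting concerns like retries into one decorator is one way complex systems get simplified.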
Experience in building jobs using Airflow is preferred.
Extensive experience in troubleshooting and fixing defects and in managing batch schedules.
Experience in post-production application support for critical applications.
Knowledge of ETL tools such as Informatica and Talend is preferred.
Experience in the supply chain domain is an added advantage.