The developer hired for this position will implement multiple data requirements coming from the Group Financial Crime Prevention unit.
These requirements range from developing streaming applications that connect to multiple data sources, both internal and external, to enabling advanced analytics pipelines that produce clean, transformed data sets used to feed multiple products within the Financial Crime sector.
Key responsibilities include:
• Building distributed, highly parallelized big-data processing pipelines that process massive amounts of data (both structured and unstructured) in near real time
• Leveraging Spark to enrich and transform corporate data to enable searching, data visualization, and advanced analytics
• Analyzing requirements and questioning them when they are unclear
• Working closely with DevOps, QA and Product Management teams in a Continuous Delivery environment
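To give a flavour of the enrich-and-transform responsibility above, here is a minimal illustrative sketch. It is not part of the role description: a real pipeline would express this as Spark DataFrame transformations, and every name here (the sanctions set, the record fields) is a hypothetical stand-in.

```python
# Hypothetical sketch of an enrich-and-filter step; a real pipeline
# would express the same logic as Spark DataFrame transformations.

SANCTIONED = {"ACME Corp", "Globex Ltd"}  # hypothetical reference data

def enrich(txn: dict) -> dict:
    """Add a 'flagged' field by matching the counterparty against reference data."""
    return {**txn, "flagged": txn["counterparty"] in SANCTIONED}

def transform(txns: list) -> list:
    """Enrich each transaction and keep only the flagged ones for downstream products."""
    return [t for t in (enrich(t) for t in txns) if t["flagged"]]

txns = [
    {"id": 1, "counterparty": "ACME Corp", "amount": 100.45},
    {"id": 2, "counterparty": "Initech", "amount": 50.0},
]
print(transform(txns))  # only the record matching the reference data survives
```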
This is a cross-functional role spanning QA and Dev capabilities, where a test-driven approach will be key to implementing the solution. That will include, but is not limited to, test case and test data preparation, implementation of test automation scripts in Robot Framework and Python,
test planning, and test execution (test results analysis and defect reporting), etc.
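To make the test-driven idea concrete, here is a minimal unittest sketch in Python, one of the listed languages (Robot Framework suites would follow the same give-input/assert-output pattern). The transform under test, `normalise_amount`, is a hypothetical example, not something specified by the role.

```python
import unittest

def normalise_amount(raw: str) -> float:
    """Hypothetical transform under test: parse '1 234,56'-style amount strings."""
    return float(raw.replace(" ", "").replace(",", "."))

class NormaliseAmountTest(unittest.TestCase):
    def test_thousands_separator(self):
        # Space as thousands separator, comma as decimal mark
        self.assertEqual(normalise_amount("1 234,56"), 1234.56)

    def test_plain_value(self):
        self.assertEqual(normalise_amount("99,9"), 99.9)

if __name__ == "__main__":
    unittest.main()
```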
Who you are:
• Are collaborative and deliver on expectations through communication and teamwork in an Agile setup (SAFe framework)
• Are curious, responsive and can understand the needs of others to ensure delivery of the desired results
• Work with quality in mind and strive to always do things better
• Have a high level of self-motivation
Your experience and background:
As an Expert Data Engineer/Developer, you are expected to have:
7+ years of experience in Spark & Scala
5+ years of development experience with Python
3+ years of experience with CI/CD tools such as Jenkins
In addition to the above, you should have:
• Experience with distributed data processing engines like Spark
• Experience with the Hadoop ecosystem (Hive, Oozie, MapReduce, etc.)
• Strong SQL skills with previous experience in creating data flows
• Expert knowledge of Bitbucket and Git
• Knowledge of AWS architectural components (Lambda, Step Functions, state machines, etc.)
• Unit testing experience (JUnit 5, Mockito, Spark testing)
• Experience with code versioning and branching strategies
• Advanced English skills
• Strong working experience with Linux and bash
• Experience using the following tools: Jenkins or Bamboo, Jira, Bitbucket
• Knowledge of streaming and queue technologies, e.g. Kafka, IBM MQ
• Working knowledge of DevOps