Data Engineer - Mox Bank
About Standard Chartered
We are a leading international bank focused on helping people and companies prosper across Asia, Africa and the Middle East.
To us, good performance is about much more than turning a profit. It's about showing how you embody our valued behaviours - do the right thing, better together and never settle - as well as our brand promise, Here for good.
We're committed to promoting equality in the workplace and creating an inclusive and flexible culture - one where everyone can realise their full potential and make a positive contribution to our organisation. This in turn helps us to provide better support to our broad client base.
The Role Responsibilities
We're looking for a Data Engineer to work on site with our development and data science teams in our offices in Hong Kong. We work in project-based sprints in small, interdisciplinary teams.
As a Data Engineer you'd be responsible for the design, creation and maintenance of the analytics infrastructure that enables almost every other function in the data world. You will be responsible for the development, construction, maintenance and testing of architectures such as data lakes, warehouses, databases, data pipelines and large-scale processing systems. As part of the Data Engineering team, you are also responsible for creating the data set processes used in modelling, mining, acquisition and verification.
- Collaborate closely with our development and product teams in our fast-paced delivery environment
- Design, build and maintain modern, automated, cloud-native analytics infrastructure.
- Build and manage data warehouses, databases and data pipelines.
- Understand and translate business needs into data models supporting long-term solutions. Work with the development team to implement data strategies, build data flows and develop conceptual, logical and physical data models that ensure high data quality and reduce redundancy.
Qualifications and Education Requirements
- Knowledge of technology best practices for building modern data lakes, data warehouses and data pipelines
- Good understanding of relevant technologies and experience building a highly scalable and fault-tolerant cloud data platform
- Self-starter, capable of working without direction and able to deliver projects from scratch
- Good practical experience and knowledge of building and maintaining data warehousing/Big Data tools: Hadoop and MapReduce, Apache Spark and Spark SQL, Hive
- In-depth database knowledge of RDBMS (PostgreSQL and MySQL) and NoSQL (HBase)
- Strong experience in building and maintaining cloud Big Data and ETL tools: Google Bigtable, BigQuery and Airflow (Google Cloud Composer)
- Strong knowledge of and experience with Apache Beam for implementing batch and streaming data-processing jobs; strong development background in Python or Java
- Strong knowledge of messaging systems such as Kafka, RabbitMQ and Google Pub/Sub
- Experience with Agile/Lean projects (Scrum, Kanban, etc.)
- Practical knowledge of Git flow, trunk-based and GitHub flow branching strategies
- Strong English communication skills
- Container management and orchestration experience: Docker, Kubernetes
- Monitoring tools: Elastic Stack, Prometheus, Grafana
- Breadth of knowledge: operating systems, networking, distributed computing, cloud computing
- Familiarity with Big Data technologies (AWS Redshift, Panoply), ETL tools (StitchData and Segment), and machine learning technologies and environments