Job Summary
You will also help us build CobbleWeb’s internal communication system and knowledge base, known as Umy. This set of internal tools will support our globally distributed company structure.
- Minimum Qualification: Degree
- Experience Level: Senior level
- Experience Length: 5 years
Job Description/Requirements
- Design, deliver and continuously test data pipelines that will aggregate data into reports.
- Collaborate with the team to create innovative proofs-of-concept, pilot projects, minimum viable products, and business cases.
- Transform data into valuable insights that inform business decisions, making use of our internal data platforms and applying appropriate analytical techniques.
- Help us to understand our users and serve them better through data, conversations, and active research to hear from them directly.
- Engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using data platform infrastructure effectively.
- Produce and automate delivery of key metrics and KPIs to the business. In some cases this will mean simply making data available; in others it will mean developing full reports for end users.
- Monitor usage of data platforms, work with clients to deprecate reports and data sets that are no longer needed, and create a continuous improvement model for the data.
- Work with clients to understand data issues, tracing back data lineage and helping the business put appropriate data cleansing and quality processes in place.
- Work with stakeholders to define and establish data quality rules, definitions and strategies in line with business strategies and goals.
- Monitor and set standards for data quality.
- Prioritise data issues.
- Expert in Python (5+ years' experience)
- Experience with SQL and NoSQL (5+ years' experience)
- Experience with database technologies such as relational, NoSQL, MPP, vector, and columnar databases (3+ years' experience)
- Experience with AWS (3+ years' experience)
- A comprehensive understanding of cloud data warehousing and data transformation (extract, transform, and load) processes and supporting technologies such as Airbyte, dbt, Dagster, AWS S3, EMR, data lakehouses, and other analytics tools.
- Experience in manipulating data through cleansing, parsing, standardising, etc., especially in relation to improving data quality and integrity.
- Proven ability to design data models and ETL pipelines that meet business requirements in the most efficient manner.
- You have designed and deployed data pipelines and ETL systems for data at scale.
- Previous experience in meeting the visualisation, reporting and analytics needs of key business functions through the development of presentation and data models.
- Experienced in defining and developing data sets, models and cubes.
- Knowledge of the emerging technologies that support business intelligence, analytics, and data.
- You have a curious, level-headed approach to problem-solving, with a fine eye for detail and the ability to look at the wider business context to spot opportunities for improvement.
- Passionate about data and unlocking data for the masses.
- BSc or MS in Computer Science or related technical fields. Equivalent work experience will also be considered.