Job Summary
We’re looking for a Senior Data Engineer, Data Platform to help us build a scalable, reliable, and performant platform for both real-time customer-facing applications and internal business intelligence needs. The Data Platform team enables data-driven decision making through robust data infrastructure, pipelines, and analytics systems that power our business.
- Minimum Qualification: Degree
- Experience Level: Senior level
- Experience Length: 5 years
Job Description/Requirements
As a Senior Data Engineer, Data Platform, you’ll …
- Develop event-driven data infrastructure on AWS.
- Build data pipelines for ingesting, processing, and routing events using Kafka, Spark Streaming, and other technologies.
- Build a data lakehouse architecture.
- Create unified frameworks for streaming, batch, and real-time processing.
- Develop data models, schemas and standards for event data.
- Optimize data replication and loading across systems.
- Optimize data storage and access patterns for fast querying.
- Improve data reliability, discoverability and observability.
- Improve our planning, development, and deployment processes to help you and your fellow team members.
- Participate in all engineering activities including incident response, interviewing, designing and reviewing technical specifications, code review, and releasing new functionality.
- Mentor, coach, and inspire a team of engineers of various levels.
- Collaborate with software engineers, product managers, and data scientists in an autonomous, supportive team environment.
- Effectively communicate team priorities and strategy to engineering and cross-functional leadership teams.
In addition to the responsibilities outlined above, at Webflow we will support you in identifying where your interests and development opportunities lie, and we’ll help you incorporate them into your role.
About you
You’ll thrive as a Senior Data Engineer, Data Platform if you:
- Have 5+ years of experience building large-scale data platforms.
- Have hands-on experience with event-driven architecture and streaming data processing frameworks such as Kafka, Spark, and Flink.
- Have experience building a lakehouse architecture on cloud storage.
- Have experience with lakehouse storage formats such as Hudi, Delta Lake, and Iceberg.
- Are familiar with infrastructure tooling such as Terraform or Pulumi, and have worked with Kubernetes.
- Are experienced with SQL, Python, and Java.
- Are experienced with time-series databases like ClickHouse and InfluxDB.
- Are experienced working with dbt and data warehouses such as Snowflake, BigQuery, or Redshift.
- Have familiarity with Kimball’s dimensional modeling techniques.
- Take pride in taking ownership and driving projects to business impact.
- Have excellent organizational and communication skills, both verbal and written.