Job Summary

Pipe is a new kind of trading platform that enables entrepreneurs to grow their businesses on their terms. By treating recurring revenue streams as an asset, Pipe allows companies to transform their recurring revenue into up-front capital, instantly. For entrepreneurs, that means more cash flow for scaling a business without dilution or restrictive debt. For investors, Pipe has unlocked a previously untapped asset class. Whether you’re an entrepreneur or an investor, Pipe is growth on your terms.

We’re a fully distributed, remote-first, fast-growing startup. Our engineering and data teams are spread from UTC-8 to UTC+6, and we rely heavily on written communication to make it work. We believe in giving our team agency and control over their schedules: we avoid standing meetings and default to asynchronous communication. There are no core working hours; we simply ask our team to communicate clearly about their schedules and to be considerate of their coworkers if plans change. You will occasionally need to be flexible in order to meet synchronously with colleagues in different time zones.

  • Minimum Qualification: Degree
  • Experience Level: Mid level
  • Experience Length: 3 years

Job Description/Requirements

The Role:

This is a full-time, fully-remote position as a Data Engineer. In this role, you will:

  • Build data systems and pipelines to enable high-velocity model development, and self-serve analytics and reporting for teams across the company.
  • Own and improve our data infrastructure, including the data warehouse, distributed compute clusters, and model deployment tools.
  • Build tooling, testing, and processes to drive data reliability, integrity, and availability for applications across the company.
  • Help define flexible and scalable schemas across the data stack.
  • Independently and proactively find opportunities in our data model to unlock business value.
  • Ensure data safety, security, and regulatory compliance.


Qualifications:

We are looking for talented data engineers with past experience in a similar role. Ideal candidates will have:

  • Experience building ELT/ETL pipelines
  • Expert-level proficiency in SQL and at least one programming language
  • Experience with distributed computing frameworks such as Spark
  • Familiarity with building model deployment tooling and model hosting infrastructure
  • Bachelor's degree in Computer Science or another technical field, or equivalent work experience
  • Strong written and verbal communication skills
