Staff Data Engineer

  • Full-time
  • Germany


Company Description

We are on a mission to simplify buying for businesses. We believe that it’s not just about what you buy, but how you buy it. Today’s purchase-to-pay process is riddled with complexity and missed opportunities for leverage. Our goal is to surface that leverage through our platform, which eliminates manual purchasing and payment tasks and gives your team one place to purchase, approve, track, and pay for all the physical goods your business needs. With customizable budgets and reporting, operations and finance teams can take back control over the buying process and start saving time, saving money, and gaining clarity in how they buy.

Founded in 2016 and headquartered in New York City, the company oversees nearly half a billion dollars in annualized spend across hundreds of customers like WeWork, SoulCycle, and Lume. We have raised $50M in funding from industry-leading investors like MIT, Stage 2 Capital, Rally Ventures, 645 Ventures, and more, and have been proudly named a “50 to Watch” by Spend Matters and a Best Place to Work by BuiltIn and Inc. Magazine.

Job Description

As a Staff Data Engineer, you will take the lead in designing, building, and maintaining our data infrastructure. Your expertise in data modeling, ETL pipelines, and data warehousing will enable data-driven decision making throughout the organization. You will empower the data analysts and scientists on your team to deliver valuable insights that impact business outcomes. If you’re ready for a challenging yet rewarding opportunity, we invite you to join us in shaping the future of our organization by unlocking the power of our data and transforming it into actionable insights.


  • Contribute to the data roadmap and OKRs, ensuring data infrastructure initiatives are identified, prioritized and delivered
  • Plan epics, ensuring they are appropriately broken down, prioritized, and well understood by the team
  • Design and build simple services and systems with a focus on iterative development, reliability, and minimizing the cost of future changes
  • Continually optimize the data architecture to provide a reliable and adaptive infrastructure that scales with the business
  • Support the current data pipelines and build new integrations based on the business needs
  • Develop and implement monitoring processes
  • Deploy and monitor products on cloud platforms
  • Strive for continuous learning and improvement for yourself and your team, providing technical mentorship wherever possible
  • Participate in rotating on-call duties, including incident management

Who You Are

  • You have a strong leadership mindset and are motivated by accountability
  • You are results-oriented
  • You love helping people in your team grow and improve
  • Writing tests is an integral part of your development process
  • You know how to design and build software incrementally
  • Rallying people around achieving a goal comes naturally to you
  • You are collaborative, open-minded, and looking to continue to develop your craft

Technical Skills

  • You can demonstrate expertise using Python and SQL
  • You have hands-on experience with data orchestration tools (preferably Airflow, Dagster, AWS Step Functions)
  • You have a proven track record of implementing AWS cloud services (specifically Lambda, SQS, ECS)
  • Your skills include hands-on experience with infrastructure as code (preferably Terraform)
  • You have worked with big data platforms in production (preferably Spark/PySpark, AWS Glue and EMR)
  • You have knowledge of data lake tools on AWS (specifically S3, Glue Catalog, and Lake Formation)
  • You understand how to implement and use business intelligence and data visualization tools (preferably Metabase or Tableau)
  • You have an understanding of CI/CD and supporting tools like GitHub Actions and CircleCI
  • You have a strong understanding of data security and experience protecting data

Nice to Have

  • Your experience with streaming data architecture would be an asset (preferably AWS Kinesis, Spark Streaming, Kafka Streams, Flink, or Storm)
  • Your experience with MLOps, model deployment, and pipelines using AWS services would be an asset (preferably SageMaker, S3, ECR, and Redshift)

Additional Information

What You’ll Receive

  • A competitive compensation package including stock options
  • Robust medical, dental, vision, and wellness benefits
  • Flexible time off and remote work policies
  • Employer-sponsored 401(k)
  • The anticipated annual base salary range for this role is $210,000 – $220,000. Actual compensation and title will be commensurate with experience, qualifications, knowledge, and skills.

We are an equal opportunity employer. Applicants’ qualifications are considered without regard to race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), national origin, age, disability, veteran status, genetic information, or any other basis prohibited by law.