Stripe provides a simple payments API with associated merchant account setup.
We’re looking for Machine Learning Engineers who can help us build and deploy machine learning models to directly enable our fraud and risk detection systems and expand machine learning to other segments of our business. As a Machine Learning Engineer at Stripe, you will work on problems that run the gamut from data science to production engineering. You’ll work with other machine learning engineers at Stripe and partner with a diverse set of other teams, including engineers who build platform-level infrastructure or user-facing products incorporating machine learning, as well as analysts who interpret and act on our models. You will identify new approaches and methods to improve performance in our core machine learning applications and investigate new applications for machine learning as Stripe grows.
You will:
Build machine learning models that power applications like fraud detection
Define metrics for feature evaluation and model performance
Analyze data and investigate different model types and parameters
Design and implement robust data pipelines
Own and improve production scoring systems, and participate in on-call rotations alongside every member of the engineering team
You may be fit for this role if you:
Have an advanced degree in a quantitative field (e.g., statistics, physics, or computer science) and some experience in software engineering
Have industry experience doing software development on a data or machine learning team
Know how to manipulate data to perform analysis, including querying data, defining metrics, and slicing and dicing data to evaluate a hypothesis
Are excited about taking real-world business problems and building machine learning solutions to them, including identifying appropriate approaches and techniques
You might work on:
Working with risk analysts to take feature ideas and turn them into valuable new features in our models, quantifying the expected performance improvements and getting them into production.
Writing simulation code with Scalding to run MapReduce jobs on our Hadoop cluster, helping us understand what would happen across different segments if we changed how we act on our models.
Collaborating with our machine learning infrastructure team to build support for a new model type into our scoring infrastructure.
Defining application-specific metrics to help us evaluate the performance of our models, and tracking the results by creating a dashboard in React.
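To give a flavor of the simulation work above, here is a minimal, in-memory sketch in plain Scala. The real work would run as Scalding jobs over a Hadoop cluster; the `Charge` fields, segment names, and threshold semantics here are hypothetical, chosen only to illustrate comparing per-segment outcomes under a candidate blocking threshold.

```scala
// Hypothetical charge record: segment label, model score, and ground truth.
case class Charge(segment: String, fraudScore: Double, isFraud: Boolean)

object ActionSimulation {
  // For a candidate blocking threshold, compute per-segment counts of
  // (charges blocked, fraudulent charges caught). In practice this
  // aggregation would be expressed as a Scalding pipe over Hadoop data.
  def simulate(charges: Seq[Charge], threshold: Double): Map[String, (Int, Int)] =
    charges.groupBy(_.segment).map { case (seg, cs) =>
      val blocked     = cs.count(_.fraudScore >= threshold)
      val fraudCaught = cs.count(c => c.fraudScore >= threshold && c.isFraud)
      seg -> (blocked, fraudCaught)
    }

  def main(args: Array[String]): Unit = {
    val charges = Seq(
      Charge("card_present", 0.90, isFraud = true),
      Charge("card_present", 0.20, isFraud = false),
      Charge("online",       0.70, isFraud = false),
      Charge("online",       0.95, isFraud = true)
    )
    // Compare outcomes under two candidate thresholds.
    println(simulate(charges, 0.8))
    println(simulate(charges, 0.6))
  }
}
```

Running the same dataset through several thresholds lets you quantify the trade-off between charges blocked and fraud caught in each segment before changing anything in production.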