strongDM

Single sign-on for backend infrastructure.

Data Engineer
Remote
Job Description / Skills Required

strongDM is a customer-first, second, and third company with a rabid fan base. When was the last time you heard things like:
 
* Splunk's CISO Joel Fulton says "strongDM gives you what you can't get any other way: the ability to see what happens, replay, and analyze incidents."
 
* Chef's co-founder Adam Jacob says "strongDM takes the friction out of getting staff access to the systems they need."
 
Customers love us because:
 
The product rocks: strongDM fundamentally changes the relationship between InfoSec, DevOps, and end users. Enforce the controls security needs while making access easier for everyone.
 
They can trust us: we built a technical product for technical buyers. We do not use jargon. There is no alternative but to always be technically accurate. We are not afraid to admit product gaps.  
 
We’re real humans: we built a serious product without taking ourselves too seriously. Each member of the team is exceptionally good at their job, and yet we crack jokes on the phone with customers.
 
 
Analytics at strongDM is unique because...
 
...we’re a young company that has invested in data efforts early. Our leadership team is data-literate and has broad experience building data products and leading analytics functions. And we share a focus on how we expect the data platform to power our business.
 
Sound good? We’re looking to add a data engineer to the team to help define our strategy and execute on building everything we need to get there.

What You'll Do:

    • Build and operate batch and streaming pipelines to ingest data into a lakehouse (we’re currently using S3+Glue+Athena+Redshift)
    • Build and operate full refresh and incremental pipelines to compute derived models (we’re currently using dbt, python+pandas, and Spark)
    • Build and operate outbound pipelines to send data in actionable shapes to line-of-business tools
    • Curate the datasets in the lakehouse to facilitate analysis, reporting, and self-serve querying (we’re currently using Jupyter and Redash)
    • Partner with engineers, data analysts, and business stakeholders on data efforts

Requirements:

    • Proficiency in SQL and SQL-based modeling tools (we’re using dbt)
    • Proficiency in Python and the pandas ecosystem, and familiarity with Golang and JavaScript
    • 3+ years of experience in a data-heavy engineering role
    • Experience working at a tech startup or in another high-velocity engineering culture
    • A strong ethos for getting things done (we iterate quickly, and pair program daily)

Compensation:

    • Industry-standard base
    • Medical, dental, and vision insurance
    • 401k, HSA, FSA, short/long-term disability, 3 months parental leave
    • 3 weeks PTO + standard holidays
    • Equity in a fast-growing startup
    • No travel required