Introduction
About Us
We’re a FinOps platform helping organizations optimize and control cloud spend across AWS and other major cloud providers. Our data-driven approach gives engineering and finance teams full visibility into usage, cost anomalies, and savings opportunities at scale.
We process hundreds of billions of data records daily, ingesting cloud bills, Kubernetes cluster details, and utilization metrics to derive actionable insights. If you're passionate about building scalable systems that make cloud usage smarter and more cost-efficient, this is the place for you.
About The Role
We’re looking for a Staff Software Engineer with deep experience in Apache Spark, large-scale data processing, data partitioning, and cluster performance tuning. You’ll play a key role in scaling our data processing pipelines, optimizing cost-to-performance, and driving architectural decisions for our big data platform.
You will also mentor a team of engineers and be a technical leader in our mission to help customers take control of their cloud spend.
Your Role And Responsibilities
Key Responsibilities:
- Architect and implement scalable batch data pipelines using Apache Spark in Scala.
- Drive optimization of massive datasets, including AWS, GCP, and Azure billing cost and usage data, with attention to input partitioning, join strategies, and memory/shuffle tuning.
- Reduce platform processing costs while improving performance, aligning directly with FinOps principles.
- Lead the design of fault-tolerant, cost-efficient big data infrastructure on AWS.
- Mentor team members, lead technical design reviews, and contribute to the evolution of the engineering culture.
- Collaborate with product, platform, and FinOps experts to develop new capabilities that provide cost insights and recommendations to customers.
Tech Stack
- Languages: Scala (primary), Java (secondary)
- Big Data: Apache Spark (on EMR), Snowflake
- Cloud: AWS (EMR, S3, Lambda, CloudWatch)
- CI/CD & Infrastructure: GitHub Actions, Terraform, Kubernetes
- Observability: Datadog, Splunk
Preferred Education
Master's Degree
Required Technical And Professional Expertise
- 8+ years of software engineering experience, with 5+ years building distributed data pipelines in Spark.
- Strong command of Scala and functional programming.
- Deep expertise in cluster resource tuning, data partitioning, and performance benchmarking in Spark.
- Experience with AWS data services including S3, CloudWatch, and IAM.
- Proven track record of designing systems that balance performance with cost-efficiency, ideally in a FinOps or similar optimization-focused context.
- Ability to lead projects end-to-end and mentor junior engineers.
Preferred Technical And Professional Experience
- Hands-on experience with AWS EMR.
- Exposure to cloud billing/usage data (AWS CUR, Cost Explorer, etc.)
- Familiarity with FinOps practices and cloud cost management principles.
- Experience working with data formats like Parquet or Apache Iceberg.
- Prior experience in startups or high-growth SaaS environments.