As a Specialist Solutions Engineer (SSE), you will guide customers in building big data solutions on Databricks that span a wide variety of use cases. This is a customer-facing role, working with and supporting Solution Architects, and requires hands-on production experience with Apache Spark™ and expertise in other data technologies. SSEs help customers design and successfully implement essential workloads while aligning their technical roadmap for expanding usage of the Databricks Data Intelligence Platform. As a go-to expert reporting to the Senior Manager, Field Engineering (Specialist Team), you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and establish yourself in an area of speciality, whether that be performance tuning, machine learning, industry expertise, or more.
The impact you will have:
- Provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment
- Architect production-level workloads, including end-to-end pipeline load performance testing and optimisation
- Provide technical expertise in an area such as data management, cloud platforms, Data Warehousing, and Data Engineering
- Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing, and custom architectures
- Improve community adoption (through tutorials, training, hackathons, conference presentations)
- Contribute to the Databricks Community
What we look for:
- Experience in a technical customer-facing role with a background in Data Engineering, looking to learn and develop as a Subject Matter Expert (SME) in a pre-sales environment
- Pre-sales or post-sales experience working with external clients across a variety of industry markets
- Experience designing and implementing production-grade distributed Big Data solutions.
Data Engineer Skills:
- Experience as a Data Engineer: designing and implementing production-grade Spark-based solutions.
- Experience with query tuning, performance tuning, troubleshooting, and debugging Spark or other big data solutions.
- Experience with big data technologies such as Spark/Delta, Hadoop, NoSQL, MPP, and OLAP.
- Experience with cloud architecture, systems, and principles.
- Production programming experience in Python, R, Scala or Java.
- Deep expertise in at least one of the following areas:
  - Scaling ETL pipelines that are performant and cost-effective.
  - Tuning queries on big data.
  - Development tools and best practices for data engineers, including CI/CD, unit and integration testing, and automation and orchestration.
  - Building and scaling streaming pipelines.
- Knowledgeable in a core Big Data Analytics domain with some exposure to advanced proofs-of-concept and an understanding of a major public cloud platform (AWS, GCP, Azure)
- Nice to have: Databricks Certification
- Willingness to travel approx. 30% of the time