What You'll Accomplish
- Architect high-throughput solutions that power our most critical operations, ensuring scalability and efficiency
- Expand and enhance our self-service platform, collaborating with cross-functional teams to fuel our AI, ML, and analytics goals
- Tackle complex distributed data challenges, streamline system integrations, and uphold high standards of quality and governance
- Champion cutting-edge technologies, keeping our platform at the forefront of industry advancements and enabling strategic outcomes
- Unify data from diverse systems, paving the way for experimentation and innovation while empowering teams with intuitive tools and frameworks
Your Expertise
- A track record of debugging issues across layers, from application logic to infrastructure bottlenecks, and an understanding of the tradeoffs in system design, not just the settings in a managed UI
- You don't just know how to run jobs in these tools; you understand how they operate under the hood and how the connected storage layer affects performance
- You understand how object storage API semantics (e.g., consistency, eventual visibility, multipart uploads) affect job execution
- You recognize the impact of network I/O and data locality on performance and cost
- You understand how resource scheduling and JVM tuning influence distributed job behavior
- Proven experience as a Software Engineer with a focus on high-throughput, scalable systems
- In-depth knowledge of high-throughput processing technologies such as Spark, Flink, and/or Kafka; proficiency in Java and a strong understanding of object-oriented design, data structures, algorithms, and optimization
- You have development experience with data warehouse technologies such as Snowflake, ClickHouse, and Trino
- You have experience with open source data storage formats such as Apache Iceberg, Parquet, Arrow, or Hudi
- You are knowledgeable about data modeling, data access, and data replication techniques, such as change data capture (CDC)
- You have a proven track record of architecting applications at scale and maintaining infrastructure as code via Terraform
- You are excited by new technologies but are conscious of choosing them for the right reasons
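The storage-semantics point above (multipart uploads and their effect on job execution) can be made concrete with a minimal sketch: planning part sizes for a multipart upload against an S3-like object store. The `MultipartPlanner` class is a hypothetical name introduced here for illustration; the 5 MiB floor mirrors S3's documented minimum part size, which applies to every part except the last.

```java
/**
 * Sketch: planning part boundaries for a multipart upload.
 * Assumes an S3-like constraint: every part except the last
 * must be at least 5 MiB (S3's documented minimum part size).
 * Illustrative only, not production code.
 */
public class MultipartPlanner {
    static final long MIN_PART_SIZE = 5L * 1024 * 1024; // 5 MiB

    /**
     * Returns the byte length of each part for an object of
     * objectSize bytes, using the requested partSize clamped
     * up to the service minimum.
     */
    public static long[] planParts(long objectSize, long partSize) {
        long size = Math.max(partSize, MIN_PART_SIZE);
        int count = (int) ((objectSize + size - 1) / size); // ceiling division
        long[] parts = new long[count];
        for (int i = 0; i < count; i++) {
            long offset = (long) i * size;
            // Last part may be shorter than size (and is exempt from the minimum)
            parts[i] = Math.min(size, objectSize - offset);
        }
        return parts;
    }

    public static void main(String[] args) {
        long mib = 1024 * 1024;
        long[] parts = planParts(100 * mib, 8 * mib);
        System.out.println(parts.length + " parts, last part = "
                + parts[parts.length - 1] / mib + " MiB");
    }
}
```

Part sizing like this is exactly where storage semantics meet job execution: too-small parts multiply request overhead and rate-limit pressure, while oversized parts reduce upload parallelism and make retries more expensive.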
What We Use
- Our infrastructure runs primarily in Kubernetes, hosted on AWS EKS
- Infrastructure tooling includes Istio, Datadog, Terraform, Cloudflare, and Helm
- Our backend is Java / Spring Boot microservices, built with Gradle, alongside DynamoDB, Kinesis, Airflow, Postgres, PlanetScale, and Redis, hosted on AWS
- Our frontend is built with React and TypeScript, and uses best practices like GraphQL, Storybook, Radix UI, Vite, esbuild, and Playwright
- Our automation is driven by custom and open source machine learning models and large volumes of data, built with Python, Metaflow, Hugging Face 🤗, PyTorch, TensorFlow, and Pandas
What We Do
Attentive® is the AI marketing platform for leading brands, designed to optimize message performance through 1:1 SMS and email interactions. Infusing intelligence at every stage of the consumer's purchasing journey, Attentive empowers businesses to achieve hyper-personalized communication with their customers at scale. Leveraging AI-powered tools, a mobile-first approach, two-way conversations, and enterprise-grade technology, Attentive drives billions in online revenue for brands around the globe. Trusted by over 8,000 leading brands such as CB2, Urban Outfitters, GUESS, Dickey's Barbeque Pit, and Wyndham Resort, Attentive is the go-to solution for delivering powerful commerce experiences that connect consumers with the brands they love.
To learn more about Attentive or to request a demo, visit www.attentive.com or follow us on LinkedIn, X (formerly Twitter), or Instagram.
Why Work With Us
At Attentive, you'll connect with inspiring, high-caliber people, and be encouraged to take risks, get creative, and think bigger. We're solving big problems for our customers through innovative AI solutions, giving employees the opportunity to thrive along the journey. The sky's the limit when it comes to what's possible.
