monday.com is looking for a DevOps Engineer to join the **BigBrain group** — the team that builds and operates monday.com's Data Platform, and is leading the company's internal AI innovation.
We own the infrastructure behind billions of daily events — from streaming pipelines and data orchestration to the AI Gateway and ML inference platform that power monday.com's intelligent features. We manage some of the most sensitive data in the company, operate across multiple global regions, and are responsible for keeping it all secure and running at scale.
The BigBrain group consists of diverse teams, including Data Scientists, Full-Stack Engineers, Data Engineers, and BI Engineers. As a DevOps engineer on this team, you'll work across many internal groups — partnering with product, security, data, and platform teams to build infrastructure that's secure, scalable, and increasingly autonomous.
This is an exciting time to join: we're building an AI Gateway to govern LLM usage across the company, scaling our ML inference platform, designing AIOps agents that automate infrastructure operations, and evolving our data platform with modern technologies — all while keeping the foundation rock-solid across multiple regions.
You'll join our DevOps team based in our headquarters in Tel Aviv, Israel.
Team podcasts:
https://www.startupforstartup.com/110-on-the-operations-behind-the-client-facing-teams-growth/
https://pod.link/1595260676/episode/fa4d99bbf8e536e7a139a1ba679692f0
BigBrain:
https://www.startupforstartup.com/on-the-bigbrain-that-makes-kpis-accessible-to-every-employee/
https://www.youtube.com/watch?v=x-m0ag0cty0
https://engineering.monday.com/ai-brain-ai-charged-tool-for-internal-usage/
About The Role
- **Own and scale platform infrastructure** — Manage multi-region Kubernetes clusters (EKS), streaming pipelines (Kafka/MSK, Debezium CDC), and the data orchestration layer (Airflow) that handles billions of daily events.
- **Build and operate AI infrastructure** — Deploy and maintain the AI Gateway (governance, observability, and guardrails for LLM usage), the ML inference platform, and the tooling that enables AI adoption across the company.
- **Drive infrastructure automation with AIOps** — Design and build autonomous agents and intelligent tooling (n8n, LangChain, Claude Code) that automate infrastructure operations, analyze workflows, and reduce manual toil.
- **Build and maintain CI/CD & GitOps pipelines** — Design and operate deployment pipelines with GitHub Actions, Argo CD, Helm, and Terraform (CDKTF), enabling fast, safe, and reliable releases for product teams.
- **Ensure data security** — Protect the company's most sensitive data through access controls, data governance, and security-first infrastructure design.
- **Evolve the data platform** — Work with modern data technologies (Snowflake, ClickHouse, Apache Iceberg, EMR) and contribute to the next generation of our data infrastructure.
- **Enable developer self-service** — Improve our internal platform so engineering teams across the company can deploy, configure, and operate services independently.
- **Provide observability and reliability** — Build monitoring, alerting, and debugging tools (Datadog, OpenTelemetry, ClickHouse) across all data and AI processes.
Our Stack
AWS, Kubernetes (EKS), Kafka/MSK, Debezium, Airflow, Snowflake, ClickHouse, Apache Iceberg, EMR, Argo CD, Terraform/CDKTF, Helm, GitHub Actions, Docker, Datadog, OpenTelemetry, MLflow, n8n, API Gateway, Redis, MySQL, Teleport, TypeScript, Node.js, Python.
Requirements
- 3+ years of experience as a DevOps Engineer.
- Strong technical skills and a good understanding of systems and infrastructure.
- Experienced in building the full application release cycle (CI/CD).
- Familiar with how modern web applications work and scale.
- Networking, firewall rules management, and application security knowledge.
- Familiar with Linux environment, scripting, and programming.
- Ability to see the bigger picture and carry out system architecture planning.
- Understanding of products and a passion for building software that impacts millions of users.
- Team player, egoless, strong communication skills, and empathetic.
- Experience with Big Data technologies such as Kafka or Flink — an advantage.
DevOps Engineer (BigBrain)
Our Team
The BigBrain group consists of diverse teams that include Data Scientists, Full-Stack Engineers, Data Engineers, and BI Engineers.
We manage the company's data backbone, which allows monday.com to thrive on A/B testing and enables decision-making based on the data we collect and serve.
This is an opportunity to join us at a very exciting phase: designing and building new features for our internal data platform, BigBrain.
What We Do
At monday.com, we help teams get more work done. We are the best AI work platform that empowers teams to automate, build, and scale their impact end-to-end with tools that actually execute the work for you. With over $1B in ARR, 250,000+ customers, and a global team, we’re serious about building a product people love to use and giving our employees the same ownership and flexibility to shape the way the world works.
Why Work With Us
At monday.com we believe in transparency, accountability, and impact. Together, those values have created a strong culture of professional and creative autonomy, where every team member is encouraged to share ideas and help bring them to life!
Hybrid Workspace
monday.com embraces a flexible hybrid model, with employees combining remote and on-site work.