Core Responsibilities:
- CI/CD Pipeline Support: Assist in building, maintaining, and improving CI/CD pipelines using tools like Jenkins and GitHub Actions to support faster and safer software delivery.
- Automation & DevOps Tooling: Contribute to automating infrastructure provisioning and deployments using tools like Terraform, Ansible, or similar.
- Infrastructure Management: Support the provisioning and management of AWS-based infrastructure, including Dockerized microservices running on ECS or Kubernetes.
- Monitoring & Observability: Set up and manage monitoring tools (e.g., Prometheus, Grafana, Datadog) to ensure system reliability and performance.
- Incident Response: Participate in incident resolution, root cause analysis, and postmortems to improve system reliability and response processes.
- Collaboration: Work with engineering and operations teams to implement DevOps best practices and promote a culture of operational excellence.
- Cloud Cost Optimization: Assist in tracking and reporting cloud resource usage and identifying opportunities to optimize costs (a small reporting sketch follows this list).
- Continuous Learning: Stay up to date with industry trends and emerging tools/technologies in DevOps and cloud computing.
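The cost-optimization responsibility above lends itself to simple automation. Below is a minimal sketch, assuming Python with the boto3 SDK and already-configured AWS credentials, that pulls last month's spend per service from AWS Cost Explorer; the date range and grouping key are illustrative choices, not requirements of the role.

```python
"""Minimal sketch: report last month's AWS spend per service via Cost Explorer.

Assumes AWS credentials are configured (environment variables or an IAM role)
and that boto3 is installed; dates and grouping are illustrative.
"""
import datetime

import boto3


def monthly_cost_by_service() -> None:
    # The Cost Explorer API is served from us-east-1.
    ce = boto3.client("ce", region_name="us-east-1")

    today = datetime.date.today()
    start = (today.replace(day=1) - datetime.timedelta(days=1)).replace(day=1)  # first day of previous month
    end = today.replace(day=1)  # exclusive end: first day of the current month

    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    # Print one line per AWS service with its unblended cost for the month.
    for result in response["ResultsByTime"]:
        for group in result["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            print(f"{service}: ${amount:,.2f}")


if __name__ == "__main__":
    monthly_cost_by_service()
```

A report like this is typically scheduled (e.g., as a cron job or Lambda) and pushed to a team channel, which is one common way the tracking-and-reporting part of the responsibility is handled in practice.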
Qualifications / Essential Skills / Experience:
- 1–3 years of hands-on experience in a DevOps, SRE, or infrastructure-focused role.
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- Exposure to AWS or other cloud providers (GCP, Azure) and understanding of cloud-based architecture.
- Practical knowledge of Infrastructure as Code (Terraform, CloudFormation, Ansible, etc.).
- Experience working with Docker and basic understanding of container orchestration with ECS or Kubernetes.
- Familiarity with CI/CD pipelines and tools (e.g., Jenkins, GitHub Actions).
- Basic scripting skills in Bash, Python, or similar.
- Exposure to monitoring and logging tools like Prometheus, Grafana, Datadog, or New Relic (a small scripting example against Prometheus follows this list).
- Understanding of microservices and experience working with at least one message broker (e.g., Kafka, RabbitMQ).
- Exposure to SQL or NoSQL databases and knowledge of basic performance optimization techniques.
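As a small illustration of the scripting and monitoring items above, here is a sketch, assuming only Python's standard library and a Prometheus server reachable at the assumed address below, that calls Prometheus's instant-query HTTP API to list scrape targets currently reporting `up == 0`; the address and query are illustrative.

```python
"""Minimal sketch: list Prometheus scrape targets that are currently down.

Assumes a Prometheus server at PROM_URL; uses only the standard library.
"""
import json
import urllib.parse
import urllib.request

PROM_URL = "http://localhost:9090"  # assumed Prometheus address


def down_targets() -> list[str]:
    # /api/v1/query is Prometheus's instant-query endpoint.
    query = urllib.parse.urlencode({"query": "up == 0"})
    with urllib.request.urlopen(f"{PROM_URL}/api/v1/query?{query}") as resp:
        payload = json.load(resp)

    if payload.get("status") != "success":
        raise RuntimeError(f"query failed: {payload}")

    # Each result carries the labels of a target that failed its last scrape.
    return [r["metric"].get("instance", "<unknown>")
            for r in payload["data"]["result"]]


if __name__ == "__main__":
    for instance in down_targets():
        print(f"target down: {instance}")
```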
Preferred Skills (Nice to Have):
- Experience with Kubernetes (deployment, scaling, troubleshooting); a brief sketch follows this list.
- Exposure to deploying or managing data pipelines and AI models.
- Interest in cloud cost governance and optimization tools (e.g., AWS Cost Explorer, CloudHealth).
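For the Kubernetes item above, here is a brief sketch, assuming the official kubernetes Python client and a local kubeconfig, that flags Deployments whose ready replica count lags the desired count; the namespace is an illustrative default.

```python
"""Minimal sketch: flag Deployments whose ready replicas lag the desired count.

Assumes the `kubernetes` Python client is installed and a kubeconfig exists.
"""
from kubernetes import client, config


def unhealthy_deployments(namespace: str = "default") -> None:
    config.load_kube_config()   # reads ~/.kube/config; in-cluster config differs
    apps = client.AppsV1Api()

    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            print(f"{dep.metadata.name}: {ready}/{desired} replicas ready")


if __name__ == "__main__":
    unhealthy_deployments()
```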
What We Do
Safe Security is a pioneer in the “Cybersecurity and Digital Business Risk Quantification” (CRQ) space. It helps organizations measure and mitigate enterprise-wide cyber risk in real time using its ML-enabled, API-first SAFE Platform, which aggregates automated signals across people, process, and technology, for both first and third parties, to dynamically predict an organization's breach likelihood (SAFE Score) and dollar Value at Risk.
Headquartered in Palo Alto, Safe Security has over 200 customers worldwide, including multiple Fortune 500 companies, and averaged an NPS of 73 in 2020.
Backed by John Chambers and senior executives from Softbank, Sequoia, PayPal, SAP, and McKinsey & Co., it was also one of the top contributors to the U.S. Government's National Vulnerability Database (NVD) in 2019 and a MITRE ATT&CK contributor in 2020.
Since 2018, the company has also been conducting joint research with MIT on the development of its SAFE Scoring Algorithm. Safe Security has received several awards, including the Morgan Stanley CTO Innovation Award.