Job Summary:
We are looking for a Senior DevOps Engineer with 10+ years of experience to join our team. In this role, you will be responsible for designing, implementing, and managing the software delivery pipeline and infrastructure to ensure continuous delivery of high-quality software. The ideal candidate should have a strong background in software development, systems administration, cloud infrastructure, and automation.
Key Responsibilities:
- Infrastructure Automation:
- Design and implement Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, Pulumi, or Ansible.
- Manage cloud environments (AWS, Azure, GCP) and automate resource provisioning, scaling, and monitoring.
- Create and manage Kubernetes clusters and containerized applications using Docker.
- CI/CD Pipeline Management:
- Develop, maintain, and optimize Continuous Integration/Continuous Deployment (CI/CD) pipelines using tools like Jenkins, GitHub Actions, Azure DevOps, or TeamCity.
- Ensure seamless code integration, testing, and deployment processes.
- Integrate automated testing, code quality checks, and security scans into CI/CD workflows.
- Monitoring & Performance Optimization:
- Implement and maintain monitoring and alerting systems using tools like Prometheus, Grafana, Datadog, AppDynamics, or CloudWatch.
- Identify and troubleshoot performance bottlenecks in systems, applications, and infrastructure.
- Conduct regular audits to ensure optimal system health, uptime, and performance.
- Security & Compliance:
- Implement best practices for cloud security, identity and access management (IAM), data protection, and network security.
- Perform vulnerability assessments, penetration testing, and remediation.
- Ensure compliance with industry standards and regulations such as GDPR, HIPAA, SOC 2, or ISO 27001.
- Collaboration & Support:
- Work closely with development, QA, and operations teams to ensure smooth delivery of software.
- Provide technical guidance and mentorship to junior engineers and team members.
- Participate in on-call rotations to provide 24/7 support for critical systems and infrastructure.
- Configuration Management:
- Manage configuration and deployment of software and infrastructure using tools like Ansible or Chef.
- Create and manage scripts for automation and task orchestration (Bash, Python, PowerShell).
Skills & Qualifications:
- Technical Skills:
- Cloud Platforms: Expertise in AWS, Azure, or GCP, including compute, networking, storage, and security services.
- Containerization: Strong experience with Docker and Kubernetes (including managed services such as EKS and GKE), covering container orchestration and management.
- CI/CD Tools: Deep knowledge of CI/CD tools such as Jenkins, GitHub Actions, Argo CD, or TeamCity.
- Infrastructure as Code (IaC): Hands-on experience with Terraform, CloudFormation, Pulumi, or similar tools.
- Programming & Scripting: Proficiency in one or more programming languages (Python, Go, Java) and scripting languages (Bash, PowerShell).
- Version Control: Advanced skills in Git for version control and branching strategies.
- Monitoring & Automation:
- Familiarity with monitoring tools like Prometheus, Grafana, Datadog, New Relic, or AppDynamics.
- Experience with logging, metrics collection, and observability best practices.
- Security:
- Knowledge of security best practices for cloud environments.
- Experience with DevSecOps tools for automated security testing (SonarQube, OWASP ZAP, Snyk).
- Soft Skills:
- Strong communication and interpersonal skills, with the ability to collaborate effectively across teams.
- Problem-solving skills with a proactive attitude towards finding solutions.
- Time management skills and the ability to handle multiple projects simultaneously.
- Leadership and mentoring skills for guiding junior engineers.
Preferred Qualifications:
- Certification in cloud platforms (AWS Certified Solutions Architect, Azure DevOps Expert, GCP Professional Cloud DevOps Engineer).
- Familiarity with GitOps tools like Argo CD or Flux.
- Knowledge of serverless architecture (AWS Lambda, Azure Functions, Google Cloud Functions).
Education & Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 10+ years of experience in DevOps, cloud engineering, or a related field.
About Picarro:
We are the world's leader in timely, trusted, and actionable data using enhanced optical spectroscopy. Our solutions are used in various applications, including natural gas leak detection, ethylene oxide emissions monitoring, semiconductor fabrication, pharmaceutical, petrochemical, atmospheric science, air quality, greenhouse gas measurements, food safety, hydrology, ecology, and more. Our software and hardware are designed and manufactured in Santa Clara, California, used in over 90 countries worldwide, and built on over 65 patents related to cavity ring-down spectroscopy (CRDS) technology. They are unparalleled in their precision, ease of use, and reliability.
At Picarro, we are committed to fostering a diverse and inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, national origin, protected veteran status, gender identity, sexual orientation, or disability. Posted positions are not open to third-party recruiters/agencies, and unsolicited resume submissions will be considered free referrals.