The Role
Design and maintain CI/CD pipelines for data workflows, monitor pipeline performance, automate deployments, and collaborate with engineering teams on integration.
Key Responsibilities
− CI/CD Pipeline Development:
Design and maintain CI/CD pipelines for data workflows and machine learning jobs using tools such as Azure DevOps, Jenkins, or GitHub Actions. For Databricks, implement automated deployment of notebooks, jobs, and Delta Live Tables, ensuring version control and environment consistency (a deployment sketch follows this list).
− Pipeline Monitoring & Reliability:
Implement monitoring for data pipelines and Databricks jobs, tracking latency, throughput, and failures. Configure auto-scaling clusters and recovery strategies to ensure high availability and resilience (see the run-status sketch after this list).
− Secrets & Environment Management:
Securely manage credentials, API keys, and Databricks Secret Scopes across development, staging, and production environments. Apply best practices for role-based access control (RBAC) and compliance (see the secret-scope sketch after this list).
− Deployment Automation:
Automate deployment of data infrastructure, Databricks clusters, and ML models using Infrastructure-as-Code (Terraform) and orchestration tools, ensuring reproducibility and reducing manual intervention (see the provisioning sketch after this list).
− Observability & Alerting:
Set up end-to-end observability for pipelines using Databricks monitoring dashboards, integrate with Prometheus, Grafana, or cloud-native tools, and configure proactive alerting for SLA breaches and anomalies (see the metrics-exporter sketch after this list).
− Collaboration & Documentation:
Work closely with Data Engineers, AI Engineers, and Platform teams to ensure smooth integration. Document Databricks workflows, cluster configurations, and CI/CD processes for transparency and operational excellence.
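As a concrete illustration of the CI/CD bullet above, a minimal sketch of a deployment step: it pushes a notebook from the Git repo into a Databricks workspace via the Workspace Import API. The DATABRICKS_HOST and DATABRICKS_TOKEN environment variables, the source file notebooks/etl.py, and the target path /Shared/etl are all assumptions, not details from the posting.

```python
"""Minimal sketch of a CI deployment step for a Databricks notebook.
Hosts, paths, and file names below are illustrative assumptions."""
import base64
import os

import requests

host = os.environ["DATABRICKS_HOST"]   # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]

# Notebook source lives in the Git repo; CI re-imports it on every merge.
with open("notebooks/etl.py", "rb") as f:
    content = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    f"{host}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "path": "/Shared/etl",   # hypothetical workspace target
        "format": "SOURCE",
        "language": "PYTHON",
        "content": content,
        "overwrite": True,       # keeps the workspace in sync with Git
    },
)
resp.raise_for_status()
```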
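For the monitoring bullet, a sketch that polls the Databricks Jobs 2.1 API for recent runs of one job, reporting failures and per-run latency. The job id 123 is a placeholder.

```python
"""Sketch: poll recent runs of one Databricks job and report failures.
The job_id is a placeholder; host and token as in the previous sketch."""
import os

import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.1/jobs/runs/list",
    headers={"Authorization": f"Bearer {token}"},
    params={"job_id": 123, "limit": 25},  # 123 is a placeholder job id
)
resp.raise_for_status()

for run in resp.json().get("runs", []):
    state = run["state"]
    # result_state is only present once the run has finished
    if state.get("result_state") == "FAILED":
        print(f"run {run['run_id']} failed: {state.get('state_message', '')}")
    elif run.get("end_time"):  # end_time is 0 while a run is still active
        latency_s = (run["end_time"] - run["start_time"]) / 1000
        print(f"run {run['run_id']} took {latency_s:.0f}s")
```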
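For the secrets bullet, a sketch that provisions one secret scope per environment from a deploy script using the Secrets API; the scope and key names and the WAREHOUSE_PASSWORD CI variable are invented for illustration.

```python
"""Sketch: one Databricks secret scope per environment, written by a
deploy script. Scope/key names and the CI variable are assumptions."""
import os

import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]
headers = {"Authorization": f"Bearer {token}"}
env = "staging"  # dev / staging / production, matching the posting

# Note: scope creation fails if the scope already exists, so a real
# script would tolerate that error to stay idempotent.
requests.post(f"{host}/api/2.0/secrets/scopes/create",
              headers=headers, json={"scope": f"etl-{env}"})

requests.post(f"{host}/api/2.0/secrets/put", headers=headers,
              json={"scope": f"etl-{env}",
                    "key": "warehouse-password",
                    "string_value": os.environ["WAREHOUSE_PASSWORD"]})

# A notebook then reads the value (redacted in logs) with:
#   dbutils.secrets.get(scope="etl-staging", key="warehouse-password")
```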
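The deployment-automation bullet names Terraform, which is declarative HCL; to keep these sketches in one language, the sketch below makes the equivalent direct call to the Clusters API that the Terraform Databricks provider manages. The runtime version, node type, and cluster name are placeholders.

```python
"""Sketch: create an auto-scaling cluster via the Clusters API. In
practice Terraform's databricks_cluster resource would declare this;
all concrete values here are placeholders."""
import os

import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.post(
    f"{host}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "cluster_name": "etl-staging",
        "spark_version": "13.3.x-scala2.12",  # placeholder runtime
        "node_type_id": "i3.xlarge",          # placeholder AWS node type
        "autoscale": {"min_workers": 2, "max_workers": 8},
        "autotermination_minutes": 30,        # avoids idle spend
    },
)
resp.raise_for_status()
print("cluster_id:", resp.json()["cluster_id"])
```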
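Finally, for the observability bullet, a sketch that exports pipeline health as a Prometheus gauge that Grafana can chart and alert on. The metric name and the check_pipeline helper are hypothetical; an alert rule on the gauge would implement the SLA-breach alerting described above.

```python
"""Sketch: expose pipeline health as a Prometheus metric. The metric
name and check_pipeline() are hypothetical."""
import time

from prometheus_client import Gauge, start_http_server

last_success_age = Gauge(
    "pipeline_last_success_age_seconds",
    "Seconds since the pipeline last completed successfully",
    ["pipeline"],
)

def check_pipeline(name: str) -> float:
    """Hypothetical helper: return epoch seconds of the last successful
    run. A real version would reuse the Jobs API polling shown above."""
    return time.time() - 300.0  # stub: pretend success was 5 minutes ago

start_http_server(9100)  # Prometheus scrapes http://<host>:9100/metrics

while True:
    age = time.time() - check_pipeline("daily-etl")
    last_success_age.labels(pipeline="daily-etl").set(age)
    time.sleep(60)  # an alert rule on this gauge catches SLA breaches
```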
Preferred Qualifications
− Databricks Expertise:
Hands-on experience with Databricks Workflows, Delta Lake, Unity Catalog, and MLflow for model tracking and deployment (a tracking sketch follows this list).
− Programming Skills:
Proficiency in Python (PySpark) and SQL for data processing and transformation, with the ability to optimize queries for large-scale analytics (see the aggregation sketch after this list).
− Database Knowledge:
Familiarity with Oracle, MySQL, PostgreSQL, and integration with Databricks for ingestion and analytics.
− Orchestration Tools:
Experience with Airflow, Prefect, Dagster, or Databricks Workflows for scheduling and monitoring complex pipelines (see the DAG sketch after this list).
− Data Transformation & Modeling:
Understanding of data modeling principles, Delta Lake architecture, and performance tuning for big data environments (see the table-maintenance sketch after this list).
− CI/CD Tools:
Proficiency in Azure DevOps, GitHub Actions, Jenkins, and integration with Databricks for automated deployments.
− Infrastructure-as-Code:
Experience with Terraform or CloudFormation for provisioning Databricks resources and managing cloud infrastructure.
− Cloud Platforms:
Strong knowledge of AWS and Databricks, including S3 integration, IAM roles, and secure data access patterns.
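As an illustration of the MLflow qualification above, a minimal tracking sketch; the experiment path, parameter, and metric values are invented.

```python
"""Sketch: MLflow experiment tracking. Experiment path and logged
values are illustrative assumptions."""
import mlflow

# On Databricks the tracking server is the workspace itself, and
# experiments live at workspace paths like the one below.
mlflow.set_experiment("/Shared/churn-model")

with mlflow.start_run():
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("auc", 0.91)
    # mlflow.<flavor>.log_model(model, "model") would store the
    # artifact for later registration and deployment.
```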
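For the PySpark/SQL qualification, a sketch of a typical aggregation; table and column names are invented, and Databricks notebooks provide the `spark` session automatically.

```python
"""Sketch: a PySpark aggregation. Table and column names are invented."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.table("sales.orders")

# Selecting only the needed columns before the groupBy keeps the
# shuffle small on large tables, a simple and common optimization.
daily = (
    orders.select("order_date", "region", "amount")
          .groupBy("order_date", "region")
          .agg(F.sum("amount").alias("revenue"))
)
daily.write.mode("overwrite").saveAsTable("sales.daily_revenue")
```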
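For the orchestration qualification, a sketch of an Airflow DAG that triggers an existing Databricks job; it assumes the apache-airflow-providers-databricks package, a `databricks_default` connection, and a placeholder job id.

```python
"""Sketch: an Airflow DAG triggering a Databricks job. Connection name
and job_id are assumptions."""
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksRunNowOperator,
)

with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    DatabricksRunNowOperator(
        task_id="run_databricks_job",
        databricks_conn_id="databricks_default",
        job_id=123,  # placeholder: id of a job defined in the workspace
    )
```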
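And for the Delta Lake modeling and tuning qualification, a sketch of table layout and maintenance; table and column names are invented, and OPTIMIZE/ZORDER are Databricks Delta SQL commands.

```python
"""Sketch: Delta Lake layout and maintenance. Names are invented."""
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Partition by a low-cardinality column so readers can prune files.
(spark.table("sales.daily_revenue")
      .write.format("delta")
      .mode("overwrite")
      .partitionBy("region")
      .saveAsTable("sales.daily_revenue_delta"))

# Compact small files and co-locate a frequently filtered column.
spark.sql("OPTIMIZE sales.daily_revenue_delta ZORDER BY (order_date)")
```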
Top Skills
Airflow
AWS
Azure DevOps
Dagster
Databricks
GitHub Actions
Grafana
Jenkins
MySQL
Oracle
PostgreSQL
Prefect
Prometheus
PySpark
Python
SQL
Terraform
The Company
What We Do
NXP Semiconductors N.V. (NASDAQ: NXPI) enables a smarter, safer and more sustainable world through innovation. As a world leader in secure connectivity solutions for embedded applications, NXP is pushing boundaries in the automotive, industrial & IoT, mobile, and communication infrastructure markets. Built on more than 60 years of combined experience and expertise, the company has approximately 34,500 employees in more than 30 countries and posted revenue of $13.21 billion in 2022. Find out more at www.nxp.com.
Privacy Policy: https://www.nxp.com/company/about-nxp/privacy-policy-for-social-media-pages:PRIVACY-POLICY-SOCIAL-MEDIA
