Responsibilities:
- Translate business needs and architectural guidance into detailed designs, data contracts, and implementation plans that break down large initiatives into actionable engineering tasks with reliable estimates
- Create detailed pipeline designs covering schemas, transformations, partitioning, DLT configurations, orchestration, error handling, and observability that align with the platform architecture through close collaboration with the Data Architect
- Lead implementation and guide junior engineers on design, coding standards, and best practices
- Develop metadata-driven and configuration-driven pipeline patterns that reduce custom code and improve consistency
- Make technical decisions that ensure reliability, performance, maintainability, and scalability; ensure production readiness with monitoring, lineage, alerting, observability, CI/CD, and documentation
- Define and enforce engineering design patterns, coding standards, testing practices, and operational best practices
- Evaluate and incorporate new technologies and Databricks capabilities that improve reliability, performance, or developer productivity
- Validate new technologies with the Data Architect and operationalize them through documentation, examples, and enablement
- Implement automated data quality checks, rule enforcement, and exception handling
- Provide production support for both existing and new platforms, including job optimization, incident tracking, and other analysis required for production operations
- Lead resolution of complex production issues and deliver durable root cause fixes
- Maintain SLAs for reliability, recovery, idempotency, performance, and cost efficiency
- Mentor Level 2–3 engineers through pairing, design guidance, code reviews, and technical coaching
Basic Skills:
• Bachelor’s degree preferred; equivalent experience accepted
• 10+ years in data engineering (12+ without a degree)
• 4+ years building production-grade batch/streaming pipelines using PySpark, Spark Structured Streaming, Python, and SQL
• Proven experience with data governance, schema evolution, data lineage, and secure access patterns
• 2+ years’ proven experience maintaining and supporting production data pipelines
Preferred Skills:
• 3+ years hands-on with Databricks (Delta Lake, DLT, Unity Catalog, workflow jobs) within the last 6 years
• Experience building metadata-driven or configuration-driven pipelines
• Experience with data quality frameworks (DQX, Great Expectations, or equivalent)
• Experience with observability, metrics, and query performance analysis
• Strong Spark optimization skills
Work Conditions:
• Core team collaboration hours from 8am to 4pm EST
• Corporate office/lab environment
• Ability to travel 10% of the time
What We Do
Since 1992, Omnicell (NASDAQ: OMCL) has been transforming the pharmacy care delivery model through the Autonomous Pharmacy, a combination of hardware, software, and services that enables providers to improve quality, reduce costs, and increase human efficiencies. Through Omnicell’s industry-leading medication management platform and portfolio of technology-enabled services, health systems and retail pharmacies are realizing how connected technology and intelligence can help solve the most pressing challenges in medication management. Over 7,000 facilities worldwide use Omnicell automation and analytics solutions to help increase operational efficiency, reduce medication errors, deliver actionable intelligence, and improve patient safety. More than 50,000 institutional and retail pharmacies across North America and the United Kingdom leverage Omnicell's innovative medication adherence and population health solutions to improve patient engagement and adherence to prescriptions, helping to reduce costly hospital readmissions. To learn more, visit www.omnicell.com.