What You'll Be Doing
- Lead the continued transition of legacy SAS-based ETL processes to SQL Server, completing remaining migrations and validating results through parallel processing and data reconciliation.
- Translate undocumented or minimally documented legacy ETL logic into maintainable, fault-tolerant SQL Server and SSIS workflows.
- Improve and standardize incremental data processing patterns, reducing reliance on full data refreshes and destructive reload processes.
- Own the reliability and performance of ETL pipelines by identifying and resolving bottlenecks, particularly in high-volume and performance-sensitive workflows.
- Investigate and correct data flow issues that prevent records from consistently reaching downstream systems across environments.
- Support production data operations by partnering with product, engineering, and support teams to triage and resolve data-related issues and support tickets.
- Participate in regular operational check-ins and serve as a primary escalation point for ETL and data pipeline concerns.
- Document ETL logic, dependencies, and operational processes to reduce institutional knowledge risk and improve long-term maintainability.
- Introduce improved logging, monitoring, automation, and repeatability across data integration workflows.
- Collaborate with engineering peers and domain experts to establish clearer ownership and standards for ETL and data pipeline practices.
What We're Looking For
- Strong hands-on experience with SQL Server development, including advanced T-SQL, query optimization, and performance tuning in production environments.
- Demonstrated experience designing, maintaining, and modernizing data pipelines and ETL processes, particularly in environments transitioning from legacy architectures to more scalable, maintainable data platforms.
- Ability to analyze, interpret, and translate legacy data transformation logic (including SAS-based workflows) into modern, SQL-based implementations, with an emphasis on clarity, performance, and long-term maintainability.
- Experience with SQL Server–based data integration tooling or comparable modern data orchestration frameworks, including support for incremental processing, dependency management, and multi-step pipelines.
- Familiarity with modern data engineering concepts such as idempotent pipelines, incremental ingestion patterns, schema evolution, and environment-aware deployments.
- Comfort working within complex, partially undocumented systems and progressively improving them through refactoring, documentation, and automation.
- Experience supporting and operating production data pipelines, including diagnosing failures, resolving data quality issues, and partnering cross-functionally to restore and improve system reliability.
- Experience managing data workflows across multiple environments (development, scale, production) with attention to consistency, validation, and release coordination.
- Strong problem-solving skills with a systems-level mindset, particularly when identifying root causes of performance, scalability, or data integrity issues.
- Ability to work collaboratively with engineering, product, and support teams while maintaining clear ownership of data platform outcomes.
- Clear written and verbal communication skills, especially when documenting technical systems and explaining data flows to both technical and non-technical stakeholders.
What Will Make You Stand Out
- Prior experience working with public health data systems, including familiarity with HL7 or similar healthcare data standards.
- Hands-on experience migrating large, long-lived ETL systems from legacy technologies to SQL Server–based architectures.
- Deep understanding of ETL performance optimization at scale, including parallel processing and high-volume data loads.
- Experience designing or improving incremental data ingestion strategies in systems that historically relied on full refreshes.
- Demonstrated ability to bring structure to undocumented or tribal-knowledge-heavy systems through clear documentation and process improvement.
- Experience implementing robust logging, monitoring, and alerting for data pipelines.
- Comfort balancing project-based migration work with ongoing production support responsibilities.
- Ability to proactively identify risks in data workflows and address them before they impact downstream systems or customers.
- Experience serving as a technical owner or go-to expert for critical data infrastructure.
What We Do
InductiveHealth Informatics helps to keep people safe from infectious disease by solving complex public health technology problems for governments around the globe. Based in Atlanta, Georgia, InductiveHealth’s work can be seen in the United States supporting State and Federal health agencies, in Africa supporting PEPFAR and Global Fund initiatives, and elsewhere globally, delivering some of the most complex technology efforts in public health. Over a dozen states, jurisdictions, and foreign national governments entrust their systems and data to InductiveHealth.
InductiveHealth’s technology is implemented to rigorous US Federal Government standards for information security, in compliance with FISMA, HIPAA, and HITECH standards. InductiveHealth manages more clinical-to-public health integrations than any other firm, providing leading capabilities and advanced technology to understand and combat the spread of infectious disease.