DevSavant is an operating partner for startups and growth-stage companies, helping them turn ambition into execution.
We support founders and leadership teams with product engineering and global staffing, from early prototypes and MVPs to scaling high-performing teams. Our vetted talent across LATAM and Asia embeds directly into client teams, operating as true extensions rather than external vendors.
With over 8 years working in venture-backed ecosystems, DevSavant is trusted to accelerate delivery, scale teams efficiently, and support companies as they reach their next milestone.
About the Role
We are seeking a Senior Data Engineer to join a cross-functional team working on scalable data systems and analytics infrastructure.
This is an individual contributor role focused on building, maintaining, and optimizing data pipelines and data models that power analytics and business-critical decision-making. The role requires a strong technical generalist mindset, combining software engineering principles with deep data expertise.
You will work closely with Data Science, Data Ops, and business stakeholders to ensure data is accurate, accessible, and structured for self-service analytics. The ideal candidate is someone who enjoys working with complex datasets, simplifying systems, and building scalable data infrastructure from the ground up.
Key Responsibilities
Data Engineering & Pipeline Development
Own, build, maintain, and optimize scalable data pipelines
Design and implement data architectures that support analytics and operational use cases
Work with large, complex datasets to meet evolving business requirements
Ensure data quality, reliability, and performance across systems
Apply best practices for developing specialized datasets for analytics and modeling
Continuously improve data workflows, pipelines, and infrastructure
Develop a deep understanding of core data models and business logic
Partner with Data Science and Data Ops teams to maintain trusted, well-documented datasets
Enable self-service analytics by structuring and organizing data effectively
Support analytical workflows and downstream consumption of data
Assist analysts with query development and dataset preparation
Work with a wide range of stakeholders to gather requirements and translate them into technical solutions
Communicate complex technical concepts clearly to both technical and non-technical audiences
Collaborate closely with engineering, analytics, and product teams
Contribute to documentation and knowledge sharing across teams
Contribute to the design of scalable and maintainable systems
Optimize data delivery and infrastructure for performance and scalability
Support integration across multiple data platforms and tools
Maintain and improve existing systems, including search and indexing solutions
Independently troubleshoot complex systems and resolve data-related issues
Perform root cause analysis and implement long-term fixes
Improve system reliability and performance through monitoring and optimization
Ensure stability and efficiency of data platforms
SQL for querying, transformation, and data modeling
Python or other general-purpose programming languages (e.g., JavaScript/TypeScript, Java, C#, Go, Scala)
Experience with data pipeline tools such as Spark and DBT
Data warehouses such as BigQuery, Snowflake, or Databricks
Workflow orchestration tools such as Airflow or Dagster
Experience handling large-scale data processing and transformations
Familiarity with batch and/or streaming data systems
Cloud platforms such as GCP or AWS
Infrastructure as Code tools (Terraform, Pulumi, or CloudFormation)
Experience designing scalable and maintainable systems
Experience with backend engineering and web services is a plus
Familiarity with analytics and data visualization ecosystems
Exposure to transaction, receipt, or viewership data is beneficial
5+ years of experience in software engineering, with at least 3 years focused on data engineering or data infrastructure
Strong expertise in SQL and working with relational databases
Experience building and maintaining scalable data pipelines
Proficiency in at least one general-purpose programming language (Python preferred)
Experience with modern data stack tools (e.g., Spark, DBT, Airflow/Dagster)
Strong debugging and problem-solving skills in complex systems
Experience working with cloud data warehouses (BigQuery, Snowflake, or Databricks)
Ability to design data systems that support analytics and business intelligence
Strong communication skills and ability to work cross-functionally
Experience documenting and simplifying complex systems
Experience with backend engineering and API development
Experience with Infrastructure as Code (IaC) tools
Exposure to system design for customer-facing or high-scale platforms
Familiarity with analytics-heavy environments and data-driven products
Experience working with large-scale, real-world datasets (e.g., transactions, behavioral data)
Ownership mindset: Ability to take responsibility and drive systems end-to-end
Technical versatility: Strong foundation as a software engineer with data expertise
Problem-solving focus: Ability to navigate ambiguity and solve complex challenges
Communication skills: Clear and effective collaboration across teams
Execution-driven: Ability to move quickly and deliver results in a fast-paced environment
Continuous improvement: Desire to refine systems, processes, and technical approaches
What We Do
DevSavant provides comprehensive technology solutions to Savant Growth's portfolio companies. Our data scientists and developers are experts in the fields of data analytics, software development, and AI. DevSavant’s engineers work across the spectrum of full-stack technology solutions for the B2B SaaS industry, helping your growing company tackle its toughest challenges and giving you the freedom to focus on what matters most: the future of your business.