DevSavant is an operating partner for startups and growth-stage companies, helping them turn ambition into execution.
We support founders and leadership teams with product engineering and global staffing, from early prototypes and MVPs to scaling high-performing teams. Our vetted talent across LATAM and Asia embeds directly into client teams, operating as true extensions rather than external vendors.
With over 8 years of experience in venture-backed ecosystems, DevSavant is trusted to accelerate delivery, scale teams efficiently, and support companies as they reach their next milestone.
About the Role

We are seeking a Data Engineer to join our growing team of data experts. This is an individual contributor role embedded within cross-functional teams, focused on building and maintaining the data infrastructure that powers our analytics and business intelligence platforms.
The role is heavily data-oriented, with a strong emphasis on designing and developing scalable, reliable data pipelines and systems using Python and SQL. You will be responsible for ensuring that business-critical data is accurate, accessible, and optimized for downstream use by software developers, analysts, data scientists, and other stakeholders.
The ideal candidate is a hands-on data engineer who thrives in fast-paced environments, takes ownership, and is comfortable working with evolving requirements. You enjoy building robust data systems from the ground up, integrating new datasets, and continuously optimizing data infrastructure for performance and scalability.
Key Responsibilities

AI & Automation
Contribute to AI-enabled data workflows, including integration with agents and MCP servers
Leverage AI tools (e.g., Copilot, Openspec) to automate aspects of the software development lifecycle
Instrument data systems and pipelines with automation, monitoring, and intelligent workflows
Build and maintain scalable data pipelines using Python, SQL, and modern ETL frameworks (a minimal sketch follows this list)
Design and implement robust data architectures that support business and analytical needs
Assemble large, complex datasets that meet functional and non-functional requirements
Optimize data systems for performance, reliability, and scalability
Write clean, maintainable, and well-tested code following best practices
Continuously improve data engineering standards and processes
Develop infrastructure for efficient extraction, transformation, and loading (ETL) of data from diverse sources
Integrate structured and unstructured data formats (e.g., CSV, Excel, Shapefiles) into centralized systems
Maintain and optimize databases containing customer usage, financial, and operational data
Integrate and optimize data access across platforms, including analytical tools such as QGIS
Maintain and improve search indices using both commercial off-the-shelf (COTS) and custom-built solutions
Collaborate with analysts, data scientists, and business stakeholders to support data needs
Build and maintain data tools that empower analysts to explore and optimize datasets
Assist stakeholders with data-related technical challenges and infrastructure needs
Support analytical workflows, including SQL query development and dataset preparation
Partner with data and analytics teams to enhance overall system capabilities
Monitor data systems and pipelines to ensure high availability and reliability
Perform root cause analysis on data and system issues and implement corrective actions
Improve observability and alerting for data infrastructure
Maintain operational excellence across data platforms
Work closely with cross-functional teams in a distributed, remote-first environment
Translate evolving business requirements into scalable data solutions
Take ownership of data systems from design through production
Operate effectively in a fast-paced environment with changing priorities
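The pipeline work above centers on Python, SQL, and PostgreSQL. As a flavor of what that looks like in practice, here is a minimal extract-transform-load sketch; the CSV file, column names, table, and connection string are illustrative assumptions, not details of any actual DevSavant project.

```python
# Minimal ETL sketch of the kind of pipeline described above. The file
# path, column names, table name, and connection string are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

def extract(path: str) -> pd.DataFrame:
    # Pull raw records from a structured source (CSV here for simplicity).
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Enforce basic data-quality requirements: deduplicate, normalize
    # column names, and drop rows missing the key column.
    df = df.drop_duplicates()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    return df.dropna(subset=["customer_id"])  # hypothetical key column

def load(df: pd.DataFrame, table: str, conn_uri: str) -> None:
    # Append into PostgreSQL; a production pipeline would use upserts
    # and schema migrations rather than a blind append.
    engine = create_engine(conn_uri)
    df.to_sql(table, engine, if_exists="append", index=False)

if __name__ == "__main__":
    frame = transform(extract("usage_events.csv"))
    load(frame, "customer_usage", "postgresql://localhost/analytics")
```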
Tech Stack
Python for data processing and pipeline development
SQL for querying and data transformation
PostgreSQL (preferred) and other relational databases
ETL orchestration tools such as Airflow or Cloud Composer (see the DAG sketch after this list)
Data transformation tools such as DBT
Geospatial tools and analytical platforms (e.g., QGIS)
Handling of structured and unstructured data formats (CSV, Excel, Shapefiles)
Search and indexing technologies (COTS and custom solutions)
Cloud platforms (GCP preferred; BigQuery experience is a strong plus)
Familiarity with data warehouses, data lakes, and distributed data systems
Message queuing and stream processing systems
Linux-based environments and shell scripting
Version control, CI/CD practices, and automated workflows
AI-powered development tools (e.g., GitHub Copilot, Openspec)
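For orientation, a minimal Airflow DAG using the TaskFlow API (Airflow 2.x) might look like the sketch below; the DAG name, schedule, and task bodies are placeholders chosen for illustration, not a real pipeline.

```python
# Hypothetical daily pipeline using Airflow's TaskFlow API (Airflow 2.x).
# The DAG name, schedule, and task bodies are placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def usage_pipeline():
    @task
    def extract() -> list[dict]:
        # In practice this would pull from an API, bucket, or source database.
        return [{"customer_id": 1, "events": 42}]

    @task
    def load(rows: list[dict]) -> None:
        # Data passes between tasks via XCom; a real load step would write
        # to PostgreSQL or BigQuery instead of printing.
        print(f"loading {len(rows)} rows")

    load(extract())


usage_pipeline()
```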
Requirements
3–5 years of experience in data engineering, data pipelines, or related fields
Strong proficiency in SQL and experience working with relational databases (PostgreSQL preferred)
Advanced experience using Python for data processing, including spatial and non-spatial data (see the GeoPandas sketch after this list)
Experience building and optimizing data pipelines and architectures
Hands-on experience with ETL orchestration tools (Airflow or Cloud Composer preferred)
Experience with data transformation tools (DBT preferred)
Experience working with unstructured and legacy data formats
Strong analytical skills and experience working with large, complex datasets
Experience performing root cause analysis and improving data processes
Familiarity with distributed systems, message queues, or stream processing
Experience working in Linux environments and using command-line tools
Strong communication skills and ability to collaborate across teams
Proactive mindset with a focus on ownership and continuous improvement
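Since the role calls for Python processing of spatial data, here is a small GeoPandas sketch of the shapefile handling mentioned above; the file name, attribute column, and target CRS are assumptions made for illustration.

```python
# Small GeoPandas sketch of shapefile handling; the file name, column, and
# CRS below are illustrative assumptions.
import geopandas as gpd

# Read an ESRI Shapefile into a GeoDataFrame (geometry column included).
regions = gpd.read_file("service_regions.shp")

# Reproject to WGS84 so coordinates align with other datasets.
regions = regions.to_crs(epsg=4326)

# Ordinary pandas-style operations work alongside the geometry column;
# "region_name" is a hypothetical attribute column.
print(regions[["region_name", "geometry"]].head())
```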
Nice to Have
Experience with GCP and BigQuery administration (see the query sketch after this list)
Experience with geospatial data and tools
Familiarity with AI-enabled data systems and agent-based architectures
Experience automating SDLC processes using AI tools
Experience integrating data platforms with analytics and BI tools
Experience in high-growth or fast-paced environments
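As a taste of the BigQuery work listed above, a minimal query using the official google-cloud-bigquery client could look like this; the project, dataset, and table names are placeholders, and credentials are assumed to come from the environment.

```python
# Minimal BigQuery query sketch; project, dataset, and table names are
# placeholders, and credentials are assumed to come from the environment
# (e.g., GOOGLE_APPLICATION_CREDENTIALS).
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT customer_id, SUM(events) AS total_events
    FROM `my-project.analytics.customer_usage`
    GROUP BY customer_id
    ORDER BY total_events DESC
    LIMIT 10
"""

# result() blocks until the job finishes, then yields rows with
# attribute-style access to the selected columns.
for row in client.query(query).result():
    print(row.customer_id, row.total_events)
```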
Soft Skills
Results-driven mindset (GTD, i.e. Getting Things Done): ability to identify next actions, communicate clearly, and execute efficiently
Ownership mentality: Strong sense of accountability and decision-making ability
Builder mindset: Passion for creating scalable, impactful data solutions
Curiosity and continuous improvement: Always seeking better ways to solve problems
Team collaboration: Comfortable working across teams and supporting diverse stakeholders
Bonus: You enjoy coffee, love software and products, and bring a good sense of humor
What We Do
DevSavant provides comprehensive technology solutions to Savant Growth's portfolio companies. Our data scientists and developers are experts in data analytics, software development, and AI. DevSavant's engineers work across the spectrum of full-stack technology solutions for the B2B SaaS industry, helping your growing company tackle its toughest challenges and giving you the freedom to focus on what matters most: the future of your business.