Data Infrastructure Engineer

Reposted 18 Days Ago
2 Locations
In-Office or Remote
Senior level
3D Printing • Artificial Intelligence • Information Technology • Internet of Things
The Role
Design and operate distributed data systems for large-scale data ingestion, processing, and transformation. Collaborate with ML researchers on data preparation and ensure data quality and scalability.
About Meshy

Headquartered in Silicon Valley, Meshy is the leading 3D generative AI company on a mission to Unleash 3D Creativity by transforming the content creation pipeline. Meshy makes it effortless for both professional artists and hobbyists to create unique 3D assets—turning text and images into stunning 3D models in just minutes. What once took weeks and cost $1,000 now takes just 2 minutes and $1.

Our world-class team of top experts in computer graphics, AI, and art includes alumni from MIT, Stanford, and Berkeley, as well as veterans from Nvidia and Microsoft. Our talent spans the globe, with team members distributed across North America, Asia, and Oceania, fostering a diverse and innovative multi-regional culture focused on solving global 3D challenges. Meshy is trusted by top developers, backed by premier venture capital firms like Sequoia and GGV, and has raised $52 million in funding.

Meshy is the market leader, recognized as No. 1 in popularity among 3D AI tools (according to A16Z Games, 2024) and No. 1 in website traffic (according to SimilarWeb, with 3 million monthly visits). The platform has over 5 million users and has generated 40 million models.

Founder and CEO Yuanming (Ethan) Hu earned his Ph.D. in graphics and AI from MIT, where he developed the acclaimed Taichi GPU programming language (27K stars on GitHub, used by 300+ institutes). His work is highly influential, including an honorable mention for the SIGGRAPH 2022 Outstanding Doctoral Dissertation Award and over 2,700 research citations.

About the Role
We are seeking a Data Infrastructure Engineer to join our growing team. In this role, you will design, build, and operate distributed data systems that power large-scale ingestion, processing, and transformation of datasets used for AI model training. These datasets span traditional structured data as well as unstructured assets such as images and 3D models, which often require specialized preprocessing for pretraining and fine-tuning workflows.
 
This is a versatile role: you’ll own end-to-end pipelines (from ingestion to transformation), ensure data quality and scalability, and collaborate closely with ML researchers to prepare diverse datasets for cutting-edge model training. You’ll thrive in our fast-paced startup environment, where problem-solving, adaptability, and wearing multiple hats are the norm.
What You’ll Do:
  • Core Data Pipelines
    • Design, implement, and maintain distributed ingestion pipelines for structured and unstructured data (images, 3D/2D assets, binaries).
    • Build scalable ETL/ELT workflows to transform, validate, and enrich datasets for AI/ML model training and analytics (see the pipeline sketch after this list).
  • Distributed Systems & Storage
    • Architect pipelines across cloud object storage (S3, GCS, Azure Blob), data lakes, and metadata catalogs.
    • Optimize large-scale processing with distributed frameworks (Spark, Dask, Ray, Flink, or equivalents).
    • Implement partitioning, sharding, caching strategies, and observability (monitoring, logging, alerting) for reliable pipelines.
  • Pretraining Data Processing
    • Support preprocessing of unstructured assets (e.g., images, 3D/2D models, video) for training pipelines, including format conversion, normalization, augmentation, and metadata extraction (see the preprocessing sketch after this list).
    • Implement validation and quality checks to ensure datasets meet ML training requirements.
    • Collaborate with ML researchers to quickly adapt pipelines to evolving pretraining and evaluation needs.
  • Infrastructure & DevOps
    • Use infrastructure-as-code (Terraform, Kubernetes, etc.) to manage scalable and reproducible environments.
    • Integrate CI/CD best practices for data workflows.
  • Data Governance & Collaboration
    • Maintain data lineage, reproducibility, and governance for datasets used in AI/ML pipelines.
    • Work cross-functionally with ML researchers, graphics/vision engineers, and platform teams.
    • Embrace versatility: switch between infrastructure-level challenges and asset/data-level problem solving.
    • Contribute to a culture of fast iteration, pragmatic trade-offs, and collaborative ownership.
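
To make the pipeline responsibilities above concrete, here is a minimal, hypothetical sketch of the kind of ETL job this role would own, written with PySpark (one of the frameworks listed). Bucket paths, the record schema, and column names are illustrative placeholders, not Meshy's actual data.

```python
# Hypothetical sketch only: ingest raw asset-event records from object storage,
# validate them, and publish partitioned Parquet for downstream training jobs.
# Bucket names, paths, and the schema are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("asset-metadata-etl").getOrCreate()

# Ingest: raw JSON records describing uploaded assets (assumed schema).
raw = spark.read.json("s3a://example-raw-bucket/asset-events/")

# Validate: drop records missing required fields and filter out zero-byte assets.
clean = (
    raw.dropna(subset=["asset_id", "asset_type", "uploaded_at"])
       .filter(F.col("size_bytes") > 0)
)

# Transform/enrich: derive a date column used for partitioning.
enriched = clean.withColumn("ingest_date", F.to_date("uploaded_at"))

# Load: partition by date and asset type so training jobs can prune what they read.
(
    enriched.write
    .mode("overwrite")
    .partitionBy("ingest_date", "asset_type")
    .parquet("s3a://example-curated-bucket/asset-metadata/")
)
```

Partitioning on ingest date and asset type is one reasonable layout for pruning reads; the right keys depend on how training jobs actually query the data.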
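
Likewise, a minimal sketch of the per-asset preprocessing described above (format conversion, normalization, metadata extraction), using Pillow on image files. Directories, the target resolution, and the manifest format are assumptions for illustration; in practice this step would run inside a distributed framework rather than a local loop.

```python
# Hypothetical sketch only: convert image assets to a uniform format and
# resolution and emit a simple metadata manifest for training jobs.
# Directories and the target size are illustrative assumptions.
import json
from pathlib import Path
from PIL import Image

SRC_DIR = Path("raw_images")         # assumed input directory
DST_DIR = Path("processed_images")   # assumed output directory
TARGET_SIZE = (512, 512)             # assumed training resolution

DST_DIR.mkdir(parents=True, exist_ok=True)
manifest = []

for src in sorted(SRC_DIR.glob("*")):
    try:
        with Image.open(src) as im:
            im = im.convert("RGB")               # normalize color mode
            im = im.resize(TARGET_SIZE)          # normalize resolution
            dst = DST_DIR / (src.stem + ".png")  # normalize format to PNG
            im.save(dst)
        manifest.append(
            {"source": src.name, "output": dst.name,
             "width": TARGET_SIZE[0], "height": TARGET_SIZE[1]}
        )
    except OSError:
        # Quality check: skip unreadable or corrupt files rather than failing the batch.
        continue

# Metadata extraction: write a manifest that downstream training code can consume.
(DST_DIR / "manifest.json").write_text(json.dumps(manifest, indent=2))
```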
What We’re Looking For:
  • Technical Background
    • 5+ years of experience in data engineering, distributed systems, or similar.
    • Strong programming skills in Python (plus Scala/Java/C++ a plus).
    • Solid skills in SQL for analytics, transformations, and warehouse/lakehouse integration.
    • Proficiency with distributed frameworks (Spark, Dask, Ray, Flink).
    • Familiarity with cloud platforms (AWS/GCP/Azure) and storage systems (S3, Parquet, Delta Lake, etc.).
    • Experience with workflow orchestration tools (Airflow, Prefect, Dagster); a minimal DAG sketch follows this list.
  • Domain Skills (Preferred)
    • Experience handling large-scale unstructured datasets (images, video, binaries, or 3D/2D assets).
    • Familiarity with AI/ML training data pipelines, including dataset versioning, augmentation, and sharding.
    • Exposure to computer graphics or 3D/2D data processing is strongly preferred.
  • Mindset
    • Comfortable in a startup environment: versatile, self-directed, pragmatic, and adaptive.
    • Strong problem solver who enjoys tackling ambiguous challenges.
    • Commitment to building robust, maintainable, and observable systems.
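
For illustration, here is a minimal Airflow DAG sketching how ingestion, preprocessing, and validation steps like those above might be orchestrated. The DAG id, schedule, and task callables are hypothetical stand-ins, not an actual Meshy workflow.

```python
# Hypothetical sketch only: orchestrate ingestion, preprocessing, and validation
# as an Airflow DAG. Task bodies are stubs; the `schedule` argument assumes
# Airflow 2.4+ (older versions use `schedule_interval`).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_assets():
    """Stub: pull newly uploaded assets from object storage."""


def preprocess_assets():
    """Stub: run format conversion and normalization on the new batch."""


def validate_dataset():
    """Stub: run quality checks before publishing the dataset for training."""


with DAG(
    dag_id="asset_training_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_assets)
    preprocess = PythonOperator(task_id="preprocess", python_callable=preprocess_assets)
    validate = PythonOperator(task_id="validate", python_callable=validate_dataset)

    ingest >> preprocess >> validate
```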
Nice to Have:
  • Kubernetes for distributed workloads and orchestration.
  • Data warehouses or lakehouse platforms (Snowflake, BigQuery, Databricks, Redshift).
  • Familiarity with GPU-accelerated computing and HPC clusters.
  • Experience with 3D/2D asset processing (geometry transformations, rendering pipelines, texture handling).
  • Rendering engines (Blender, Unity, Unreal) for synthetic data generation.
  • Open-source contributions in ML infrastructure, distributed systems, or data platforms.
  • Familiarity with secure data handling and compliance.
Our Values
  • Brain: We value intelligence and the pursuit of knowledge. Our team is composed of some of the brightest minds in the industry.
  • Heart: We care deeply about our work, our users, and each other. Empathy and passion drive us forward.
  • Gut: We trust our instincts and are not afraid to take bold risks. Innovation requires courage.
  • Taste: We have a keen eye for quality and aesthetics. Our products are not just functional but also beautiful.
Why Join Meshy?
  • Competitive salary, equity, and benefits package.
  • Opportunity to work with a talented and passionate team at the forefront of AI and 3D technology.
  • Flexible work environment, with options for remote and on-site work.
  • Opportunities for fast professional growth and development.
  • An inclusive culture that values creativity, innovation, and collaboration.
  • Unlimited, flexible time off.
Benefits
  • Stock options available for core team members.
  • 401(k) plan for employees.
  • Comprehensive health, dental, and vision insurance.
  • The latest and best office equipment.

Top Skills

Airflow
AWS
Azure
Dagster
Dask
Delta Lake
Flink
GCP
Kubernetes
Parquet
Prefect
Python
Ray
S3
Spark
SQL
Terraform
The Company
San Jose, California
27 Employees

What We Do

Meshy empowers artists, game developers, and creators to bring their visions to life with a toolkit for creating 3D models in seconds.
