What You'll Do
- Compose workflows: Build focused ComfyUI graphs that chain perception and diffusion steps to answer product questions.
- Integrate new models: Author/extend custom ComfyUI nodes in Python and create wrappers for new open-source models (a minimal node skeleton appears after this list).
- Measure & compare: Produce side-by-side results with qualitative examples (paired frames) and simple, meaningful metrics to guide decisions.
- Scan & triage the model landscape: Track new diffusion and perception releases, curate a weekly shortlist, and convert promising ones into ComfyUI nodes for quick A/B tests.
- Document & hand off: Provide inputs, controls, expected artifacts, and failure modes with runnable .json graphs and enough version info for others to reproduce and extend.
- Collaborate & iterate: Partner with creative users to calibrate taste and with engineering to productionize proven workflows.
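By way of illustration, the node work above amounts to writing small Python classes against ComfyUI's standard node interface (INPUT_TYPES / RETURN_TYPES / FUNCTION plus module-level registration). The sketch below is a minimal, hypothetical example; the BrightnessOffset class, its category, and its parameters are invented for illustration:

```python
import torch

class BrightnessOffset:
    """Adds a constant offset to a ComfyUI IMAGE tensor (B, H, W, C in [0, 1])."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "offset": ("FLOAT", {"default": 0.1, "min": -1.0, "max": 1.0, "step": 0.01}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "apply"
    CATEGORY = "image/adjust"  # where the node appears in the add-node menu

    def apply(self, image: torch.Tensor, offset: float):
        # ComfyUI IMAGE tensors are float32 in [0, 1]; clamp after the shift.
        return (torch.clamp(image + offset, 0.0, 1.0),)

# ComfyUI discovers custom nodes through these module-level mappings.
NODE_CLASS_MAPPINGS = {"BrightnessOffset": BrightnessOffset}
NODE_DISPLAY_NAME_MAPPINGS = {"BrightnessOffset": "Brightness Offset"}
```

Wrapping a new open-source model follows the same pattern, with model loading and inference inside the node's function.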
What You'll Bring
- Portfolio: 2–3 ComfyUI graphs you authored (.json + screenshots) and at least one custom node or wrapper.
- ComfyUI depth: Experience structuring multi-step graphs and keeping them readable and reusable.
- Diffusion & perception know-how: Practical use of diffusion conditioning (e.g., ControlNet/IP-Adapter), perception tasks (segmentation, matting, depth), and sampler/scheduler trade-offs (see the conditioning sketch after this list).
- Python: Strong scripting and ability to write/extend ComfyUI nodes.
- PyTorch (basic): Can read and lightly modify model code when needed.
- Evaluation & communication: Clear write-ups of experiments and decisions with reproducible artifacts.
- Model-scouting habit: You stay current on new perception/diffusion models and can recommend/justify trials with an integration plan in ComfyUI.
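For a sense of the conditioning work above, here is a minimal sketch of ControlNet-style conditioning using Hugging Face diffusers; the checkpoint IDs are common public ones and the edge-map file name is hypothetical:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Structural conditioning: a Canny ControlNet steers generation toward an edge map.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

edges = load_image("product_canny.png")  # hypothetical pre-computed edge map
result = pipe(
    "studio photo of a ceramic mug, softbox lighting",
    image=edges,
    num_inference_steps=30,             # sampler steps: a quality/speed trade-off
    controlnet_conditioning_scale=0.8,  # strength of the structural guidance
).images[0]
result.save("mug_controlnet.png")
```

In ComfyUI the same idea is expressed with Load ControlNet Model and Apply ControlNet nodes feeding the sampler's conditioning input.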
What Will Help You Stand Out
- Training experience: Lightweight adapter training (e.g., LoRA) when it improves quality or speed (see the adapter sketch after this list).
- 3D Reconstruction: Advanced reconstruction pipeline experience (SfM, Gaussian splatting).
- VFX/Games exposure: Understanding of production pipelines and cross-functional collaboration.
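On the adapter-training point, below is a minimal sketch of attaching a LoRA adapter with the peft library; the target_modules names assume a Stable Diffusion-style UNet's attention projections, and the rank/alpha values are illustrative:

```python
import torch
from diffusers import UNet2DConditionModel
from peft import LoraConfig, get_peft_model

# Load the base UNet (illustrative checkpoint), then wrap it with a LoRA adapter
# so only the low-rank matrices train while the base weights stay frozen.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float32
)
config = LoraConfig(
    r=8,               # adapter rank: small ranks keep training light
    lora_alpha=16,     # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
unet = get_peft_model(unet, config)
unet.print_trainable_parameters()  # sanity check: only adapter params are trainable
```

The usual diffusion training loop (noise-prediction loss over paired data) then runs unchanged on the wrapped model.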
How to Apply
- Please include links to your graphs (.json + screenshots) and one custom node/wrapper (code link).
What We Do
Training and testing autonomous systems in the real world is a slow, expensive, and cumbersome process. Parallel Domain is the smartest way to prepare both your machines and human operators for the real world while minimizing the time and miles spent there. Connect to the Parallel Domain API and tap into the power of synthetic data to accelerate your autonomous system development.
Parallel Domain works with perception, machine learning, data operations, and simulation teams at autonomous systems companies, from autonomous vehicles to delivery drones. Our platform generates synthetic labeled datasets, simulation worlds, and controllable sensor feeds so these teams can develop, train, and test their algorithms safely before putting their systems into the real world.
#syntheticdata #autonomy #AI #computervision #AV #ADAS #machinelearning