What You'll Do
- Compose workflows: You will run focused experiments that answer product questions using ComfyUI graphs that combine perception and generative steps.
- Measure and compare: You will produce side-by-side comparisons with qualitative frames and a set of meaningful metrics to guide decision-making.
- Custom nodes and integrations: You will design, implement, and deploy custom ComfyUI nodes for novel functionality and create wrappers for new open-source models.
- Document and guide: You will document inputs, controls, expected artifacts, and typical failure modes in concise how-to notes and manifests so others can rerun and extend your work.
- Collaborate and iterate: You will partner closely with creative users to refine and validate the visual output and work with engineering to hand off proven workflows for scaling.
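For context on the custom-node work above, a minimal ComfyUI node can be sketched as a plain Python class exposed through `NODE_CLASS_MAPPINGS`; the node name, category, and brightness logic below are illustrative placeholders, not part of this role's codebase:

```python
class BrightnessScale:
    """Hypothetical node: scales image brightness by a user-set factor."""

    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI reads this schema to render the node's input sockets and widgets.
        return {
            "required": {
                "image": ("IMAGE",),
                "factor": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 4.0, "step": 0.05}),
            }
        }

    RETURN_TYPES = ("IMAGE",)   # output socket types
    FUNCTION = "apply"          # method ComfyUI calls when the node executes
    CATEGORY = "examples/adjust"

    def apply(self, image, factor):
        # IMAGE tensors in ComfyUI are batched floats in [0, 1]; multiplying
        # scales brightness. Outputs are returned as a tuple.
        return (image * factor,)

# ComfyUI discovers nodes in a custom-node package via this mapping.
NODE_CLASS_MAPPINGS = {"BrightnessScale": BrightnessScale}
```

The skeleton itself needs no framework import, which keeps such nodes easy to unit-test outside a running ComfyUI instance.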
What You'll Bring
- ComfyUI experience: You have demonstrable experience building multi-step ComfyUI graphs and compiling and deploying open-source projects through wrappers. This includes building projects from source, compiling them as needed, and integrating them so that workflows stay readable and reusable.
- Creative and technical skill: You combine strong creative sensibilities with practical, hands-on machine learning skills.
- Python and ML proficiency: Solid, hands-on Python and PyTorch fundamentals, including tensor shapes, dtypes, autocast and no-grad contexts, memory-aware batching, and device management.
- Workflow design and evaluation: You have experience assembling perception and generative workflows for visual tasks and evaluating tradeoffs with simple, meaningful metrics.
- VFX/Games production experience: Professional experience in VFX or game development with practical knowledge of content creation pipelines and cross-functional collaboration.
- Clear communication: You have the ability to explain complex findings clearly and effectively to both creative and engineering stakeholders.
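As an illustration of the PyTorch fundamentals listed above, here is a minimal sketch of memory-aware, no-grad batched inference; the model, sizes, and batch size are placeholders chosen for the example:

```python
import torch

def batched_inference(model, inputs, batch_size=8):
    """Run a model over inputs in fixed-size chunks to bound peak memory."""
    outputs = []
    with torch.no_grad():  # skip autograd bookkeeping during inference
        for start in range(0, inputs.shape[0], batch_size):
            chunk = inputs[start:start + batch_size]
            # Cast to float32 and move to CPU so results accumulate off-device.
            outputs.append(model(chunk).float().cpu())
    return torch.cat(outputs, dim=0)

model = torch.nn.Linear(16, 4)      # stand-in for a real perception model
x = torch.randn(100, 16)
y = batched_inference(model, x, batch_size=32)  # shape (100, 4)
```

Chunking keeps peak activation memory proportional to the batch size rather than the full input, which matters when mixing large generative and perception models in one graph.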
What Will Help You Stand Out
- Training experience: You have experience training small adapters, such as LoRAs, when it directly improves a workflow’s performance or quality.
- ComfyUI portfolio: A portfolio with one or more original ComfyUI nodes or wrappers and a short write-up of the design decisions.
- Generative editing quality: Examples of generative editing or scene adjustments that deliver high accuracy and visual fidelity, with speed gains a plus.
What We Do
Training and testing autonomous systems in the real world is a slow, expensive and cumbersome process. Parallel Domain is the smartest way to prepare both your machines and human operators for the real world, while minimizing the time and miles spent there. Connect to the Parallel Domain API and tap into the power of synthetic data to accelerate your autonomous system development.
Parallel Domain works with perception, machine learning, data operations, and simulation teams at autonomous systems companies, from autonomous vehicles to delivery drones. Our platform generates synthetic labeled datasets, simulation worlds, and controllable sensor feeds so they can develop, train, and test their algorithms safely before putting these systems into the real world.
#syntheticdata #autonomy #AI #computervision #AV #ADAS #machinelearning