Mobile Developer (iOS)

London, Greater London, England, GBR
In-Office
Mid level
Artificial Intelligence • Computer Vision • Machine Learning
Help us build the future of real-time retail execution

At Neurolabs, we’re helping the world’s leading CPG brands unlock real-time visibility across the supply chain. Our Image Recognition technology is powered by synthetic data and digital twins, giving customers scalable, accurate insights — fast. No manual audits. No delays. No retraining. Just immediate visibility into what’s actually happening in-store.

We’re looking for a Mobile Developer (iOS) who owns the full mobile layer - from the SDKs our partners integrate to the app our clients use in the field. You'll be deploying vision models on-device, shipping and supporting SDKs used by external engineering teams, and keeping a live product stable and improving. This is a hands-on role with real ownership - not a cog in a large mobile team, but the person who defines how we build and ship on mobile.

✨Why this role matters

Our mobile layer is the point where our technology meets the real world, and the Mobile Developer owns that layer end to end. What you build here directly determines how fast we can move with partners and how reliably our technology performs at scale.

Your mission: own the mobile layer across SDK integrations, on-device ML, and ZIA Capture - and make it the most reliable part of what we ship.

🧠 What you'll do

Lead SDK integrations with partners and clients

  • You own the integration between 375's platform and Neurolabs' technology, ensuring both products work seamlessly together.
  • You drive mobile app integrations with key clients - Smollan, CPM, and Randstad - working alongside Customer Success to deliver these successfully.
  • You manage versioning, documentation, and direct support for partner and client engineers integrating our SDKs into their own products.

Own our iOS camera SDK

  • You maintain and extend our camera SDK library, covering real-time image quality checks including blur, focus, lighting, exposure, framing overlays, and domain-specific capture guidance.
  • You manage distribution via Swift Package Manager (SPM) and keep the SDK stable, documented, and integration-ready.
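For context, a common heuristic behind the kind of real-time blur check described above is the variance of the Laplacian: sharp frames have strong edges and a high-variance Laplacian, blurry frames do not. The sketch below is a minimal pure-Python illustration of that idea, not Neurolabs' SDK code; the function names and the threshold value are illustrative assumptions.

```python
def laplacian_variance(gray, width, height):
    """Blur score: variance of the 4-neighbour Laplacian of a grayscale
    image (row-major flat list of pixel intensities). Low variance means
    few sharp edges, i.e. a likely blurry frame."""
    vals = []
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            c = gray[y * width + x]
            lap = (gray[(y - 1) * width + x] + gray[(y + 1) * width + x]
                   + gray[y * width + x - 1] + gray[y * width + x + 1]
                   - 4 * c)
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)


def is_too_blurry(gray, width, height, threshold=100.0):
    """Gate a capture: reject frames whose sharpness score falls below a
    tuned threshold (the default here is illustrative, not a real value)."""
    return laplacian_variance(gray, width, height) < threshold
```

A production SDK would compute this on the GPU or via Accelerate/vImage per camera frame rather than in a Python loop, but the gating logic is the same.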

Own the on-device ML pipeline

  • You deploy models end to end - PyTorch to Core ML conversion, quantisation, and performance profiling across device tiers.
  • You run vision models including YOLO, Grounding DINO, and similar VLMs natively on device for real-time image processing.
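To make the quantisation step above concrete, here is a minimal symmetric linear int8 quantisation sketch in pure Python. It is an illustration of the general technique, not the Core ML / coremltools pipeline itself, and the function names are hypothetical.

```python
def quantize_int8(weights):
    """Symmetric linear quantisation of float weights to int8.
    Returns (int8 values, scale); dequantise with value * scale."""
    max_abs = max(abs(w) for w in weights)
    if max_abs == 0.0:
        return [0] * len(weights), 1.0
    scale = max_abs / 127.0  # map the largest magnitude onto 127
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]
```

Real toolchains add per-channel scales, calibration data, and activation quantisation on top of this, but the core trade-off — 4x smaller weights at the cost of bounded rounding error — is visible even in this toy version.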

Maintain and extend ZIA Capture

  • You keep ZIA Capture stable - resolving bugs and shipping feature development that connects the mobile layer to our backend technology.
  • You own authentication, REST/GraphQL API integration, and custom camera capture flows within the app.
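As a rough illustration of the authenticated GraphQL integration mentioned above: a GraphQL call is an HTTP POST with a JSON body containing `query` and `variables`, plus a bearer token header. The sketch below builds (but does not send) such a request with the standard library; the endpoint URL and token are placeholders, not Neurolabs' actual API.

```python
import json
import urllib.request

# Placeholder endpoint for illustration only.
GRAPHQL_ENDPOINT = "https://api.example.com/graphql"


def build_graphql_request(query, variables, token):
    """Build an authenticated GraphQL POST request: JSON body with
    `query`/`variables`, bearer token in the Authorization header."""
    body = json.dumps({"query": query, "variables": variables}).encode("utf-8")
    return urllib.request.Request(
        GRAPHQL_ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
```

In the app itself this would be Swift (e.g. `URLSession` or a GraphQL client), but the wire format is the same.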

Improve our CI/CD pipeline

  • You improve and extend the existing CI/CD pipeline for builds and releases, and maintain Cordova plugin integrations for current clients.

Requirements

Experience and technical skills

You'll be working across a live iOS app, a camera SDK, and on-device vision models in active partner and client integrations. Here's what that requires.

  • Startup background - has worked in a pre-seed, seed, or Series A environment, with evidence of wearing multiple hats, moving fast, and delivering under resource constraints.
  • Standalone mobile developer - has operated as the sole or lead mobile developer on a product, with end-to-end ownership from scoping through to shipping and maintaining in production.
  • On-device ML in production - has shipped a mobile feature powered by an on-device ML model, not just a prototype. Evidence of model conversion pipelines, quantisation, performance profiling, and edge case handling. Bonus for vision models such as YOLO, Grounding DINO, or similar VLMs.
  • SDK built from scratch - has designed and shipped an SDK as a product, with evidence of versioning, documentation, and direct support of integrating engineers.
  • Live app ownership - has maintained a live application over time, fixing bugs, shipping incremental features, and keeping a product stable.
  • External collaboration - has worked directly with external engineers or partner teams on a shared technical goal.
  • CI/CD and release ownership - has set up or meaningfully improved build and release pipelines.
  • Evidence of curiosity - personal projects, experimental work, or active use of AI tooling. Someone who builds outside of their day job and stays ahead of what is coming.

Technical skills

  • Swift / iOS (native) - deep, primary platform focus.
  • On-device ML deployment - experience deploying vision models on-device using Core ML, ONNX Runtime, or TensorFlow Lite, with a solid understanding of model optimisation for mobile constraints.
  • Model conversion pipelines - PyTorch to Core ML or TFLite, with performance profiling across device tiers.
  • SDK development and distribution.
  • SPM packaging.
  • GraphQL integration.
  • CI/CD pipelines.
  • Active use of AI tooling.

How you work

  • Accountable - you take full ownership of the mobile layer end to end. When something breaks, you're the first to investigate and own the fix.
  • Proactive communicator - you share progress and blockers upfront without being asked, and keep the team informed as a default.
  • Collaborative - you work closely with the Product Owner and wider team, and bring a team player mentality that makes you easy to work with.
  • Adaptable - you adjust quickly when priorities shift and are comfortable in a fast-moving environment.
  • Focused - you understand what matters most at any given time, direct your energy accordingly, and are comfortable saying no to work outside current priorities.
  • Self-motivated - you take ownership without close oversight and proactively drive your own output.
  • Pushes back constructively - you understand technical limitations and have the courage to challenge direction at the right moment.

Benefits
  • Equity share options - we operate an EMI share option scheme, so when the company grows in value, you share in it. Options vest over 48 months with a 12-month cliff.
  • Pension - we contribute 3% through Penfold via salary sacrifice, with 5% employee contributions. Auto-enrolment applies after a 3-month postponement period, though you can join sooner if you'd like.
  • Hybrid-first - minimum 2 days a week in our London office near Liverpool Street Station with flexibility built around delivery and trust.
  • Perks at Work - access to thousands of discounts across 30,000+ brands via CharlieHR, including up to 44% off cinema tickets and 15% off high street retailers.
  • Cycle to Work scheme - purchase a bike and accessories in a tax-efficient way through salary sacrifice via Cyclescheme.
  • Holiday Purchase scheme - buy up to 5 additional days of annual leave per year, spread across monthly salary deductions.
  • 25 days annual leave plus bank holidays, with the option to carry forward up to 3 days.
  • Friday office lunch - every Friday in the London office, lunch is on us.
  • Company socials - throughout the year we take time to reflect, reconnect, and recharge together.
  • AXA private medical insurance - Neurolabs covers the full premium. Cover includes cancer treatment, diagnostics, physiotherapy, counselling, and European travel insurance. Dental and optical handbooks also available.

Please note: We are not accepting unsolicited CVs or candidate introductions from recruitment agencies. Any profiles submitted without prior agreement will be considered the property of Neurolabs, and no fees will be payable.


The Company
HQ: London
27 Employees
Year Founded: 2018

What We Do

Neurolabs is the "go-to" Computer Vision vendor for the CPG space, with access to a product catalogue of >100K SKUs. We develop algorithms to visually interpret and understand the grocery and CPG world. Using images from cameras, our Synthetic Computer Vision (SCV) models accurately identify and classify objects of interest, helping computers "understand" what they "see". Our technology is applicable across the whole CPG lifecycle: we help our partners automate CPG object-recognition tasks in physical stores, logistics centres, and manufacturing plants.

Our approach substitutes the expensive, scarce real-world data required to train Deep Learning algorithms with synthetically generated data, providing massive low-cost datasets in a fraction of the time. This empowers both small and large customers to implement state-of-the-art CV applications with minimal incremental resources and costs — no coding required, no tedious data labelling, no image gathering.
