Overview:
Guidepoint seeks an experienced Senior AI Engineer to join the Toronto-based AI team as an integral member. The Toronto Technology Hub is home to our Data/AI/ML team, dedicated to building modern data infrastructure for advanced analytics and the development of responsible AI. This strategic investment is central to Guidepoint’s vision for the future: developing cutting-edge Generative AI and analytical capabilities that will underpin Guidepoint’s Next-Gen research enablement platform and data products.
This role demands exceptional leadership and technical prowess to drive the development of next-generation research enablement platforms and AI-driven data products. You will develop and scale Generative AI-powered systems, including large language model (LLM) applications and research agents, while ensuring the integration of responsible AI and best-in-class MLOps. The Senior AI Engineer will be a primary contributor to building scalable AI/ML capabilities using Databricks and other state-of-the-art tools across all of Guidepoint’s products.
Guidepoint’s Technology team thrives on problem-solving and creating happier users. As Guidepoint works to achieve its mission of making individuals, businesses, and the world smarter through personalized knowledge-sharing solutions, the engineering team is taking on challenges to improve our internal application architecture and create new AI-enabled products that streamline the delivery of our services.
This is a hybrid position based in Toronto.
What You'll Do
- Architect and Build Production Systems: Design, build, and operate scalable, low-latency backend services and APIs that serve Generative AI features, from retrieval-augmented generation (RAG) pipelines to complex agentic systems.
- Own the AI Application Lifecycle: Own the end-to-end lifecycle of AI-powered applications, including system design, development, deployment (CI/CD), monitoring, and optimization in production environments like Databricks and Azure Kubernetes Service (AKS).
- Optimize RAG Pipelines: Continuously improve retrieval and generation quality through techniques such as retrieval optimization (tuning k-values and chunk sizes), re-ranking, advanced chunking strategies, and prompt engineering to reduce hallucinations.
- Integrate Intelligent Systems: Engineer solutions that seamlessly combine LLMs with our proprietary knowledge repositories, external APIs, and real-time data streams to create powerful copilots and research assistants.
- Champion LLMOps and Engineering Best Practices: Collaborate with data science and engineering teams to establish and implement best practices for LLMOps, including automated evaluation (e.g., LLM-as-a-judge approaches or tools like MLflow), AI observability, and system monitoring.
- Evaluate and Implement AI Strategies: Systematically evaluate and apply advanced prompt engineering methods (e.g., Chain-of-Thought, ReAct) and other model interaction techniques to optimize the performance and safety of proprietary and open-source LLMs.
- Mentor and Lead: Provide technical leadership to junior engineers through rigorous code reviews, mentorship, and design discussions, helping to elevate the team's engineering standards.
- Influence the Roadmap: Partner closely with product and business stakeholders to translate user needs into technical requirements, define priorities, and shape the future of our AI product offerings.
What You'll Bring
- Experience: A Bachelor’s degree in Computer Science, Engineering, or a related technical field with 6+ years of professional experience; or a Master’s degree with 4+ years of professional experience in backend software engineering and Generative AI. This must include a proven track record of designing, building, and scaling distributed, production-grade systems.
- Strong Software Engineering Fundamentals: Deep expertise in Python, a major backend framework (e.g., FastAPI, Flask), and asynchronous programming (e.g., asyncio). Proficiency in designing RESTful APIs, microservices, and the complete operational lifecycle, including comprehensive testing, CI/CD (e.g., ArgoCD), observability, monitoring, alerting, maintaining high uptime, and executing zero-downtime deployments.
- Cloud & Infrastructure Proficiency: Hands-on experience deploying and managing applications on a major cloud platform (Azure preferred, AWS/GCP acceptable) using containerization (Docker) and orchestration (Kubernetes, Helm).
- Production AI Application Experience: 2+ years of experience building applications that leverage large language models from providers like OpenAI, Anthropic, or Google Gemini. Direct experience with modern LLM patterns such as retrieval-augmented generation (RAG), hybrid search using vector databases (e.g., Pinecone, Elasticsearch), multi-agent AI systems with tool calls, and prompt engineering is required.
- AI System Design and Evaluation: Experience designing and implementing robust evaluation frameworks for LLM-based systems, including rubric-based scoring, LLM-as-a-judge approaches, and tools like MLflow, alongside monitoring for performance and drift.
- Large-Scale Data Processing: Familiarity with large-scale data processing platforms and tools (e.g., Databricks, Apache Spark).
- Familiarity with the Modern AI Stack: Practical experience with libraries and frameworks like LangChain or LlamaIndex for building LLM-powered applications.
- Leadership and Mentorship: Demonstrated ability to lead complex technical projects and foster the growth of other engineers.
You will also be eligible for the following benefits:
- Paid Time Off
- Comprehensive benefits plan
- Company RRSP Match
- Development opportunities through the LinkedIn Learning platform
About Guidepoint:
Guidepoint is a leading research enablement platform designed to advance understanding and empower our clients’ decision-making process. Powered by innovative technology, real-time data, and hard-to-source expertise, we help our clients to turn answers into action.
Backed by a network of nearly 1.75 million experts and Guidepoint’s 1,600 employees worldwide, we inform leading organizations’ research by delivering on-demand intelligence and research on request. With Guidepoint, companies and investors can better navigate the abundance of information available today, making it both more useful and more powerful.
At Guidepoint, our success relies on the diversity of our employees, advisors, and client base, which allows us to create connections that offer a wealth of perspectives. We are committed to upholding policies that contribute to an equitable and welcoming environment for our community, regardless of background, identity, or experience.
What We Do
Guidepoint connects clients with vetted subject matter experts—Advisors—from our global professional network. Our clients leverage the insights and perspectives shared by our Advisors to stay informed and make better business decisions.
Our multinational client list includes nine of the top 10 global consulting firms, hundreds of hedge funds (including five of the largest firms), and many of the largest private equity firms and Fortune-ranked companies. Guidepoint’s fourteen offices on three continents provide quick, agile, 24/7 service.