About the role
Contextual AI’s RAG 2.0 technology enables enterprises to move beyond demos to production-grade AI applications. However, getting RAG 2.0 models to deliver the best results is still an art that requires a mixture of programming, instructing, and fine-tuning. As a Prompt Engineer, you will bring discipline to this process by applying a range of techniques (such as DSPy) to systematically discover and document prompting best practices for customer-specific workloads and beyond. Collaborating closely with research engineers and platform engineers, you will design and optimize prompts that guide RAG 2.0 models to produce accurate, relevant, and contextually appropriate responses.
Given that prompt engineering is still a new domain, we suggest that you share with us any LLM prompt engineering project you’re proud of. Ideally, showcase how you systematically used these techniques to improve and evaluate an LLM’s behavior. Experience with ML projects involving dataset curation and processing works as a substitute as well.
What you'll do
- Work closely with cross-functional teams and customers to understand project requirements and objectives for RAG 2.0 applications.
- Design, develop, and document prompts and best practices for a wide range of customer use cases.
- Conduct thorough analyses of LLM performance in response to different prompts, iterating on prompt design to improve model accuracy and coherence.
- Build a set of tutorials and interactive tools that teach the art of prompt engineering to our customers.
What we're seeking
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree preferred).
- Demonstrated expertise in NLP, particularly in the context of LLMs, with a deep understanding of model architectures, training methodologies, and evaluation metrics.
- Proficiency in programming languages commonly used in NLP, such as Python, along with experience working with deep learning frameworks like TensorFlow or PyTorch. Experience with DSPy and similar frameworks is a plus.
- Strong analytical and problem-solving skills, with a keen eye for detail and a passion for optimizing model performance through prompt engineering.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams and convey complex concepts to non-technical stakeholders.
- Experience with version control systems (e.g., Git) and agile software development methodologies is a plus.
- Experience identifying vulnerabilities or security issues in LLMs or other AI models is a plus.
Location: Mountain View, CA
Salary Range for California Based Applicants: $150,000 - $200,000 + equity + benefits (actual compensation will be determined based on experience, location, and other factors permitted by law).
Equal Opportunity
Contextual AI provides equal employment opportunities to all qualified individuals without regard to race, color, religion, sex, gender identity, sexual orientation, pregnancy, age, national origin, physical or mental disability, military or veteran status, genetic information, or any other protected classification. Equal employment opportunity includes, but is not limited to, hiring, training, promotion, demotion, transfer, leaves of absence, and termination. Contextual AI takes allegations of discrimination, harassment, and retaliation seriously, and will promptly investigate when such behavior is reported.
What We Do
We are building the next generation of foundation models that provide fully customizable, trustworthy, privacy-aware AI that lets you focus on the work that matters. Contextual AI works on your stack, is secure by default, and lowers the barrier to state-of-the-art generative AI.