About the Role
At [Company Name], we believe that building great customer-facing AI features is both an art and a science. As our AI Product Quality Lead, you'll act as both artist and scientist: you'll interpret customer feedback and actual generative AI system outputs, then transform what you learn into a systematic feedback loop and a regular process of experimentation that measurably improves the customer experience over time. True north for this role will be improving customer satisfaction with our AI products, along with other quantifiable quality metrics you may help us define over time.
You'll collaborate closely with our AI engineering team to bring this feedback loop to life, but you don't need to be an engineer by trade.
You'll Excel in This Role If You Have
- Deep Domain Understanding: You can quickly judge the quality of our customer-facing AI system outputs based on a thorough understanding of our customer domain and desired product experience.
- Systematic Quality Evaluation: You can evaluate and isolate specific attributes that make AI outputs "good" or "bad," turn them into more specific categories and objective criteria, and teach others to evaluate those same criteria.
- Experimental Mindset: You have an intuition for designing experiments and for resolving the issues you discover, whether through prompt and model changes you try on your own or broader system changes you design with your engineering partners. You're either already comfortable working with generative AI prompts, models, and tools to yield better results, or you're excited to learn.
- Curiosity and Adaptability: You maintain a beginner's mindset and intense curiosity. As a voracious reader and consumer of industry trends, you're always ready to try new approaches you discover.
- Collaborative Spirit: You can work closely with the engineering team to implement your insights, iterate on our product evaluation suites for faster experimentation, and ensure the team has a system in place to maintain relevant data examples for reliable testing.
Key Responsibilities
- Build and manage a learning loop to surface and review real-world data from customers, then label and categorize the various attributes of what you discover.
- Conduct hands-on data analysis on a regular basis (at least weekly; perhaps daily).
- Define and iterate on the labels and categories used to quantify learnings. Update labeling criteria when new issues are identified.
- Stack-rank issues and collaborate with engineering to write evals that catch them in the future, and to curate test datasets that ensure edge cases don't get missed.
- Design prompt engineering experiments to improve on dimensions discovered through reviews. Quantify & report on learnings, and advocate for the adoption of clear performance improvements.
- Train others to review & label the generative AI outputs in our customer-facing product.
Qualifications
- Expert grasp of the customer domain and the product experience we aim to create.
- Systematic approach to data analysis and problem-solving.
- Familiarity with engineering processes and techniques, or willingness to learn.