// Offline // Montreal, QC //
[ TECH STACK ]
  • NLP (Pre-transformer era)
  • Intent Recognition
  • Medical Knowledge Graphs
  • Conversational Design

Built an AI chatbot for Merck Medical Affairs in 2019—before GPT-3, before ChatGPT, before “AI chatbot” meant copying OpenAI’s API docs.

Healthcare professionals needed fast answers to medical questions: drug interactions, dosage guidelines, contraindications. Digging through PDFs and databases took time they didn't have. We built a system that understood natural-language medical queries and returned accurate, sourced answers from Merck's medical information database.
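
The lookups behind those answers amounted to traversing a curated graph of sourced facts. A minimal sketch of that shape, assuming (entity, relation) edges pointing at cited statements; every entry below is an invented placeholder, not real Merck data:

```python
# Toy knowledge graph: (entity, relation) -> list of sourced facts.
# Contents are invented placeholders for illustration only.
KNOWLEDGE_GRAPH = {
    ("drug_x", "contraindication"): [
        {"fact": "Contraindicated in pregnancy.", "source": "Label §4"},
    ],
    ("drug_x", "interaction"): [
        {"fact": "Avoid co-administration with warfarin.", "source": "Label §7"},
    ],
}

def lookup(entity: str, relation: str) -> list[dict]:
    """Traverse one hop and return sourced facts, or nothing at all."""
    return KNOWLEDGE_GRAPH.get((entity, relation), [])

for hit in lookup("drug_x", "contraindication"):
    print(f'{hit["fact"]} [{hit["source"]}]')
```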

This was pre-GenAI, so no foundation models to fine-tune. Intent classification, entity extraction, knowledge graph traversal—all built from scratch. The system had to understand that “can I give this to a pregnant patient?” and “pregnancy contraindications” were the same question. And it had to be right every time, because wrong answers in medicine have actual consequences.
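
Here's roughly what that looked like in practice, assuming the standard pre-transformer toolkit: a TF-IDF linear classifier for intents plus a hand-built synonym map for entity normalization. The training examples and names are illustrative, not the production set:

```python
# Pre-transformer paraphrase handling: TF-IDF + linear classifier for
# intent, plus a hand-curated synonym map so "pregnant patient" and
# "pregnancy" resolve to the same canonical entity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative training data; the real system used far more examples.
TRAIN = [
    ("can i give this to a pregnant patient", "contraindication_query"),
    ("pregnancy contraindications", "contraindication_query"),
    ("is this safe during pregnancy", "contraindication_query"),
    ("what is the recommended dose", "dosage_query"),
    ("maximum daily dosage for adults", "dosage_query"),
    ("does this interact with warfarin", "interaction_query"),
    ("drug interactions with ssris", "interaction_query"),
]

# Surface forms normalized to canonical entities before graph lookup.
ENTITY_SYNONYMS = {
    "pregnant": "pregnancy",
    "pregnant patient": "pregnancy",
    "expecting mother": "pregnancy",
}

texts, labels = zip(*TRAIN)
intent_clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word + bigram features
    LogisticRegression(max_iter=1000),
)
intent_clf.fit(texts, labels)

def parse_query(query: str) -> dict:
    """Classify intent, then normalize any entity mentions."""
    intent = intent_clf.predict([query.lower()])[0]
    entities = {canon for surface, canon in ENTITY_SYNONYMS.items()
                if surface in query.lower()}
    return {"intent": intent, "entities": sorted(entities)}

print(parse_query("Can I give this to a pregnant patient?"))
# -> {'intent': 'contraindication_query', 'entities': ['pregnancy']}
```

Both phrasings land on the same intent and the same canonical entity, which is exactly the behavior the knowledge graph lookup needs upstream.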

Medical AI in 2019 meant no shortcuts. Every intent manually mapped. Every entity type defined. Every response validated by medical professionals. Slow, tedious, but it worked.
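
A sketch of what that hand-curated schema might look like; the field names and entries are illustrative, not the actual internal format:

```python
# Hand-curated schema: every intent and entity type declared
# explicitly, with a medical-review gate on each intent.
from dataclasses import dataclass

@dataclass
class EntityType:
    name: str                          # e.g. "drug", "condition"
    surface_forms: list[str]           # hand-listed synonyms, no embeddings

@dataclass
class Intent:
    name: str
    example_utterances: list[str]      # manually mapped paraphrases
    required_entities: list[str]       # entity types the query must supply
    reviewed_by_medical: bool = False  # unreviewed intents never ship

ENTITY_TYPES = [
    EntityType("condition", ["pregnancy", "pregnant", "pregnant patient"]),
    EntityType("drug", ["warfarin"]),
]

INTENTS = [
    Intent(
        name="contraindication_query",
        example_utterances=[
            "can i give this to a pregnant patient",
            "pregnancy contraindications",
        ],
        required_entities=["drug", "condition"],
        reviewed_by_medical=True,
    ),
]
```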

The patterns we established then—source attribution, confidence scoring, graceful degradation—are what everyone’s scrambling to implement for GenAI now. Turns out the fundamentals don’t change much, just the underlying tech.
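
Concretely, those three patterns imply a response contract along these lines; the threshold and field names below are assumptions for illustration, not the production values:

```python
# Response contract: every answer carries sources and a confidence
# score; anything unsourced or below the floor degrades to a referral
# instead of a guess.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # assumed cutoff; tuned with medical reviewers

@dataclass
class Answer:
    text: str
    sources: list[str]   # citations into the medical information database
    confidence: float    # classifier / retrieval score in [0, 1]

def render(answer: Answer) -> str:
    """Graceful degradation: never return an unsourced or shaky answer."""
    if answer.confidence < CONFIDENCE_FLOOR or not answer.sources:
        return ("I can't answer that reliably. "
                "Please contact Medical Information directly.")
    cites = "; ".join(answer.sources)
    return f"{answer.text}\n\nSources: {cites}"

print(render(Answer("No dose adjustment is required.", ["Label §8.6"], 0.92)))
print(render(Answer("Probably fine.", [], 0.41)))
```

Swap the linear classifier for an LLM and this contract is, more or less, what current GenAI guardrail stacks are converging on.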