# Portfolio Case Studies
## Case Study 1: Automated Lead Qualification with Zapier
- **Client:** Early-stage fintech startup struggling to keep up with incoming leads from their Typeform signup.
- **Challenge:** Sales reps were spending 2–3 hours daily manually reviewing, categorizing, and routing each new lead, which delayed follow-ups and cost the team opportunities.
- **Solution Flow:**
1. **Trigger:** New Typeform submission
2. **Action:** “Webhooks by Zapier” sends the form data to the OpenAI API with a prompt to score lead quality (e.g., “On a scale of 1–10, how qualified is this prospect based on their answers?”)
3. **Filter:** Zapier filters out any lead scored below 5
4. **Action:** High-scoring leads automatically get:
- A new contact created in HubSpot CRM
- A Slack notification posted to the sales channel
- A row appended to a “Qualified Leads” Google Sheet
- **Tools Used:** Zapier, Typeform, Webhooks by Zapier, OpenAI API, HubSpot, Slack, Google Sheets
- **Outcome:**
- **80% reduction** in manual lead triage time
- **Immediate follow-up** on hot leads via Slack alerts
- **20% boost** in MQL-to-SQL conversion rate within the first month
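The scoring and filtering logic in steps 2–3 above can be sketched in plain Python. This is an illustrative sketch, not the actual Zap: the `build_prompt`, `parse_score`, and `is_qualified` helpers and the threshold constant are hypothetical names, and in the live workflow the prompt is sent to the OpenAI API by the Webhooks action rather than by local code.

```python
import re

# Step 3's filter: leads scored below this value are dropped.
QUALIFICATION_THRESHOLD = 5

def build_prompt(answers: dict) -> str:
    """Turn Typeform answers into the scoring prompt sent to the model."""
    lines = "\n".join(f"- {question}: {answer}" for question, answer in answers.items())
    return (
        "On a scale of 1-10, how qualified is this prospect based on their answers?\n"
        f"{lines}\n"
        "Reply with a single number."
    )

def parse_score(model_reply: str) -> int:
    """Extract the first integer from the model's reply; 0 if none is found."""
    match = re.search(r"\d+", model_reply)
    return int(match.group()) if match else 0

def is_qualified(model_reply: str) -> bool:
    """Mirror of the Zapier filter: only leads at or above the threshold continue."""
    return parse_score(model_reply) >= QUALIFICATION_THRESHOLD
```

Parsing a bare number out of the reply is what makes the filter step reliable: the model is asked for a single digit, but a defensive regex still handles replies like “Score: 7”.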
## Case Study 2: Context-Aware Knowledge Assistant with LangChain
- **Client:** Mid-sized software company needing an internal “smart help desk” to surface documentation snippets and keep context across follow-up questions.
- **Challenge:** Employees bounced between Confluence pages and chat threads, losing conversation context and frequently asking repeat questions.
- **Solution Architecture:**
1. **Ingestion:** Imported ~5,000 Confluence pages into a FAISS vector store
2. **Agent Chain:** Built a LangChain `ConversationalRetrievalChain` that:
- Retrieves the most relevant docs for the current query
- Feeds them, plus the conversation history from a Redis-backed memory buffer, into OpenAI’s GPT-4o model
- Returns a concise, context-aware response
3. **Interface:** Deployed as a lightweight Flask app with a React frontend and a Redis store for session memory
- **Tools Used:** LangChain, FAISS, Redis (for memory), Python (Flask), React, OpenAI API
- **Outcome:**
- **60% faster** resolution of internal support queries
- **0 repeat questions** within the same session thanks to memory persistence
- **95% employee satisfaction** rating in a post-launch survey