
AppPulse – Feedback Intelligence Prototype

Type: Private prototype · Concept exploration (non‑commercial)
My role: Product designer & prompt engineer – problem definition, workflows, and AI‑assisted implementation

AppPulse started as a way to solve my own problem while building HiLi Notes: I was collecting feedback in spreadsheets and it took weeks to turn that into clear product decisions. I used this project to explore how far I could go using AI tools to design and build a working prototype for feedback analysis. It is not a public SaaS and has no paying customers.

AppPulse Prototype Dashboard

The Problem

Product teams are drowning in feedback but can't act on it effectively. Feedback arrives through multiple channels, there's no clear prioritization, and critical insights get lost in volume.

I experienced this firsthand while building HiLi Notes: I relied on spreadsheets to track feedback, with a two-week lag before I had actionable insights. By the time I had prioritized and fixed issues, I'd missed the window to respond to frustrated users.

My Role

Product designer & prompt engineer

Timeline

Prototype exploration (Nov 2025 – Jan 2026)

Target Users (Hypothetical)

Product teams · SaaS companies · Startups

What AppPulse Explores

These are prototype capabilities and design goals — not a commercial product.

🤖

AI-Powered Analysis

Sentiment analysis, duplicate detection via semantic similarity, auto-generated replies, and theme extraction using Gemini AI.
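Duplicate detection via semantic similarity typically boils down to comparing embedding vectors. A minimal sketch of that comparison, assuming embeddings (e.g. from Gemini's embedding endpoint) have already been fetched; the function names and the 0.9 threshold are illustrative, not the prototype's exact values:

```typescript
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Flag a new feedback item as a likely duplicate when its embedding is
// close enough to any existing item's embedding (threshold is tunable).
function isLikelyDuplicate(
  candidate: number[],
  existing: number[][],
  threshold = 0.9,
): boolean {
  return existing.some((e) => cosineSimilarity(candidate, e) >= threshold);
}
```

The threshold is the interesting design knob: too low and distinct reports get merged, too high and rephrased duplicates slip through.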

🧩

No-Code Workflows

Visual 3-step workflow builder concept. The goal: trigger notifications, create tickets, and sync feedback—all without writing code.
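A trigger → condition → action workflow of this kind could be modelled as plain data plus two callbacks. This is a hypothetical sketch of the shape, not the prototype's actual types; `Feedback`, `Workflow`, and `runWorkflows` are names invented for illustration:

```typescript
type Feedback = { text: string; nps?: number; status: string };

// One workflow = trigger event + condition + action (the 3 steps).
type Workflow = {
  trigger: "feedback.created" | "feedback.updated";
  condition: (f: Feedback) => boolean;
  action: (f: Feedback) => string; // returns a description of what ran
};

// Run every workflow whose trigger and condition match the event.
function runWorkflows(
  event: "feedback.created" | "feedback.updated",
  feedback: Feedback,
  workflows: Workflow[],
): string[] {
  return workflows
    .filter((w) => w.trigger === event && w.condition(feedback))
    .map((w) => w.action(feedback));
}

// Example: alert the team when a low NPS score arrives.
const lowNpsAlert: Workflow = {
  trigger: "feedback.created",
  condition: (f) => (f.nps ?? 10) <= 3,
  action: (f) => `notify-slack: low NPS (${f.nps})`,
};
```

The visual builder would then just be a UI for composing these three fields without code.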

🗳️

Public Roadmap Voting

Kanban-style board where users can vote on features. Designed to close the feedback loop.

🔒

Privacy-First Design

Explored GDPR compliance patterns, SSO, 2FA, and audit logging as design considerations.

Prototype Screens

Internal Test Results

Accuracy numbers and latency measurements are from small internal tests on sample datasets. They were useful for learning, but they are not production guarantees and were never used in a commercial setting.

🎯

~89.5% AI Accuracy

Overall accuracy on a 100-sample internal test. Sentiment detection reached ~92%, bug detection ~92% — useful for learning prompt design patterns.

⚡

~520ms Dashboard Load

Measured during local development after adding indexes and caching. A learning exercise in performance optimisation, not a production benchmark.

🔌

Performance Experiments

Experimented with composite indexes, Prisma query optimisation, and Redis caching to understand how these techniques improve response times.

🔗

Integration Exploration

Studied the API structures and quirks of Slack, Jira, Zendesk, and Salesforce. Built adapter patterns to understand how multi-platform integration works.

Build & Learn Timeline

Week 1–2 · Early November

Foundation & Architecture

  • React + Node.js project setup with TypeScript
  • Database schema design (Prisma + PostgreSQL)
  • Design system creation
  • JWT authentication implementation
Week 3–4 · Mid-Late November

Analytics Dashboard

  • Sentiment visualisation prototype
  • Trend charts for time-series data
  • NPS score tracking concept
  • Time-range filtering
Week 5–6 · Early December

AI Prompt Engineering

  • Google Gemini API integration
  • Prompt template library with few-shot learning
  • Theme extraction & duplicate detection experiments
  • Validation layer for output quality
Week 7–8 · Mid-Late December

Workflow & Integration Exploration

  • Visual 3-step workflow builder
  • Trigger engine concept (keywords, NPS thresholds, status)
  • Integration adapter pattern design
  • Learned about Slack, Jira, Zendesk, Salesforce API quirks
Week 9–10 · January

Polish & Reflection

  • Performance experiments (indexes, caching)
  • Security patterns exploration (GDPR, audit logging)
  • Docker containerisation learning
  • Internal accuracy testing on sample datasets

Tech Stack Used

🎨 Frontend

React 19 · TypeScript · Vite · CSS Variables

🔧 Backend

Node.js · Express · Prisma ORM · PostgreSQL · Redis

🤖 AI Integration

Google Gemini · Sentiment Analysis · Theme Extraction

🔌 Integrations (Explored)

Slack API · Jira API · Zendesk API · Salesforce API

Key Technical Challenges & What I Learned

01

AI Consistency & Output Quality

Challenge

First prompt attempt achieved only ~65% accuracy on my test set. LLMs are non-deterministic, and feedback spans many domains with different analysis needs.

What I Did

Built a prompt template library with few-shot learning. Added a validation layer and spent ~40 hours on prompt engineering experiments.

✓ Reached ~89.5% on internal 100-sample test
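A minimal sketch of what a few-shot prompt template plus a validation layer can look like. The example labels, the JSON output contract, and the function names are assumptions for illustration, not the prototype's exact prompts:

```typescript
// Hypothetical few-shot examples baked into the template.
const FEW_SHOT_EXAMPLES = [
  { text: "The app crashes every time I export.", label: "negative" },
  { text: "Love the new dark mode!", label: "positive" },
];

// Assemble the prompt: instruction + examples + the item to classify.
function buildSentimentPrompt(feedback: string): string {
  const examples = FEW_SHOT_EXAMPLES
    .map((e) => `Feedback: "${e.text}"\nSentiment: ${e.label}`)
    .join("\n\n");
  return [
    "Classify the sentiment of the feedback as positive, neutral, or negative.",
    'Respond with JSON only: {"sentiment": "..."}',
    "",
    examples,
    "",
    `Feedback: "${feedback}"`,
    "Sentiment:",
  ].join("\n");
}

// Validation layer: reject anything that is not the expected JSON shape,
// so malformed LLM output never reaches downstream storage.
function validateSentiment(raw: string): "positive" | "neutral" | "negative" | null {
  try {
    const parsed = JSON.parse(raw);
    const allowed = ["positive", "neutral", "negative"];
    return allowed.includes(parsed.sentiment) ? parsed.sentiment : null;
  } catch {
    return null;
  }
}
```

Separating prompt construction from output validation is what makes a non-deterministic model usable: failed validations can be retried or routed to human review instead of silently corrupting data.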
02

Database Query Performance

Challenge

Dashboard initially took 5–10 seconds to load in development. N+1 queries and slow aggregations were the culprits.

What I Did

Added composite indexes, optimised Prisma queries, implemented a Redis caching layer, and pre-computed dashboard metrics — all as a learning exercise in performance engineering.

✓ Reduced to ~520ms in local dev (a useful learning exercise)
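The caching side of this can be sketched as a cache-aside pattern. The prototype used Redis; a `Map` stands in here so the example is self-contained, and `loadMetrics` is a hypothetical stand-in for the pre-computed Prisma aggregations:

```typescript
type Metrics = { totalFeedback: number; avgNps: number };

// In-memory cache with per-entry expiry (Redis TTL equivalent).
const cache = new Map<string, { value: Metrics; expiresAt: number }>();

async function getDashboardMetrics(
  projectId: string,
  loadMetrics: (id: string) => Promise<Metrics>,
  ttlMs = 60_000,
): Promise<Metrics> {
  const key = `metrics:${projectId}`;
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = await loadMetrics(projectId); // expensive aggregation
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

The trade-off is staleness: a 60-second TTL means dashboard metrics can lag writes by up to a minute, which is usually acceptable for analytics views.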
03

Third-Party Integration Complexity

Challenge

Each platform has different API structures, rate limits, and error handling. Slack, Jira, Zendesk, and Salesforce all have unique quirks I had to learn about.

What I Did

Built a generic IntegrationAdapter pattern. Implemented exponential backoff, webhook verification, and rate limit detection as design explorations.

✓ Learned how multi-platform integration architecture works
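The adapter pattern plus exponential backoff described above can be sketched like this. The interface and function names are illustrative, not the prototype's exact API:

```typescript
// Generic adapter: Slack, Jira, Zendesk, Salesforce each implement this,
// hiding their platform-specific payloads and error handling.
interface IntegrationAdapter {
  name: string;
  send(payload: unknown): Promise<void>;
}

// Retry with exponential backoff: wait baseMs * 2^attempt between tries,
// which gives rate-limited APIs time to recover.
async function sendWithBackoff(
  adapter: IntegrationAdapter,
  payload: unknown,
  maxRetries = 3,
  baseMs = 100,
): Promise<void> {
  for (let attempt = 0; ; attempt++) {
    try {
      await adapter.send(payload);
      return;
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      const delay = baseMs * 2 ** attempt;
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}
```

A production version would also honor `Retry-After` headers and add jitter, but the core loop is the same.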

What I Learned

1

AI Prompt Engineering is a Real Skill

The API is 10% of the work. The other 90% is engineering reliable systems—prompt design, testing, monitoring, error handling.

2

Database Design is Foundational

Spent 3 hours on the schema in Week 2 and had to refactor it in Week 8. Two more hours of up-front planning would have saved weeks of rework.

3

Deploy Early & Often

Didn't try containerisation until late in the project and immediately discovered configuration issues. Should have experimented with Docker much earlier.

If I Were to Take This Further

Hypothetical next steps if this prototype were developed into a real product.

Step 1

Email Integration

Connect Gmail/Outlook to auto-pull emails containing feedback, bug reports, and feature requests.

Step 2

Team Collaboration

Comments, @mentions, and voting on feedback items — keeping discussions in one place.

Step 3

Multi-Model AI

Compare outputs from multiple LLMs (Gemini, Claude, GPT) to improve accuracy and reduce hallucinations.
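One simple way to combine multiple model outputs is a majority vote, escalating to human review when the models disagree. A hypothetical sketch of that ensemble step (the function name is invented for illustration):

```typescript
// Return the label a strict majority of models agreed on, or null to
// flag the item for human review when there is no clear winner.
function majorityLabel(labels: string[]): string | null {
  const counts = new Map<string, number>();
  for (const l of labels) counts.set(l, (counts.get(l) ?? 0) + 1);
  let best: string | null = null;
  let bestCount = 0;
  for (const [label, count] of counts) {
    if (count > bestCount) {
      best = label;
      bestCount = count;
    }
  }
  return bestCount > labels.length / 2 ? best : null;
}
```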

Step 4

Custom Taxonomy

Let teams define their own feedback categories and sentiment rules tailored to their domain.


Want to Discuss This Project?

Let's talk about AI prompt engineering, feedback systems, or product design thinking.