SpeakSense
A progressive web application designed to provide real-time feedback on your speech. Whether you're preparing for a presentation, practicing for an interview, or refining a speech, SpeakSense helps you stay on track by monitoring your speech patterns and providing actionable insights.

the challenge.
Help speakers cut filler words and improve delivery without leaving their practice flow. We needed real-time alerts for banned words, clear post-practice stats, and AI-powered insights—delivered in a lightweight, browser-based experience.
the results.
We shipped Banned Words Alert so users can set words to avoid and get a phone vibration whenever they say one. Post-practice Statistics show total practices, total banned-word counts, and common sentence starters so users can track habits over time. Insights (via the OpenAI API) analyze the transcript to surface strengths, improvement areas, speaking tips, tone analysis, and an overall confidence score.
the process.
Frame the MVP. We defined three pillars—real-time alerting, post-practice stats, and AI insights—and scoped success to delivering each reliably in a single session.
Design the UX. I created Figma mockups and component states, then mapped flows for “not listening” and “listening” modes to keep the practice loop simple and focused.
Build the front end. I implemented the UI with Next.js, React, TypeScript, and NextUI, keeping components modular for quick iteration.
Add speech + alerts. Using the Web Speech API, we transcribed practice sessions, detected banned words, and triggered a vibration cue when any were spoken.
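In broad strokes, the listening loop looked something like the sketch below. This is simplified TypeScript rather than the shipped code: the bannedWords list stands in for the user's settings, and it assumes a browser that exposes webkitSpeechRecognition and navigator.vibrate, neither of which is available on every device.

```typescript
// Simplified sketch of the real-time detection loop. bannedWords is a
// hypothetical stand-in for the user's settings; support for
// webkitSpeechRecognition and navigator.vibrate varies (notably on iOS).
const bannedWords = ["um", "like", "basically"];

const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognition = new SpeechRecognitionImpl();

recognition.continuous = true;     // keep listening for the whole practice run
recognition.interimResults = true; // react to words as they are recognized

recognition.onresult = (event: any) => {
  // Only scan results added since the previous event to avoid re-alerting.
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const chunk: string = event.results[i][0].transcript.toLowerCase();
    if (bannedWords.some((word) => chunk.includes(word)) && "vibrate" in navigator) {
      navigator.vibrate(200); // the short buzz that nudges the speaker
    }
  }
};

recognition.start();
```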
Store local data. We used Dexie.js to log per-session metrics—total practices, banned-word counts, and favorite sentence starters—for fast, private retrieval.
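The local store is small enough to sketch in a few lines. The table and field names below are illustrative rather than the exact schema, but they show the shape of the approach: one Dexie table of session records, queried in the browser for the statistics view.

```typescript
import Dexie, { type Table } from "dexie";

// Illustrative session record; the real schema may track more fields.
interface PracticeSession {
  id?: number;
  date: Date;
  bannedWordCount: number;
  sentenceStarters: string[];
}

class SpeakSenseDB extends Dexie {
  sessions!: Table<PracticeSession, number>;

  constructor() {
    super("speaksense");
    // Index only what the statistics view queries; everything else is stored as-is.
    this.version(1).stores({ sessions: "++id, date" });
  }
}

export const db = new SpeakSenseDB();

// After a practice run: log the session, then aggregate for the stats screen.
export async function logSession(session: PracticeSession) {
  await db.sessions.add(session);
  return db.sessions.count(); // total practices to date
}
```

Because everything lives in IndexedDB, the stats stay on the user's device and load instantly between sessions.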
Generate insights. With the OpenAI API, we analyzed transcripts to produce strengths, improvement areas, speaking tips, tone analysis, and a confidence score, completing the feedback loop.
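The insights call itself is a thin wrapper around the API. The sketch below shows roughly how that server-side call can look; the model name, prompt, and response shape are assumptions, not the exact ones used in the app.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // expects OPENAI_API_KEY in the server environment

// Hypothetical server-side helper; the app's exact prompt and schema may differ.
export async function analyzeTranscript(transcript: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "You are a speech coach. Given a practice transcript, return JSON with " +
          "strengths, improvementAreas, speakingTips, toneAnalysis, and a confidenceScore from 0 to 100.",
      },
      { role: "user", content: transcript },
    ],
  });

  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```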
the conclusion.
SpeakSense turns the goal of improving delivery without breaking practice flow into a simple, browser-based loop. With real-time banned-word vibration alerts, clear session statistics, and AI-generated insights from transcripts, speakers get immediate nudges during practice and concrete next steps after each run. The MVP delivers the core loop end-to-end and is ready for deeper user testing and iteration (e.g., tuning detection accuracy, refining coaching prompts, and expanding mobile behaviors).