Record, transcribe, identify speakers, and summarize meetings—all locally on your machine. No cloud. No subscriptions. Your words stay yours.
$ ./hushnote full --diarize --speakers 3
[INFO] Recording started... (Ctrl+C to stop)
# Meeting in progress...
[INFO] Recording stopped (45m 32s)
[INFO] Transcribing with whisper base model...
[INFO] Running speaker diarization...
[INFO] Identifying speakers...
[DONE] Transcription complete!
Sarah: "Let's discuss the Q4 roadmap..."
John: "I think we should prioritize..."
Mike: "The database migration is ready..."
[INFO] Generating summary with llama3.1:8b...
[DONE] Summary saved to meeting_summary.md
Complete meeting intelligence that respects your privacy
All processing happens locally using Whisper and Ollama. No cloud uploads, no API calls, no telemetry. Works without internet after setup.
Automatically identify who spoke when. Interactive labeling lets you assign real names to speakers for clean, attributed transcripts.
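HushNote's diarization backend isn't detailed on this page; as a rough sketch of how a fully local pipeline is commonly built, pyannote.audio can label speaker turns like this (the model name, token placeholder, and file name are illustrative assumptions, not taken from HushNote):

```python
from pyannote.audio import Pipeline

# Assumption: a pyannote.audio diarization pipeline; the pretrained model is
# gated on Hugging Face, so a (free) access token placeholder is shown here.
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="YOUR_HF_TOKEN",
)

# num_speakers mirrors the --speakers 3 flag from the demo above.
diarization = pipeline("meeting.wav", num_speakers=3)
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:7.1f}s -> {turn.end:7.1f}s  {speaker}")
```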
Generate meeting notes, action items, and decisions using local LLMs via Ollama. Choose from llama3, mistral, qwen, or any model you prefer.
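The exact prompt and wiring are HushNote's own, but a local Ollama model is reachable over plain HTTP on its default endpoint; a minimal sketch, with illustrative file names and prompt:

```python
import json
import urllib.request

# Illustrative file name; the prompt is a stand-in, not HushNote's actual prompt.
transcript = open("meeting_transcript.txt").read()

payload = {
    "model": "llama3.1:8b",
    "prompt": "Summarize this meeting. List decisions and action items.\n\n" + transcript,
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    summary = json.loads(resp.read())["response"]

with open("meeting_summary.md", "w") as f:
    f.write(summary)
```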
Capture system audio and microphone using PulseAudio or PipeWire. Supports WAV and MP3 with automatic compression.
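HushNote ships its own recorder, but as an illustration of what local capture boils down to, ffmpeg's PulseAudio input (which pipewire-pulse also exposes) is enough; a hedged sketch:

```python
import subprocess

# "default" records the default input source; pick a *.monitor source to
# capture system audio instead of (or in addition to) the microphone.
subprocess.run([
    "ffmpeg", "-f", "pulse", "-i", "default",
    "-ac", "1", "-ar", "16000",  # mono, 16 kHz is plenty for speech models
    "meeting.wav",
], check=True)
```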
Supports AMD ROCm and NVIDIA CUDA for fast transcription. Process an hour of audio in just a few minutes.
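HushNote's transcription internals aren't reproduced here; with the reference openai-whisper package, GPU use comes down to choosing the device when the model loads, roughly like this sketch:

```python
import torch
import whisper

# ROCm builds of PyTorch also report their devices as "cuda", so this one
# check covers AMD and NVIDIA alike, and falls back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("base", device=device)

result = model.transcribe("meeting.wav")
print(result["text"])
```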
Export transcripts as TXT, JSON, SRT, or VTT. Summaries in Markdown or JSON for easy integration.
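As an illustration of the subtitle output (not HushNote's actual exporter), SRT is simple enough to write directly from Whisper's segment list:

```python
def srt_timestamp(seconds: float) -> str:
    """Convert seconds to the HH:MM:SS,mmm form SRT expects."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(segments, path="meeting.srt"):
    # `segments` is the list Whisper returns under result["segments"].
    with open(path, "w", encoding="utf-8") as f:
        for i, seg in enumerate(segments, start=1):
            f.write(f"{i}\n")
            f.write(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n")
            f.write(seg["text"].strip() + "\n\n")
```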
From recording to actionable meeting notes in one command
Capture audio from your meeting
Convert speech to text with Whisper
Identify who said what
Generate notes and action items
Balance speed and accuracy for your hardware
| Whisper Model | Size | Speed | Accuracy | Best For |
|---|---|---|---|---|
| tiny | 75 MB | Fastest | Lowest | Quick drafts, testing |
| base (default) | 150 MB | Fast | Good | Most users, balanced |
| small | 500 MB | Moderate | Better | Higher accuracy needs |
| medium | 1.5 GB | Slow | High | Professional use |
| large-v3 | 3 GB | Slowest | Highest | Maximum accuracy |
Models download automatically on first use. GPU acceleration recommended for medium and large models.
Get up and running in minutes
# Arch Linux: install system dependencies
yay -S ffmpeg pulseaudio-utils python ollama
# Or for PipeWire
yay -S ffmpeg pipewire-pulse python ollama
# Debian/Ubuntu: install system dependencies
sudo apt install ffmpeg pulseaudio-utils python3 python3-venv
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Fedora: install system dependencies
sudo dnf install ffmpeg pulseaudio-utils python3
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Clone the repository
git clone https://github.com/peteonrails/hushnote.git
cd hushnote
# Create virtual environment and install dependencies
python3 -m venv venv
./venv/bin/pip install -r requirements.txt
# Pull an Ollama model for summarization
ollama pull llama3.1:8b
# Test the installation
./hushnote --help
# Record, transcribe, and summarize a meeting
./hushnote full --diarize --speakers 3
HushNote adapts to your workflow
Automatically generate summaries, action items, and decisions from team meetings.
Transcribe interviews with speaker attribution for easy reference and analysis.
Create searchable transcripts from lectures and educational content for study.
Generate captions and subtitles in SRT/VTT format for video content.
Keep your meetings private. Start transcribing locally today.