AI Briefing App.

The app I built to keep up with AI without doomscrolling.

Live · PWA · Next.js 16 · Supabase · Oracle Cloud · Whisper · Google TTS · Claude Haiku · Claude Sonnet

Real product on my iPhone home screen. Scrapes the AI accounts I trust on X and Instagram, has Claude classify what's signal vs. noise, and reads the signal back to me aloud while I work.

Demo · Try it

See it in action.

How I keep up with AI

The X timeline lies.
So I built my own filter.
Narrated, on my home screen.

The phone on the right is the actual app, running live in the iframe. Five briefings I curated from this week's AI news, narrated by a British TTS voice. Tap the instructions to start the audio. Same code as my real app, trimmed to a showcase set.

Built for one user. Me.

An hourly worker on Oracle Cloud scrapes the AI accounts I trust on X and Instagram. Anthropic, model labs, builders I respect. No algorithmic timeline.

Claude does the editorial

Haiku classifies every raw post as signal, news, learning, interests, misc, or noise. Only noise is dropped; borderline content lands in a softer bucket instead of the main feed.
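Roughly what that call can look like with the Anthropic Python SDK. The prompt wording, model alias, post shape, and fallback handling below are illustrative, not the worker's actual prompt:

```python
# Sketch of the classification step, assuming the Anthropic Python SDK.
# Prompt wording, model alias, and post handling are illustrative.
import anthropic

BUCKETS = ["signal", "news", "learning", "interests", "misc", "noise"]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def classify(post_text: str) -> str:
    """Ask Haiku for exactly one bucket name; anything else falls back to misc."""
    response = client.messages.create(
        model="claude-3-5-haiku-latest",
        max_tokens=10,
        system=(
            "Classify the post into exactly one bucket: "
            + ", ".join(BUCKETS)
            + ". Reply with the bucket name only."
        ),
        messages=[{"role": "user", "content": post_text}],
    )
    label = response.content[0].text.strip().lower()
    return label if label in BUCKETS else "misc"  # borderline -> softer bucket

posts = ["Anthropic ships a new batch API...", "check out my giveaway!!"]
kept = [(p, b) for p in posts if (b := classify(p)) != "noise"]  # only noise drops
```

Keeping max_tokens tiny and demanding a bare bucket name is what keeps an hourly classifier run cheap on Haiku.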

Narrated with karaoke captions

Google Cloud TTS with word-level timestamps. The captions highlight in sync as the audio plays. Hands-free AI intake while I work.
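The word timings come from the v1beta1 surface of the Google Cloud TTS client, which returns a timepoint for every SSML <mark/>. A minimal sketch, with the voice name and naive word splitting as stand-ins:

```python
# Sketch of TTS with word-level timepoints via google-cloud-texttospeech's
# v1beta1 API (a named <mark/> before each word). Voice name is an assumption.
from google.cloud import texttospeech_v1beta1 as tts

def narrate(text: str) -> tuple[bytes, list[tuple[str, float]]]:
    client = tts.TextToSpeechClient()
    words = text.split()
    # A real worker would escape &, <, > before building the SSML string.
    ssml = "<speak>" + " ".join(
        f'<mark name="w{i}"/>{w}' for i, w in enumerate(words)
    ) + "</speak>"
    response = client.synthesize_speech(
        request=tts.SynthesizeSpeechRequest(
            input=tts.SynthesisInput(ssml=ssml),
            voice=tts.VoiceSelectionParams(
                language_code="en-GB", name="en-GB-Neural2-B"  # a British voice
            ),
            audio_config=tts.AudioConfig(audio_encoding=tts.AudioEncoding.MP3),
            enable_time_pointing=[
                tts.SynthesizeSpeechRequest.TimepointType.SSML_MARK
            ],
        )
    )
    # Each timepoint pairs a mark name with seconds into the MP3.
    times = [(words[int(tp.mark_name[1:])], tp.time_seconds)
             for tp in response.timepoints]
    return response.audio_content, times
```

Those (word, seconds) pairs are what drive the karaoke highlight on the PWA side.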

Feedback loops back into the filter

Save, good, and noise buttons train the next classifier run. Go-deeper opens a Sonnet chat scoped to the story.
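A sketch of how that loop can close, assuming a hypothetical Supabase feedback table with post_text, label, and created_at columns: recent votes become few-shot examples prepended to the classifier's system prompt.

```python
# Sketch of the feedback loop with supabase-py. Table and column names are
# hypothetical; the idea is folding recent votes into the classifier prompt.
import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

def feedback_examples(limit: int = 20) -> str:
    rows = (
        supabase.table("feedback")
        .select("post_text,label")
        .order("created_at", desc=True)
        .limit(limit)
        .execute()
        .data
    )
    # "save"/"good" votes reinforce signal; "noise" votes reinforce the drop bucket.
    lines = [
        f"Post: {r['post_text'][:200]}\nCorrect bucket: "
        + ("signal" if r["label"] in ("save", "good") else "noise")
        for r in rows
    ]
    return "Recent reader feedback:\n" + "\n\n".join(lines)

# Prepended to the system prompt before the next hourly classifier run.
```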

Same stack I use for client work

Next.js 16, Tailwind 4, and Supabase on the PWA. Node and Python (twikit, Whisper) on the Oracle worker. All free-tier infra.

Briefing app

Run the app locally to see the demo:

cd ai-briefing/app && npm run dev

Live app embedded · iPhone 16 frame · tap to interact

I · About

I read X for AI, but the timeline is mostly noise. I wanted a feed that only surfaces what's actually worth knowing: model releases, real research, build patterns from people I respect. And reads it to me hands-free. The PWA installs to my iPhone home screen, pulls from a worker that runs every hour, and plays narrated MP3s with word-level karaoke captions. The five-bucket system (signal · news · learning · interests · misc) lets borderline content survive without polluting the main feed. Built on the same primitives I use for client work: Next.js, Tailwind, Supabase, Claude.

II · How it works

The pipeline.

  1. Oracle Cloud ARM VM runs an hourly worker that scrapes 8 X accounts and 5 IG accounts I hand-pick (twikit for X, Playwright for IG); a scraping sketch follows this list
  2. Local Whisper transcribes video posts; Claude Vision OCRs images so nothing in a screenshot or clip is missed (see the second sketch below)
  3. Claude Haiku classifies every raw post into one of signal, news, learning, interests, misc, or noise. Only noise gets dropped
  4. Google Cloud TTS renders an MP3 with word-level timestamps for the karaoke captions you see in the demo
  5. The PWA reads from Supabase (anon key) and plays audio on tap, autoplay-safe across iOS Safari
  6. Save, good, noise, and go-deeper buttons feed back into the classifier prompt so the filter sharpens over time
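For the X half of step 1, roughly what the worker's scrape looks like with twikit's async client; the handles, credentials, and Tweet fields here are placeholders rather than my real config:

```python
# Sketch of the hourly X scrape with twikit (unofficial client that logs in
# with normal account credentials). Handles and credentials are placeholders.
import asyncio
from twikit import Client

HANDLES = ["AnthropicAI", "karpathy"]  # stand-ins for the hand-picked accounts

async def scrape() -> list[dict]:
    client = Client("en-US")
    await client.login(auth_info_1="user", auth_info_2="mail@example.com",
                       password="secret")
    posts = []
    for handle in HANDLES:
        user = await client.get_user_by_screen_name(handle)
        for tweet in await user.get_tweets("Tweets", count=20):
            posts.append({"author": handle, "id": tweet.id, "text": tweet.text})
    return posts

raw_posts = asyncio.run(scrape())  # next stop: Whisper/OCR, then the classifier
```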
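Step 2's transcription is a short call into openai-whisper; the model size and file path are assumptions:

```python
# Sketch of the video transcription step with openai-whisper. Model size and
# file path are assumptions, not the worker's actual settings.
import whisper

model = whisper.load_model("base")  # loaded once per worker run

def transcribe(video_path: str) -> str:
    # Whisper extracts the audio track itself via ffmpeg.
    result = model.transcribe(video_path)
    return result["text"].strip()

print(transcribe("downloads/post_video.mp4"))
```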

III · Sample

Why this is on my résumé.

Most AI engineering candidates can name the latest models. I built a daily AI-intake habit on top of them: scraper, classifier, narrator, feedback loop, all running on infra I pay nothing for. The honest signal here isn't the stack. It's that I dogfood my own AI tooling every morning, the loop has been running on Oracle Cloud for weeks, and I'm shipping refinements based on what I actually wish it did differently.