From building a voice AI widget to mapping the entire voice AI ecosystem (Introducing echostack)



This content originally appeared on DEV Community and was authored by Ayoola Solomon

Hey everyone,

I’m Solomon — the creator of GetEchoSpace, a voice AI widget that lets any website host real-time audio conversations for support, live shopping, or community.

While building it, I constantly had to combine tools for ASR, text-to-speech, and LLMs — juggling APIs from different vendors and testing pipelines just to get a working flow.

At some point, it hit me:

Everyone building in voice AI is reinventing the same workflows from scratch.

The Problem

There are incredible voice AI tools out there — from OpenAI’s speech APIs to ElevenLabs, Whisper, Speechmatics, and more.
But there’s no central place to discover, compare, and see how they connect in real-world setups.

Builders like me spend hours figuring out:

  • which ASR integrates best with Twilio,
  • how to pass data between TTS and LLMs,
  • and how to deploy these flows in production.
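However the vendors differ, the wiring is always the same shape: audio in, transcript, model reply, audio out. Here's a minimal sketch of that pipeline in TypeScript — the interfaces and mock implementations are hypothetical stand-ins, not any vendor's real SDK:

```typescript
// Hypothetical interfaces for the three stages; real vendors
// (Whisper, OpenAI, ElevenLabs, etc.) each expose different SDKs.
interface ASR { transcribe(audio: Uint8Array): Promise<string>; }
interface LLM { respond(prompt: string): Promise<string>; }
interface TTS { synthesize(text: string): Promise<Uint8Array>; }

// The flow itself never changes: audio -> text -> reply -> audio.
async function runVoicePipeline(
  asr: ASR,
  llm: LLM,
  tts: TTS,
  audioIn: Uint8Array
): Promise<Uint8Array> {
  const transcript = await asr.transcribe(audioIn);
  const reply = await llm.respond(transcript);
  return tts.synthesize(reply);
}

// Mocks so the sketch runs without any vendor accounts.
const mockAsr: ASR = { transcribe: async () => "hello" };
const mockLlm: LLM = { respond: async (p) => `echo: ${p}` };
const mockTts: TTS = { synthesize: async (t) => new TextEncoder().encode(t) };
```

Swapping a vendor then means swapping one interface implementation, not rewriting the flow — which is exactly the repetition every voice AI builder runs into.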

Enter echostack

So I started building echostack — a public directory of voice AI tools and ready-made “stacks.”

Think of it as Zapier templates or Stack Overflow for voice AI workflows.
Each stack shows how to combine tools (e.g., Retell + OpenAI + Twilio + GCP ASR) to achieve real outcomes — like multilingual dubbing, customer triage bots, or AI-powered voice assistants.
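To make the idea concrete, a "stack" entry could be modeled as a small typed record — this is a hypothetical sketch, not echostack's actual schema:

```typescript
// Hypothetical data model for a stack: a named outcome plus the
// tools that fill each role in the pipeline.
interface VoiceStack {
  name: string;
  outcome: string; // e.g. "multilingual dubbing"
  tools: { role: "ASR" | "LLM" | "TTS" | "telephony"; vendor: string }[];
}

// Example entry using the tool combination mentioned above.
const dubbingStack: VoiceStack = {
  name: "Multilingual dubbing",
  outcome: "Translate and re-voice audio for another language",
  tools: [
    { role: "ASR", vendor: "GCP ASR" },
    { role: "LLM", vendor: "OpenAI" },
    { role: "TTS", vendor: "Retell" },
    { role: "telephony", vendor: "Twilio" },
  ],
};
```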

The goal:

help developers and AI builders spend less time wiring tools, and more time shipping value.

Tech Behind the MVP

The MVP is built with:

  • Next.js 15 (App Router)
  • TypeScript + Tailwind
  • Supabase (for data)
  • Zapier & n8n export support planned for v0.2

What’s Live Now

[Screenshot: stack detail page]

You can explore:

  • Featured voice AI tools
  • Early “stacks” (like multilingual dubbing or real-time triage bots)
  • Newsletter signup for updates as new stacks drop

https://getechostack.com

I’d love your feedback

If you’re building with voice AI or integrating ASR/TTS/LLM tools, I’d love to hear:

  • What workflows or “stacks” you’d want to see next
  • Which tools are must-haves for you
  • Whether you prefer no-code or code-level examples

What’s Next?

  • Expand to more tools and stacks
  • Add semantic search and tagging
  • Support Zapier/n8n exports
  • Launch the curated Voice-AI Stacks Newsletter

If that sounds interesting, you can check it out or share feedback directly on echostack.

