Add Voice Search to Your Nuxt 3 App in 6 Easy Steps
In a world dominated by "Hey Siri" and "Okay Google," integrating voice search into your web application isn’t just an option—it’s a necessity. Imagine enabling your users to interact with your Nuxt 3 app hands-free, providing a seamless and futuristic experience. By leveraging the Web Speech API, we’ll transform your app into a voice-powered assistant that listens, understands, and reacts.
Setup
First, let’s prepare your Nuxt 3 project. If you don’t already have one, it’s time to get started. Fire up your terminal and create a fresh Nuxt 3 app:
npx nuxi init voice-search-app
cd voice-search-app
npm install
npm run dev
This will spin up the Nuxt development server. Open http://localhost:3000 in your browser, and you should see the Nuxt welcome page. With our environment ready, we’re set to introduce some voice-powered magic.
Building the Voice Search Component
To begin, let’s create a dedicated component to handle voice recognition. Inside the components directory, create a file called VoiceSearch.vue:
touch components/VoiceSearch.vue
This component will manage everything: starting and stopping the microphone, processing voice input, and displaying the transcript. Inside the file, define a reactive setup using Vue’s Composition API:
<script setup>
import { ref, onMounted, onUnmounted } from 'vue';

const transcript = ref('');
const isListening = ref(false);
const isSupported = ref(false);

let recognition;

const startListening = () => {
  if (!recognition) {
    console.error('SpeechRecognition instance is unavailable.');
    return;
  }
  if (isListening.value) return; // calling start() twice throws an error
  isListening.value = true;
  recognition.start();
};

const stopListening = () => {
  if (!recognition) {
    console.error('SpeechRecognition instance is unavailable.');
    return;
  }
  isListening.value = false;
  recognition.stop();
};

onMounted(() => {
  const SpeechRecognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;

  if (!SpeechRecognition) {
    console.warn('SpeechRecognition is not supported in this browser.');
    isSupported.value = false;
    return;
  }

  isSupported.value = true;
  recognition = new SpeechRecognition();
  recognition.continuous = true; // keep listening until stopped
  recognition.interimResults = false; // only report finalized phrases
  recognition.lang = 'en-US';

  recognition.onresult = (event) => {
    // Take the most recent result's top alternative.
    const result = event.results[event.results.length - 1][0].transcript;
    transcript.value = result;
  };

  recognition.onend = () => {
    // Keep the UI state in sync if recognition stops on its own.
    isListening.value = false;
  };

  recognition.onerror = (event) => {
    console.error('Recognition error:', event.error);
  };
});

onUnmounted(() => {
  if (recognition) {
    recognition.abort();
  }
});
</script>
This setup initializes a SpeechRecognition instance, listens for results, and handles errors gracefully. The reactive variables transcript and isListening keep track of the user’s input and the recognizer’s state.
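The trickiest line in the handler is the indexing into event.results. A quick sketch with a mocked result list (the real SpeechRecognitionResultList only exists in the browser; the mock below just mimics its shape, and the names are illustrative) shows what that line does:

```javascript
// Mirrors the indexing used in the onresult handler:
// `results` mimics a SpeechRecognitionResultList — an array of results,
// where each result is an array of alternatives carrying a `transcript`.
function latestTranscript(results) {
  if (!results || results.length === 0) return '';
  const lastResult = results[results.length - 1];
  // Alternative 0 is the recognizer's most confident guess.
  return lastResult[0].transcript;
}

// Mocked results, as they might accumulate in continuous mode:
const mockResults = [
  [{ transcript: 'hello', confidence: 0.92 }],
  [{ transcript: 'hello world', confidence: 0.95 }],
];

console.log(latestTranscript(mockResults)); // "hello world"
```

In continuous mode each finalized phrase appends a new result, so always reading the last entry gives you the most recent utterance rather than the whole session.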
Designing the User Interface
With the logic in place, it’s time to craft the interface. The component template includes buttons to start and stop listening, as well as a transcript display:
<template>
  <div class="voice-search">
    <button
      @click="startListening"
      :disabled="isListening"
      class="start-button">
      🎙️ Start Voice Search
    </button>
    <button
      @click="stopListening"
      :disabled="!isListening"
      class="stop-button">
      🛑 Stop
    </button>
    <p v-if="isSupported">
      <strong>Transcript:</strong> {{ transcript || 'Say something...' }}
    </p>
    <p v-else>Your browser does not support voice search.</p>
  </div>
</template>
Add some simple styles to ensure a clean and user-friendly layout:
<style scoped>
.voice-search {
  text-align: center;
  padding: 20px;
  font-family: Arial, sans-serif;
}

button {
  padding: 10px 20px;
  margin: 5px;
  border: none;
  border-radius: 5px;
  color: white;
  font-size: 16px;
  cursor: pointer;
}

.start-button {
  background-color: #4caf50;
}

.start-button:disabled {
  background-color: #ccc;
  cursor: not-allowed;
}

.stop-button {
  background-color: #f44336;
}

.stop-button:disabled {
  background-color: #ccc;
  cursor: not-allowed;
}

p {
  margin-top: 20px;
  font-size: 18px;
  color: #333;
}
</style>
Bringing It All Together in Nuxt
To use the voice search component, import it into your app’s main page. Open pages/index.vue and replace its contents with:
<template>
  <div class="app">
    <h1>Nuxt 3 Voice Search</h1>
    <VoiceSearch />
  </div>
</template>

<script setup>
// Optional: Nuxt 3 auto-imports components from the components/ directory,
// so this explicit import is not strictly required.
import VoiceSearch from '~/components/VoiceSearch.vue';
</script>

<style scoped>
.app {
  display: grid;
  place-items: center;
  height: 100vh;
  text-align: center;
}
</style>
Start your app with npm run dev, and visit http://localhost:3000 to see the magic unfold. Click "Start Voice Search," grant microphone access when your browser prompts for it, and watch your words appear on the screen in real time. Note that speech recognition currently works best in Chromium-based browsers such as Chrome and Edge.
Enhancing the Experience
Voice search is already impressive, but you can make it even better:
Handle Fallbacks for Unsupported Browsers: Ensure users can still interact with the app even if their browser doesn’t support the Web Speech API:
<p v-else>Your browser does not support voice search. Please type your query manually.</p>
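If you want the feature detection itself to be reusable (and testable outside the browser), one option is to wrap the constructor lookup in a small helper that receives the global object as an argument. This is a sketch, not part of the original component:

```javascript
// Returns the SpeechRecognition constructor if the environment provides one
// (standard or webkit-prefixed), or null otherwise. Accepting the global
// object as a parameter keeps the helper testable outside the browser.
function getSpeechRecognition(globalObj) {
  return globalObj.SpeechRecognition || globalObj.webkitSpeechRecognition || null;
}

// In the component you would call getSpeechRecognition(window);
// here we mock both the supported and unsupported cases:
console.log(getSpeechRecognition({ webkitSpeechRecognition: class {} }) !== null); // true
console.log(getSpeechRecognition({}) === null); // true
```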
Link the Transcript to a Search: Add a button to perform a search based on the transcript:
<button @click="handleSearch" class="search-button">🔍 Search</button>
In the script setup, define the search function:
const handleSearch = () => {
  console.log('Searching for:', transcript.value);
};
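The console.log above is only a placeholder. As one possible next step, you could run the transcript through a simple client-side filter; the item list and field names below are hypothetical:

```javascript
// Hypothetical product list; in a real app this might come from an API.
const items = [
  { name: 'Wireless Headphones' },
  { name: 'Mechanical Keyboard' },
  { name: 'USB Microphone' },
];

// Case-insensitive substring match on the spoken query.
function searchItems(list, query) {
  const needle = query.trim().toLowerCase();
  if (!needle) return [];
  return list.filter((item) => item.name.toLowerCase().includes(needle));
}

console.log(searchItems(items, 'microphone')); // [{ name: 'USB Microphone' }]
```

Alternatively, in a Nuxt 3 app you could route to a results page with something like navigateTo(`/search?q=${encodeURIComponent(transcript.value)}`), using Nuxt's auto-imported navigateTo helper.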
With just a few lines of code, you’ve transformed your Nuxt 3 app into a cutting-edge tool that listens to users’ voices. Voice search isn’t just a trendy feature—it’s a testament to the power of modern web APIs and their ability to make technology more accessible and interactive.
By embracing tools like the Web Speech API, you’re not just building apps; you’re shaping the future of user interactions. So go ahead, deploy your app, and let your users experience the magic of voice search.
For more tips on web development, check out DailySandbox and sign up for our free newsletter to stay ahead of the curve!