"text-1x text-gray pb-8">
Connect, collaborate, and celebrate from anywhere with Moogle Meet
{isSignedIn && (
New meeting
)}
{!isSignedIn && }
name="code"
placeholder="Enter a code or link"
value={code}
onChange={(e) => setCode(e.target.value)}
icon={
/>
alt="Get a link you can share"
width={IMAGE_SIZE}
height={IMAGE_SIZE}
/>
"text-2xl tracking-normal text-black">
Get a link you can share
"font-roboto text-sm text-black pb-8 grow">
Click "font-bold">New meeting to get a link
you can send to people you want to meet with
);
};
export default Home;
In the code above:

- We set up the main UI of our home page.
- Like our header, we’re currently using a hard-coded `isSignedIn` value.
- We add a `code` state to hold the meeting code entered by a user.
- We display either a “New meeting” or “Sign in” button based on `isSignedIn`.
And with that, your home page should look something like this:
Generating a Meeting ID
When a user clicks the “New meeting” button, we want to generate a unique meeting ID. We can use the Nano ID library to achieve this.
Run the following command to install Nano ID into your project:
npm install nanoid
Next, update your `page.tsx` file to include the following code:
'use client';
import { useState, useContext } from 'react';
import { useRouter } from 'next/navigation';
import { customAlphabet } from 'nanoid';
import Image from 'next/image';
import { AppContext } from '@/contexts/AppProvider';
...
const generateMeetingId = () => {
const alphabet = 'abcdefghijklmnopqrstuvwxyz';
const nanoid = customAlphabet(alphabet, 4);
return `${nanoid(3)}-${nanoid(4)}-${nanoid(3)}`;
};
const Home = () => {
const isSignedIn = true;
const { setNewMeeting } = useContext(AppContext);
const [code, setCode] = useState('');
const router = useRouter();
const handleNewMeeting = () => {
setNewMeeting(true);
router.push(`/${generateMeetingId()}`);
};
...
};
export default Home;
Here, we create a new function, `generateMeetingId`. This function uses `customAlphabet` from `nanoid` to generate unique IDs in the format `abc-defg-hij`.

We also modify the `handleNewMeeting` function to set the `newMeeting` state to `true` and redirect the user to another page. This page will be our lobby page, which will use meeting IDs as its dynamic route.
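To see the shape these IDs take, here’s a dependency-free sketch of the same format (for illustration only; the tutorial uses `nanoid`, whose randomness is better suited for ID generation):

```typescript
// Illustrative only: mimics the abc-defg-hij format produced by
// generateMeetingId, without the nanoid dependency.
const ALPHABET = 'abcdefghijklmnopqrstuvwxyz';

const randomSegment = (length: number): string =>
  Array.from({ length }, () =>
    ALPHABET[Math.floor(Math.random() * ALPHABET.length)]
  ).join('');

const meetingId = `${randomSegment(3)}-${randomSegment(4)}-${randomSegment(3)}`;
console.log(meetingId); // e.g. "kfu-qwmd-hyx"
```

Ten lowercase letters give 26¹⁰ possible IDs, which is why collisions are unlikely in practice even without a uniqueness check.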
Implementing Authentication with Clerk
What is Clerk?
Clerk is a user management platform that provides various tools for authentication and user profiles. These tools include pre-built UI components, flexible APIs, and admin dashboards. Clerk makes it easy to integrate authentication features into your application without spending time building them from scratch.
We’ll use Clerk to implement authentication in our app and distinguish guests from signed-in users.
Creating Your Clerk Account
Let’s begin by creating a free Clerk account. Visit the Clerk sign-up page and create a new account using your email or a social login option.
Creating a New Clerk Project
Once you’ve signed in, you can proceed by creating a new Clerk project for your app:

- Navigate to the dashboard and click on the “Create application” button.
- Enter “Moogle” as your application name.
- Under “Sign in options,” select Email, Username, and Google to allow users multiple ways to sign in.
- Finally, click the “Create application” button to proceed.
After following the steps above, you’ll be redirected to your application’s overview page. Here, you can find your Publishable Key and Secret Key, which we’ll use later.
Next, let’s add first and last names as required attributes during the sign-up process. To make these fields required:

- Navigate to the “Configure” tab in your Clerk dashboard.
- Under “User & Authentication”, select “Email, Phone, Username”.
- In the “Personal Information” section, find the “Name” option and toggle it on.
- Click the settings (gear) icon next to “Name” to access additional settings.
- Enable the “Require” option and click “Continue” to save your changes.
Installing Clerk in Your Project
Now, let’s install Clerk into our Next.js project:
- Install the Clerk package by running the command below:

npm install @clerk/nextjs

- Create a `.env.local` file in the root of your project and add the following variables:

NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=your_clerk_publishable_key
CLERK_SECRET_KEY=your_clerk_secret_key

Replace `your_clerk_publishable_key` and `your_clerk_secret_key` with the actual keys from your Clerk application’s overview page.

- Next, we need to wrap our application with the `ClerkProvider` to make authentication available throughout the app. Update your `app/layout.tsx` file as follows:

import type { Metadata } from 'next';
import { ClerkProvider } from '@clerk/nextjs';
...
export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <ClerkProvider>
      <html lang="en">
        <body>{children}</body>
      </html>
    </ClerkProvider>
  );
}
After following the above steps, you can now use Clerk in your application.
Adding Sign-Up and Sign-In Pages
Next, we’ll create sign-up and sign-in pages using Clerk’s `<SignUp />` and `<SignIn />` components. These components handle all the UI and logic for user authentication.
Follow the steps below to add the pages to your app:
- Configure Your Authentication URLs: Clerk’s `<SignIn />` and `<SignUp />` components need to know the routes where they are mounted. We can provide these paths via environment variables. In your `.env.local` file, add the following:

NEXT_PUBLIC_CLERK_SIGN_IN_URL=/sign-in
NEXT_PUBLIC_CLERK_SIGN_UP_URL=/sign-up

- Add Your Sign-Up Page: Create a new file at `app/sign-up/[[...sign-up]]/page.tsx` with the following code:

import { SignUp } from '@clerk/nextjs';

export default function Page() {
  return (
    <div className="w-svw h-svh flex items-center justify-center">
      <SignUp />
    </div>
  );
}

- Add Your Sign-In Page: Similarly, create a sign-in page at `app/sign-in/[[...sign-in]]/page.tsx` with the code below:

import { SignIn } from '@clerk/nextjs';

export default function Page() {
  return (
    <div className="w-svw h-svh flex items-center justify-center">
      <SignIn />
    </div>
  );
}
And with that, you should have fully functional sign-in and sign-up pages.
Next, let’s replace the hard-coded values in our app and add more functionality.
Updating the Home Page
We’ll start by modifying our home page. Update the `page.tsx` file with the following code:
'use client';
...
import clsx from 'clsx';
import { SignInButton, useUser } from '@clerk/nextjs';
...
const Home = () => {
const { setNewMeeting } = useContext(AppContext);
const { isLoaded, isSignedIn } = useUser();
...
const handleNewMeeting = () => {
...
};
const handleCode = async () => {
...
};
return (
...
'flex flex-col items-center justify-center px-6',
isLoaded ? 'animate-fade-in' : 'opacity-0'
)}
>
...
"w-full max-w-xl flex justify-center">
"flex flex-col items-start sm:flex-row gap-6 sm:gap-2 sm:items-center justify-center">
...
{!isSignedIn && (
)}
...
...
);
};
export default Home;
In the code above:

- We import `SignInButton` and `useUser` from `@clerk/nextjs` to manage the user authentication state.
- The `useUser` hook provides the `isLoaded` and `isSignedIn` properties to check whether the user data has loaded and whether the user is signed in, respectively.
- The wrapping element’s `className` uses the `clsx` utility to apply a fade-in animation once the user data is loaded.
- We wrap the “Sign In” button with Clerk’s `<SignInButton>` component. When clicked, it redirects the user to the sign-in page.
Finally, we’ll update our `Header` component to reflect the user’s authentication status. Update the `Header.tsx` file with the following code:
import { SignInButton, UserButton, useUser } from '@clerk/nextjs';
import clsx from 'clsx';
...
const Header = ({ navItems = true }: HeaderProps) => {
const { isLoaded, isSignedIn, user } = useUser();
const { currentDateTime } = useTime();
const email = user?.primaryEmailAddress?.emailAddress;
return (
...
'w-[3.04rem] grow flex items-center justify-end [&_img]:w-9 [&_span]:w-9 [&_img]:h-9 [&_span]:h-9',
isLoaded ? 'animate-fade-in' : 'opacity-0'
)}
>
{isSignedIn ? (
<>
{!navItems && (
"hidden sm:block mr-3 font-roboto leading-4 text-right text-meet-black">
"text-sm leading-4">{email}
"text-sm hover:text-meet-blue cursor-pointer">
Switch account
)}
"relative h-9">
"absolute left-0 top-0 flex items-center justify-center pointer-events-none">
undefined,
}}
width={AVATAR_SIZE}
/>
>
) : (
"sm">Sign In
)}
...
);
};
export default Header;
In the code above:

- We import `SignInButton`, `UserButton`, and `useUser` from `@clerk/nextjs`.
- The `<UserButton />` component displays the user’s avatar and provides a menu with account options.
- We wrap Clerk’s `<SignInButton>` around the “Sign In” button to redirect the user to the sign-in page when clicked.
And with that, we have authentication set up in our app.
Integrating Stream into Your Application
What is Stream?
Stream is a developer-friendly platform that offers various APIs and SDKs to quickly build scalable and feature-rich chat and video experiences within your application. With Stream, you can add these features reliably without the complexity of building them from scratch.
We will use Stream’s React SDK for Video and their React Chat SDK to add real-time video and chat capabilities to our Google Meet clone.
Creating your Stream Account
Let’s get started by setting up a Stream account:
-
Sign Up: Visit the Stream sign-up page and create a new account using your email or social login.
-
Complete Your Profile:
-
After signing up, you’ll be prompted to provide additional details like your role and industry.
-
Select the “Chat Messaging” and “Video and Audio” options to tailor your experience.
-
Finally, click “Complete Signup” to proceed.
-
You should now be redirected to your Stream dashboard.
Creating a New Stream Project
After setting up your Stream account, you need to create an app for your project.
Follow the steps below to set up a Stream project:

- Create a New App: In your Stream dashboard, click the “Create App” button.
- Configure Your App:
  - App Name: Enter “google-meet-clone” or a name of your choice.
  - Region: Select the region closest to you for optimal performance.
  - Environment: Leave it set to “Development” for now.
  - Click the “Create App” button to submit the form.
- Retrieve API Keys: After creating your app, you’ll be redirected to the app’s dashboard. Locate the “App Access Keys” section. We’ll use these keys to integrate Stream into our project.
Installing Stream SDKs
Now, let’s add Stream’s SDKs to our Next.js project:
- Install Stream SDKs: Run the following command to install the necessary packages:

npm install @stream-io/node-sdk @stream-io/video-react-sdk stream-chat-react stream-chat

- Update Environment Variables: Add your Stream API keys to your `.env.local` file:

NEXT_PUBLIC_STREAM_API_KEY=your_stream_api_key
STREAM_API_SECRET=your_stream_api_secret

Replace `your_stream_api_key` and `your_stream_api_secret` with the keys from your Stream app’s dashboard.

- Import the Stylesheets: The `@stream-io/video-react-sdk` and `stream-chat-react` packages include a CSS stylesheet with a pre-built theme for their components. Let’s import them into our `app/layout.tsx` file:

import '@stream-io/video-react-sdk/dist/css/styles.css';
import 'stream-chat-react/dist/css/v2/index.css';
import './globals.css';
Creating the MeetProvider
Next, we’ll create a provider to manage our Stream video and chat clients.
Create a new `MeetProvider.tsx` file in the `contexts` directory with the following code:
import { useEffect, useState } from 'react';
import { useUser } from '@clerk/nextjs';
import { nanoid } from 'nanoid';
import {
Call,
StreamCall,
StreamVideo,
StreamVideoClient,
User,
} from '@stream-io/video-react-sdk';
import { User as ChatUser, StreamChat } from 'stream-chat';
import { Chat } from 'stream-chat-react';
import LoadingOverlay from '../components/LoadingOverlay';
type MeetProviderProps = {
meetingId: string;
children: React.ReactNode;
};
export const CALL_TYPE = 'default';
export const API_KEY = process.env.NEXT_PUBLIC_STREAM_API_KEY as string;
export const GUEST_ID = `guest_${nanoid(15)}`;
export const tokenProvider = async (userId: string = '') => {
const response = await fetch('/api/token', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ userId: userId || GUEST_ID }),
});
const data = await response.json();
return data.token;
};
const MeetProvider = ({ meetingId, children }: MeetProviderProps) => {
const { user: clerkUser, isSignedIn, isLoaded } = useUser();
const [loading, setLoading] = useState(true);
const [chatClient, setChatClient] = useState<StreamChat>();
const [videoClient, setVideoClient] = useState<StreamVideoClient>();
const [call, setCall] = useState<Call>();
useEffect(() => {
if (!isLoaded) return;
const customProvider = async () => {
const token = await tokenProvider(clerkUser?.id);
return token;
};
const setUpChat = async (user: ChatUser) => {
await _chatClient.connectUser(user, customProvider);
setChatClient(_chatClient);
setLoading(false);
};
let user: User | ChatUser;
if (isSignedIn) {
user = {
id: clerkUser.id,
name: clerkUser.fullName!,
image: clerkUser.hasImage ? clerkUser.imageUrl : undefined,
custom: {
username: clerkUser?.username,
},
};
} else {
user = {
id: GUEST_ID,
type: 'guest',
name: 'Guest',
};
}
const _chatClient = StreamChat.getInstance(API_KEY);
const _videoClient = new StreamVideoClient({
apiKey: API_KEY,
user,
tokenProvider: customProvider,
});
const call = _videoClient.call(CALL_TYPE, meetingId);
setVideoClient(_videoClient);
setCall(call);
setUpChat(user);
return () => {
_videoClient.disconnectUser();
_chatClient.disconnectUser();
};
}, [clerkUser, isLoaded, isSignedIn, loading, meetingId]);
if (loading) return <LoadingOverlay />;
return (
  <Chat client={chatClient!}>
    <StreamVideo client={videoClient!}>
      <StreamCall call={call!}>{children}</StreamCall>
    </StreamVideo>
  </Chat>
);
};
export default MeetProvider;
There’s a lot going on here, so let’s break things down:

- The `MeetProvider` component sets up the video meeting and chat functionalities using Stream’s SDKs.
- We define a `tokenProvider` function that fetches an authentication token from the `/api/token` endpoint.
- Inside the `useEffect`:
  - We return early if the user data has not loaded yet.
  - We create a user object from Clerk’s user data if the user is signed in, or generate a guest user otherwise.
  - We initialize the Stream Chat client (`StreamChat`) and the Stream Video client (`StreamVideoClient`) with the `API_KEY`, user information, and a custom token provider.
  - We set up a call instance using `_videoClient.call` with the specified call type and `meetingId`.
  - We also display a `LoadingOverlay` while setting up, using the `loading` state.
- Once ready, the component renders the `Chat`, `StreamVideo`, and `StreamCall` components to provide chat and video call capabilities to their children.
Creating the Token API Route
In the previous section, we added a token provider that sends a request to `/api/token` to generate Stream user tokens. Let’s create the API route for this functionality. Create a new file at `app/api/token/route.ts` with the following code:
import { StreamClient } from '@stream-io/node-sdk';
const API_KEY = process.env.NEXT_PUBLIC_STREAM_API_KEY!;
const SECRET = process.env.STREAM_API_SECRET!;
export async function POST(request: Request) {
const client = new StreamClient(API_KEY, SECRET);
const body = await request.json();
const userId = body?.userId;
if (!userId) {
return Response.error();
}
const token = client.generateUserToken({ user_id: userId });
const response = {
userId: userId,
token: token,
};
return Response.json(response);
}
Here, we create a Stream client from Stream’s Node SDK. We then use the client to generate and return a user token for the given `userId`.
With this route in place, our token provider should now work correctly.
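For context, the token `generateUserToken` returns is a JWT signed with your API secret. Here’s a simplified, dependency-free sketch of that shape; the exact claim set is an assumption for illustration, so always use the Node SDK in practice:

```typescript
import { createHmac } from 'crypto';

const b64url = (data: string): string =>
  Buffer.from(data).toString('base64url');

// Sketch of an HS256-signed JWT carrying a user_id claim, the general shape
// of a Stream user token. Real tokens may carry additional claims (e.g. expiry).
const sketchUserToken = (userId: string, secret: string): string => {
  const header = b64url(JSON.stringify({ alg: 'HS256', typ: 'JWT' }));
  const payload = b64url(JSON.stringify({ user_id: userId }));
  const signature = createHmac('sha256', secret)
    .update(`${header}.${payload}`)
    .digest('base64url');
  return `${header}.${payload}.${signature}`;
};
```

Because the secret never leaves the server, generating tokens in an API route like this one keeps it out of the browser bundle.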
Syncing Clerk with Your Stream App
Like Clerk, our Stream app also manages users: it keeps track of their information, such as name, role, and permissions, during video and chat sessions.
So whenever we create or update a user in Clerk, we must also reflect that change in our Stream app. To do this, we’ll set up a webhook that listens for user events and syncs the information.
To create the webhook, follow the steps below:
- Set Up ngrok: Since webhooks require a publicly accessible URL, we’ll use ngrok to expose our local server. Follow the steps below to set up an ngrok tunnel for your app:
  - Install ngrok:
    - Visit the ngrok website and sign up for a free account.
    - Download and install ngrok following their installation guide.
  - Start ngrok: Start a tunnel pointing to your local server (assuming it’s running on port 3000):

ngrok http 3000 --domain=YOUR_DOMAIN

Replace `YOUR_DOMAIN` with your generated domain (e.g., `your-subdomain.ngrok.io`) from ngrok.
- Create a Webhook Endpoint in the Clerk Dashboard:
  - Navigate to Webhooks: Click the “Configure” tab in your Clerk dashboard and select “Webhooks.”
  - Add a New Endpoint:
    - Click on “Add Endpoint”.
    - Paste your ngrok URL followed by `/api/webhooks` (e.g., `https://your-subdomain.ngrok.io/api/webhooks`).
    - Under the “Subscribe to events” field, select “user.created” and “user.updated”.
    - Click the “Create” button.
  - Retrieve the Signing Secret: After creating the endpoint, copy the Signing Secret provided. We’ll use this to verify webhook requests.
- Add Your Signing Secret to Your `.env.local` File: Update your `.env.local` file with the following:

WEBHOOK_SECRET=your_clerk_webhook_signing_secret

Replace `your_clerk_webhook_signing_secret` with the signing secret from your endpoint.
- Install Svix: We need Svix to verify and handle incoming webhook requests. Run the following command to install it:

npm install svix
- Create the Endpoint in Your Application: Next, we need to create a route handler to receive the webhook’s payload. Create a new file at `app/api/webhooks/route.ts` with the following code:

import { Webhook } from 'svix';
import { headers } from 'next/headers';
import { WebhookEvent } from '@clerk/nextjs/server';
import { StreamClient } from '@stream-io/node-sdk';

const API_KEY = process.env.NEXT_PUBLIC_STREAM_API_KEY!;
const SECRET = process.env.STREAM_API_SECRET!;
const WEBHOOK_SECRET = process.env.WEBHOOK_SECRET;

export async function POST(req: Request) {
  const client = new StreamClient(API_KEY, SECRET);

  if (!WEBHOOK_SECRET) {
    throw new Error(
      'Please add WEBHOOK_SECRET from Clerk Dashboard to .env or .env.local'
    );
  }

  const headerPayload = headers();
  const svix_id = headerPayload.get('svix-id');
  const svix_timestamp = headerPayload.get('svix-timestamp');
  const svix_signature = headerPayload.get('svix-signature');

  if (!svix_id || !svix_timestamp || !svix_signature) {
    return new Response('Error occurred -- no svix headers', {
      status: 400,
    });
  }

  const payload = await req.json();
  const body = JSON.stringify(payload);
  const wh = new Webhook(WEBHOOK_SECRET);

  let evt: WebhookEvent;
  try {
    evt = wh.verify(body, {
      'svix-id': svix_id,
      'svix-timestamp': svix_timestamp,
      'svix-signature': svix_signature,
    }) as WebhookEvent;
  } catch (err) {
    console.error('Error verifying webhook:', err);
    return new Response('Error occurred', {
      status: 400,
    });
  }

  const eventType = evt.type;
  switch (eventType) {
    case 'user.created':
    case 'user.updated': {
      const newUser = evt.data;
      await client.upsertUsers([
        {
          id: newUser.id,
          role: 'user',
          name: `${newUser.first_name} ${newUser.last_name}`,
          custom: {
            username: newUser.username,
            email: newUser.email_addresses[0].email_address,
          },
          image: newUser.has_image ? newUser.image_url : undefined,
        },
      ]);
      break;
    }
    default:
      break;
  }

  return new Response('Webhook processed', { status: 200 });
}
In the code above:

- We use Svix’s `Webhook` class to verify incoming requests using the signing secret. If verification fails, we return an error response.
- For `user.created` and `user.updated` events, we sync the user data with Stream using `upsertUsers`.
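For context, the check `wh.verify` performs is an HMAC over the Svix headers and body. A simplified sketch of that scheme follows; real verification also enforces a timestamp tolerance window and uses a constant-time comparison, so keep using the svix package in practice:

```typescript
import { createHmac } from 'crypto';

// Simplified Svix-style verification: the secret ("whsec_..." prefix, then
// base64) keys an HMAC-SHA256 over "<id>.<timestamp>.<payload>", and the
// signature header holds space-separated "v1,<base64>" entries.
const verifySketch = (
  secret: string,
  id: string,
  timestamp: string,
  payload: string,
  signatureHeader: string
): boolean => {
  const key = Buffer.from(secret.replace('whsec_', ''), 'base64');
  const expected = createHmac('sha256', key)
    .update(`${id}.${timestamp}.${payload}`)
    .digest('base64');
  // Plain equality for brevity; the real library compares in constant time.
  return signatureHeader
    .split(' ')
    .some((entry) => entry.split(',')[1] === expected);
};
```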
Joining a Meeting
Now that we’ve fully set up Stream in our app, let’s update our home page to handle joining meetings.
Update your `page.tsx` file with the following code:
'use client';
import { useState, useContext, useEffect } from 'react';
...
import {
ErrorFromResponse,
GetCallResponse,
StreamVideoClient,
User,
} from '@stream-io/video-react-sdk';
...
import { API_KEY, CALL_TYPE } from '@/contexts/MeetProvider';
import { AppContext, MEETING_ID_REGEX } from '@/contexts/AppProvider';
...
const GUEST_USER: User = { id: 'guest', type: 'guest' };
const Home = () => {
const { setNewMeeting } = useContext(AppContext);
const { isLoaded, isSignedIn } = useUser();
const [code, setCode] = useState('');
const [checkingCode, setCheckingCode] = useState(false);
const [error, setError] = useState('');
const router = useRouter();
useEffect(() => {
let timeout: NodeJS.Timeout;
if (error) {
timeout = setTimeout(() => {
setError('');
}, 3000);
}
return () => {
clearTimeout(timeout);
};
}, [error]);
const handleNewMeeting = () => {
...
};
const handleCode = async () => {
if (!MEETING_ID_REGEX.test(code)) return;
setCheckingCode(true);
const client = new StreamVideoClient({
apiKey: API_KEY,
user: GUEST_USER,
});
const call = client.call(CALL_TYPE, code);
try {
const response: GetCallResponse = await call.get();
if (response.call) {
router.push(`/${code}`);
return;
}
} catch (e: unknown) {
let err = e as ErrorFromResponse;
console.error(err.message);
if (err.status === 404) {
setError("Couldn't find the meeting you're trying to join.");
}
}
setCheckingCode(false);
};
return (
...
"flex flex-col items-center justify-center gap-8">
...
{checkingCode && (
"z-50 fixed top-0 left-0 w-full h-full flex items-center justify-center text-white text-3xl bg-[#000] animate-transition-overlay-fade-in">
Joining...
)}
{error && (
"z-50 fixed bottom-0 left-0 pointer-events-none m-6 flex items-center justify-start">
"rounded p-4 font-roboto text-white text-sm bg-dark-gray shadow-[0_3px_5px_-1px_rgba(0,0,0,.2),0_6px_10px_0_rgba(0,0,0,.14),0_1px_18px_0_rgba(0,0,0,.12)]">
{error}
)}
);
};
export default Home;
In the code above:

- State Management:
  - `checkingCode`: Indicates whether we’re currently validating the meeting code.
  - `error`: Stores any error message to display to the user.
- The `handleCode` function:
  - Validates the meeting code format using `MEETING_ID_REGEX`.
  - Initializes a Stream Video client with a guest (temporary) user.
  - Attempts to retrieve the call information.
  - If successful, it navigates to the meeting.
  - If the meeting doesn’t exist, it displays an error.
  - Handles any exceptions and updates the UI accordingly.
- UI Updates:
  - We display a loading overlay while checking the code.
  - We show error messages from the `error` state at the bottom of the screen.
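`MEETING_ID_REGEX` is defined in `AppProvider`, which isn’t shown in this excerpt. Given the `abc-defg-hij` format produced by `generateMeetingId`, a matching pattern would presumably look like this (an assumption for illustration):

```typescript
// Hypothetical shape of MEETING_ID_REGEX: three, four, and three lowercase
// letters separated by hyphens, matching the generateMeetingId format.
const MEETING_ID_REGEX = /^[a-z]{3}-[a-z]{4}-[a-z]{3}$/;

console.log(MEETING_ID_REGEX.test('abc-defg-hij')); // true
console.log(MEETING_ID_REGEX.test('abc_defg_hij')); // false
```

Validating the format up front lets `handleCode` skip the network round-trip for codes that can’t possibly exist.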
And with that, our home page should now be fully functional!
Note: The “New meeting” button currently redirects to a 404 page because we haven’t created a page for that route yet.
Building the Lobby Page
In this section, we’ll create a lobby page for our app. This page will allow users to preview and configure their video and audio before joining a call. The lobby page’s features include:

- Selecting input/output devices
- Toggling the microphone and camera
- Displaying a video preview
- Displaying the current participants in the call
- Prompting guests to enter their names before joining
Creating the Layout
Let’s start by creating a layout for our page. This layout will contain the `MeetProvider` component we created earlier.

Create a `[meetingId]` folder in the `app` directory, and then add a `layout.tsx` file with the following code:
'use client';
import { ReactNode } from 'react';
import MeetProvider from '@/contexts/MeetProvider';
type LayoutProps = {
children: ReactNode;
params: {
meetingId: string;
};
};
export default function Layout({ children, params }: LayoutProps) {
return <MeetProvider meetingId={params.meetingId}>{children}</MeetProvider>;
}
With the code above, any page created under the `[meetingId]` segment will have access to the current Stream video and chat clients.
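To illustrate what the dynamic segment provides: for a URL like `/abc-defg-hij`, Next.js passes `params = { meetingId: 'abc-defg-hij' }` to the layout. A hypothetical helper mimicking that mapping:

```typescript
// Hypothetical illustration of how a dynamic segment maps a pathname to a
// param value. Next.js does this for you for routes under app/[meetingId]/.
const extractMeetingId = (pathname: string): string =>
  pathname.split('/').filter(Boolean)[0] ?? '';

console.log(extractMeetingId('/abc-defg-hij')); // "abc-defg-hij"
```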
Building the Meeting Preview
Next, let’s work on the core component of our page: the meeting preview. This component will give the user a preview of their video and audio and provide tools for adjusting them.
First, let’s create a speech indicator component to visually show when the user is speaking. In the `components` directory, create a `SpeechIndicator.tsx` file with the following code:
import clsx from 'clsx';
interface SpeechIndicatorProps {
isSpeaking: boolean;
isDominantSpeaker?: boolean;
}
const SpeechIndicator = ({
isSpeaking,
isDominantSpeaker = true,
}: SpeechIndicatorProps) => {
return (
  <div
    className={clsx(
      'str-video__speech-indicator',
      isDominantSpeaker && 'str-video__speech-indicator--dominant',
      isSpeaking && 'str-video__speech-indicator--speaking'
    )}
  >
    <span className="str-video__speech-indicator__bar" />
    <span className="str-video__speech-indicator__bar" />
    <span className="str-video__speech-indicator__bar" />
  </div>
);
};
export default SpeechIndicator;
The component accepts `isSpeaking` to indicate whether the user is speaking and `isDominantSpeaker` to highlight the main speaker. We also use conditional class names from Stream’s default theme to animate the bars when the user speaks.
Next, let’s modify the default styling to resemble Google Meet’s speech indicator. In your `globals.css` file, add the following code:
...
@layer components {
.root-theme .str-video__speech-indicator {
gap: 1.5px;
}
.str-video__speech-indicator__bar,
.root-theme .str-video__speech-indicator.str-video__speech-indicator--dominant .str-video__speech-indicator__bar,
.root-theme .str-video__speech-indicator .str-video__speech-indicator__bar {
background-color: white !important;
width: 4px !important;
border-radius: 999px !important;
}
...
}
...
Next, let’s create a hook to detect when the user is speaking. In the `hooks` directory, create a `useSoundDetected.tsx` file with the following code:
import {
createSoundDetector,
useCallStateHooks,
} from '@stream-io/video-react-sdk';
import { useEffect, useState } from 'react';
const useSoundDetected = () => {
const [soundDetected, setSoundDetected] = useState(false);
const { useMicrophoneState } = useCallStateHooks();
const { status: microphoneStatus, mediaStream } = useMicrophoneState();
useEffect(() => {
if (microphoneStatus !== 'enabled' || !mediaStream) return;
const disposeSoundDetector = createSoundDetector(
mediaStream,
({ isSoundDetected: sd }) => setSoundDetected(sd),
{ detectionFrequencyInMs: 80, destroyStreamOnStop: false }
);
return () => {
disposeSoundDetector().catch(console.error);
};
}, [microphoneStatus, mediaStream]);
return soundDetected;
};
export default useSoundDetected;
The `useSoundDetected` custom hook uses the `@stream-io/video-react-sdk` to detect sound activity from the user’s microphone. Let’s break down how it works:

- It retrieves the microphone status and media stream using `useMicrophoneState` from `useCallStateHooks`.
- Within a `useEffect`, it checks whether the microphone is enabled and a media stream exists. If so, it sets up a sound detector using `createSoundDetector`.
- The sound detector listens to the media stream and updates the `soundDetected` state based on whether sound is detected.
Next, we’ll create components to select audio input/output devices and video input devices.
In your `components` directory, create a `DeviceSelector.tsx` file with the following code:
import { ReactNode } from 'react';
import { useCallStateHooks } from '@stream-io/video-react-sdk';
import Dropdown from './Dropdown';
import Mic from './icons/Mic';
import Videocam from './icons/Videocam';
import VolumeUp from './icons/VolumeUp';
type DeviceSelectorProps = {
devices: MediaDeviceInfo[] | undefined;
selectedDeviceId?: string;
onSelect: (deviceId: string) => void;
icon: ReactNode;
disabled?: boolean;
className?: string;
dark?: boolean;
};
type SelectorProps = {
disabled?: boolean;
className?: string;
dark?: boolean;
};
export const DeviceSelector = ({
devices,
selectedDeviceId,
onSelect,
icon,
disabled = false,
className = '',
dark = false,
}: DeviceSelectorProps) => {
const label =
devices?.find((device) => device.deviceId === selectedDeviceId)?.label! ||
'Default - ...';
return (
'Permission needed': label}
value={selectedDeviceId}
icon={icon}
onChange={(value) => onSelect(value)}
options={
devices?.map((device) => ({
label: device.label,
value: device.deviceId,
}))!
}
disabled={disabled}
className={className}
dark={dark}
/>
);
};
export const AudioInputDeviceSelector = ({
disabled = false,
className = '',
dark,
}: SelectorProps) => {
const { useMicrophoneState } = useCallStateHooks();
const { microphone, devices, selectedDevice } = useMicrophoneState();
return (
onSelect={(deviceId) => microphone.select(deviceId)}
icon={<Mic width={20} height={20} color="var(--meet-black)" />}
disabled={disabled}
className={className}
dark={dark}
/>
);
};
export const VideoInputDeviceSelector = ({
disabled = false,
className = '',
dark = false,
}: SelectorProps) => {
const { useCameraState } = useCallStateHooks();
const { camera, devices, selectedDevice } = useCameraState();
return (
onSelect={(deviceId) => camera.select(deviceId)}
icon={<Videocam width={18} height={18} color="var(--meet-black)" />}
disabled={disabled}
className={className}
dark={dark}
/>
);
};
export const AudioOutputDeviceSelector = ({
disabled = false,
className = '',
dark = false,
}: SelectorProps) => {
const { useSpeakerState } = useCallStateHooks();
const { speaker, devices, selectedDevice, isDeviceSelectionSupported } =
useSpeakerState();
if (!isDeviceSelectionSupported) return null;
return (
0]?.deviceId
: 'Default - ...'
}
onSelect={(deviceId) => speaker.select(deviceId)}
icon={<VolumeUp width={20} height={20} color="var(--meet-black)" />}
disabled={disabled}
className={className}
dark={dark}
/>
);
};
In the code above:

- We defined a generic component, `DeviceSelector`, that renders a dropdown for selecting devices.
- We then used the `DeviceSelector` and Stream SDK’s call state hooks to manage device states and interactions in specific components:
  - `AudioInputDeviceSelector`: Allows the user to select a microphone.
  - `VideoInputDeviceSelector`: Allows the user to select a camera.
  - `AudioOutputDeviceSelector`: Allows the user to select a speaker, if supported.
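The device lists these hooks expose ultimately come from the browser’s `navigator.mediaDevices.enumerateDevices()`, which returns every device with a `kind` field. A small sketch of the grouping the SDK does for you (hypothetical helper, plain data for illustration):

```typescript
// Hypothetical: group a MediaDeviceInfo-like list by kind, the way the SDK's
// useMicrophoneState/useCameraState/useSpeakerState hooks present devices.
type DeviceKind = 'audioinput' | 'videoinput' | 'audiooutput';
type DeviceInfo = { deviceId: string; kind: DeviceKind; label: string };

const devicesOfKind = (devices: DeviceInfo[], kind: DeviceKind): DeviceInfo[] =>
  devices.filter((device) => device.kind === kind);

const all: DeviceInfo[] = [
  { deviceId: 'mic-1', kind: 'audioinput', label: 'Built-in Microphone' },
  { deviceId: 'cam-1', kind: 'videoinput', label: 'FaceTime HD Camera' },
  { deviceId: 'spk-1', kind: 'audiooutput', label: 'Built-in Speakers' },
];

console.log(devicesOfKind(all, 'videoinput').map((d) => d.label)); // ["FaceTime HD Camera"]
```

Note that device labels are empty until the user grants media permissions, which is why `DeviceSelector` falls back to a “Permission needed” label.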
Finally, let’s put everything together in our meeting preview component.
In the `components` folder, create a `MeetingPreview.tsx` file with the following code:
import { useEffect, useState } from 'react';
import {
VideoPreview,
useCallStateHooks,
useConnectedUser,
} from '@stream-io/video-react-sdk';
import {
AudioInputDeviceSelector,
AudioOutputDeviceSelector,
VideoInputDeviceSelector,
} from './DeviceSelector';
import IconButton from './IconButton';
import MoreVert from './icons/MoreVert';
import Mic from './icons/Mic';
import MicOff from './icons/MicOff';
import SpeechIndicator from './SpeechIndicator';
import Videocam from './icons/Videocam';
import VideocamOff from './icons/VideocamOff';
import VisualEffects from './icons/VisualEffects';
import useSoundDetected from '../hooks/useSoundDetected';
const MeetingPreview = () => {
const user = useConnectedUser();
const soundDetected = useSoundDetected();
const [videoPreviewText, setVideoPreviewText] = useState('');
const [displaySelectors, setDisplaySelectors] = useState(false);
const [devicesEnabled, setDevicesEnabled] = useState(false);
const { useCameraState, useMicrophoneState } = useCallStateHooks();
const {
camera,
optimisticIsMute: isCameraMute,
hasBrowserPermission: hasCameraPermission,
} = useCameraState();
const {
microphone,
optimisticIsMute: isMicrophoneMute,
hasBrowserPermission: hasMicrophonePermission,
status: microphoneStatus,
} = useMicrophoneState();
useEffect(() => {
const enableMicAndCam = async () => {
try {
await camera.enable();
} catch (error) {
console.error(error);
}
try {
await microphone.enable();
} catch (error) {
console.error(error);
}
setDevicesEnabled(true);
};
enableMicAndCam();
}, [camera, microphone]);
useEffect(() => {
if (hasMicrophonePermission === undefined) return;
if (
(hasMicrophonePermission && microphoneStatus) ||
!hasMicrophonePermission
) {
setDisplaySelectors(true);
}
}, [microphoneStatus, hasMicrophonePermission]);
const toggleCamera = async () => {
try {
setVideoPreviewText((prev) =>
prev === '' || prev === 'Camera is off'
? 'Camera is starting'
: 'Camera is off'
);
await camera.toggle();
setVideoPreviewText((prev) =>
prev === 'Camera is off' ? 'Camera is starting' : 'Camera is off'
);
} catch (error) {
console.error(error);
}
};
const toggleMicrophone = async () => {
try {
await microphone.toggle();
} catch (error) {
console.error(error);
}
};
  return (
    <div className="w-full max-w-3xl lg:pr-2 lg:mt-8">
      <div className="relative w-full rounded-lg max-w-185 aspect-video mx-auto shadow-md">
        {/* Background */}
        <div className="absolute z-0 left-0 w-full h-full rounded-lg bg-meet-black" />
        {/* Gradient overlay */}
        <div className="absolute z-2 bg-gradient-overlay left-0 w-full h-full rounded-lg" />
        {/* Video preview */}
        <div className="absolute w-full h-full [&>div]:w-auto [&>div]:h-auto z-1 flex items-center justify-center rounded-lg overflow-hidden [&_video]:-scale-x-100">
          <VideoPreview
            DisabledVideoPreview={() => DisabledVideoPreview(videoPreviewText)}
          />
        </div>
        {devicesEnabled && (
          <div className="z-3 absolute bottom-4 left-1/2 -ml-17 flex items-center gap-6">
            {/* Microphone control */}
            <IconButton
              icon={isMicrophoneMute ? <MicOff /> : <Mic />}
              title={
                isMicrophoneMute ? 'Turn on microphone' : 'Turn off microphone'
              }
              onClick={toggleMicrophone}
              active={isMicrophoneMute}
              alert={!hasMicrophonePermission}
              variant="secondary"
            />
            {/* Camera control */}
            <IconButton
              icon={isCameraMute ? <VideocamOff /> : <Videocam />}
              title={isCameraMute ? 'Turn on camera' : 'Turn off camera'}
              onClick={toggleCamera}
              active={isCameraMute}
              alert={!hasCameraPermission}
              variant="secondary"
            />
          </div>
        )}
        {/* Speech indicator */}
        {microphoneStatus && microphoneStatus === 'enabled' && (
          <div className="z-2 absolute bottom-3.5 left-3.5 w-6.5 h-6.5 flex items-center justify-center bg-primary rounded-full">
            <SpeechIndicator isSpeaking={soundDetected} />
          </div>
        )}
        {/* User name */}
        {devicesEnabled && hasCameraPermission && (
          <div className="z-3 max-w-94 h-8 absolute left-0 top-3 mt-1.5 mb-1 mx-4 truncate text-white text-sm font-medium leading-5 flex items-center justify-start cursor-default select-none">
            {user?.name}
          </div>
        )}
        {devicesEnabled && (
          <>
            <div className="z-2 absolute top-2.5 right-1 [&>button]:w-12 [&>button]:h-12 [&>button]:border-none [&>button]:transition-none [&>button]:hover:bg-[rgba(255,255,255,.2)] [&>button]:hover:shadow-none">
              <IconButton
                title="More options"
                icon={<MoreVert />}
                variant="secondary"
              />
            </div>
            <div className="z-3 absolute bottom-4 right-2.5">
              <IconButton
                icon={<VisualEffects />}
                title="Apply visual effects"
                variant="secondary"
              />
            </div>
          </>
        )}
      </div>
      <div className="hidden lg:flex h-17 items-center gap-1 mt-4 ml-2">
        {displaySelectors && (
          <>
            {/* ...device selector components... */}
          </>
        )}
      </div>
    </div>
  );
};
export const DisabledVideoPreview = (videoPreviewText: string) => {
  return (
    <div className="text-2xl font-roboto text-white">{videoPreviewText}</div>
  );
};
export default MeetingPreview;
From the code above:
-
When the component mounts, we enable the user's mic and camera to begin the preview.
-
We use Stream's VideoPreview component to preview the user's video with a gradient overlay, and we add a background when no preview is available.
-
The component also displays the SpeechIndicator when the user is speaking (using useSoundDetected), shows the user's name if available, and includes UI controls for additional options like applying visual effects.
-
We use useCallStateHooks to let users toggle their camera and microphone and to check for device permissions.
-
We display the device selectors so users can select their input/output devices for the call.
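The implementation of useSoundDetected isn't shown in this chunk, but the core decision such a hook makes — "is the current audio frame loud enough to count as speech?" — can be sketched as a pure function. The function names and threshold below are illustrative assumptions, not part of the tutorial's code:

```typescript
// Hypothetical loudness threshold — a real hook would tune this empirically.
const SPEECH_THRESHOLD = 0.05;

// Root-mean-square amplitude of one audio frame (samples normalized to [-1, 1]),
// the usual way to measure how "loud" a frame is.
const rms = (samples: number[]): number =>
  Math.sqrt(samples.reduce((sum, s) => sum + s * s, 0) / samples.length);

// Speech is detected when the frame's RMS exceeds the threshold.
const isSpeaking = (samples: number[]): boolean =>
  rms(samples) > SPEECH_THRESHOLD;
```

A real useSoundDetected would feed frames from the Web Audio API (an `AnalyserNode` on the microphone stream) into a check like this and store the boolean in React state.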
Building the Call Participants UI
To display the participants in a call, we'll update our existing Avatar component to handle more participant types. This change is necessary because participants can come from different sources with slightly different structures, and our component needs to be flexible enough to display any of them correctly.
Open the Avatar.tsx file in the components directory and update it as follows:
import { useMemo } from 'react';
import {
CallParticipantResponse,
StreamVideoParticipant,
} from '@stream-io/video-react-sdk';
import clsx from 'clsx';
import Image from 'next/image';
import useUserColor from '../hooks/useUserColor';
interface AvatarProps {
width?: number;
text?: string;
participant?: StreamVideoParticipant | CallParticipantResponse | {};
}
export const avatarClassName = 'avatar';
const IMAGE_SIZE = 160;
const Avatar = ({ text = '', width, participant = {} }: AvatarProps) => {
const color = useUserColor();
const name = useMemo(() => {
if ((participant as CallParticipantResponse)?.user) {
return (
(participant as CallParticipantResponse).user.name ||
(participant as CallParticipantResponse).user.id
);
}
return (
(participant as StreamVideoParticipant).name ||
(participant as StreamVideoParticipant).userId
);
}, [participant]);
const randomColor = useMemo(() => {
if (text) return color('Anonymous');
return color(name);
}, [color, name, text]);
const image = useMemo(() => {
if ((participant as CallParticipantResponse)?.user) {
return (participant as CallParticipantResponse).user?.image;
}
return (participant as StreamVideoParticipant)?.image;
}, [participant]);
if (image)
return (
...
);
return (
...
);
};
export default Avatar;
In the updated Avatar component, we handle multiple participant types by checking the properties of the participant object. We use conditional logic to determine whether the participant has a user property (indicating it's a CallParticipantResponse) or top-level properties like name and image (indicating it's a StreamVideoParticipant).
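The branching in the name useMemo can be distilled into a plain function, which makes the fallback order (name first, then the user/participant ID) easy to test. The AnyParticipant type below is a simplified stand-in for the SDK types, not their real definitions:

```typescript
// Simplified stand-in for StreamVideoParticipant | CallParticipantResponse | {}.
type AnyParticipant = {
  user?: { id: string; name?: string };
  userId?: string;
  name?: string;
};

// Mirrors the Avatar component's logic: a CallParticipantResponse carries a
// nested `user` object, while a StreamVideoParticipant has top-level fields.
const resolveName = (participant: AnyParticipant): string => {
  if (participant.user) {
    return participant.user.name || participant.user.id;
  }
  return participant.name || participant.userId || '';
};
```

An empty object (the component's default) resolves to an empty string, which is why Avatar also accepts an explicit `text` prop.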
Next, we'll create the CallParticipants component to display a list of participants in the call using the updated Avatar component.
Create a file named CallParticipants.tsx in the components directory and add the following code:
import { CallParticipantResponse } from '@stream-io/video-react-sdk';
import Avatar from './Avatar';
interface CallParticipantsProps {
participants: CallParticipantResponse[];
}
const AVATAR_SIZE = 24;
const CallParticipants = ({ participants }: CallParticipantsProps) => {
const getText = () => {
if (participants.length === 1) {
return `${
participants[0].user.name || participants[0].user.id
} is in this call`;
} else {
return (
participants
.slice(0, 3)
.map((p) => p.user.name || p.user.id)
.join(', ') +
(participants.length > 4
? ` and ${participants.length - 3} more`
: participants.length === 4
? ` and ${participants[3].user.name || participants[3].user.id}`
: '') +
' are in this call'
);
}
};
  return (
    <div className="flex flex-col items-center justify-center gap-2">
      <div className="flex items-center justify-center gap-2">
        {participants.slice(0, 3).map((p) => (
          <Avatar key={p.user.id} participant={p} width={AVATAR_SIZE} />
        ))}
        {participants.length === 4 && (
          <Avatar participant={participants[3]} width={AVATAR_SIZE} />
        )}
        {participants.length > 4 && (
          <Avatar text={`+${participants.length - 3}`} width={AVATAR_SIZE} />
        )}
      </div>
      <span>{getText()}</span>
    </div>
  );
};
export default CallParticipants;
The CallParticipants component uses the Avatar component to display a list of participants. It formats the participant names and handles calls with many participants by summarizing the extras. This helps users quickly see who is on the call.
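The summary logic in getText is easy to verify in isolation. Here is the same formatting rule as a standalone function (the function name and the User shape are illustrative, not from the tutorial):

```typescript
// Simplified participant shape for illustration.
type User = { id: string; name?: string };

const displayName = (u: User): string => u.name || u.id;

// Same rule as getText in CallParticipants:
// 1 person  -> "Alice is in this call"
// <=3       -> comma-separated names
// exactly 4 -> first three names "and" the fourth
// >4        -> first three names "and N more"
const getParticipantsText = (users: User[]): string => {
  if (users.length === 1) return `${displayName(users[0])} is in this call`;
  const firstThree = users.slice(0, 3).map(displayName).join(', ');
  if (users.length === 4) {
    return `${firstThree} and ${displayName(users[3])} are in this call`;
  }
  if (users.length > 4) {
    return `${firstThree} and ${users.length - 3} more are in this call`;
  }
  return `${firstThree} are in this call`;
};
```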
Building the Meeting End Page
We'll also create a meeting end page to inform users when a meeting has ended or if they've entered an invalid meeting ID.
Create a meeting-end folder in the [meetingId] directory and add a page.tsx file with the following code:
'use client';
import { useEffect, useRef, useState } from 'react';
import Image from 'next/image';
import { useRouter } from 'next/navigation';
import { CallingState, useCallStateHooks } from '@stream-io/video-react-sdk';
import Button from '@/components/Button';
import PlainButton from '@/components/PlainButton';
interface MeetingEndProps {
params: {
meetingId: string;
};
searchParams?: {
invalid: string;
};
}
const MeetingEnd = ({ params, searchParams }: MeetingEndProps) => {
const { meetingId } = params;
const router = useRouter();
const { useCallCallingState } = useCallStateHooks();
const callingState = useCallCallingState();
const audioRef = useRef<HTMLAudioElement>(null);
const [countdownNumber, setCountdownNumber] = useState(60);
const invalidMeeting = searchParams?.invalid === 'true';
useEffect(() => {
if (!invalidMeeting && callingState !== CallingState.LEFT) {
router.push(`/`);
}
audioRef.current?.play();
setCountdownNumber(59);
const interval = setInterval(() => {
setCountdownNumber((prev) => (prev ? prev - 1 : 0));
}, 1000);
return () => clearInterval(interval);
}, []);
useEffect(() => {
if (countdownNumber === 0) {
returnHome();
}
}, [countdownNumber]);
const rejoinMeeting = () => {
router.push(`/${meetingId}`);
};
const returnHome = () => {
router.push('/');
};
if (!invalidMeeting && callingState !== CallingState.LEFT) return null;
  return (
    <div className="w-full">
      {/* ...audio element (ref={audioRef}) for the end-of-meeting chime... */}
      <div className="m-5 h-14 flex items-center justify-start gap-2">
        <div className="relative w-14 h-14 p-2 flex items-center justify-center text-center">
          <span className="text-meet-black font-normal text-sm font-roboto select-none">
            {countdownNumber}
          </span>
        </div>
        <span className="font-roboto text-sm tracking-loosest">
          Returning to home screen
        </span>
      </div>
      <div className="mt-6 px-4 flex flex-col items-center gap-8">
        {invalidMeeting && (
          <>{/* ...invalid meeting message... */}</>
        )}
        <div className="flex flex-col items-center justify-center gap-3">
          <div className="flex items-center justify-center gap-2">
            {!invalidMeeting && (
              <Button
                size="sm"
                className="border border-hairline-gray px-[23px] shadow-[border_.28s_cubic-bezier(.4,0,.2,1),box-shadow_.28s_cubic-bezier(.4,0,.2,1)]"
                onClick={rejoinMeeting}
              >
                Rejoin
              </Button>
            )}
            <PlainButton size="sm">Submit feedback</PlainButton>
          </div>
        </div>
        <div className="max-w-100 flex flex-wrap flex-col rounded items-center pl-4 pr-3 pt-4 pb-1 border border-hairline-gray text-left">
          <div className="flex items-center">
            <Image
              alt="Your meeting is safe"
              width={58}
              height={58}
              src="https://www.gstatic.com/meet/security_shield_356739b7c38934eec8fb0c8e93de8543.svg"
            />
            <div className="pl-4">
              <h2 className="text-meet-black text-lg leading-6 tracking-normal font-normal">
                Your meeting is safe
              </h2>
              <p className="font-roboto text-sm text-meet-gray tracking-loosest">
                No one can join a meeting unless invited or admitted by the host
              </p>
            </div>
          </div>
          <div className="pt-2 w-full flex grow justify-end whitespace-nowrap">
            <PlainButton size="sm">Learn more</PlainButton>
          </div>
        </div>
      </div>
    </div>
  );
};
export default MeetingEnd;
In this component, we display a message indicating the meeting has ended, along with a countdown timer that redirects users back to the home page after 60 seconds.
We also provide buttons to rejoin the meeting or return to the home screen, plus an audio cue to signal that the meeting has ended.
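The countdown behaves like a tiny state machine: decrement once per second and stop at zero (which triggers the redirect). The same rule the component applies inside setInterval can be sketched as pure functions — the helper names here are ours, not the tutorial's:

```typescript
// One tick of the redirect countdown: decrement until zero, then stay at zero.
// Equivalent to the component's `(prev) => (prev ? prev - 1 : 0)` updater.
const tick = (countdown: number): number => (countdown > 0 ? countdown - 1 : 0);

// Simulate the countdown after a number of seconds have elapsed.
const countdownAfter = (start: number, elapsedSeconds: number): number => {
  let value = start;
  for (let i = 0; i < elapsedSeconds; i++) value = tick(value);
  return value;
};
```

Clamping at zero matters: without it, a late interval firing after the redirect would push the value negative and the `countdownNumber === 0` effect would never match.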
Putting it all Together
Finally, we'll assemble these components on the lobby page.
Create a file named page.tsx inside the app/[meetingId] directory and add the following code:
'use client';
import { useContext, useEffect, useMemo, useState } from 'react';
import { useRouter } from 'next/navigation';
import {
CallingState,
CallParticipantResponse,
ErrorFromResponse,
GetCallResponse,
useCall,
useCallStateHooks,
useConnectedUser,
} from '@stream-io/video-react-sdk';
import { useChatContext } from 'stream-chat-react';
import { useUser } from '@clerk/nextjs';
import { AppContext, MEETING_ID_REGEX } from '@/contexts/AppProvider';
import { GUEST_ID, tokenProvider } from '@/contexts/MeetProvider';
import Button from '@/components/Button';
import CallParticipants from '@/components/CallParticipants';
import Header from '@/components/Header';
import MeetingPreview from '@/components/MeetingPreview';
import Spinner from '@/components/Spinner';
import TextField from '@/components/TextField';
interface LobbyProps {
params: {
meetingId: string;
};
}
const Lobby = ({ params }: LobbyProps) => {
const { meetingId } = params;
const validMeetingId = MEETING_ID_REGEX.test(meetingId);
const { newMeeting, setNewMeeting } = useContext(AppContext);
const { client: chatClient } = useChatContext();
const { isSignedIn } = useUser();
const router = useRouter();
const connectedUser = useConnectedUser();
const call = useCall();
const { useCallCallingState } = useCallStateHooks();
const callingState = useCallCallingState();
const [guestName, setGuestName] = useState('');
const [errorFetchingMeeting, setErrorFetchingMeeting] = useState(false);
const [loading, setLoading] = useState(true);
const [joining, setJoining] = useState(false);
const [participants, setParticipants] = useState<CallParticipantResponse[]>(
[]
);
const isGuest = !isSignedIn;
useEffect(() => {
const leavePreviousCall = async () => {
if (callingState === CallingState.JOINED) {
await call?.leave();
}
};
const getCurrentCall = async () => {
try {
const callData = await call?.get();
setParticipants(callData?.call?.session?.participants || []);
} catch (e) {
const err = e as ErrorFromResponse;
console.error(err.message);
setErrorFetchingMeeting(true);
}
setLoading(false);
};
const createCall = async () => {
await call?.create({
data: {
members: [
{
user_id: connectedUser?.id!,
role: 'host',
},
],
},
});
setLoading(false);
};
if (!joining && validMeetingId) {
leavePreviousCall();
if (!connectedUser) return;
if (newMeeting) {
createCall();
} else {
getCurrentCall();
}
}
}, [call, callingState, connectedUser, joining, newMeeting, validMeetingId]);
useEffect(() => {
setNewMeeting(newMeeting);
return () => {
setNewMeeting(false);
};
}, [newMeeting, setNewMeeting]);
const heading = useMemo(() => {
if (loading) return 'Getting ready...';
return isGuest ? "What's your name?" : 'Ready to join?';
}, [loading, isGuest]);
const participantsUI = useMemo(() => {
switch (true) {
case loading:
return "You'll be able to join in just a moment";
case joining:
return "You'll join the call in just a moment";
case participants.length === 0:
return 'No one else is here';
case participants.length > 0:
return <CallParticipants participants={participants} />;
default:
return null;
}
}, [loading, joining, participants]);
const updateGuestName = async () => {
try {
await fetch('/api/user', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
user: { id: connectedUser?.id, name: guestName },
}),
});
await chatClient.disconnectUser();
await chatClient.connectUser(
{
id: GUEST_ID,
type: 'guest',
name: guestName,
},
tokenProvider
);
} catch (error) {
console.error(error);
}
};
const joinCall = async () => {
setJoining(true);
if (isGuest) {
await updateGuestName();
}
if (callingState !== CallingState.JOINED) {
await call?.join();
}
router.push(`/${meetingId}/meeting`);
};
if (!validMeetingId)
  return (
    <div className="w-full h-full flex flex-col items-center justify-center mt-[6.75rem]">
      {/* ...invalid meeting code message... */}
    </div>
  );
if (errorFetchingMeeting) {
router.push(`/${meetingId}/meeting-end?invalid=true`);
}
  return (
    <div>
      <Header navItems={false} />
      <main className="lg:h-[calc(100svh-80px)] p-4 mt-3 flex flex-col lg:flex-row items-center justify-center gap-8 lg:gap-0">
        <MeetingPreview />
        <div className="flex flex-col items-center lg:justify-center gap-4 grow-0 shrink-0 basis-112 h-135 mr-2 lg:mb-13">
          <h1 className="text-black text-3xl text-center truncate">
            {heading}
          </h1>
          {isGuest && !loading && (
            <TextField
              label="Name"
              name="name"
              placeholder="Your name"
              value={guestName}
              onChange={(e) => setGuestName(e.target.value)}
            />
          )}
          <span className="text-meet-black font-medium text-center text-sm cursor-default">
            {participantsUI}
          </span>
          {!joining && !loading && (
            <Button onClick={joinCall}>Join now</Button>
          )}
          {(joining || loading) && (
            <div className="h-14 pb-2.5">
              <Spinner />
            </div>
          )}
        </div>
      </main>
    </div>
  );
};
export default Lobby;
A lot is going on here, so let's break down the essential components:
-
User Authentication: The code checks whether the user is signed in using Clerk's useUser hook. If the user is a guest (not signed in), they are prompted to enter their name.
-
State Management: We manage state variables for loading, joining, participant information, and the guest's name.
-
Meeting Validation: The meeting ID is validated to ensure it's in the correct format before proceeding.
-
Fetching or Creating Calls: Depending on whether it's a new meeting (using the newMeeting state), the code either fetches the existing call data or creates a new call. It also handles participants and sets up the call with the Stream Video SDK.
-
Joining the Call: When the user clicks "Join now", the code updates the guest's name (if applicable), joins the call, and navigates the user to the meeting page.
-
User Interface: The lobby displays the MeetingPreview, shows participant information, and provides controls for the user to adjust their settings before joining.
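MEETING_ID_REGEX itself lives in AppProvider and isn't shown in this chunk. Based on the generateMeetingId format from earlier (three-four-three lowercase letters, e.g. abc-defg-hij), it plausibly looks like the pattern below — treat the exact regex as an assumption:

```typescript
// Assumed shape of MEETING_ID_REGEX: matches IDs like "abc-defg-hij",
// i.e. lowercase-letter groups of lengths 3, 4, and 3 joined by hyphens.
const MEETING_ID_PATTERN = /^[a-z]{3}-[a-z]{4}-[a-z]{3}$/;

const isValidMeetingId = (id: string): boolean => MEETING_ID_PATTERN.test(id);
```

Validating the route parameter up front lets the lobby short-circuit before making any call to Stream for an obviously malformed URL.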
Let's also add the API route for updating the guest's name. Create a new file at app/api/user/route.ts with the following code:
import { StreamClient } from '@stream-io/node-sdk';
const API_KEY = process.env.NEXT_PUBLIC_STREAM_API_KEY!;
const SECRET = process.env.STREAM_API_SECRET!;
export async function POST(request: Request) {
const client = new StreamClient(API_KEY, SECRET);
const body = await request.json();
const user = body?.user;
if (!user) {
return Response.error();
}
const response = await client.updateUsersPartial({
users: [
{
id: user.id,
set: {
name: user.name,
role: 'user',
},
},
],
});
return Response.json(response);
}
Here, we create a Stream client and use it to update the guest user's name with the updateUsersPartial function.
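The route's guard can be made stricter than the simple `!user` check. Below is a sketch of a payload validator — this helper is ours, not part of the tutorial, which only checks for a missing user object:

```typescript
// Expected request body for POST /api/user: { user: { id, name } }.
type UpdateUserBody = { user?: { id?: unknown; name?: unknown } };

// Returns true only when both id and name are present and are strings,
// so the handler can return an error before touching the Stream client.
const isValidUserPayload = (body: UpdateUserBody | null | undefined): boolean =>
  Boolean(
    body &&
      body.user &&
      typeof body.user.id === 'string' &&
      typeof body.user.name === 'string'
  );
```

In the route, `if (!isValidUserPayload(body)) return Response.error();` would reject malformed requests with the same early-return shape the tutorial already uses.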
And with that, we've created a fully functional meeting lobby experience!
Note: The meeting route redirects to a 404 page because we haven't created a page for it yet.
Conclusion
In this first part of our series, we have laid the foundation for building a Google Meet clone using Next.js, TailwindCSS, and Stream. We covered the initial setup of the Next.js project, integrated TailwindCSS, and set up authentication with Clerk. We also built the home page and implemented the functionality to create and join meetings.
In the next part, we will build the meeting page and add features like messaging, screen sharing, and recording.
Stay tuned!