The @revrag-ai/embed-react-native SDK adds a voice AI agent to your app: a floating action button (FAB) backed by LiveKit, optional navigation-aware visibility, and a user-context channel to your embed backend (PUT .../user-context/update).

Latest published package version: 1.0.35 (install with npm install @revrag-ai/embed-react-native@latest or your package manager's equivalent). Confirm the current version on npm before you pin a release in production.

What you get out of the box
Realtime voice with the agent through the FAB
Screen and app context for richer conversations (route tracking via EmbedProvider, optional explicit SCREEN_STATE)
Event tracking: host-driven analytics and custom payloads via Embed.Event, plus agent lifecycle signals (AgentEvent)
Best-effort click tracking on touchables when the widget visibility rules allow it (importing the package wires this safely; failures should not crash your app)
Server-driven UI for the FAB via widget_config from device registration
Advanced FAB behavior (route groups, show delays, insets, per-group rules): covered in EmbedProvider advanced - read it once you move past a simple includeScreens list
Import only from the package entry (@revrag-ai/embed-react-native). Do not rely on deep imports from src/ unless your team explicitly supports them.
GestureHandlerRootView at the app root (react-native-gesture-handler)
Peer libraries listed in Installation (LiveKit, Reanimated, Gesture Handler, Async Storage, Lottie, Safe Area, Linear Gradient)
The SDK runs polyfills on import for Hermes / LiveKit safety. Audio and networking must be correctly configured or voice will fail silently or with native errors.
Android and iOS need LiveKit native initialization, permissions, Lottie on Android, Reanimated Babel config, and related steps. Those are easy to miss - work through Appendix: Native platform setup once, then rebuild the app.

After native steps, run:
Call useInitialize near the root (for example in App.tsx). It registers the device, prepares LiveKit on the JS side, and returns { isInitialized, error }.
```tsx
import { useInitialize } from '@revrag-ai/embed-react-native';

export default function App() {
  const { isInitialized, error } = useInitialize({
    apiKey: 'YOUR_EMBED_API_KEY',
    // embedUrl: 'https://your-embed-host', // optional; omit to use SDK default host
  });

  if (error) {
    // Show an error UI or retry
  }
  if (!isInitialized) {
    // Optional: splash / loading until the SDK is ready
  }

  return <YourApp />;
}
```
Wrap the whole app in GestureHandlerRootView (required for gesture handler).
Wrap NavigationContainer with EmbedProvider, passing the same ref you attach to NavigationContainer. The provider mounts the FAB for you. You usually do not import EmbedButton separately.
```tsx
import { useRef } from 'react';
import { NavigationContainer } from '@react-navigation/native';
import { GestureHandlerRootView } from 'react-native-gesture-handler';
import { useInitialize, EmbedProvider } from '@revrag-ai/embed-react-native';
import packageJson from './package.json';

export default function App() {
  const navigationRef = useRef(null);
  const { isInitialized, error } = useInitialize({ apiKey: 'YOUR_EMBED_API_KEY' });

  if (error || !isInitialized) {
    return null; // replace with loading / error UI
  }

  return (
    <GestureHandlerRootView style={{ flex: 1 }}>
      <EmbedProvider
        navigationRef={navigationRef}
        appVersion={packageJson.version}
        includeScreens={['Home', 'Settings']} // omit or [] = all routes (subject to server config)
      >
        <NavigationContainer ref={navigationRef}>{/* navigators */}</NavigationContainer>
      </EmbedProvider>
    </GestureHandlerRootView>
  );
}
```
Until USER_DATA succeeds with a valid app_user_id, many backend updates for other event types may be skipped or cannot be built. The FAB can still render; fix registration if analytics or context look empty.
Mount EmbedButton yourself on screens where you want the FAB, and send SCREEN_STATE manually when the step changes. You still need useInitialize, GestureHandlerRootView, and USER_DATA for full backend behavior.
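In this manual mode you typically build a SCREEN_STATE payload per step and pass it to Embed.Event yourself. The field names in the sketch below are illustrative only — the real contract is in BACKEND_EVENTS.md inside the installed package:

```typescript
// Illustrative SCREEN_STATE payload builder for a multi-step flow.
// Field names (screen_name, step, total_steps, data) are an assumption,
// not the SDK contract — verify against BACKEND_EVENTS.md.
interface ScreenState {
  screen_name: string;
  step: number;
  total_steps: number;
  data: Record<string, unknown>;
}

function buildScreenState(
  screen: string,
  step: number,
  totalSteps: number,
  data: Record<string, unknown> = {}
): ScreenState {
  return { screen_name: screen, step, total_steps: totalSteps, data };
}

// On each step change (SDK call per the event docs below):
// Embed.Event(EmbedEventKeys.SCREEN_STATE, buildScreenState('Checkout', 2, 4, { cart_total: 42 }));
```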
Basic integration uses includeScreens (and optionally embedButtonDelayMs). For route groups, per-group delays, insets, and continuity rules, you need the expanded API. Full prop tables, JSON examples, and behavior notes are in EmbedProvider advanced - treat it as the companion doc whenever the FAB must behave differently by flow or screen.
Recommended: full EmbedProvider advanced guide
Important for production UX. This is the same guide linked from the Recommended card in Step 3 (Wrap the app) above. Use it when you configure groups, delays, and insets, not only includeScreens.
After device registration, the SDK reads widget_config to style the FAB (avatar Lottie/image, colors, copy, corner position, paddings, nudge / inactivity behavior). EmbedProvider props control when the FAB is shown and delays/insets in your app; they do not replace widget_config.

Typical top-level JSON sections map to parsed types such as agentAvatar, agentTextContent, colorPalette, collapsedView (nudge / popup), and position (corner and edge padding). Exact aliases and parsing live in the package under src/api/types/widget.config.types.ts (use that file as the backend contract).
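For orientation, a widget_config might be shaped roughly like this. Only the section names match the parsed types listed above; every field name and value inside them is invented for illustration — treat src/api/types/widget.config.types.ts as the source of truth, not this sketch:

```json
{
  "agentAvatar": { "type": "lottie", "url": "https://example.com/avatar.json" },
  "agentTextContent": { "title": "Ask me anything", "subtitle": "Voice assistant" },
  "colorPalette": { "primary": "#4F46E5", "background": "#FFFFFF" },
  "collapsedView": { "nudgeText": "Need help?", "showAfterMs": 5000 },
  "position": { "corner": "bottom-right", "edgePadding": 16 }
}
```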
EmbedEventKeys (data events) - You call Embed.Event(key, data, onResult?). On success, the SDK **PUT**s user-context updates with type matching the key (user_data, screen_state, custom_event, analytics_data).
AgentEvent (agent events) - The SDK emits these on Embed.event for FAB / voice / mic / popup moments. Subscribe with embedOnAgent or Embed.event.on. On the wire they appear as analytics_data with event_name set to the agent string.
The SDK does not push HTTP acknowledgements back into JS. For “same moment as the write”, use local callbacks below.
```ts
import Embed, { EmbedEventKeys } from '@revrag-ai/embed-react-native';

const onScreen = (data: unknown) => {
  // runs only after a successful API send
};

Embed.on(EmbedEventKeys.SCREEN_STATE, onScreen);
// Embed.off(EmbedEventKeys.SCREEN_STATE, onScreen);
```
User-context PUT payloads use a type aligned with EmbedEventKeys. Agent lifecycle is mirrored as analytics_data with event_name. For field-level contracts, open BACKEND_EVENTS.md in node_modules/@revrag-ai/embed-react-native after install.
“User identity not found” or empty backend context
Cause: USER_DATA not sent or failed; other events need a stored app_user_id.
Fix: Send USER_DATA right after login with onResult. Send other Embed.Event calls after you know registration succeeded (or handle failures explicitly).
Use HTTPS in production. For development-only HTTP, add careful NSAppTransportSecurity exceptions (never ship NSAllowsArbitraryLoads: true for production). See plist examples in the appendix.
Initialize once at the app root with useInitialize; avoid calling it from every screen.
Send USER_DATA as soon as you have a stable app_user_id (typically immediately after login). Use onResult to surface failures.
Debounce high-frequency SCREEN_STATE or analytics calls if your navigation updates rapidly.
Subscribe to embedOnAgent in useEffect and always call embedOffAgent(handle) on cleanup (Strict Mode safe).
Use HTTPS and valid TLS in production; keep cleartext exceptions dev-only.
Hide the FAB on sensitive flows (auth, payments) with includeScreens or grouped visibility config.
Plan FAB visibility early: If product needs groups, delays, or insets beyond a flat screen list, read EmbedProvider advanced before locking UI - retrofitting rules is harder than wiring them during integration.
Log event.type in development when integrating AgentEvent; payloads can vary by call site.
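The debounce advice above can be sketched as a small trailing-edge wrapper. The wrapper itself is ours; the Embed.Event call in the usage comment is from the SDK docs above:

```typescript
// Trailing-edge debounce: only the last call within `waitMs` fires.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage sketch: collapse rapid navigation updates into one SCREEN_STATE send.
// const sendScreenState = debounce(
//   (payload: Record<string, unknown>) => Embed.Event(EmbedEventKeys.SCREEN_STATE, payload),
//   300
// );
```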
```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app needs access to microphone for voice communication with AI agent</string>
<key>NSAppTransportSecurity</key>
<dict>
  <key>NSAllowsArbitraryLoads</key>
  <false/>
  <key>NSAllowsLocalNetworking</key>
  <true/>
</dict>
```
For dev-only HTTP to specific hosts, add NSExceptionDomains entries. Avoid NSAllowsArbitraryLoads: true in production.
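A dev-only exception for a single host might look like the following (dev.example.com is a placeholder for your development backend; the ATS keys themselves are standard Apple Info.plist keys):

```xml
<key>NSAppTransportSecurity</key>
<dict>
  <key>NSAllowsArbitraryLoads</key>
  <false/>
  <key>NSExceptionDomains</key>
  <dict>
    <key>dev.example.com</key>
    <dict>
      <key>NSExceptionAllowsInsecureHTTPLoads</key>
      <true/>
      <key>NSIncludesSubdomains</key>
      <true/>
    </dict>
  </dict>
</dict>
```

Strip this exception from release builds (for example, with a build-phase script or a separate Info.plist per configuration).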