AllOnEars API

Real-time speech intelligence infrastructure

Quick Start

Get up and running with AllOnEars in under 5 minutes. Install the React SDK, connect to the WebSocket stream, and start receiving real-time context cards.

bash
# Install the React SDK
npm install @allonears/react-sdk

javascript
// Use in your React app (hooks must run inside a component)
import { useAllOnEars } from '@allonears/react-sdk'

const { startStream, transcript, cards } =
  useAllOnEars('your_api_key', {
    domain: 'general',
    aggressiveness: 'medium'
  })

// Start streaming audio
await startStream()
// Cards arrive automatically via WebSocket
// as the conversation flows

Authentication

All API requests require a valid JWT (JSON Web Token) generated via the /auth/token endpoint. Pass the token in the Authorization header for REST, or as a query parameter for WebSocket connections.

⚠ Never expose your API key in client-side code. Use environment variables and generate JWT tokens server-side.

bash
# Generate a JWT token
curl -X POST https://api.allonears.ai/auth/token \
  -H "Content-Type: application/json" \
  -d '{"api_key": "your_api_key"}'

# Use in REST requests
curl -X GET https://api.allonears.ai/v1/sessions \
  -H "Authorization: Bearer <jwt_token>"

# Use in WebSocket connections
wss://api.allonears.ai/v1/stream?token=<jwt_token>
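The curl flow above can be wrapped in a small server-side helper. This sketch assumes the /auth/token response carries the JWT in a `token` field (the field name is not shown in the docs) and targets Node 18+ for built-in fetch:

```javascript
// Server-side sketch: exchange an API key for a JWT, then build
// the Authorization header used by the REST examples above.
async function fetchToken(apiKey) {
  const res = await fetch('https://api.allonears.ai/auth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ api_key: apiKey })
  })
  if (!res.ok) throw new Error(`token request failed: ${res.status}`)
  const { token } = await res.json() // assumed response shape
  return token
}

function authHeaders(jwt) {
  return { Authorization: `Bearer ${jwt}` }
}
```

Keeping this exchange on your server is what the warning above means in practice: the browser only ever sees the short-lived JWT, never the API key.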

Sessions

A session is configured via a WebSocket event after connecting to the stream. Send a session.config message to set the domain, aggressiveness, and buffer behavior for your use case.

WS session.config

domain (string, required)
  Context domain: medical, sales, lecture, collaboration, entertainment, general
aggressiveness (string)
  Suggestion frequency: low, medium, high (default: medium)
language (string)
  ISO 639-1 language code or 'auto' for detection (default: en)
buffer_size (number)
  Audio buffer size in bytes (default: 2048)
overflow_behavior (string)
  Behavior when the buffer fills, e.g. drop_oldest (shown in the example below)
data_residency (string)
  Data region: us, eu, apac (default: us)
javascript
// Configure session after WebSocket connect
ws.send(JSON.stringify({
  type: 'session.config',
  domain: 'medical',
  aggressiveness: 'low',
  language: 'en',
  buffer_size: 2048,
  overflow_behavior: 'drop_oldest'
}))

// Send user feedback for RLHF
ws.send(JSON.stringify({
  type: 'interaction.feedback',
  card_id: 'c-998877',
  action: 'clicked' // or 'dismissed'
}))
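Because invalid config values only surface after the socket is open, it can help to validate them client-side before sending. A minimal guard using the allowed values listed above (the helper name is illustrative, not part of the SDK):

```javascript
const DOMAINS = ['medical', 'sales', 'lecture', 'collaboration', 'entertainment', 'general']
const AGGRESSIVENESS = ['low', 'medium', 'high']

// Build a session.config message, filling the documented defaults
// and rejecting values the API does not list
function buildSessionConfig({
  domain = 'general',
  aggressiveness = 'medium',
  language = 'en',
  buffer_size = 2048
} = {}) {
  if (!DOMAINS.includes(domain)) throw new Error(`unknown domain: ${domain}`)
  if (!AGGRESSIVENESS.includes(aggressiveness)) {
    throw new Error(`unknown aggressiveness: ${aggressiveness}`)
  }
  return { type: 'session.config', domain, aggressiveness, language, buffer_size }
}
```

Usage: `ws.send(JSON.stringify(buildSessionConfig({ domain: 'medical' })))`.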

Streaming

Stream audio via WebSocket for real-time transcription and context card generation. Send 16-bit PCM audio chunks at 16kHz sample rate.

WS wss://api.allonears.ai/v1/stream

Server → Client Events:

transcript.partial (event)
  Interim transcription result (real-time)
transcript.final (event)
  Final transcription with confidence score
context.card (event)
  Relevant visual context card (image, text, graph, map)
system.status (event)
  Pipeline status: idle, processing, or drift_detected
javascript
// Connect to WebSocket
const ws = new WebSocket(
  'wss://api.allonears.ai/v1/stream?token=<jwt>'
)

// Configure session on connect
ws.onopen = () => {
  ws.send(JSON.stringify({
    type: 'session.config',
    domain: 'medical',
    aggressiveness: 'medium'
  }))
}

// Stream audio chunks (PCM16 @ 16kHz).
// Note: MediaRecorder emits compressed audio (e.g. WebM/Opus),
// not raw PCM, so capture samples with the Web Audio API instead.
// Here pcmWorkletNode is a hypothetical AudioWorklet node that
// posts 16-bit PCM buffers:
pcmWorkletNode.port.onmessage = (e) => {
  ws.send(e.data)
}

// Receive events
ws.onmessage = (event) => {
  const data = JSON.parse(event.data)
  switch (data.type) {
    case 'transcript.partial':
      console.log('Interim:', data.text)
      break
    case 'transcript.final':
      console.log('Final:', data.text)
      break
    case 'context.card':
      console.log('Card:', data.title)
      break
    case 'system.status':
      console.log('Status:', data.status)
      break
  }
}
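The stream expects 16-bit PCM, but the Web Audio API delivers Float32 samples in [-1, 1], so a conversion step is usually needed before sending. A standard clamp-and-scale sketch:

```javascript
// Convert Float32 audio samples ([-1, 1]) to 16-bit signed PCM
function floatTo16BitPCM(float32) {
  const out = new Int16Array(float32.length)
  for (let i = 0; i < float32.length; i++) {
    const s = Math.max(-1, Math.min(1, float32[i]))
    // Scale asymmetrically: the int16 range is [-32768, 32767]
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff
  }
  return out
}

// e.g. ws.send(floatTo16BitPCM(samples).buffer)
```

Resampling to 16kHz (when the capture rate differs) is a separate step; `AudioContext({ sampleRate: 16000 })` handles it in most browsers.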

Insights

Context cards are streamed in real-time via WebSocket as the conversation flows. Each card contains a trigger entity, visual content, source information, and a relevance score.

WS context.card

trigger_entity (string)
  The spoken entity that triggered this card
card_type (string)
  Visual type: image, text, graph, or map
content_url (string)
  URL of the visual content asset
relevance_score (number)
  Relevance confidence score (0-1)
source (string)
  Source attribution for the content
json
// context.card event payload
{
  "type": "context.card",
  "id": "c-998877",
  "trigger_entity": "aerodynamic loads",
  "card_type": "image",
  "title": "Aerodynamic Load Distribution",
  "content_url": "https://cdn.allonears.ai/...",
  "summary": "Distribution of pressure...",
  "source": "NASA Technical Reports",
  "relevance_score": 0.88
}

// Card types: image, text, graph, map
// Cards are pushed automatically —
// no polling or REST calls needed
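Since cards stream in continuously, clients often gate what they render on relevance_score. A small sketch of that pattern (the 0.75 threshold is an arbitrary example, not an API recommendation):

```javascript
// Keep only cards at or above a relevance threshold,
// highest-scoring first
function relevantCards(cards, threshold = 0.75) {
  return cards
    .filter(card => card.relevance_score >= threshold)
    .sort((a, b) => b.relevance_score - a.relevance_score)
}
```

Pairing a threshold like this with the aggressiveness setting lets the UI stay quiet in low-stakes conversations and surface more in high-signal domains.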

Webhooks

Receive server-side event notifications via HTTP webhooks. Configure a webhook URL in your session config to receive context card and transcript events for backend processing.

Event Types:

context.card.created (event)
  New context card generated from speech analysis
transcript.final (event)
  Final transcript segment available
session.ended (event)
  Session completed, full transcript ready
json
// Webhook payload example
{
  "event": "context.card.created",
  "session_id": "ses_a1b2c3d4e5",
  "timestamp": "2026-03-15T14:32:00Z",
  "data": {
    "id": "c-998877",
    "trigger_entity": "myocardial infarction",
    "card_type": "image",
    "title": "ECG Reference Chart",
    "relevance_score": 0.94,
    "source": "PubMed"
  },
  "signature": "sha256=a1b2c3..."
}

SDKs & Libraries

Official client libraries for integrating AllOnEars into your application.

React (Web): stable
React Native: beta
Swift: beta
Kotlin: alpha
bash
# React (Web)
npm install @allonears/react-sdk

# React Native (Mobile)
npm install @allonears/react-native-sdk

# Swift (iOS/macOS)
# Add to Package.swift:
.package(
  url: "https://github.com/allonears/swift-sdk",
  from: "0.1.0"
)

# Kotlin (Android)
# Add to build.gradle:
implementation 'com.allonears:sdk:0.1.0'