There's a working webcam heart rate monitor running at beamos-2.polsia.app/try. You can open it right now in your browser — no install, no API key required. If you want to build your own instead, this post walks through the complete JavaScript implementation, line by line.

The code is on GitHub at polsia-inc/beamos-2/examples/heart-rate-demo. It's a single index.html file with embedded CSS and JS — nothing to install, nothing to compile. Open it in any modern browser and it works.

Prerequisites: A device with a webcam and a modern browser (Chrome, Firefox, Safari, Edge). No API key needed for the demo endpoint. For production use, grab a key at /keys (free, instant).

What This Builds

A real-time vitals display that reads your heart rate, HRV, SpO2, stress index, and respiration rate from a webcam feed. The architecture is simple: camera → frame capture → API call → render vitals. Every step is under 20 lines of JavaScript.

The underlying technology is remote photoplethysmography (rPPG) — detecting subtle skin color changes caused by blood flow through facial capillaries. The full science is explained here; this post focuses on the code.

1 Configuration

Two variables control everything. API_ENDPOINT points to the Beam AI demo server (no auth required, 10 req/hr limit). SCAN_INTERVAL_MS controls how often we capture and analyze a frame.

JavaScript — Config
// Point at the Beam AI hosted demo (no auth needed for demo usage)
const API_ENDPOINT = 'https://beamos-2.polsia.app/api/demo/analyze';
const API_KEY = null; // set to 'beam_YOUR_KEY' for production

// How often to take a reading, in milliseconds
const SCAN_INTERVAL_MS = 5000;

Swap API_ENDPOINT for your own backend that proxies to the Beam AI API, and set API_KEY to authenticate. The demo endpoint is rate-limited; production usage requires an API key.

2 Opening the Camera

getUserMedia is the Web API for camera and microphone access. We request a video stream with facingMode: 'user' (front-facing camera on mobile), targeting 640×480 resolution — sufficient for rPPG without inflating the payload size.

JavaScript — Camera access
const stream = await navigator.mediaDevices.getUserMedia({
  video: { facingMode: 'user', width: { ideal: 640 }, height: { ideal: 480 } }
});
video.srcObject = stream;
await video.play();

getUserMedia resolves with a MediaStream, which is assigned directly to the video element's srcObject property (no object URL needed). Awaiting video.play() ensures playback has actually started before we try to capture anything.

getUserMedia is a browser API, not a JavaScript library. It's available in all modern browsers with no polyfill needed. If the user denies camera permission, the returned promise rejects; we catch that and surface the error message.
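As a sketch, that catch path can map the spec-defined rejection names (NotAllowedError, NotFoundError) to friendlier messages. The cameraErrorMessage helper and its wording are our illustration; setStatus is the demo's own status line:

```javascript
// Translate getUserMedia rejections into user-facing messages.
// NotAllowedError and NotFoundError come from the Media Capture spec.
function cameraErrorMessage(err) {
  switch (err.name) {
    case 'NotAllowedError':
      return 'Camera permission denied. Allow access and reload.';
    case 'NotFoundError':
      return 'No camera found on this device.';
    default:
      return `Camera error: ${err.message}`;
  }
}

// Wrapping the camera start-up from the previous snippet:
async function startCamera() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: 'user', width: { ideal: 640 }, height: { ideal: 480 } }
    });
    video.srcObject = stream;
    await video.play();
  } catch (err) {
    setStatus(cameraErrorMessage(err), true);
  }
}
```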

3 Capturing a Frame

The video element streams continuous frames. To capture one, we draw the current frame onto a hidden <canvas> element and export it as a base64 JPEG.

JavaScript — Frame capture
const ctx = canvas.getContext('2d');
canvas.width  = video.videoWidth  || 640;
canvas.height = video.videoHeight || 480;

// Un-mirror the canvas (the video preview is mirrored with CSS)
ctx.save();
ctx.scale(-1, 1);
ctx.drawImage(video, -canvas.width, 0, canvas.width, canvas.height);
ctx.restore();

// Export as base64 JPEG at 85% quality
const frameDataUrl = canvas.toDataURL('image/jpeg', 0.85);

The ctx.scale(-1, 1) flip undoes the CSS mirror transform on the video preview. Without this, captured frames would be laterally inverted — which would degrade face detection accuracy on the API side.

The JPEG quality setting (0.85) is a tradeoff. Lower quality means smaller payloads and faster API calls, but JPEG artifacts can introduce noise into the color signal. 0.85 is a good default — small file, clean signal.
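To compare quality settings empirically, you can log the decoded size of each captured frame. A small sketch; the dataUrlBytes helper is ours, not part of the demo:

```javascript
// Approximate the binary size of a base64 data URL payload.
// Every 4 base64 characters encode 3 bytes; '=' padding subtracts.
function dataUrlBytes(dataUrl) {
  const base64 = dataUrl.slice(dataUrl.indexOf(',') + 1);
  const padding = (base64.match(/=+$/) || [''])[0].length;
  return Math.floor((base64.length * 3) / 4) - padding;
}

// After capture, e.g.:
// console.log(`frame payload ~ ${(dataUrlBytes(frameDataUrl) / 1024).toFixed(1)} KB`);
```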

4 Calling the API

The analyzeFrame() function sends the captured frame to the Beam AI endpoint and renders the response. It's the core of the app.

JavaScript — API call
async function analyzeFrame() {
  const frameDataUrl = captureFrame();

  // Show scanning indicator while the API processes the frame
  scanRing.classList.remove('hidden');
  setStatus('Analyzing…');

  try {
    const headers = { 'Content-Type': 'application/json' };
    if (API_KEY) headers['X-API-Key'] = API_KEY;

    const response = await fetch(API_ENDPOINT, {
      method: 'POST',
      headers,
      body: JSON.stringify({ frame: frameDataUrl })
    });

    // Handle rate limit gracefully — show reset time, pause scanning
    if (response.status === 429) {
      const data = await response.json();
      setStatus(`Rate limit — resets at ${new Date(data.reset_at).toLocaleTimeString()}`, true);
      clearInterval(scanTimer);
      return;
    }

    if (!response.ok) throw new Error(`HTTP ${response.status}`);

    const data = await response.json();
    renderVitals(data.vitals, data.meta);
    setStatus(`Updated · ${new Date().toLocaleTimeString()} · ${data.meta.calls_remaining} scans remaining`);

  } catch (err) {
    setStatus(`Request failed: ${err.message}`, true);
  } finally {
    scanRing.classList.add('hidden');
  }
}

The request body is a simple JSON object with a frame field containing the base64 JPEG string. The API returns a JSON response with two top-level keys:

API Response Shape
{
  "vitals": {
    "heart_rate": 72,
    "confidence": 0.93,
    "hrv": 48,
    "spo2": 98,
    "stress_index": 3.2,
    "respiration_rate": 15
  },
  "meta": {
    "calls_remaining": 8,
    "processing_time_ms": 847
  }
}

5 Rendering the Results

renderVitals() receives the API response and updates the DOM. The confidence score gates whether we display a reading — low-confidence results are surfaced with a prompt to retry under better conditions.

JavaScript — Vitals rendering
function renderVitals(vitals, meta) {
  // Primary metric — large BPM display
  set(bpmValue,    vitals.heart_rate,       null,     '--placeholder');
  set(hrvValue,    vitals.hrv,              'ms',     '--placeholder');
  set(spo2Value,   vitals.spo2,             '%',      '--placeholder');
  set(stressValue, vitals.stress_index,     null,     '--placeholder');
  set(respValue,   vitals.respiration_rate, 'br/min', '--placeholder');

  // Confidence bar — visual quality indicator for the user
  const pct = Math.round((vitals.confidence || 0) * 100);
  confidencePct.textContent = `${pct}%`;
  confidenceBar.style.width = `${pct}%`;
}

// Helper: update a metric element, removing placeholder style
function set(el, value, unit, placeholderClass) {
  if (value == null) return;
  el.classList.remove(placeholderClass);
  const unitEl = el.querySelector('.metric-unit');
  if (unitEl) {
    el.childNodes[0].textContent = value;
  } else {
    el.textContent = value;
    if (unit) {
      const span = document.createElement('span');
      span.className = 'metric-unit';
      span.textContent = unit;
      el.appendChild(span);
    }
  }
}

The set() helper strips the placeholder styling (dimmed gray text) and injects the actual value. When confidence is low, you can choose to leave the values as placeholders and show a "Try again in better lighting" message instead of displaying potentially inaccurate readings.
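A sketch of that gate; the 0.7 threshold and the shouldDisplay helper are illustrative choices, not API recommendations:

```javascript
// Only render readings whose confidence clears a threshold.
const CONFIDENCE_THRESHOLD = 0.7; // illustrative; tune for your use case

function shouldDisplay(vitals, threshold = CONFIDENCE_THRESHOLD) {
  return typeof vitals.confidence === 'number' && vitals.confidence >= threshold;
}

// In analyzeFrame(), after parsing the response:
// if (shouldDisplay(data.vitals)) {
//   renderVitals(data.vitals, data.meta);
// } else {
//   setStatus('Low confidence. Try again in better lighting.', true);
// }
```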

6 The Continuous Scanning Loop

After starting the camera, we wait one second for the feed to stabilize, then start a repeating timer that calls analyzeFrame() every SCAN_INTERVAL_MS milliseconds.

JavaScript — Scanning loop
function startScanning() {
  analyzeFrame(); // immediate first scan
  scanTimer = setInterval(analyzeFrame, SCAN_INTERVAL_MS);
}

// In the button click handler:
setTimeout(startScanning, 1000); // warm up camera first

The setInterval approach gives you periodic readings. For a single-snapshot app (like a health check widget), you'd call analyzeFrame() once on a button click instead. The demo uses continuous scanning so users can watch their vitals update in real time as they relax.

Production tip: For real-world apps, collect multiple frames over 15–30 seconds and average the results to smooth out noise. The quickstart guide covers the multi-frame averaging pattern and confidence gating in detail.
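That pattern can be sketched as a sliding window with a confidence-weighted mean. The window size and weighting here are our illustrative choices; the quickstart's recommended approach may differ:

```javascript
// Keep the last N readings and report a confidence-weighted average.
const WINDOW_SIZE = 6; // e.g. 6 readings at a 5 s interval covers ~30 s

const readings = [];

function addReading(vitals) {
  readings.push(vitals);
  if (readings.length > WINDOW_SIZE) readings.shift(); // drop the oldest
}

function averagedHeartRate() {
  const usable = readings.filter(r => typeof r.heart_rate === 'number');
  const totalWeight = usable.reduce((sum, r) => sum + (r.confidence || 0), 0);
  if (totalWeight === 0) return null; // nothing trustworthy yet
  const weighted = usable.reduce((sum, r) => sum + r.heart_rate * (r.confidence || 0), 0);
  return Math.round(weighted / totalWeight);
}
```

Feed addReading() from analyzeFrame() and render averagedHeartRate() instead of the raw per-frame value to smooth out frame-to-frame noise.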

What the Full App Looks Like

All together — camera setup, frame capture, API call, and rendering — the JavaScript is under 130 lines. The entire project is a single HTML file with no build step, no npm packages, and no server-side code.

You can find the complete file at github.com/Polsia-Inc/beamos-2/tree/main/examples/heart-rate-demo. Clone it, open index.html, and it runs.

To use your own API key and bypass the demo rate limit, edit the two config variables at the top:

JavaScript — Production config
const API_ENDPOINT = 'https://beamos-2.polsia.app/api/demo/analyze'; // or your own proxy
const API_KEY = 'beam_YOUR_KEY_HERE'; // from /keys

Production keys get higher rate limits and unlock the full vitals response including HRV, stress index, and SpO2.

See It Running Right Now

The live demo has the complete working implementation. Open it, point your camera at your face, and watch the vitals update. Or grab an API key and build your own version.

What's Next

From here you can extend the demo in several directions: