If you can build a web page, you can build a webAI app. Apps are single HTML files that run inside the platform and get access to on-device AI, real-time collaboration, user identity, and more — with zero server infrastructure. This guide walks you through the entire process, starting with the simplest possible app and progressively adding platform features.

Before you start

You don’t need any special tooling. For the quick-start path, all you need is a text editor. For framework-based apps (React, Vue), you’ll want Node.js installed. Here’s a quick look at the two paths:
Path                      Best for                                          Build step?
Vanilla HTML              Prototypes, simple tools, learning the platform   No
Framework (React / Vue)   Production apps, complex UIs, team projects       Yes (Vite)
Start with vanilla HTML to learn the concepts, then move to a framework when your app outgrows a single hand-written file.

Step 1: Build your first app

A webAI app is just an HTML file. Here’s a complete, working example that demonstrates AI inference, identity, and theme integration — all in a single file:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>My First webAI App</title>
  <style>
    :root { --bg: #ffffff; --text: #1a1a1a; --border: #e0e0e0; --accent: #3b82f6; }
    [data-theme="dark"] { --bg: #1a1a1a; --text: #f0f0f0; --border: #333; --accent: #60a5fa; }
    * { box-sizing: border-box; margin: 0; padding: 0; }
    body { font-family: system-ui, sans-serif; background: var(--bg); color: var(--text); padding: 24px; }
    h1 { font-size: 1.5rem; margin-bottom: 8px; }
    .status { font-size: 0.85rem; color: var(--accent); margin-bottom: 16px; }
    textarea { width: 100%; height: 80px; padding: 12px; border: 1px solid var(--border);
      border-radius: 8px; background: var(--bg); color: var(--text); font-size: 1rem; resize: vertical; }
    button { margin-top: 8px; padding: 8px 16px; border: none; border-radius: 6px;
      background: var(--accent); color: #fff; font-size: 0.9rem; cursor: pointer; }
    button:disabled { opacity: 0.5; cursor: not-allowed; }
    pre { margin-top: 16px; padding: 12px; background: var(--border); border-radius: 8px;
      white-space: pre-wrap; font-size: 0.9rem; min-height: 40px; }
  </style>
</head>
<body>
  <h1 id="greeting">webAI App</h1>
  <p class="status" id="status">Checking AI status...</p>
  <textarea id="prompt" placeholder="Ask the AI something..."></textarea>
  <button id="run" disabled>Ask</button>
  <pre id="output"></pre>

  <script>
    // --- Platform access ---
    const getHost = () => window.OasisHost ?? window.parent?.OasisHost ?? null;
    const getIdentity = () => window.UserIdentityManager ?? window.parent?.UserIdentityManager ?? null;

    // --- Theme support ---
    window.addEventListener('message', (e) => {
      if (e.data?.type === 'canvas:theme') {
        document.documentElement.setAttribute('data-theme', e.data.theme);
      }
    });

    // --- Identity ---
    (async () => {
      const im = getIdentity();
      if (im) {
        const { displayName } = await im.getOrCreateIdentity();
        document.getElementById('greeting').textContent = `Hello, ${displayName}`;
      }
    })();

    // --- AI status polling ---
    function probeState() {
      const host = getHost();
      if (!host?.getStatus) return 'waiting';
      const s = host.getStatus();
      if (s?.lastModel) return 'ready';
      if (s?.loadingModel || s?.isGenerating) return 'loading';
      return 'waiting';
    }

    setInterval(() => {
      const state = probeState();
      document.getElementById('status').textContent =
        state === 'ready' ? 'AI ready' : state === 'loading' ? 'AI loading...' : 'No model loaded';
      document.getElementById('run').disabled = state !== 'ready';
    }, 1200);

    // --- Ask AI ---
    document.getElementById('run').addEventListener('click', async () => {
      const host = getHost();
      if (!host) return;
      const prompt = document.getElementById('prompt').value.trim();
      if (!prompt) return;

      const btn = document.getElementById('run');
      const output = document.getElementById('output');
      btn.disabled = true;
      output.textContent = '';

      const release = await host.acquire({ warmRuntime: true });
      try {
        await host.request(prompt, {
          systemPrompt: 'You are a helpful assistant. Be concise.',
          maxTokens: 512,
          temperature: 0.7,
          onToken: (token) => { output.textContent += token; },
        });
      } catch (err) {
        output.textContent = 'Error: ' + err.message;
      } finally {
        if (release) release();
        btn.disabled = false;
      }
    });
  </script>
</body>
</html>
Save this as index.html and upload it to webAI — you’ll have a working AI chat app with theme support and user identity. The sections below break down each platform API in detail.

Step 2: Understand the app model

Before adding platform features, it helps to know how your app fits into webAI.

How apps run

When you upload an app, the webAI shell:
  1. Stores your HTML in the browser’s localStorage
  2. Registers it in the launcher alongside built-in apps
  3. Loads it into a sandboxed iframe when launched
  4. Injects platform globals (OasisHost, CollaborationManager, etc.) into the iframe’s window
Your app runs entirely client-side. There is no server.

Key constraints

Constraint             What it means
Single file            All JS, CSS, and assets must be inlined into one HTML file
No external fetches    Don’t depend on CDN-hosted libraries at runtime
Iframe sandbox         Your app runs in an iframe — some browser APIs may be restricted
Graceful degradation   Shell APIs are null outside webAI; always check before calling

Deep dive: App architecture

Learn more about the single-file model, iframe sandbox, and recommended project structure.

Step 3: Connect to platform APIs

The webAI shell injects JavaScript globals into your app’s window. These give you access to AI inference, collaboration, identity, encryption, and navigation — no imports or installs needed.

The access pattern

Every API follows the same pattern: check window, then fall back to window.parent:
const getShellAPI = (name) =>
  window[name] ?? window.parent?.[name] ?? null;
Use this helper to access any platform API:
const OasisHost            = getShellAPI('OasisHost');       // On-device AI
const CollaborationManager = getShellAPI('CollaborationManager'); // P2P spaces
const UserIdentityManager  = getShellAPI('UserIdentityManager'); // Identity
const ApogeeShell          = getShellAPI('ApogeeShell');     // Navigation
const E2ECrypto            = getShellAPI('E2ECrypto');       // Encryption
All of these return null when your app runs outside the webAI shell (e.g., during local development). Always check for null before calling any methods.
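To keep those null checks from spreading through your code, you can wrap the lookup in a small guard helper. This is a sketch, not a platform API — `resolveShellAPI` and `withShellAPI` are illustrative names, and the scope is passed in explicitly (use `window` in a real app) so the helper stays testable outside a browser:

```javascript
// Resolve a shell API from an explicit scope object (pass `window` in an app).
function resolveShellAPI(scope, name) {
  return scope?.[name] ?? scope?.parent?.[name] ?? null;
}

// Run `fn` only when the API exists; otherwise return `fallback`.
function withShellAPI(scope, name, fn, fallback = null) {
  const api = resolveShellAPI(scope, name);
  return api ? fn(api) : fallback;
}
```

During local development every call simply yields the fallback, e.g. `withShellAPI(window, 'OasisHost', host => host.getStatus(), null)`.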

Available platform APIs

API                    What it does                                                          Reference
OasisHost              Run AI inference on-device — stream completions, check model status   OasisHost API →
CollaborationManager   Host/join spaces, sync state between users                            Collaboration API →
UserIdentityManager    Access the user’s device identity (ODID) and display name             Identity API →
ApogeeShell            Navigate between shell views (launcher, whiteboard, etc.)             Navigation API →
E2ECrypto              Encrypt and decrypt data between peers                                Encryption →

Deep dive: Accessing shell APIs

See the full access pattern, the recommended integration module, and framework-specific examples.

Step 4: Add platform features

Now that you know how to access APIs, let’s put them to use. Each section below is independent — add only what your app needs.

Add on-device AI

The OasisHost API lets your app run language model inference directly on the user’s device. No cloud, no API keys. The flow is: acquire the runtime, send a prompt, stream tokens, and release the runtime.
async function askAI(prompt) {
  const host = window.OasisHost ?? window.parent?.OasisHost;
  if (!host) {
    console.warn('AI not available outside webAI.');
    return null;
  }

  const release = await host.acquire({ warmRuntime: true });
  try {
    let result = '';
    await host.request(prompt, {
      systemPrompt: 'You are a helpful assistant.',
      maxTokens: 2048,
      temperature: 0.7,
      onToken: (token) => {
        result += token;
        document.getElementById('output').textContent = result;
      },
    });
    return result;
  } finally {
    if (release) release();
  }
}
Key points:
  • acquire() locks the AI runtime for your app. Always release it when done.
  • onToken fires for each generated token, enabling real-time streaming UI.
  • getStatus() tells you whether a model is loaded (ready), loading (loading), or unavailable (waiting).
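If you’d rather await readiness than poll in your UI loop, the status check can be wrapped in a promise. A sketch under the assumption that you already have a state probe like `probeState` from the example above; `getState` is injected so the helper works anywhere:

```javascript
// Resolve once getState() reports 'ready', or reject after timeoutMs.
function waitForReady(getState, { intervalMs = 500, timeoutMs = 30000 } = {}) {
  return new Promise((resolve, reject) => {
    const started = Date.now();
    const id = setInterval(() => {
      if (getState() === 'ready') {
        clearInterval(id);
        resolve();
      } else if (Date.now() - started > timeoutMs) {
        clearInterval(id);
        reject(new Error('AI runtime did not become ready in time'));
      }
    }, intervalMs);
  });
}
```

Usage: `await waitForReady(probeState); btn.disabled = false;` — the timeout keeps a dev-mode tab (where no model ever loads) from hanging forever.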

Full reference: OasisHost API

Covers runtime status polling, request parameters, streaming patterns, and framework examples.

Identify the user

Access the current user’s display name and device identity (ODID) with UserIdentityManager:
async function getIdentity() {
  const im = window.UserIdentityManager ?? window.parent?.UserIdentityManager;
  if (!im) return { odid: 'local-dev', displayName: 'You (dev mode)' };
  return im.getOrCreateIdentity();
}

const user = await getIdentity();
console.log(`Hello, ${user.displayName}`);
There are no accounts or usernames in webAI. Every device has a unique, auto-generated ODID that serves as its identity.
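Because the ODID is stable per device, it also makes a convenient namespace for anything your app persists locally. The key format below is an assumption for illustration, not a platform convention:

```javascript
// Build a per-device localStorage key: "<app>:<odid>:<field>".
function storageKeyFor(odid, appName, field) {
  return `${appName}:${odid}:${field}`;
}

// In an app (sketch, using the getIdentity helper above):
// const { odid } = await getIdentity();
// localStorage.setItem(storageKeyFor(odid, 'my-notes', 'draft'), text);
```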

Full reference: Identity & encryption

Covers ODID, auth headers for P2P requests, and the E2ECrypto API.

Make it collaborative

CollaborationManager lets your app create and join real-time peer-to-peer spaces. The platform handles all networking, state sync, persistence, and conflict resolution.
const collab = window.CollaborationManager ?? window.parent?.CollaborationManager;

async function startRoom() {
  if (!collab) return;
  const state = await collab.hostRoom({ roomName: 'My Space', password: null });
  console.log('Space created! Code:', state.roomCode);
}

async function enterRoom(code) {
  if (!collab) return;
  return collab.joinRoom(code, null);
}
The collaboration flow:
  1. One user hosts a space — this generates a space code
  2. Others join using that code
  3. Participants exchange state through the platform
  4. The platform broadcasts changes, persists state, and resolves conflicts

Full reference: Collaboration API

Covers hosting, joining, leaving spaces, state management, and framework integration examples.

Step 5: Set up a framework project (optional)

For anything beyond a simple prototype, a framework with a bundler gives you a better development experience. Here’s how to set up a project that produces the single-file output webAI requires.

Scaffold the project

npm create vite@latest my-app -- --template react
cd my-app
npm install
npm install --save-dev vite-plugin-singlefile

Configure the bundler

Update vite.config.js to inline all assets into a single HTML file:
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import { viteSingleFile } from 'vite-plugin-singlefile';

export default defineConfig({
  plugins: [react(), viteSingleFile()],
  build: { outDir: 'dist' },
});

Create the integration module

Instead of scattering window.OasisHost ?? window.parent?.OasisHost throughout your codebase, create a src/webai.js file as your single integration layer:
export const getShellAPI = (name) =>
  window[name] ?? window.parent?.[name] ?? null;

export const getOasisHost            = () => getShellAPI('OasisHost');
export const getApogeeShell          = () => getShellAPI('ApogeeShell');
export const getCollaborationManager = () => getShellAPI('CollaborationManager');
export const getUserIdentityManager  = () => getShellAPI('UserIdentityManager');
export const getE2ECrypto            = () => getShellAPI('E2ECrypto');

export function getOasisState() {
  const host = getOasisHost();
  if (!host?.getStatus) return 'waiting';
  const s = host.getStatus();
  if (s?.lastModel) return 'ready';
  if (s?.loadingModel || s?.isGenerating) return 'loading';
  return 'waiting';
}

export async function streamCompletion(prompt, systemPrompt, onToken) {
  const host = getOasisHost();
  if (!host) throw new Error('Oasis AI is not available in this environment.');
  const release = await host.acquire({ warmRuntime: true });
  try {
    return await host.request(prompt, {
      systemPrompt: systemPrompt ?? '',
      maxTokens: 2048,
      temperature: 0.7,
      onToken,
    });
  } finally {
    if (release) release();
  }
}

Then import from it anywhere in your app:
import { getOasisHost, streamCompletion } from './webai';

Develop locally

npm run dev
Your app opens in a normal browser tab. Shell APIs will be null — this is expected. Design your UI with graceful fallbacks so you can iterate without the full shell running.
Show a subtle banner in dev mode like “Running outside webAI — AI and collaboration unavailable.” This makes it obvious which features need the shell while you focus on building your UI.
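One way to implement that banner is to compute the message from whichever APIs resolved to null. A sketch — `devBannerMessage` is an illustrative helper, and rendering the result (say, into a fixed-position div) is up to your app:

```javascript
// Given a map of API name -> resolved value (or null), return a banner
// message listing what's unavailable, or null when everything is present.
function devBannerMessage(apis) {
  const missing = Object.entries(apis)
    .filter(([, api]) => api == null)
    .map(([name]) => name);
  if (missing.length === 0) return null;
  return `Running outside webAI — unavailable: ${missing.join(', ')}`;
}

// e.g. devBannerMessage({ OasisHost: getShellAPI('OasisHost'),
//                         CollaborationManager: getShellAPI('CollaborationManager') });
```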

Deep dive: App lifecycle

Covers the full development workflow, project structure, bundler config, and the upload script in detail.

Step 6: Upload your app

When your app is ready, you upload it to the webAI shell. The process depends on how you built it.

Vanilla HTML apps

Use the New App flow directly in the webAI launcher:
  1. Open the launcher
  2. Click New App
  3. Select your .html file
  4. Choose an icon and color
  5. Give it a name
Your app appears in the launcher immediately.

Framework apps

Framework apps need a build step first to produce the single-file output.

1. Build the single-file output

npm run build
This produces dist/index.html — a single self-contained HTML file with all JS, CSS, and assets inlined.

2. Generate the upload script

npm run upload
This reads your built HTML and generates a browser console script, or installs directly if the webAI desktop app is running.

3. Install via browser console (if needed)

If direct install isn’t available:
  1. Copy the generated script
  2. Open your webAI shell in the browser
  3. Open DevTools (Cmd+Option+I or F12)
  4. Paste the script into the Console and press Enter
  5. Refresh the launcher

Deep dive: Upload scripts and versioning

See how to create the upload script, how versioning works, and direct install via the desktop app.

Step 7: Share it

Once your app is uploaded, sharing is as simple as being in a space:
  1. Open your app in a space
  2. Right-click it and choose Share App, or pin it for the group
  3. Other space members receive it automatically
There’s no app store, no publish step, no approval process. The space is the distribution channel.

Guide: Share apps

Learn all the ways to distribute apps — direct sharing, pinning, and responding to requests.

Putting it all together

Here’s a complete example of a framework-based app that uses AI, identity, and navigation — the most common combination:
import { useState, useEffect } from 'react';
import { getOasisState, streamCompletion } from './webai';

const getUserIdentityManager = () =>
  window.UserIdentityManager ?? window.parent?.UserIdentityManager ?? null;

export default function App() {
  const [aiState, setAiState] = useState('waiting');
  const [userName, setUserName] = useState('');
  const [prompt, setPrompt] = useState('');
  const [output, setOutput] = useState('');
  const [generating, setGenerating] = useState(false);

  useEffect(() => {
    setAiState(getOasisState());
    const id = setInterval(() => setAiState(getOasisState()), 1200);
    return () => clearInterval(id);
  }, []);

  useEffect(() => {
    const im = getUserIdentityManager();
    if (im) {
      im.getOrCreateIdentity().then((id) => setUserName(id.displayName));
    } else {
      setUserName('Developer');
    }
  }, []);

  async function handleSubmit(e) {
    e.preventDefault();
    if (!prompt.trim() || generating) return;
    setGenerating(true);
    setOutput('');
    try {
      await streamCompletion(
        prompt,
        'You are a helpful assistant.',
        (token) => setOutput((prev) => prev + token)
      );
    } catch (err) {
      setOutput('Error: ' + err.message);
    } finally {
      setGenerating(false);
    }
  }

  return (
    <div style={{ padding: '1.5rem', fontFamily: 'system-ui, sans-serif' }}>
      <header>
        <h1>My AI App</h1>
      </header>

      <p>Hello, {userName}! AI status: <strong>{aiState}</strong></p>

      <form onSubmit={handleSubmit}>
        <textarea
          value={prompt}
          onChange={(e) => setPrompt(e.target.value)}
          placeholder="Ask something..."
          rows={3}
          style={{ width: '100%', fontSize: '1rem' }}
        />
        <button type="submit" disabled={generating || aiState !== 'ready'}>
          {generating ? 'Generating...' : 'Ask AI'}
        </button>
      </form>

      {output && <pre style={{ whiteSpace: 'pre-wrap', marginTop: '1rem' }}>{output}</pre>}
    </div>
  );
}

Best practices

  • Check for null. Every shell API returns null outside of webAI. Wrap every call in a check so your app works during local development and doesn’t crash when a feature isn’t available.
  • Keep the bundle small. Apps under 1MB load quickly. If your build exceeds 5MB, consider optimizing images, removing unused dependencies, or lazy-loading heavy components.
  • Release the AI runtime. Always call the release function after acquire(), even if the request fails. Use try/finally to guarantee cleanup — otherwise other apps can’t use the AI runtime.
  • Design for peer churn. Peers can drop out at any time. Design your state model to handle partial participation. See the Collaboration API best practices.
  • Build browser-first. Build your UI and core logic so it works in a normal browser tab. Add platform features on top with graceful fallbacks. This makes development much faster.

Quick reference

Everything you need in one place:
Topic                     Guide              API reference
How apps are structured   This page          App architecture
Accessing platform APIs   This page          Shell APIs
On-device AI              This page          OasisHost API
User identity             This page          Identity & encryption
Real-time collaboration   This page          Collaboration API
Navigation                This page          Navigation API
Build & deploy            This page          App lifecycle
Sharing                   Share apps guide

Next steps