# System architecture

This page gives a technical overview of how Sylva is structured in production. For per-repository clone paths, Wolfram/LTI satellites, LoopBack fork dependencies, and the deprecation/legacy table, see Repositories and workspace map.

Sylva is a multi-tenant learning and assessment platform. The browser SPA (Sylva Enterprise) talks to a LoopBack REST API (Identity Manager) that is backed by MongoDB and integrated with Firebase, Google Cloud services, Stripe, and email/SMS providers. api-models defines the API as LoopBack models and is shared via a git submodule. Background work is split across LoopBack Worker methods, Cloud Tasks/Pub/Sub handlers in the API codebase, and standalone services in the sylva-worker repository.

```mermaid
sequenceDiagram
  participant U as User browser
  participant SE as Sylva Enterprise
  participant API as Identity Manager /api
  participant M as MongoDB
  participant F as Firebase Auth / Firestore
  U->>SE: Load app (Quasar SPA)
  SE->>API: GET cloud-config + REST calls
  API->>M: Persist org/project/content metadata
  SE->>F: signInWithCustomToken (after login flow)
  SE->>API: Authenticated /api/* with token + org context
  API->>M: Read/write domain models
```
1. Login: email/password (or org SSO) flows go through Identity Manager; a successful login yields a token stored client-side, and a Firebase custom token may be issued per organization for realtime features.
2. Org context: the client attaches organization scope (e.g. an _org header) so the API can enforce ACLs and multi-tenancy; see the sketch after this list.
3. Editor and runtime: course structure, modules, and content metadata live primarily in MongoDB via LoopBack; some features use Firestore for project-scoped or high-churn data (implementation details live in api-models).
4. Files: uploads and generated assets typically flow through GCS (and CDN URLs), with File model records pointing at storage locations.
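
A minimal sketch of how a client could attach the token and org scope to API calls and then connect Firebase. The _org header comes from the flow above; the storage keys, endpoint path, and helper names are assumptions for illustration, not the real client code:

```typescript
import { getAuth, signInWithCustomToken } from "firebase/auth";

declare const API_HOST: string; // per-environment hostname; injection mechanism is an assumption

// Hypothetical wrapper: every /api/* call carries the access token plus the
// active organization scope via the _org header described above.
export function apiFetch(path: string, init: RequestInit = {}): Promise<Response> {
  const token = localStorage.getItem("accessToken"); // storage key is an assumption
  const orgId = localStorage.getItem("activeOrgId"); // org selection is an assumption
  return fetch(`${API_HOST}/api${path}`, {
    ...init,
    headers: {
      ...init.headers,
      Authorization: token ?? "",
      _org: orgId ?? "",
    },
  });
}

// After login, exchange the API session for a Firebase custom token so the
// client SDK can power realtime features. The endpoint path is hypothetical.
export async function connectFirebase(): Promise<void> {
  const res = await apiFetch("/users/firebase-token", { method: "POST" });
  const { customToken } = await res.json();
  await signInWithCustomToken(getAuth(), customToken);
}
```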
## Component map

```mermaid
flowchart TB
  subgraph client [Client tier]
    Enterprise[Sylva Enterprise Quasar SPA]
  end
  subgraph api [Application tier]
    IM[Identity Manager Node + LoopBack]
    AM[api-models as common/models]
  end
  subgraph data [Data & identity]
    Mongo[(MongoDB)]
    Firebase[(Firebase Auth / Firestore)]
    GCS[Google Cloud Storage]
  end
  subgraph async [Async & integrations]
    Tasks[Cloud Tasks / Scheduler]
    PubSub[Pub/Sub]
    Stripe[Stripe]
    SendGrid[SendGrid / email]
    Wolfram[Wolfram WEPC / WWE APIs]
  end
  subgraph workers [Workers]
    LBWorker[LoopBack Worker methods]
    SW[sylva-worker services]
  end
  Enterprise -->|HTTPS| IM
  AM --> IM
  IM --> Mongo
  IM --> Firebase
  IM --> GCS
  IM --> Stripe
  IM --> SendGrid
  Enterprise --> Wolfram
  Tasks --> IM
  PubSub --> LBWorker
  PubSub --> SW
  SW --> GCS
  SW --> Mongo
```
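
To make the async path in the diagram concrete, here is a minimal sketch of a sylva-worker style Pub/Sub subscriber. The subscription name, payload shape, and handler logic are assumptions for illustration:

```typescript
import { PubSub, Message } from "@google-cloud/pubsub";

// Subscription name is hypothetical; real names live in deployment config.
const subscription = new PubSub().subscription("sylva-export-jobs");

subscription.on("message", async (message: Message) => {
  try {
    // Payload shape is an assumption for illustration.
    const job = JSON.parse(message.data.toString());
    console.log("processing job", job.id);
    // ...do the work: render the export, write output to GCS, update MongoDB...
    message.ack(); // done: remove the message from the queue
  } catch (err) {
    console.error("job failed; letting Pub/Sub redeliver", err);
    message.nack(); // negative ack so Pub/Sub retries delivery
  }
});
```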
## Data stores

| Concern | Primary store | Notes |
| --- | --- | --- |
| Users, orgs, roles, projects, modules, content metadata | MongoDB | LoopBack models from api-models |
| Sessions / Firebase identity | Firebase | Custom tokens from API; client SDK |
| Real-time or large structured trees (where used) | Firestore | Project features in api-models |
| Blobs, exports, PDFs | GCS | Often referenced by File or job outputs |
| Analytics / BI | BigQuery (optional) | Via loaders in sylva-worker or pipelines |

Exact collection and field layouts are defined in model JSON and migrations—not duplicated here.
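For orientation, those model JSON files follow the standard LoopBack model-definition shape; the sketch below is hypothetical (the File name appears on this page, but the properties and ACL entry are illustrative, not the real schema):

```json
{
  "name": "File",
  "base": "PersistedModel",
  "properties": {
    "name": { "type": "string", "required": true },
    "gcsPath": { "type": "string" },
    "organizationId": { "type": "string", "index": true }
  },
  "relations": {},
  "acls": [
    { "principalType": "ROLE", "principalId": "$everyone", "permission": "DENY" }
  ]
}
```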

## Security boundaries

- Transport: HTTPS everywhere for production clients.
- API: LoopBack ACLs plus custom remote-method checks, with an org access resolver middleware (see api-models middleware/ and the sketch after this list).
- Frontend: route and component guards using normalized effective roles (see Authentication and access control).
- Secrets: API keys and service accounts must stay server-side or in CI secrets, never in Sylva Enterprise client bundles.
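
A sketch of what an org access resolver could look like, assuming Express-style middleware as LoopBack uses. The _org header comes from the request flow above; the function names and membership lookup are hypothetical stand-ins for the real api-models code:

```typescript
import type { Request, Response, NextFunction } from "express";

// Hypothetical membership check, standing in for the real api-models lookup.
declare function isOrgMember(userId: string, orgId: string): Promise<boolean>;

// Sketch of an org access resolver in the spirit of api-models middleware/.
export function orgAccessResolver() {
  return async (req: Request, res: Response, next: NextFunction) => {
    const orgId = req.headers["_org"] as string | undefined;
    if (!orgId) return next(); // unscoped routes fall through to model ACLs

    const userId = (req as any).accessToken?.userId; // LoopBack attaches accessToken to req
    if (!userId || !(await isOrgMember(userId, orgId))) {
      return res.status(403).json({ error: "No access to this organization" });
    }

    (req as any).orgId = orgId; // downstream handlers read the resolved scope
    next();
  };
}
```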
## Deployment units

| Unit | Typical hosting | Repo |
| --- | --- | --- |
| SPA static assets + Quasar build | CDN / Firebase Hosting / GCS bucket | sylva-enterprise |
| REST API + EJS/SSO views | App Engine, VM, or container | identity-manager |
| Worker processes | Cloud Functions, Cloud Run, GKE, VMs | sylva-worker |
| Speedtest binary | Any Linux/macOS server | speedtest |

Environment-specific hostnames (beta, production, regional) are configured outside this doc; use your environment’s API_HOST and cloud config for each deployment.
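
As an illustration, a client bootstrap might resolve its environment like this, assuming API_HOST is injected per deployment and a cloud-config route as in the sequence diagram above; the exact path and response shape are assumptions:

```typescript
declare const API_HOST: string; // per-environment hostname, injected at build or deploy time

// Sketch: pull runtime settings from the cloud-config endpoint shown in the
// sequence diagram. The exact route and response shape are assumptions.
export async function loadCloudConfig(): Promise<Record<string, unknown>> {
  const res = await fetch(`${API_HOST}/api/cloud-config`);
  if (!res.ok) throw new Error(`cloud-config fetch failed: ${res.status}`);
  return res.json();
}
```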