
Core Banking Platform

The same core that runs Coreal's own products, exposed as a multi-tenant white-label backend. Each tenant gets its own ledger partition, secrets, environments and audit trail.

Tenants: Multi · Region: UK + EU · Mode: BaaS

LEDGER · BPM · TREASURY · WHITE-LABEL
01 · Capabilities

What ships in the box.

C-01

Fincore backend

Double-entry ledger with idempotent posting and reconciliation

C-02

Accounting & treasury

GL, balance sheet, cash management, FX hedging hooks

C-03

BPM workflows

Money-movement orchestration with replay-safe state machines

C-04

Sponsor-bank controls

Velocity, exposure, KYT thresholds enforced at gateway

C-05

On/off-ramp rails

Card, IBAN, SEPA, SWIFT, crypto under one router

C-06

Multi-tenant isolation

Per-tenant tokens, schemas, audit logs, environments

02 · Technical deep dive

How it works under the hood.

Five sections covering the structural decisions, the data model under them, and the operational characteristics you can show to an architecture review or a regulator.

§ 01

What "multi-tenant" actually means here

Multi-tenant in core banking is a phrase that means very different things to different vendors. For Coreal it means tenant isolation at four layers simultaneously: (1) IAM — every tenant has its own OAuth client space, role definitions and token issuer; (2) network — tenant API traffic terminates on tenant-scoped endpoints with separate rate limits; (3) data — every ledger row carries a tenant_id and every query is scoped through a tenant predicate enforced by row-level security; (4) operations — separate audit logs, separate alert routing, separate runbooks per tenant. An incident in tenant A cannot leak data, exhaust resources or create regulatory contamination for tenant B.
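The data-layer leg of this isolation can be sketched as a repository that refuses to run any query that is not scoped to exactly one tenant, mirroring a database-side row-level-security policy. This is an illustrative sketch only; the names (`TenantScopedRepo`, `LedgerRow`) are assumptions, not Coreal's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LedgerRow:
    tenant_id: str
    account: str
    amount: int  # minor units

class TenantScopedRepo:
    def __init__(self, rows):
        self._rows = list(rows)

    def query(self, tenant_id: str):
        # The tenant predicate is applied unconditionally, the in-memory
        # analogue of a row-level-security policy enforced by the database.
        if not tenant_id:
            raise ValueError("query must be tenant-scoped")
        return [r for r in self._rows if r.tenant_id == tenant_id]

rows = [LedgerRow("tenant-a", "cash", 100), LedgerRow("tenant-b", "cash", 250)]
repo = TenantScopedRepo(rows)
tenant_a_rows = repo.query("tenant-a")  # rows for tenant-a only
```

Because the predicate lives in the access layer (and, in production, in the database itself), tenant A's code path cannot even express a query over tenant B's rows.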

§ 02

Why the ledger is partitioned, not sharded

Sharding spreads one logical ledger across many physical nodes for scale. Partitioning per tenant means each tenant has its own logical ledger inside the same physical database, with isolation guaranteed by schema boundaries and explicit row-level security policies. Coreal uses partitioning rather than full sharding because tenants are a fundamentally different unit of isolation than customers — each tenant has its own audit, regulatory perimeter, and recovery profile. A tenant should be restorable, exportable, and auditable as a unit. With shared physical infrastructure but partitioned data, this is achievable; with full sharding, it becomes a multi-region operations problem.
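The "partitioned, not sharded" distinction can be sketched as one physical store holding one logical ledger per tenant, with the double-entry balance invariant checked per posting and a tenant exportable as a single unit. A minimal sketch under assumed names (`PartitionedLedger` is illustrative, not a Coreal API):

```python
from collections import defaultdict

class PartitionedLedger:
    def __init__(self):
        # One partition (list of entries) per tenant, in the same store --
        # shared physical infrastructure, partitioned data.
        self._partitions = defaultdict(list)

    def post(self, tenant_id, entries):
        # Double-entry invariant: debits and credits must net to zero.
        if sum(amount for _, amount in entries) != 0:
            raise ValueError("unbalanced posting")
        self._partitions[tenant_id].extend(entries)

    def export_tenant(self, tenant_id):
        # A tenant is exportable/restorable/auditable as a unit:
        # one partition read, no cross-shard scatter-gather.
        return list(self._partitions[tenant_id])

ledger = PartitionedLedger()
ledger.post("tenant-a", [("cash", -500), ("customer_liability", 500)])
```

Under full sharding, `export_tenant` would become a fan-out across nodes; under partitioning it stays a single-partition read, which is what makes per-tenant recovery and audit tractable.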

§ 03

BPM as the spine

Every money-movement flow in the core banking platform is a BPM process — onboarding, KYC step-up, withdrawal approval, card issuance, dispute lifecycle, AML case management. Process definitions are versioned artefacts, executed by a Camunda-class engine. Each instance carries a journal of state transitions, actor IDs, decision metadata and elapsed time. When the regulator asks "show me every withdrawal over £10k that required dual approval in March", the answer is a query against the BPM event log, not a custom report against application code. The process model is the source of truth for what should happen; the event log is the source of truth for what did happen.
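The "process model says what should happen, event log says what did happen" split can be sketched as a journaled state machine: transitions come from a versioned definition, every applied event is appended to a journal, and the instance can be rebuilt by replaying that journal. The transition table and field names below are illustrative, not Coreal's actual BPM schema.

```python
import time

# Assumed process definition: legal (state, event) -> next-state transitions.
TRANSITIONS = {
    ("requested", "approve"): "approved",
    ("approved", "execute"): "settled",
    ("requested", "reject"): "rejected",
}

class ProcessInstance:
    def __init__(self, definition, version):
        self.definition = definition
        self.version = version
        self.state = "requested"
        self.journal = []  # append-only: the record of what *did* happen

    def apply(self, event, actor):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError(f"illegal transition {self.state} -> {event}")
        self.journal.append({
            "from": self.state, "event": event, "to": nxt,
            "actor": actor, "ts": time.time(),
        })
        self.state = nxt

    @classmethod
    def replay(cls, definition, version, journal):
        # Replay-safety: state is a pure function of the journal.
        inst = cls(definition, version)
        for entry in journal:
            inst.apply(entry["event"], entry["actor"])
        return inst
```

A regulator query then reduces to filtering journals: scan every instance of the `withdrawal` definition whose journal contains two distinct approving actors and whose payload exceeds the threshold.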

§ 04

Sponsor-bank controls as gateway policy

A BaaS deployment depends on the sponsor bank's acceptance of the platform's control envelope. In Coreal, sponsor-bank-level controls — exposure caps per customer, velocity limits, KYT thresholds for crypto in/out — are encoded as policies in the provider gateway, not as application-layer checks. This means the bank's risk team can audit the policy file directly, see the active version, and reason about coverage without reading product code. When the bank requires a tightening (a new sanctions list, an exposure ceiling change, a new merchant category block), it is a policy commit with an approval flow, not an engineering ticket.
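A policy evaluated at the gateway can be sketched as a pure function of the policy document plus observable facts, which is what makes it auditable without reading product code. The policy shape and limit values below are assumptions for illustration, not a real sponsor-bank policy.

```python
# Illustrative versioned policy artefact (amounts in minor units).
POLICY = {
    "version": 7,
    "max_single_movement": 10_000_00,
    "daily_velocity_cap": 25_000_00,
    "blocked_countries": {"XX"},
}

def policy_gate(policy, movement, spent_today):
    """Return (allowed, reason). A pure function of the policy and the
    facts, so the bank's risk team can reason about coverage directly."""
    if movement["country"] in policy["blocked_countries"]:
        return False, "blocked_country"
    if movement["amount"] > policy["max_single_movement"]:
        return False, "single_movement_limit"
    if spent_today + movement["amount"] > policy["daily_velocity_cap"]:
        return False, "velocity_limit"
    return True, "ok"
```

Tightening a control is then a data change, a new `POLICY` version through an approval flow, rather than a code change shipped through an engineering backlog.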

§ 05

Idempotency as a hard invariant

Every API request that mutates the ledger carries an idempotency-key. The platform stores key→outcome bindings for 24 hours; a duplicate key returns the original outcome without re-processing. This is not a courtesy feature — it is a hard requirement for double-entry correctness. Without it, network retries from clients, scheme replays, or queue redelivery can result in duplicated postings, broken balances, and a reconciliation problem that surfaces hours or days later. Coreal's gateway rejects unkeyed mutating requests at the boundary; internal services must surface a key on every retryable operation.
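The key→outcome binding can be sketched as a small store that runs an operation at most once per key within the window and replays the stored outcome for duplicates. The class name and in-memory storage are illustrative; a production version would persist bindings and evict on TTL.

```python
import time

class IdempotencyStore:
    def __init__(self, ttl_seconds=24 * 3600):
        self.ttl = ttl_seconds
        self._bindings = {}  # key -> (outcome, stored_at)

    def execute(self, key, operation, now=None):
        # Unkeyed mutating requests are rejected at the boundary.
        if not key:
            raise ValueError("mutating request rejected: no idempotency-key")
        now = time.time() if now is None else now
        hit = self._bindings.get(key)
        if hit and now - hit[1] < self.ttl:
            return hit[0]            # duplicate: replay the original outcome
        outcome = operation()        # first sight of this key: run once
        self._bindings[key] = (outcome, now)
        return outcome

calls = []
def post():
    calls.append(1)
    return {"posting_id": len(calls)}

store = IdempotencyStore()
first = store.execute("k-1", post)
retry = store.execute("k-1", post)   # network retry: no second posting
```

A client retry, a scheme replay, and a queue redelivery all collapse onto the same binding, so the ledger sees exactly one posting per key.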

03 · Data model

Core entities & key fields.

The handful of entities you would design first if you were building this from scratch. Naming and field shape match what an audit firm or counterparty would expect.

ENTITY | PURPOSE | KEY FIELDS
tenant | Top-level isolation unit (one BaaS customer) | id · name · region · status · plan · sponsor_bank_id
ledger_account | Account in tenant's GL (asset, liability, equity, P&L) | id · tenant_id · type · currency · parent_id · code
movement | Idempotent money-movement instruction | id · idempotency_key · tenant_id · type · status · payload · created_at
process_instance | BPM workflow execution | id · tenant_id · definition · version · state · vars · journal_ref
policy | Versioned policy artefact (limits, blocks, KYT) | id · tenant_id · scope · version · body · activated_at · approved_by
audit_event | Immutable per-tenant event log entry | id · tenant_id · ts · actor · action · resource · before · after
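Two of these entities, sketched as Python dataclasses with the field names from the table above (the types are assumptions; note that every entity except `tenant` itself carries a `tenant_id`):

```python
from dataclasses import dataclass

@dataclass
class Tenant:
    id: str
    name: str
    region: str
    status: str
    plan: str
    sponsor_bank_id: str

@dataclass
class Movement:
    id: str
    idempotency_key: str   # required on every mutating request
    tenant_id: str         # row-level isolation scope
    type: str
    status: str
    payload: dict
    created_at: float
```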
04 · Request lifecycle

From input to settled state.

The path a single operation takes through the system. Every step is journaled and replayable.

1. Movement request: tenant API call with idempotency-key + payload
2. Policy gate: provider gateway evaluates the active policy (limits, sanctions, KYT, exposure)
3. BPM dispatch: if policy permits, a BPM process starts (or resumes) with the movement
4. Ledger posting: workflow steps post journal entries in a transaction keyed by the idempotency-key
5. External execution: provider gateway routes to sponsor bank / scheme / custodian
6. Settlement reconciliation: provider response matched to journal entry; outcome closed in BPM
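The six steps above can be sketched end-to-end as one journaled function. Every check here is a stand-in (the real policy gate, BPM engine, and routing live in separate services), but the ordering and the "every step is journaled" property match the lifecycle.

```python
def handle_movement(request, journal):
    # 1. Movement request: must carry an idempotency-key.
    if "idempotency_key" not in request:
        return {"status": "rejected", "reason": "missing idempotency-key"}
    # 2. Policy gate: stand-in for the gateway's policy evaluation.
    if request["amount"] > request["policy"]["limit"]:
        return {"status": "rejected", "reason": "policy"}
    # 3. BPM dispatch: a process instance is started for this movement.
    journal.append(("dispatch", request["idempotency_key"]))
    # 4. Ledger posting: balanced double entry under the same key.
    entries = [("debit", request["amount"]), ("credit", request["amount"])]
    journal.append(("post", entries))
    # 5. External execution: stubbed routing to the sponsor bank.
    journal.append(("route", "sponsor_bank"))
    # 6. Settlement reconciliation: outcome closed against the journal.
    journal.append(("reconcile", "closed"))
    return {"status": "settled"}
```

Because each step appends to the journal before the next runs, a crash mid-flow leaves a prefix that the BPM engine can resume from rather than re-run from scratch.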
05 · Operational characteristics

What this looks like in production.

SLO-01
Tenant isolation
4 layers

IAM, network, data (RLS), operations (logs/runbooks)

SLO-02
Posting throughput
14.2k tps

Sustained, hot-path, double-entry across tenants

SLO-03
BPM process count
38 active

Versioned, replayable, journaled per execution

SLO-04
Idempotency window
24 hours

Key→outcome cached at gateway boundary

SLO-05
Sponsor-bank policy refresh
< 5 min

Approved policy commit → live at gateway

SLO-06
DR/BCP RPO/RTO
< 5 min / < 1 hr

Tested quarterly with sponsor-bank participation

06 · Architecture

Behind the surface.

Service mesh of stateless workers in front of partitioned PostgreSQL ledgers. BPM engine (Camunda-class) drives every money-movement flow. Tenant boundary enforced at the IAM, network, schema and ops-runbook layers.

07 · Integrations

Vendors & rails.

Sponsor banks · Card processors · KYT vendors · OAuth2 IdPs · AWS KMS · Kafka · OpenTelemetry
08 · Regulatory posture

BaaS contracts with sponsor-bank oversight · per-tenant SAR pipeline · SOC 2-aligned controls · DR/BCP runbooks tested quarterly.

09 · Adjacent products

The rest of the platform.