App Runtime and Workspaces
This guide explains how the desktop app, authored workspaces, shared workspace state, and machine-local runtime state fit together.
If you are writing flows, this is the missing “how the whole thing hangs together” page.
The two roots to keep in mind
There are usually two important folders:
the workspace collection root
one authored workspace inside that collection
Example:
workspaces/
    example_workspace/
        flow_modules/
            flow_helpers/
        config/
        databases/
        .workspace_state/
The collection root is the parent folder that contains one or more authored workspaces.
The authored workspace is the folder that contains the authoring surface for one logical workspace:
flow_modules/
flow_modules/flow_helpers/
config/
databases/
That authored workspace is what the app binds to when you select a workspace in the UI.
How the app is structured
The desktop app is a single-window operator surface that binds to one authored workspace at a time.
When you change the selected workspace, the app rebinds:
workspace paths
flow discovery
daemon client and daemon manager
local runtime ledger
visible run history and log views
control state and lease state
This means the app is multi-workspace for discovery and selection, but single-workspace for active runtime context.
That distinction matters when you are reasoning about:
what is cheap to inspect globally
what is authoritative for the currently selected workspace
why the UI can feel like one workspace “becomes” the app until you switch again
Local state vs workspace state
Data Engine uses both shared workspace state and machine-local state.
Shared workspace state
Shared workspace state lives inside the authored workspace under .workspace_state/.
It exists so multiple workstations can coordinate around:
control ownership
control requests
shared run history snapshots
shared logs
file freshness state
Machine-local state
Machine-local state lives under the app runtime root and local settings store.
This includes:
the local SQLite runtime ledger for the currently selected workspace
compiled flow-module artifacts
runtime caches
daemon log files
app-local workspace selection and collection-root settings
The local runtime ledger path is resolved per workspace and stays machine-local.
When no workspace collection root is configured, the app stays in an explicit “no workspace” state. The empty-state UI uses that state directly and avoids per-workspace daemon or runtime artifacts.
Compiled flow-module artifacts are also workspace-local. Data Engine loads helper imports against the active workspace’s compiled artifacts so similarly named helper modules in different workspaces stay isolated from each other.
That local ledger is important because the desktop app needs a fast local read model even when the authoritative daemon is elsewhere.
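The per-workspace, machine-local ledger path can be pictured as a pure function of a machine-local runtime root and a workspace identity. The names and folder layout below are illustrative assumptions, not the app's actual resolution logic:

```python
from pathlib import Path

def runtime_ledger_path(runtime_root: Path, workspace_id: str) -> Path:
    """Resolve the machine-local SQLite runtime ledger for one workspace.

    `runtime_root` and the layout here are hypothetical; the real app
    derives them from its own settings store. The point is that the path
    is keyed by workspace but lives outside the authored workspace.
    """
    return runtime_root / "workspaces" / workspace_id / "runtime.sqlite3"

# Two workspaces on the same machine get distinct ledgers:
a = runtime_ledger_path(Path("/tmp/app-runtime"), "example_workspace")
b = runtime_ledger_path(Path("/tmp/app-runtime"), "other_workspace")
assert a != b
```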
Why both exist
The split gives the system two useful properties:
one workstation can own and publish runtime state for a workspace
another workstation can still open the workspace and observe it without taking control
It also keeps the authored workspace from becoming a dumping ground for every cache and local artifact.
Control, handoff, and control requests
Workspace control is intentionally conservative.
The basic model is:
a workstation claims the workspace
that workstation’s daemon becomes the active owner
it keeps the lease alive through checkpoints
other workstations observe that the workspace is leased
If another workstation wants control, it can request it. Those requests are written to:
.workspace_state/control_requests/<workspace_id>.parquet
A control request records:
requester machine id
requester host name
requester pid
requester client kind
request time
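The record's shape can be sketched as a small dataclass. The field names below are illustrative stand-ins for the actual parquet columns; in the real system the row is appended to the `control_requests` parquet file rather than kept in memory:

```python
import os
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class ControlRequest:
    """Illustrative shape of one control-request row; real column
    names in the parquet file may differ."""
    machine_id: str
    host_name: str
    pid: int
    client_kind: str
    requested_at: float

def new_control_request(machine_id: str, client_kind: str = "gui") -> dict:
    # In the real system this row would be written to
    # .workspace_state/control_requests/<workspace_id>.parquet.
    return asdict(ControlRequest(
        machine_id=machine_id,
        host_name=socket.gethostname(),
        pid=os.getpid(),
        client_kind=client_kind,
        requested_at=time.time(),
    ))

row = new_control_request("machine-a")
```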
The app surfaces this as “control requested” and makes the handoff visible to operators.
Handoff and takeover
The control UI distinguishes between:
local ownership
another machine owning the workspace
a pending local request for takeover
takeover becoming available after the remote lease appears stale
That behavior comes from WorkspaceControlState, which derives operator-facing status from:
the last daemon snapshot
whether the daemon is live
the current lease metadata checkpoint age
any pending control request
When a takeover is available
If a workspace is leased but the last checkpoint is older than the stale threshold, the UI can surface takeover availability.
The system can also quarantine stale lease state and recover it into the stale/ area before reclaiming the workspace.
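The availability check reduces to a simple predicate over lease state and checkpoint age. This is a minimal sketch; the 60-second threshold is an assumed placeholder, not the app's actual value:

```python
def takeover_available(leased: bool,
                       checkpoint_age_s: float,
                       stale_threshold_s: float = 60.0) -> bool:
    """A workspace can be taken over only when it is leased by someone
    else AND the owner's last checkpoint has gone stale. The default
    threshold is illustrative, not the app's real setting."""
    return leased and checkpoint_age_s > stale_threshold_s

# A live lease blocks takeover; a stale one surfaces it in the UI.
assert takeover_available(True, 120.0)
assert not takeover_available(True, 5.0)
assert not takeover_available(False, 999.0)
```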
The daemon and the selected workspace
The desktop app talks to a per-workspace local daemon.
For GUI use, the daemon lifecycle is intentionally ephemeral:
it is created for the selected workspace as needed
it follows the selected workspace lifecycle, but can survive a workspace switch while active work is still running
The important behavior is this:
switching away from a workspace leaves active work running
switching back should rehydrate the selected workspace’s daemon state immediately
That immediate rehydration is what keeps engine state, manual runs, and control state accurate after a workspace switch.
Workspace selection
The workspace selector in the app chooses which authored workspace the window is currently bound to.
When you switch workspaces, the app:
closes workspace-scoped preview dialogs
invalidates stale deferred message-box callbacks
hides the selector popup
queues the actual rebind one Qt tick later
That last step is important because it lets the native combo-box popup finish closing before the rest of the workspace state is rebuilt.
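A toy single-threaded event queue (standing in for the Qt event loop; this is not the app's code) shows why a one-tick deferral works: callbacks posted during the current tick run on the next tick, after the popup's own teardown events have drained, much like `QTimer.singleShot(0, ...)`:

```python
from collections import deque

class TickQueue:
    """Toy stand-in for an event loop: callbacks posted while a tick
    is running execute on the *next* tick."""
    def __init__(self):
        self.pending = deque()

    def post(self, fn):
        self.pending.append(fn)

    def run_tick(self):
        # Snapshot the batch so anything posted now waits a tick.
        batch, self.pending = self.pending, deque()
        for fn in batch:
            fn()

events = []
loop = TickQueue()

def on_workspace_selected():
    events.append("hide selector popup")
    # Defer the heavy rebind by one tick, like the app does.
    loop.post(lambda: events.append("rebind workspace"))

loop.post(on_workspace_selected)
loop.run_tick()   # popup hides; rebind is only queued
loop.run_tick()   # popup teardown has finished; now the rebind runs
assert events == ["hide selector popup", "rebind workspace"]
```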
Practically, the selected workspace governs:
which flows are loaded
which runtime ledger is open
which daemon is being queried or controlled
which logs and runs are visible in the main view
which workspace-relative context.config(...) and context.database(...) calls make sense during authoring
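A toy stand-in for the authoring context illustrates why those calls are workspace-relative. The class, return types, and argument names here are hypothetical; the real `context.config(...)` and `context.database(...)` signatures may differ:

```python
from pathlib import Path

class ToyContext:
    """Hypothetical stand-in for the authoring context: names resolve
    relative to the currently selected authored workspace, which is why
    the selected workspace governs which calls make sense."""
    def __init__(self, workspace_root: Path):
        self.workspace_root = workspace_root

    def config(self, name: str) -> Path:
        return self.workspace_root / "config" / name

    def database(self, name: str) -> Path:
        return self.workspace_root / "databases" / name

ctx = ToyContext(Path("workspaces/example_workspace"))
cfg = ctx.config("settings.toml")   # file name is illustrative
db = ctx.database("main.duckdb")    # file name is illustrative
```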
Workspace provisioning
Provisioning is deliberately safe and additive.
Provisioning a workspace creates missing conventional folders without overwriting existing files:
flow_modules/
flow_modules/helpers/
config/
databases/
.vscode/settings.json
Provisioning also writes a .vscode/settings.json at the collection root.
If those files already exist, the provisioning service preserves the existing authored files by default.
This is meant to make a new workspace usable immediately without turning provisioning into a heavy bootstrap system.
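The additive behavior can be sketched in a few lines: create folders idempotently, and write the settings file only when it does not already exist. The settings body below is illustrative, not the exact file the app generates:

```python
from pathlib import Path

def provision(workspace: Path) -> None:
    """Create missing conventional folders without overwriting anything.
    Folder names follow the doc; the settings content is illustrative."""
    for folder in ("flow_modules", "flow_modules/helpers",
                   "config", "databases"):
        (workspace / folder).mkdir(parents=True, exist_ok=True)

    settings = workspace / ".vscode" / "settings.json"
    if not settings.exists():  # additive: preserve authored files
        settings.parent.mkdir(parents=True, exist_ok=True)
        settings.write_text(
            '{\n  "python.defaultInterpreterPath": '
            '"${workspaceFolder}/.venv"\n}\n'
        )

# Running it twice is safe: the second run is a no-op for existing files.
import tempfile
ws = Path(tempfile.mkdtemp())
provision(ws)
provision(ws)
```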
VS Code provisioning
Data Engine now writes VS Code settings in two places:
at the workspace collection root
at the individual authored workspace root
Both settings files use a workspace-relative interpreter:
"python.defaultInterpreterPath": "${workspaceFolder}/.venv"
That makes the settings portable across workstations as long as each workstation keeps its venv in the same relative place.
The generated settings also:
hide .workspace_state from VS Code Explorer and search
set terminal environment variables for Data Engine paths on Linux, macOS, and Windows
add src/ to python.analysis.extraPaths when running from a checkout
enable pytest configuration when a checkout-local tests/ folder exists
The collection-root settings are for the “open the whole workspace collection in VS Code” workflow.
The authored-workspace settings are for the “open just one workspace” workflow.
Flow-module compilation
Flow modules authored as notebooks or Python files are compiled into machine-local runtime artifacts before discovery and execution.
That compilation path intentionally favors structural correctness over filesystem timing quirks:
recompilation is based on rendered content changes
helper imports resolve from the current workspace
mirrored helper packages swap into place as complete directory trees
Those guarantees matter most on network filesystems, cross-platform checkouts, and fast edit/save cycles with coarse timestamp granularity.
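Content-based recompilation can be sketched by comparing a digest of the rendered source against the last compiled digest, instead of trusting mtimes. The digest scheme here is an assumed illustration, not the app's actual implementation:

```python
import hashlib
from typing import Optional, Tuple

def needs_recompile(rendered_source: str,
                    stored_digest: Optional[str]) -> Tuple[bool, str]:
    """Decide recompilation from rendered *content*, not timestamps.

    Timestamps can be coarse or unreliable on network filesystems and
    fast edit/save cycles; a content digest sidesteps both problems.
    """
    digest = hashlib.sha256(rendered_source.encode("utf-8")).hexdigest()
    return digest != stored_digest, digest

changed, d1 = needs_recompile("def flow(): ...", None)   # first build
same, d2 = needs_recompile("def flow(): ...", d1)        # unchanged source
assert changed and not same and d1 == d2
```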
Logging and run history
There are a few different log and history concepts that are easy to blur together.
Local runtime ledger
The selected workspace also has a machine-local SQLite runtime ledger. That is the app’s fast local runtime store and is what powers most local querying, hydrated snapshots, and UI views.
GUI run history limits
The GUI intentionally limits how much visible run history it renders at once. The current run-history sidebar/view is capped to 50 visible run groups in the UI.
That cap is a presentation choice for the current UI view.
“Runs last 7 days”
The small footer tag on the home view shows:
modules
groups
flows
runs in the last 7 days
That 7-day value is a summary count for the currently selected workspace.
The kill switch
The Settings pane exposes an emergency kill switch for the selected workspace daemon.
This is intentionally coarse.
It works at the daemon-process level:
asks the daemon to shut down normally
waits briefly for a graceful exit
force-kills the daemon process if it is still alive
performs best-effort cleanup of local daemon/lease state
That is the right emergency tool when a flow is stuck inside a blocking native call or an uninterruptible external library path.
It is intentionally user-driven and appears as an explicit operator action.
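The graceful-then-forceful sequence can be sketched at the process level. This is a minimal stand-in, not the app's kill switch: the real path also asks the daemon to shut down over its own protocol and cleans up local daemon/lease state afterward:

```python
import subprocess
import sys

def kill_daemon(proc: subprocess.Popen, grace_s: float = 2.0) -> None:
    """Graceful shutdown first, force-kill as a last resort."""
    proc.terminate()                  # ask the process to exit
    try:
        proc.wait(timeout=grace_s)    # brief window for a clean exit
    except subprocess.TimeoutExpired:
        proc.kill()                   # still alive: force-kill
        proc.wait()

# Demo against a stand-in "daemon" that just sleeps:
daemon = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"]
)
kill_daemon(daemon)
assert daemon.poll() is not None      # the process is gone
```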