This vignette demonstrates the Representation-Level Control Surfaces (RLCS) paradigm in a complete end-to-end pipeline.
We will simulate:

1. A Latent Manifold: A sequence of embeddings moving smoothly.
2. Fault Injection: Introducing a sudden “shock” (temporal discontinuity) and “noise” (out-of-distribution).
3. Sensing: Applying reslik (population), tcs (temporal), and agreement (cross-view) sensors.
4. Control: Deriving explicit PROCEED / DEFER / ABSTAIN signals.
This illustrates how RLCS catches failure modes that might otherwise propagate silently through an AI system.
We generate a sequence of 50 time steps:

* Steps 1-20: Normal operation (clean).
* Step 21: Shock (a sudden jump).
* Steps 22-50: Recovery (but potentially noisy).
Calling set.seed(42) makes the generated data, and therefore the sensor responses, reproducible across runs.
set.seed(42)
n_steps <- 50
dim_z <- 5
# 1. Generate latent walk (smooth)
z_clean <- matrix(0, nrow = n_steps, ncol = dim_z)
for (t in 2:n_steps) {
  z_clean[t, ] <- z_clean[t - 1, ] + rnorm(dim_z, 0, 0.1)
}
# 2. Inject Faults
z_corrupt <- z_clean
# Shock at t=21
z_corrupt[21, ] <- z_corrupt[21, ] + 5.0
# Noise burst at t=30
z_corrupt[30, ] <- z_corrupt[30, ] + rnorm(dim_z, 0, 2.0)

We simulate an encoder that projects these latent states into a “feature” space. In practice this would be a neural network; here we use a fixed random projection followed by tanh to bound the outputs.
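The encoder code itself is not reproduced in this excerpt, so the following is a minimal sketch of that step. The names feat_dim, W_enc, and encode() are illustrative, and feat_clean / feat_corrupt are assumed to be the outputs consumed by the sensors below.

# Minimal sketch of the simulated encoder: a fixed random projection
# followed by tanh. W_enc, feat_dim, and encode() are illustrative names.
feat_dim <- 8
W_enc <- matrix(rnorm(dim_z * feat_dim, 0, 1 / sqrt(dim_z)), nrow = dim_z)

encode <- function(z) tanh(z %*% W_enc)

feat_clean   <- encode(z_clean)    # reference ("known good") features
feat_corrupt <- encode(z_corrupt)  # features carrying the injected faults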
In RLCS, the ResLik sensor requires a reference distribution (mean and standard deviation) derived from “known good” population data. We calculate this from the clean sequence.
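A minimal sketch of that calculation, assuming the clean features live in feat_clean from the encoder sketch above and that per-dimension means and standard deviations are what the ref_mean and ref_sd arguments used below expect:

# Per-dimension reference statistics from the clean ("known good") features.
# feat_clean comes from the encoder sketch above.
ref_mean <- colMeans(feat_clean)
ref_sd   <- apply(feat_clean, 2, sd)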
We now run the RLCS sensor array on the corrupted sequence. The ResLik sensor checks whether the current embedding looks like it was drawn from the reference population.
library(resLIK)
# We analyze steps 2 to 50 (since TCS needs t-1)
z_test <- feat_corrupt[2:n_steps, ]
res_out <- reslik(z_test, ref_mean = ref_mean, ref_sd = ref_sd)
# Inspect diagnostics
summary(res_out$diagnostics$discrepancy)
#>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
#>   0.566   1.058   1.339   1.471   1.807   3.644 

The TCS sensor detects whether the embedding changed too quickly between \(t-1\) and \(t\).
z_prev <- feat_corrupt[1:(n_steps-1), ]
tcs_out <- tcs(z_test, z_prev)
# We expect a drop in consistency at the shock index (t=21 in original -> index 20 in reduced)
plot(tcs_out$consistency, type = 'l', main = "Temporal Consistency", ylim = c(0, 1))

The agreement sensor detects whether two views of the data agree. We simulate a “backup” view that is slightly noisy but generally agrees, except at the shock, where it disagrees.
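The code for this step is not reproduced in this excerpt. The sketch below builds the backup view (feat_backup and agree_score are illustrative names) and, because the exact interface of the agreement sensor is not shown here, uses a per-step cosine similarity between the two views as a stand-in for the cross-view agreement score.

# Simulate a second, "backup" view: a slightly noisy copy of the corrupted
# features that is flipped at the shock step (t = 21) so the views disagree there.
# feat_backup and the cosine-similarity stand-in are illustrative, not package code.
feat_backup <- feat_corrupt +
  matrix(rnorm(n_steps * ncol(feat_corrupt), 0, 0.05), nrow = n_steps)
feat_backup[21, ] <- -feat_corrupt[21, ]

# Stand-in for the cross-view agreement score, restricted to steps 2:n_steps
# so it lines up with z_test.
backup_test <- feat_backup[2:n_steps, ]
agree_score <- sapply(seq_len(nrow(z_test)), function(i) {
  a <- z_test[i, ]
  b <- backup_test[i, ]
  sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
})

At the shock (reduced index 20), agree_score drops to -1, while elsewhere it stays close to 1.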
We feed the diagnostics into the deterministic control surface.
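The control-surface call itself is not reproduced in this excerpt; the sketch below derives a signals vector from the three diagnostics using illustrative thresholds (the cut-offs are assumptions, not package defaults). The printed results in the rest of this section come from the vignette's original control surface, not from this sketch.

# Illustrative decision rule over the three diagnostics.
disc  <- res_out$diagnostics$discrepancy  # population discrepancy (ResLik)
cons  <- tcs_out$consistency              # temporal consistency (TCS)
agree <- agree_score                      # cross-view agreement (stand-in above)

# Thresholds chosen for the sketch only: hard alarms map to ABSTAIN,
# borderline readings to DEFER, everything else to PROCEED.
signals <- ifelse(disc > 2.5 | cons < 0.2 | agree < 0.5, "ABSTAIN",
           ifelse(disc > 1.5 | cons < 0.8,               "DEFER",
                                                         "PROCEED"))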
Let’s examine the distribution of signals generated by the control surface.
table(signals)
#> signals
#> ABSTAIN 
#>      49 
# Check the signal at the shock point (t=21 of original -> index 20)
print(paste("Signal at Shock (t=21):", signals[20]))
#> [1] "Signal at Shock (t=21): ABSTAIN"
# Check the signal at the noise burst (t=30 of original -> index 29)
print(paste("Signal at Noise (t=30):", signals[29]))
#> [1] "Signal at Noise (t=30): ABSTAIN"This demonstrates how RLCS converts complex high-dimensional anomalies into actionable, discrete safety signals.