DDESONN: A Deep Dynamic Experimental Self-Organizing Neural Network Framework

Provides a fully native R deep learning framework for constructing, training, evaluating, and inspecting Deep Dynamic Ensemble Self-Organizing Neural Networks at research scale. The core engine is an object-oriented, R6 class-based implementation with explicit control over layer layout, dimensional flow, forward propagation, backpropagation, and transparent optimizer state updates. The framework does not rely on external deep learning back ends, enabling direct inspection of model state, reproducible numerical behavior, and fine-grained architectural control without compiled dependencies or GPU-specific runtimes.

Users can define dimension-agnostic single-layer or deep multi-layer networks without hard-coded architecture limits; per-layer configuration vectors for activation functions, derivatives, dropout behavior, and initialization strategies are automatically aligned to network depth through controlled replication or truncation. Reproducible workflows can be executed through high-level helpers for fit, run, and predict across binary classification, multi-class classification, and regression modes. Training pipelines support optional self-organization, adaptive learning rate behavior, and structured ensemble orchestration in which candidate models are evaluated under user-specified performance metrics and selectively promoted or pruned to refine a primary ensemble, enabling controlled ensemble evolution over successive runs. Ensemble evaluation includes fused prediction strategies in which member outputs may be combined through weighted averaging, arithmetic averaging, or voting mechanisms to generate consolidated metrics for research-level comparison and reproducible per-seed assessment.
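To make the fusion strategies concrete, the following base-R sketch shows the three combination rules named above (weighted averaging, arithmetic averaging, and majority voting) applied to binary-class probability outputs from three hypothetical ensemble members. This is illustrative arithmetic only, not the DDESONN API; the object names and the example weights are invented for the sketch.

``` r
# Each row is an observation; each column is one ensemble member's P(class = 1).
member_probs <- cbind(m1 = c(0.9, 0.2, 0.6),
                      m2 = c(0.8, 0.4, 0.4),
                      m3 = c(0.7, 0.1, 0.7))

# Weighted averaging: weights might come from per-member validation metrics.
weights <- c(0.5, 0.3, 0.2)
fused_weighted <- as.vector(member_probs %*% weights)

# Arithmetic averaging: the unweighted special case.
fused_mean <- rowMeans(member_probs)

# Majority voting: threshold each member at 0.5, then take the majority class.
votes <- member_probs >= 0.5
fused_vote <- as.integer(rowSums(votes) > ncol(votes) / 2)
```

The fused probability vectors can then be thresholded and scored like any single model's output, which is what allows consolidated metrics to be reported per seed.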
The framework supports multiple optimization approaches, including stochastic gradient descent, adaptive moment estimation (Adam), and lookahead methods, alongside configurable regularization controls such as L1, L2, and mixed penalties with separate weight and bias update logic. Evaluation features provide threshold tuning, relevance scoring, receiver operating characteristic and precision-recall curve generation, area-under-curve computation, regression error diagnostics, and report-ready metric outputs. The package also includes artifact path management, debug state utilities, structured run-level metadata persistence (capturing seeds, configuration states, thresholds, metrics, ensemble transitions, fused evaluation artifacts, and model identifiers), and reproducible scripts and vignettes documenting end-to-end experiments.

References: Kingma and Ba (2015) <doi:10.48550/arXiv.1412.6980> "Adam: A Method for Stochastic Optimization". Hinton et al. (2012) <https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf> "Neural Networks for Machine Learning (RMSprop lecture notes)". Duchi et al. (2011) <https://jmlr.org/papers/v12/duchi11a.html> "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization". Zeiler (2012) <doi:10.48550/arXiv.1212.5701> "ADADELTA: An Adaptive Learning Rate Method". Zhang et al. (2019) <doi:10.48550/arXiv.1907.08610> "Lookahead Optimizer: k steps forward, 1 step back". You et al. (2019) <doi:10.48550/arXiv.1904.00962> "Large Batch Optimization for Deep Learning: Training BERT in 76 minutes (LAMB)". McMahan et al. (2013) <https://research.google.com/pubs/archive/41159.pdf> "Ad Click Prediction: a View from the Trenches (FTRL-Proximal)". Klambauer et al. (2017) <https://proceedings.neurips.cc/paper/6698-self-normalizing-neural-networks.pdf> "Self-Normalizing Neural Networks (SELU)". Maas et al. (2013) <https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf> "Rectifier Nonlinearities Improve Neural Network Acoustic Models (Leaky ReLU / rectifiers)".
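For readers unfamiliar with the adaptive moment estimation optimizer cited above, the base-R sketch below shows a single Adam update step following Kingma and Ba (2015), with an optional elastic-net (mixed L1/L2) penalty folded into the raw gradient. It is a minimal illustration of the published update rule, not DDESONN's internal optimizer code; the function name `adam_step` and its argument defaults are chosen here for the sketch.

``` r
adam_step <- function(w, grad, m, v, t,
                      lr = 0.001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8,
                      l1 = 0, l2 = 0) {
  # Mixed penalty: subgradient of l1 * |w| + (l2 / 2) * w^2 added to the gradient.
  g <- grad + l1 * sign(w) + l2 * w
  m <- beta1 * m + (1 - beta1) * g       # biased first-moment estimate
  v <- beta2 * v + (1 - beta2) * g^2     # biased second-moment estimate
  m_hat <- m / (1 - beta1^t)             # bias-corrected moments
  v_hat <- v / (1 - beta2^t)
  w <- w - lr * m_hat / (sqrt(v_hat) + eps)
  list(w = w, m = m, v = v)              # updated weights and optimizer state
}

# One step on a toy two-weight gradient, starting from zeroed moment state:
st <- adam_step(w = c(0.5, -0.3), grad = c(0.1, -0.2),
                m = c(0, 0), v = c(0, 0), t = 1)
```

Returning the moment vectors alongside the weights mirrors the kind of transparent optimizer state the Description emphasizes: every quantity the update depends on is an ordinary R object that can be inspected between steps.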

Version: 7.1.9
Depends: R (≥ 4.1.0)
Imports: R6, stats, utils, dplyr, openxlsx, tidyr, pROC, PRROC, reshape2, digest, ggplot2
Suggests: testthat, knitr, rmarkdown, foreach, quantmod, randomForest, reticulate, zoo, readxl, tibble
Published: 2026-03-03
DOI: 10.32614/CRAN.package.DDESONN (may not be active yet)
Author: Mathew William Armitage Fok [aut, cre]
Maintainer: Mathew William Armitage Fok <quiksilver67213 at yahoo.com>
BugReports: https://github.com/MatHatter/DDESONN/issues
License: MIT + file LICENSE
URL: https://github.com/MatHatter/DDESONN
NeedsCompilation: no
Materials: README
CRAN checks: DDESONN results

Documentation:

Reference manual: DDESONN.html, DDESONN.pdf
Vignettes: DDESONN vs Keras — 1000-Seed Summary — Heart Failure (source, R code)
DDESONN Main / Change / Movement Logs - Ensemble Runs: Scenario D (source, R code)
DDESONN — Plot Controls — Scenario 1 — Ensemble Runs: Scenario C & D (source, R code)
DDESONN — Plot Controls — Scenario 1 & 2 — Single Run: Scenario A (source, R code)

Downloads:

Package source: DDESONN_7.1.9.tar.gz
Windows binaries: r-devel: not available, r-release: not available, r-oldrel: not available
macOS binaries: r-release (arm64): DDESONN_7.1.9.tgz, r-oldrel (arm64): DDESONN_7.1.9.tgz, r-release (x86_64): not available, r-oldrel (x86_64): not available

Linking:

Please use the canonical form https://CRAN.R-project.org/package=DDESONN to link to this page.
