CRAN Package Check Results for Maintainer ‘Lorenz A. Kapsner <lorenz.kapsner at gmail.com>’

Last updated on 2025-12-26 23:51:39 CET.

Package          ERROR  OK
autonewsmd              13
BiasCorrector           13
DQAgui                  13
DQAstats                13
kdry             4       9
mlexperiments    1      12
mllrnrs          4       9
mlsurvlrnrs             13
rBiasCorrection         13
sjtable2df              13

Package autonewsmd

Current CRAN status: OK: 13

Package BiasCorrector

Current CRAN status: OK: 13

Package DQAgui

Current CRAN status: OK: 13

Package DQAstats

Current CRAN status: OK: 13

Package kdry

Current CRAN status: ERROR: 4, OK: 9

Version: 0.0.2
Check: examples
Result: ERROR
    Running examples in ‘kdry-Ex.R’ failed
    The error most likely occurred in:
    > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
    > ### Name: mlh_reshape
    > ### Title: mlh_reshape
    > ### Aliases: mlh_reshape
    >
    > ### ** Examples
    >
    > set.seed(123)
    > class_0 <- rbeta(100, 2, 4)
    > class_1 <- (1 - class_0) * 0.4
    > class_2 <- (1 - class_0) * 0.6
    > dataset <- cbind("0" = class_0, "1" = class_1, "2" = class_2)
    > mlh_reshape(dataset)
    Error in xtfrm.data.frame(list(`0` = 0.219788839894465, `1` = 0.312084464042214, :
      cannot xtfrm data frames
    Calls: mlh_reshape ... [.data.table -> which.max -> xtfrm -> xtfrm.data.frame
    Execution halted
Flavors: r-devel-linux-x86_64-debian-clang, r-devel-linux-x86_64-debian-gcc

Version: 0.0.2
Check: tests
Result: ERROR
    Running ‘testthat.R’ [6s/7s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    >
    > library(testthat)
    > library(kdry)
    >
    > test_check("kdry")
    Saving _problems/test-mlh-70.R
    [ FAIL 1 | WARN 0 | SKIP 6 | PASS 71 ]

    ══ Skipped tests (6) ═══════════════════════════════════════════
    • On CRAN (6): 'test-lints.R:10:5', 'test-rep.R:3:1', 'test-rep.R:22:1',
      'test-rep.R:42:1', 'test-rep.R:61:1', 'test-rep.R:75:1'

    ══ Failed tests ════════════════════════════════════════════════
    ── Error ('test-mlh.R:70:5'): test mlh - mlh_outsample_row_indices ─────────────
    Error in `xtfrm.data.frame(structure(list(`0` = 0.219788839894465, `1` = 0.312084464042214, `2` = 0.468126696063321), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55967c6c6fe0>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─kdry::mlh_reshape(dataset) at test-mlh.R:70:5
     2. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
     3. │ └─data.table:::`[.data.table`(...)
     4. └─base::which.max(.SD)
     5. ├─base::xtfrm(`<data.table>`)
     6. └─base::xtfrm.data.frame(`<data.table>`)

    [ FAIL 1 | WARN 0 | SKIP 6 | PASS 71 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-debian-clang

Version: 0.0.2
Check: tests
Result: ERROR
    Running ‘testthat.R’ [4s/5s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    >
    > library(testthat)
    > library(kdry)
    >
    > test_check("kdry")
    Saving _problems/test-mlh-70.R
    [ FAIL 1 | WARN 0 | SKIP 6 | PASS 71 ]

    ══ Skipped tests (6) ═══════════════════════════════════════════
    • On CRAN (6): 'test-lints.R:10:5', 'test-rep.R:3:1', 'test-rep.R:22:1',
      'test-rep.R:42:1', 'test-rep.R:61:1', 'test-rep.R:75:1'

    ══ Failed tests ════════════════════════════════════════════════
    ── Error ('test-mlh.R:70:5'): test mlh - mlh_outsample_row_indices ─────────────
    Error in `xtfrm.data.frame(structure(list(`0` = 0.219788839894465, `1` = 0.312084464042214, `2` = 0.468126696063321), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55f1f8151070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─kdry::mlh_reshape(dataset) at test-mlh.R:70:5
     2. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
     3. │ └─data.table:::`[.data.table`(...)
     4. └─base::which.max(.SD)
     5. ├─base::xtfrm(`<data.table>`)
     6. └─base::xtfrm.data.frame(`<data.table>`)

    [ FAIL 1 | WARN 0 | SKIP 6 | PASS 71 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.0.2
Check: examples
Result: ERROR
    Running examples in ‘kdry-Ex.R’ failed
    The error most likely occurred in:
    > ### Name: mlh_reshape
    > ### Title: mlh_reshape
    > ### Aliases: mlh_reshape
    >
    > ### ** Examples
    >
    > set.seed(123)
    > class_0 <- rbeta(100, 2, 4)
    > class_1 <- (1 - class_0) * 0.4
    > class_2 <- (1 - class_0) * 0.6
    > dataset <- cbind("0" = class_0, "1" = class_1, "2" = class_2)
    > mlh_reshape(dataset)
    Error in xtfrm.data.frame(list(`0` = 0.219788839894465, `1` = 0.312084464042214, :
      cannot xtfrm data frames
    Calls: mlh_reshape ... [.data.table -> which.max -> xtfrm -> xtfrm.data.frame
    Execution halted
Flavors: r-devel-linux-x86_64-fedora-clang, r-devel-linux-x86_64-fedora-gcc

Version: 0.0.2
Check: tests
Result: ERROR
    Running ‘testthat.R’ [9s/10s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    >
    > library(testthat)
    > library(kdry)
    >
    > test_check("kdry")
    Saving _problems/test-mlh-70.R
    [ FAIL 1 | WARN 0 | SKIP 6 | PASS 71 ]

    ══ Skipped tests (6) ═══════════════════════════════════════════
    • On CRAN (6): 'test-lints.R:10:5', 'test-rep.R:3:1', 'test-rep.R:22:1',
      'test-rep.R:42:1', 'test-rep.R:61:1', 'test-rep.R:75:1'

    ══ Failed tests ════════════════════════════════════════════════
    ── Error ('test-mlh.R:70:5'): test mlh - mlh_outsample_row_indices ─────────────
    Error in `xtfrm.data.frame(structure(list(`0` = 0.219788839894465, `1` = 0.312084464042214, `2` = 0.468126696063321), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x5638be2b5d10>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─kdry::mlh_reshape(dataset) at test-mlh.R:70:5
     2. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
     3. │ └─data.table:::`[.data.table`(...)
     4. └─base::which.max(.SD)
     5. ├─base::xtfrm(`<data.table>`)
     6. └─base::xtfrm.data.frame(`<data.table>`)

    [ FAIL 1 | WARN 0 | SKIP 6 | PASS 71 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.0.2
Check: tests
Result: ERROR
    Running ‘testthat.R’ [10s/12s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    >
    > library(testthat)
    > library(kdry)
    >
    > test_check("kdry")
    Saving _problems/test-mlh-70.R
    [ FAIL 1 | WARN 0 | SKIP 6 | PASS 71 ]

    ══ Skipped tests (6) ═══════════════════════════════════════════
    • On CRAN (6): 'test-lints.R:10:5', 'test-rep.R:3:1', 'test-rep.R:22:1',
      'test-rep.R:42:1', 'test-rep.R:61:1', 'test-rep.R:75:1'

    ══ Failed tests ════════════════════════════════════════════════
    ── Error ('test-mlh.R:70:5'): test mlh - mlh_outsample_row_indices ─────────────
    Error in `xtfrm.data.frame(structure(list(`0` = 0.219788839894465, `1` = 0.312084464042214, `2` = 0.468126696063321), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x2a850550>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─kdry::mlh_reshape(dataset) at test-mlh.R:70:5
     2. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
     3. │ └─data.table:::`[.data.table`(...)
     4. └─base::which.max(.SD)
     5. ├─base::xtfrm(`<dt[,3]>`)
     6. └─base::xtfrm.data.frame(`<dt[,3]>`)

    [ FAIL 1 | WARN 0 | SKIP 6 | PASS 71 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
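All four kdry failures share one root cause visible in the backtraces: `mlh_reshape()` calls `which.max(.SD)` on a one-row data.table, which dispatches to `xtfrm()`, and `xtfrm.data.frame()` in r-devel refuses data frames. A minimal sketch of the failing pattern and one possible workaround (an illustrative assumption, not the maintainer's actual patch): coerce `.SD` to a plain numeric vector before calling `which.max()`.

```r
# Sketch only: reproduces the pattern from the check log and shows a
# hedged workaround. The fix shown (unlist(.SD)) is an assumption for
# illustration, not kdry's actual patch.
library(data.table)

set.seed(123)
class_0 <- rbeta(100, 2, 4)
class_1 <- (1 - class_0) * 0.4
class_2 <- (1 - class_0) * 0.6
dataset <- cbind("0" = class_0, "1" = class_1, "2" = class_2)
cn <- colnames(dataset)

# Failing pattern (per the backtrace): .SD here is a one-row data.table,
# so which.max(.SD) falls through to xtfrm.data.frame() on r-devel:
# as.data.table(dataset)[, cn[which.max(.SD)], by = seq_len(nrow(dataset))]

# Workaround: flatten .SD to a numeric vector first, so which.max()
# never sees a data frame.
reshaped <- as.data.table(dataset)[
  , cn[which.max(unlist(.SD, use.names = FALSE))],
  by = seq_len(nrow(dataset))
]

head(reshaped$V1)
```

Equivalently, `max.col()` on the underlying matrix would avoid the per-row grouping entirely; either route sidesteps the `xtfrm()` dispatch.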

Package mlexperiments

Current CRAN status: ERROR: 1, OK: 12

Version: 0.0.8
Check: examples
Result: ERROR
    Running examples in 'mlexperiments-Ex.R' failed
    The error most likely occurred in:
    > ### Name: performance
    > ### Title: performance
    > ### Aliases: performance
    >
    > ### ** Examples
    >
    > dataset <- do.call(
    +   cbind,
    +   c(sapply(paste0("col", 1:6), function(x) {
    +     rnorm(n = 500)
    +   },
    +   USE.NAMES = TRUE,
    +   simplify = FALSE
    +   ),
    +   list(target = sample(0:1, 500, TRUE))
    + ))
    >
    > fold_list <- splitTools::create_folds(
    +   y = dataset[, 7],
    +   k = 3,
    +   type = "stratified",
    +   seed = 123
    + )
    >
    > glm_optimization <- mlexperiments::MLCrossValidation$new(
    +   learner = LearnerGlm$new(),
    +   fold_list = fold_list,
    +   seed = 123
    + )
    >
    > glm_optimization$learner_args <- list(family = binomial(link = "logit"))
    > glm_optimization$predict_args <- list(type = "response")
    > glm_optimization$performance_metric_args <- list(
    +   positive = "1",
    +   negative = "0"
    + )
    > glm_optimization$performance_metric <- list(
    +   auc = metric("AUC"), sensitivity = metric("TPR"),
    +   specificity = metric("TNR")
    + )
    > glm_optimization$return_models <- TRUE
    >
    > # set data
    > glm_optimization$set_data(
    +   x = data.matrix(dataset[, -7]),
    +   y = dataset[, 7]
    + )
    >
    > cv_results <- glm_optimization$execute()
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    >
    > # predictions
    > preds <- mlexperiments::predictions(
    +   object = glm_optimization,
    +   newdata = data.matrix(dataset[, -7]),
    +   na.rm = FALSE,
    +   ncores = 2L,
    +   type = "response"
    + )
    Error in `[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm), :
      attempt access index 3/3 in VECTOR_ELT
    Calls: <Anonymous> -> [ -> [.data.table
    Execution halted
Flavor: r-devel-windows-x86_64

Version: 0.0.8
Check: tests
Result: ERROR
    Running 'testthat.R' [296s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    >
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    >
    > library(testthat)
    > library(mlexperiments)
    >
    > test_check("mlexperiments")
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold4
    CV fold: Fold5
    Testing for identical folds in 2 and 1.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    Saving _problems/test-glm_predictions-79.R
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    Saving _problems/test-glm_predictions-188.R
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)...  22.5 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.64 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)...  23.36 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.64 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    Registering parallel backend using 2 cores.
    Running initial scoring function 4 times in 2 thread(s)...  8.95 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.52 seconds
      3) Running FUN 2 times in 2 thread(s)...  3.55 seconds
    CV fold: Fold1
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)...  8.58 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.57 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold2
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)...  8.45 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.56 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)...  8.23 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.58 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  18.5 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.45 seconds
      3) Running FUN 2 times in 2 thread(s)...  3.5 seconds
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  8.56 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.47 seconds
      3) Running FUN 2 times in 2 thread(s)...  1.36 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  8.78 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.46 seconds
      3) Running FUN 2 times in 2 thread(s)...  1.46 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  9.33 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.47 seconds
      3) Running FUN 2 times in 2 thread(s)...  1.5 seconds
    CV fold: Fold1
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold2
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold3
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  2.76 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.49 seconds
      3) Running FUN 2 times in 2 thread(s)...  0.18 seconds
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  2.75 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.45 seconds
      3) Running FUN 2 times in 2 thread(s)...  0.24 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  2.79 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.42 seconds
      3) Running FUN 2 times in 2 thread(s)...  0.19 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  2.78 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  0.46 seconds
      3) Running FUN 2 times in 2 thread(s)...  0.19 seconds
    CV fold: Fold1
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold2
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold3
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    [ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]

    ══ Skipped tests (1) ═══════════════════════════════════════════
    • On CRAN (1): 'test-lints.R:10:5'

    ══ Failed tests ════════════════════════════════════════════════
    ── Error ('test-glm_predictions.R:73:5'): test predictions, binary - glm ───────
    Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm), sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res), by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
    Backtrace:
        ▆
     1. └─mlexperiments::predictions(...) at test-glm_predictions.R:73:5
     2. ├─...[]
     3. └─data.table:::`[.data.table`(...)
    ── Error ('test-glm_predictions.R:182:5'): test predictions, regression - lm ───
    Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm), sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res), by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
    Backtrace:
        ▆
     1. └─mlexperiments::predictions(...) at test-glm_predictions.R:182:5
     2. ├─...[]
     3. └─data.table:::`[.data.table`(...)

    [ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-windows-x86_64
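The mlexperiments failure has a different shape: `predictions()` adds `mean` and `sd` columns by reference (`:=`) while grouping row-by-row with `.SDcols = colnames(res)`, and the r-devel data.table build aborts with "attempt access index N/N in VECTOR_ELT" because the column set changes under the grouping. A hedged sketch of the pattern and a vectorised alternative (illustrative only; the package's actual fix may differ):

```r
# Sketch only: mimics the failing row-wise summary from the check log.
# The workaround shown (rowMeans/apply on a matrix copy) is an assumption
# about the intent, not mlexperiments' actual patch.
library(data.table)

set.seed(123)
res <- as.data.table(matrix(rnorm(15), nrow = 5, ncol = 3))

# Failing pattern (per the check log): adding columns by reference while
# grouping over every row with .SDcols = colnames(res):
# res[, `:=`(mean = mean(as.numeric(.SD)), sd = stats::sd(as.numeric(.SD))),
#     .SDcols = colnames(res), by = seq_len(nrow(res))]

# Workaround: compute the row-wise statistics on a plain matrix snapshot,
# then attach them in one vectorised assignment.
m <- as.matrix(res)
res[, `:=`(mean = rowMeans(m), sd = apply(m, 1L, stats::sd))]

print(res)
```

Snapshotting the columns into a matrix before assigning avoids both the per-row grouping and the by-reference mutation of the very columns being summarised.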

Package mllrnrs

Current CRAN status: ERROR: 4, OK: 9

Version: 0.0.7
Check: tests
Result: ERROR Running ‘testthat.R’ [65s/265s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > # https://github.com/Rdatatable/data.table/issues/5658 > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mllrnrs) > > test_check("mllrnrs") CV fold: Fold1 CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 7.55 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 12.845 seconds 3) Running FUN 2 times in 2 thread(s)... 0.554 seconds OMP: Warning #96: Cannot form a team with 3 threads, using 2 instead. OMP: Hint Consider unsetting KMP_DEVICE_THREAD_LIMIT (KMP_ALL_THREADS), KMP_TEAMS_THREAD_LIMIT, and OMP_THREAD_LIMIT (if any are set). CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 8.346 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 12.369 seconds 3) Running FUN 2 times in 2 thread(s)... 0.702 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 
8.193 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 17.069 seconds 3) Running FUN 2 times in 2 thread(s)... 0.607 seconds CV fold: Fold1 Classification: using 'mean classification error' as optimization metric. Saving _problems/test-binary-287.R CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Saving _problems/test-multiclass-162.R CV fold: Fold1 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold2 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold3 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold1 Saving _problems/test-multiclass-294.R CV fold: Fold1 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 6.892 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.903 seconds 3) Running FUN 2 times in 2 thread(s)... 0.922 seconds CV fold: Fold2 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 7.493 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.249 seconds 3) Running FUN 2 times in 2 thread(s)... 0.592 seconds CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 6.746 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.353 seconds 3) Running FUN 2 times in 2 thread(s)... 
0.942 seconds CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold2 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold3 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 9.821 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 21.466 seconds 3) Running FUN 2 times in 2 thread(s)... 1.08 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 9.533 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 3.618 seconds 3) Running FUN 2 times in 2 thread(s)... 0.994 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 8.939 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 31.815 seconds 3) Running FUN 2 times in 2 thread(s)... 
0.942 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:287:5'): test nested cv, grid, binary - ranger ────────
Error in `xtfrm.data.frame(structure(list(`0` = 0.379858310721837, `1` = 0.620141689278164), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c38be73fe0>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─ranger_optimizer$execute() at test-binary.R:287:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─hparam_tuner$execute(k = self$k_tuning)
 9. │ └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. │ └─mlexperiments:::.run_optimizer(...)
11. │ └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. │ ├─base::do.call(...)
13. │ └─mlexperiments (local) `<fn>`(...)
14. │ └─base::lapply(...)
15. │ └─mlexperiments (local) FUN(X[[i]], ...)
16. │ ├─base::do.call(FUN, fun_parameters)
17. │ └─mlexperiments (local) `<fn>`(...)
18. │ ├─base::do.call(private$fun_optim_cv, kwargs)
19. │ └─mllrnrs (local) `<fn>`(...)
20. │ ├─base::do.call(ranger_predict, pred_args)
21. │ └─mllrnrs (local) `<fn>`(...)
22. │ └─kdry::mlh_reshape(preds)
23. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
24. │ └─data.table:::`[.data.table`(...)
25. └─base::which.max(.SD)
26. ├─base::xtfrm(`<dt[,2]>`)
27. └─base::xtfrm.data.frame(`<dt[,2]>`)
── Error ('test-multiclass.R:162:5'): test nested cv, grid, multiclass - lightgbm ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.20774260202068, `1` = 0.136781829323219, `2` = 0.655475568656101), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c38be73fe0>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─lightgbm_optimizer$execute() at test-multiclass.R:162:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─mlexperiments:::.cv_fit_model(...)
 9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.250160574913025, `1` = 0.124035485088825, `2` = 0.62580394744873), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c38be73fe0>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─xgboost_optimizer$execute() at test-multiclass.R:294:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─mlexperiments:::.cv_fit_model(...)
 9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error: ! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
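All of the failures above bottom out in the same call chain: `kdry::mlh_reshape()` runs `which.max(.SD)` inside a by-row `data.table` query, and on r-devel `which.max()` on a one-row data frame now fails with "cannot xtfrm data frames". A minimal sketch of an alternative that sidesteps the coercion entirely, using `base::max.col()` on the prediction matrix; `reshape_argmax` is a hypothetical helper name, not the package's actual fix:

```r
# Hypothetical replacement for the failing per-row which.max(.SD) pattern.
# max.col() works directly on a numeric matrix, so no data.frame is ever
# handed to which.max()/xtfrm().
reshape_argmax <- function(object) {
  m <- as.matrix(object)  # class-probability matrix; columns = class labels
  cn <- colnames(m)
  factor(cn[max.col(m, ties.method = "first")], levels = sort(cn))
}

# Reproducer from the kdry example section above:
set.seed(123)
class_0 <- rbeta(100, 2, 4)
class_1 <- (1 - class_0) * 0.4
class_2 <- (1 - class_0) * 0.6
dataset <- cbind("0" = class_0, "1" = class_1, "2" = class_2)
head(reshape_argmax(dataset))  # one class label per row instead of an error
```

`max.col()` is also vectorized over rows, so it avoids the row-wise `by = seq_len(nrow(object))` grouping as well; whether that matches the semantics kdry needs (e.g. its tie-breaking behavior) would have to be checked against the package's tests.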

Version: 0.0.7
Check: tests
Result: ERROR Running ‘testthat.R’ [51s/164s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > # https://github.com/Rdatatable/data.table/issues/5658 > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mllrnrs) > > test_check("mllrnrs") CV fold: Fold1 CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 5.368 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 6.117 seconds 3) Running FUN 2 times in 2 thread(s)... 0.573 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 4.949 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 4.889 seconds 3) Running FUN 2 times in 2 thread(s)... 0.512 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 4.947 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 8.056 seconds 3) Running FUN 2 times in 2 thread(s)... 0.56 seconds CV fold: Fold1 Classification: using 'mean classification error' as optimization metric. 
Saving _problems/test-binary-287.R CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Saving _problems/test-multiclass-162.R CV fold: Fold1 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold2 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold3 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold1 Saving _problems/test-multiclass-294.R CV fold: Fold1 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 3.782 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.693 seconds 3) Running FUN 2 times in 2 thread(s)... 0.463 seconds CV fold: Fold2 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 4.265 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.099 seconds 3) Running FUN 2 times in 2 thread(s)... 0.546 seconds CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 4.314 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.443 seconds 3) Running FUN 2 times in 2 thread(s)... 1.494 seconds CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. 
CV fold: Fold2 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold3 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 7.904 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 10.589 seconds 3) Running FUN 2 times in 2 thread(s)... 0.596 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 5.377 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.315 seconds 3) Running FUN 2 times in 2 thread(s)... 0.612 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 5.5 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 16.422 seconds 3) Running FUN 2 times in 2 thread(s)... 
0.861 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:287:5'): test nested cv, grid, binary - ranger ────────
Error in `xtfrm.data.frame(structure(list(`0` = 0.379858310721837, `1` = 0.620141689278164), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x556d81434070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─ranger_optimizer$execute() at test-binary.R:287:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─hparam_tuner$execute(k = self$k_tuning)
 9. │ └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. │ └─mlexperiments:::.run_optimizer(...)
11. │ └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. │ ├─base::do.call(...)
13. │ └─mlexperiments (local) `<fn>`(...)
14. │ └─base::lapply(...)
15. │ └─mlexperiments (local) FUN(X[[i]], ...)
16. │ ├─base::do.call(FUN, fun_parameters)
17. │ └─mlexperiments (local) `<fn>`(...)
18. │ ├─base::do.call(private$fun_optim_cv, kwargs)
19. │ └─mllrnrs (local) `<fn>`(...)
20. │ ├─base::do.call(ranger_predict, pred_args)
21. │ └─mllrnrs (local) `<fn>`(...)
22. │ └─kdry::mlh_reshape(preds)
23. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
24. │ └─data.table:::`[.data.table`(...)
25. └─base::which.max(.SD)
26. ├─base::xtfrm(`<dt[,2]>`)
27. └─base::xtfrm.data.frame(`<dt[,2]>`)
── Error ('test-multiclass.R:162:5'): test nested cv, grid, multiclass - lightgbm ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.20774260202068, `1` = 0.136781829323219, `2` = 0.655475568656101), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x556d81434070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─lightgbm_optimizer$execute() at test-multiclass.R:162:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─mlexperiments:::.cv_fit_model(...)
 9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.250160574913025, `1` = 0.124035485088825, `2` = 0.62580394744873), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x556d81434070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─xgboost_optimizer$execute() at test-multiclass.R:294:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─mlexperiments:::.cv_fit_model(...)
 9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error: ! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.0.7
Check: tests
Result: ERROR Running ‘testthat.R’ [87s/239s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > # https://github.com/Rdatatable/data.table/issues/5658 > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mllrnrs) > > test_check("mllrnrs") CV fold: Fold1 CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 10.758 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 16.779 seconds 3) Running FUN 2 times in 2 thread(s)... 0.785 seconds OMP: Warning #96: Cannot form a team with 24 threads, using 2 instead. OMP: Hint Consider unsetting KMP_DEVICE_THREAD_LIMIT (KMP_ALL_THREADS), KMP_TEAMS_THREAD_LIMIT, and OMP_THREAD_LIMIT (if any are set). CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 10.313 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 12.87 seconds 3) Running FUN 2 times in 2 thread(s)... 0.662 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 
8.89 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 19.163 seconds 3) Running FUN 2 times in 2 thread(s)... 0.693 seconds CV fold: Fold1 Classification: using 'mean classification error' as optimization metric. Saving _problems/test-binary-287.R CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Saving _problems/test-multiclass-162.R CV fold: Fold1 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold2 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold3 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold1 Saving _problems/test-multiclass-294.R CV fold: Fold1 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 6.836 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.372 seconds 3) Running FUN 2 times in 2 thread(s)... 0.726 seconds CV fold: Fold2 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 6.552 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.395 seconds 3) Running FUN 2 times in 2 thread(s)... 0.625 seconds CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 6.515 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.189 seconds 3) Running FUN 2 times in 2 thread(s)... 
0.693 seconds CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold2 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold3 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 7.849 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 20.789 seconds 3) Running FUN 2 times in 2 thread(s)... 0.838 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 7.109 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 8.6 seconds 3) Running FUN 2 times in 2 thread(s)... 0.661 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 7.097 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 13.924 seconds 3) Running FUN 2 times in 2 thread(s)... 
0.566 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:287:5'): test nested cv, grid, binary - ranger ────────
Error in `xtfrm.data.frame(structure(list(`0` = 0.403024198843656, `1` = 0.596975801156344), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c52180ed10>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─ranger_optimizer$execute() at test-binary.R:287:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─hparam_tuner$execute(k = self$k_tuning)
 9. │ └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. │ └─mlexperiments:::.run_optimizer(...)
11. │ └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. │ ├─base::do.call(...)
13. │ └─mlexperiments (local) `<fn>`(...)
14. │ └─base::lapply(...)
15. │ └─mlexperiments (local) FUN(X[[i]], ...)
16. │ ├─base::do.call(FUN, fun_parameters)
17. │ └─mlexperiments (local) `<fn>`(...)
18. │ ├─base::do.call(private$fun_optim_cv, kwargs)
19. │ └─mllrnrs (local) `<fn>`(...)
20. │ ├─base::do.call(ranger_predict, pred_args)
21. │ └─mllrnrs (local) `<fn>`(...)
22. │ └─kdry::mlh_reshape(preds)
23. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
24. │ └─data.table:::`[.data.table`(...)
25. └─base::which.max(.SD)
26. ├─base::xtfrm(`<dt[,2]>`)
27. └─base::xtfrm.data.frame(`<dt[,2]>`)
── Error ('test-multiclass.R:162:5'): test nested cv, grid, multiclass - lightgbm ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.20774260202068, `1` = 0.136781829323219, `2` = 0.655475568656101), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c52180ed10>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─lightgbm_optimizer$execute() at test-multiclass.R:162:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─mlexperiments:::.cv_fit_model(...)
 9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.274507701396942, `1` = 0.12648206949234, `2` = 0.599010229110718), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c52180ed10>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─xgboost_optimizer$execute() at test-multiclass.R:294:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─mlexperiments:::.cv_fit_model(...)
 9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error: ! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.0.7
Check: tests
Result: ERROR Running ‘testthat.R’ [81s/272s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > # https://github.com/Rdatatable/data.table/issues/5658 > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mllrnrs) > > test_check("mllrnrs") CV fold: Fold1 CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 13.936 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 15.599 seconds 3) Running FUN 2 times in 2 thread(s)... 1.026 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 9.67 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 13.19 seconds 3) Running FUN 2 times in 2 thread(s)... 0.761 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 8.648 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 18.391 seconds 3) Running FUN 2 times in 2 thread(s)... 
0.758 seconds CV fold: Fold1 Classification: using 'mean classification error' as optimization metric. Saving _problems/test-binary-287.R CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Saving _problems/test-multiclass-162.R CV fold: Fold1 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold2 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold3 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold1 Saving _problems/test-multiclass-294.R CV fold: Fold1 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 7.741 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.538 seconds 3) Running FUN 2 times in 2 thread(s)... 0.699 seconds CV fold: Fold2 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 7.22 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.107 seconds 3) Running FUN 2 times in 2 thread(s)... 0.672 seconds CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 7.472 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.057 seconds 3) Running FUN 2 times in 2 thread(s)... 0.569 seconds CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Regression: using 'mean squared error' as optimization metric. 
Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold2 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold3 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 8.488 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 14.291 seconds 3) Running FUN 2 times in 2 thread(s)... 0.902 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 8.278 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 4.069 seconds 3) Running FUN 2 times in 2 thread(s)... 1.247 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 10.291 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 37.276 seconds 3) Running FUN 2 times in 2 thread(s)... 
1.041 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:287:5'): test nested cv, grid, binary - ranger ────────
Error in `xtfrm.data.frame(structure(list(`0` = 0.379858310721837, `1` = 0.620141689278164), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x20a9e550>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─ranger_optimizer$execute() at test-binary.R:287:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─hparam_tuner$execute(k = self$k_tuning)
 9. │ └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. │ └─mlexperiments:::.run_optimizer(...)
11. │ └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. │ ├─base::do.call(...)
13. │ └─mlexperiments (local) `<fn>`(...)
14. │ └─base::lapply(...)
15. │ └─mlexperiments (local) FUN(X[[i]], ...)
16. │ ├─base::do.call(FUN, fun_parameters)
17. │ └─mlexperiments (local) `<fn>`(...)
18. │ ├─base::do.call(private$fun_optim_cv, kwargs)
19. │ └─mllrnrs (local) `<fn>`(...)
20. │ ├─base::do.call(ranger_predict, pred_args)
21. │ └─mllrnrs (local) `<fn>`(...)
22. │ └─kdry::mlh_reshape(preds)
23. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
24. │ └─data.table:::`[.data.table`(...)
25. └─base::which.max(.SD)
26. ├─base::xtfrm(`<dt[,2]>`)
27. └─base::xtfrm.data.frame(`<dt[,2]>`)
── Error ('test-multiclass.R:162:5'): test nested cv, grid, multiclass - lightgbm ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.20774260202068, `1` = 0.136781829323219, `2` = 0.655475568656101), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x20a9e550>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─lightgbm_optimizer$execute() at test-multiclass.R:162:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─mlexperiments:::.cv_fit_model(...)
 9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.250160574913025, `1` = 0.124035485088825, `2` = 0.62580394744873), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x20a9e550>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─xgboost_optimizer$execute() at test-multiclass.R:294:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─mlexperiments:::.cv_fit_model(...)
 9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error: ! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc

Package mlsurvlrnrs

Current CRAN status: OK: 13

Package rBiasCorrection

Current CRAN status: OK: 13

Package sjtable2df

Current CRAN status: OK: 13
