LFA App User’s Guide

Filip Paskali, Weronika Schary, Matthias Kohl

2024-05-27



Introduction

The LFA Shiny apps (W. Chang et al. 2020) comprise four modular applications:

  1. LFA App core for image acquisition, editing, region of interest definition via gridding, background correction with multiple available methods, as well as intensity data extraction of the pre-defined bands of the analysed LFAs. More precisely, it consists of Tab 1, Tab 2 and parts of Tab 3 described in detail below.

  2. LFA App calibration extends the LFA App core by methods for merging the intensity data with information from experiments, computation of calibration models and the generation of a report about the calibration analysis. The functionality corresponds to Tabs 1-6 described below.

  3. LFA App quantification enables quantification of the extracted intensity values via loading existing calibration models. It extends the LFA App core by Tab 7 described below.

  4. LFA App analysis includes the full functionality mentioned above and enables a full analysis in one application. That is, it consists of Tabs 1-7.

The graphical user interface of the apps is built in a modular way and divided into several tabs, where each tab represents a specific step of the workflow. While the applications can be used in a sequential fashion, the specific steps can also be carried out individually.



Testing our apps without installing

Before installing our package, one can also test our apps on https://www.shinyapps.io. The desktop version of our full analysis app is available at

https://lfapp.shinyapps.io/LFAnalysis/

The respective mobile version is at

https://lfapp.shinyapps.io/mobile_app/



Installation

The package requires the Bioconductor package EBImage, which should be installed first via:

## Install package BiocManager
if(!requireNamespace("BiocManager", quietly = TRUE)) 
  install.packages("BiocManager")
## Use BiocManager to install EBImage
BiocManager::install("EBImage", update = FALSE)

Our package depends on the most recent version of package shinyMobile, which must be installed from GitHub (https://github.com/RinteRface/shinyMobile) by:

## Install package remotes
if(!requireNamespace("remotes", quietly = TRUE)) 
  install.packages("remotes")
## Install package shinyMobile
remotes::install_github("RinteRface/shinyMobile")

For generating our vignette and automatic reports, we need the packages knitr and rmarkdown, which are installed next:

## Install package knitr
if(!requireNamespace("knitr", quietly = TRUE)) 
  install.packages("knitr")
## Install package rmarkdown
if(!requireNamespace("rmarkdown", quietly = TRUE)) 
  install.packages("rmarkdown")

Finally, one can install package LFApp; all remaining dependencies will be installed automatically:

## Install package LFApp
remotes::install_github("fpaskali/LFApp", build_vignettes = TRUE)



Start Apps

As already described above, package LFApp includes four different Shiny apps, each available in a desktop and a mobile version. They can be started with one of the following commands:

## desktop versions
## LFA App core
LFApp::run_core()

## LFA App quantification
LFApp::run_quan()

## LFA App calibration
LFApp::run_cal()

## LFA App full analysis
LFApp::run_analysis()

## mobile versions
## LFA App core
LFApp::run_mobile_core()

## LFA App quantification
LFApp::run_mobile_quan()

## LFA App calibration
LFApp::run_mobile_cal()

## LFA App full analysis
LFApp::run_mobile_analysis()



Tab 1: Cropping and Segmentation

Upload Image or Choose Sample Image

The first step consists of loading an image of one or several lateral flow strips. For trying out the app, one can load the sample image provided with the package, which we also use for the purpose of demonstration. The image can be rotated or flipped horizontally (button FH) and vertically (button FV) via the rotation panel on the left-hand side. It can also be cropped to a specific size via double click on the interactive plot.
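
Image operations of this kind are available in package EBImage (Pau et al. 2010). The following minimal sketch (outside the app) assumes that img is an EBImage Image object; whether the app uses exactly these calls is an implementation detail.

## Minimal sketch of rotating and mirroring an image with EBImage,
## assuming img is an EBImage Image object
library(EBImage)
img_rot <- rotate(img, angle = 90) ## rotate by 90 degrees
img_fv  <- flip(img)               ## mirror around the horizontal axis
img_fh  <- flop(img)               ## mirror around the vertical axis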

Set number of strips and number of lines per strip

In the next step, one needs to select the number of strips shown in the image and the number of lines (bands) per strip.

The maximum number of strips per image is 10, but in principle this could be extended to a higher number. The only requirement is that the strips are regularly spaced, with identical or at least similar spaces between them and between the lines. The minimum number of lines per strip is two and the maximum is six, since we are not aware of any lateral flow assay having more than six lines. But again, this could easily be extended to a higher number.

Apply Segmentation

After loading the image and selecting the number of strips as well as lines per strip, one needs to click on the image and drag to generate a rectangular selection region on the image. It is best if each line lies roughly in the middle of its respective segment. In addition, there is a segment between two consecutive lines; this segment should not include any part of a line, since it can be used for background correction (see below). An example of a well selected cropping region is shown in the following screenshot.

By clicking “APPLY SEGMENTATION” the original image is cropped to the size of the selected region and segmented. The segments between the bands are used in our quantile background correction method; see below. If you want to change the cropped region, you can select “RESET”.

The cropping was adapted from the shiny app provided by package ShinyImage (Fu, Shin, and Matlof 2017).



Tab 2: Background

Select Strip

If there is more than one strip on the analysed image, one first needs to select which strip shall be analysed.

Optional: in case of color images

If the image is a color image, it will be transformed to grayscale. By default, we apply the luminance approach, which converts color to grayscale preserving luminance; that is, the grayscale values are obtained by

\[ 0.2126 \cdot R + 0.7152 \cdot G + 0.0722 \cdot B \] where R, G, B stand for the red, green and blue channel of the color image. By selecting mode “gray”, the arithmetic mean of the RGB values is used. Furthermore, the selection of mode “red”, “green” or “blue” allows a channel-wise analysis of color images.
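
For illustration, the different conversion modes can be reproduced outside the app with plain R arithmetic; the sketch below assumes that img is a height x width x 3 numeric array with values in [0, 1].

## Minimal sketch of the grayscale conversion modes, assuming img is a
## height x width x 3 numeric array (R, G, B) with values in [0, 1]
## mode "luminance" (default): weighted sum preserving luminance
gray_luminance <- 0.2126 * img[,,1] + 0.7152 * img[,,2] + 0.0722 * img[,,3]
## mode "gray": arithmetic mean of the three channels
gray_mean <- (img[,,1] + img[,,2] + img[,,3]) / 3
## modes "red", "green", "blue": analysis of a single color channel
gray_red <- img[,,1]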

Select threshold method and apply

There are four threshold methods, where the default is Otsu’s method (Otsu 1979). Otsu’s method returns a single intensity threshold that separates pixels into foreground and background. It is equivalent to k-means clustering of the intensity histogram (Liu and Yu 2009). Otsu’s and Li’s methods (Li and Tam 1998) are non-parametric, fully automatic algorithms that find the optimal threshold for the image. Additionally, two semi-automatic algorithms, namely Quantile and Triangle (Zack, Rogers, and Latt 1977), were included to cover different images and cases where the automatic threshold results are not ideal. The quantile method is a simple method that computes the specified quantile of all pixel intensities of the segments between the lines (per strip). In most of the images we have analysed so far, Otsu’s method performed very well and better than our quantile method. However, in cases where the lines are unclear and very blurred, our quantile method may outperform Otsu’s method. In the case of the triangle method, an additional offset (default = 0.2) is added to the computed threshold.
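
As an illustration of the default method, Otsu’s threshold can also be computed outside the app with function otsu from package EBImage; the sketch below assumes that gray is a grayscale Image with intensities in [0, 1] and only mimics the idea of the background correction.

## Minimal sketch of Otsu thresholding with EBImage, assuming gray is a
## grayscale Image object with intensities in [0, 1]
library(EBImage)
thr <- otsu(gray)                       ## single intensity threshold
foreground <- gray > thr                ## pixels above the background
corrected <- (gray - thr) * foreground  ## background-subtracted intensities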

By clicking “APPLY THRESHOLD” the selected threshold method is applied to the segmented images of the selected strip. The upper plots show the pixels that have intensities above the background. The lower plots show the images after background subtraction; see the screenshot below.

The thresholds (top of the page) as well as the calculated mean and median intensities of the lines (bottom of the page) are shown as well, ordered from the top of the strip to the bottom.

Add to intensity data

Clicking “ADD TO INTENSITY DATA” adds the mean and median of the background-subtracted intensities of the pixels above the threshold to the data. Now, one can proceed with the second strip of the image, use a different color conversion mode or go back to Tab 1 and load the next image. When all strips and images are processed, one can proceed with “SWITCH TO INTENSITY DATA”.

Switch To Intensity Data

Clicking “SWITCH TO INTENSITY DATA” changes the Tab to Tab 3.



Tab 3: Intensity Data

This Tab shows the latest version of the generated intensity data.

Download data

By clicking “DOWNLOAD DATA” the data can be saved as a standard .csv file.

Restart with new data

We also provide a “DELETE DATA” button for restarting with new images or uploading an already existing dataset.

Upload existing data

Instead of generating new data, one can also upload already existing intensity data, which may also have been generated with different software. This step can be seen as a second entry point of the app. The screenshot below shows an example of data that was generated with the app, saved, and then loaded again.

Switch To Experiment Info

Clicking “SWITCH TO EXPERIMENT INFO” changes the Tab to Tab 4, where information about the experiment can be loaded.



Tab 4: Experiment Info

Upload Experiment Info or upload existing merged data and go to calibration

One can first either upload information about the experiment in the form of a .csv file or upload already merged data (intensity data merged with experiment information). An example of a table with information about the experiment is shown in the screenshot below.

This is a third optional starting point of the app. Here, one can also directly start with already merged data and proceed with the calibration analysis.

Select ID columns and merge datasets

For merging the intensity data with the information about the experiment, one has to specify the names of the columns on which the merge should take place. The default is “File”, as a “File” column is generated when the intensity data is computed with the app. By clicking “MERGE WITH INTENSITY DATA” the two datasets will be merged. An example of a table with merged data is shown in the screenshot below.
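
For illustration, the same merge can be reproduced outside the app with base R; the file names below are hypothetical and both tables are assumed to share a “File” column.

## Minimal sketch of the merge step; file names are hypothetical
intensity <- read.csv("intensityData.csv")
expInfo <- read.csv("experimentInfo.csv")
mergedData <- merge(intensity, expInfo, by = "File")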

Download data

The merged data can be downloaded as a .csv file by clicking “DOWNLOAD DATA”.

Restart with new data

We again provide a “DELETE DATA” button for restarting with new images, uploading new intensity data on Tab 3 or uploading an already existing merged dataset.

Prepare Calibration

Clicking “PREPARE CALIBRATION” changes the Tab to Tab 5, where the data can be further preprocessed and the calibration analysis can be started.



Tab 5: Calibration

Specify a folder for the analysis results

In the first step, a folder for the analysis results should be specified. If no folder is provided, a new folder “LFApp” will be generated in the home directory of the user.

Upload existing preprocessed data

Instead of generating the preprocessed data with our app, one can also upload preprocessed data and perform the calibration analysis on them. An example of uploaded preprocessed data that were generated with our app and saved as a .csv file is shown in the following screenshot.

Optional: average technical replicates

If the merged data includes technical replicates, these replicates can be averaged either by the arithmetic mean or the median. For this purpose, the name of the column that identifies the replicates is required. By default, we assume that this column is called “Sample”. In case there is more than one analyte/color per line/band, one should set “Number of analytes/colors per line” to the respective number. If this number is larger than 1, an additional field will be shown, where one should enter the name of the column including the analyte/color information.
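
A minimal sketch of this averaging step outside the app is given below; the column names “Sample”, “Mean” and “Median” are assumptions used for illustration.

## Minimal sketch of averaging technical replicates; the column names
## "Sample", "Mean" and "Median" are assumptions
averagedData <- aggregate(cbind(Mean, Median) ~ Sample, data = mergedData,
                          FUN = mean) ## or FUN = median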

Optional: reshape data from long to wide

If there is more than one analyte/color per line/band, one might want to combine this information in one calibration model. For such cases it is useful to reshape the data from long (analyte/color data in the rows) to wide (analyte/color data in the columns) format.
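
A minimal sketch of such a reshape with base R is shown below; the column names “Sample” and “Color” are assumptions used for illustration.

## Minimal sketch of reshaping from long to wide; the column names
## "Sample" and "Color" are assumptions
wideData <- reshape(mergedData, idvar = "Sample", timevar = "Color",
                    direction = "wide")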

Download data

By clicking “DOWNLOAD DATA”, the data used for the calibration can be downloaded as .csv file.

Calibration by linear, local polynomial or generalized additive model

In the final step of this Tab, one has to specify the calibration model. The model can be chosen via a checkbox (linear (lm), local polynomial (loess), generalized additive model (GAM)). A column with concentration values has to be selected. The concentration values can be logarithmized. In addition, one has to specify a response variable.

If only a subset of the data shall be used for the calibration analysis, a logical R expression can be specified in the field below “Optional: specify subset (logical R expression)”. This expression will be applied to select the respective subset.
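
For illustration, a corresponding linear calibration model could be fitted outside the app as sketched below; the column names “Concentration” and “Mean” as well as the subset expression are assumptions, and the app may parametrize the model differently.

## Minimal sketch of a linear calibration model; the column names and the
## subset expression are assumptions used for illustration
calData <- subset(mergedData, Line == 1)               ## optional subset
fit <- lm(log10(Concentration) ~ Mean, data = calData) ## logarithmized conc.
summary(fit)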

Run calibration analysis

By pressing “RUN CALIBRATION ANALYSIS” the RData-file “CalibrationData.RData” will be generated and saved in the specified analysis folder. Next, the R Markdown file “CalibrationAnalysis(model).Rmd”, where model = lm, loess, gam, will be copied to the same folder and rendered into the file “CalibrationAnalysis(model).html” using R package “rmarkdown” (Allaire et al. 2020). The results of the calibration analysis will be saved in the file “CalibrationResults.RData” and will be used to generate a brief overview of the results shown on Tab 6. All files remain in the folder selected for the analysis results; that is, one may use and modify these files for an extended analysis. In addition, the fitted values for the concentrations are added to the data used for the calibration. That is, one can also run several different calibration models, download the calibration data and use the data to compare the results of the different calibration models.
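
If the copied R Markdown file is modified for an extended analysis, the report can be re-rendered manually as sketched below; the file name is illustrative and should be replaced by the Rmd file found in the analysis folder.

## Minimal sketch of re-rendering the analysis report; the file name is
## illustrative, use the Rmd file copied to the analysis folder
rmarkdown::render("CalibrationAnalysis_lm.Rmd", output_format = "html_document")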



Tab 6: Results

This Tab gives a brief overview of the analysis results, as shown in the following screenshot.

Open analysis report

By clicking “Open” the full report of the calibration analysis will be opened in the standard browser. The first part of the report is visible in the following screenshot.



Tab 7: Quantification

The final tab of our application enables the quantification of intensity values with the help of a calibration model. You can upload an existing model from a previous calibration analysis and use it to predict the concentrations for the intensity values acquired beforehand by clicking “PREDICT”. Via the “DOWNLOAD DATA” button the data table can be saved as a .csv file containing the intensity values as well as the predicted concentrations.
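
For illustration, the prediction step corresponds to a call of predict as sketched below; it assumes that fit is a previously fitted calibration model (e.g., as in the sketch in Tab 5) and that newData contains the new intensity values with the same column names used for the calibration.

## Minimal sketch of the quantification step; fit and newData are assumed
## to match the calibration sketch above
predictedConc <- predict(fit, newdata = newData)
## note: if the concentration was logarithmized, the predictions are on
## the log scale and need to be back-transformed, e.g. via 10^predictedConc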



Required R packages

Our app requires the following R packages to work properly:



References

Allaire, J., Y. Xie, J. McPherson, J. Luraschi, K. Ushey, A. Atkins, H. Wickham, J. Cheng, W. Chang, and R. Iannone. 2020. R Markdown: Dynamic Documents for R. R package rmarkdown version 2.6.4. Comprehensive R Archive Network (CRAN). https://rmarkdown.rstudio.com.
Attali, Dean. 2020. Shinyjs: Easily Improve the User Experience of Your Shiny Apps in Seconds. https://CRAN.R-project.org/package=shinyjs.
Chang, W., J. Cheng, J. Allaire, Y. Xie, and J. McPherson. 2020. Web Application Framework for R. R package shiny version 1.5.0. Comprehensive R Archive Network (CRAN). https://CRAN.R-project.org/package=shiny.
Chang, Winston. 2018. Shinythemes: Themes for Shiny. https://CRAN.R-project.org/package=shinythemes.
Fu, A., A. Shin, and N. Matlof. 2017. Image Manipulation, with an Emphasis on Journaling. R package ShinyImage version 0.1.0. Comprehensive R Archive Network (CRAN). https://CRAN.R-project.org/package=ShinyImage.
Hester, Jim, and Hadley Wickham. 2020. Fs: Cross-Platform File System Operations Based on ’Libuv’. https://CRAN.R-project.org/package=fs.
Li, C. H., and C. K. Lee. 1993. “Minimum Cross Entropy Thresholding.” Pattern Recognition 26 (4): 617–25. https://doi.org/10.1016/0031-3203(93)90115-D.
Li, C. H., and P. K. S. Tam. 1998. “An Iterative Algorithm for Minimum Cross Entropy Thresholding.” Pattern Recognition Letters 19 (8): 771–76. https://doi.org/10.1016/S0167-8655(98)00057-9.
Liu, Dongju, and Jian Yu. 2009. “Otsu Method and K-means.” In 2009 Ninth International Conference on Hybrid Intelligent Systems, edited by Jeng-Shyang Pan, 344–49. Piscataway, NJ: IEEE. https://doi.org/10.1109/HIS.2009.74.
Otsu, Nobuyuki. 1979. “A Threshold Selection Method from Gray-Level Histograms.” IEEE Transactions on Systems, Man, and Cybernetics 9 (1): 62–66. https://doi.org/10.1109/TSMC.1979.4310076.
Pau, Grégoire, Florian Fuchs, Oleg Sklyar, Michael Boutros, and Wolfgang Huber. 2010. “EBImage – an R Package for Image Processing with Applications to Cellular Phenotypes.” Bioinformatics 26 (7): 979–81. https://doi.org/10.1093/bioinformatics/btq046.
Pedersen, Thomas Lin, Vincent Nijs, Thomas Schaffner, and Eric Nantz. 2020. ShinyFiles: A Server-Side File System Viewer for Shiny. https://CRAN.R-project.org/package=shinyFiles.
R Core Team. 2021. R: A Language and Environment for Statistical Computing. Vienna, Austria. https://www.R-project.org.
Wickham, Hadley. 2016. ggplot2: Elegant Graphics for Data Analysis. Use R! Cham: Springer. https://doi.org/10.1007/978-3-319-24277-4.
Wood, S. N. 2017. Generalized Additive Models: An Introduction with R. 2nd ed. Chapman & Hall/CRC.
Xie, Yihui, Joe Cheng, and Xianying Tan. 2020. DT: A Wrapper of the JavaScript Library ‘DataTables’. https://CRAN.R-project.org/package=DT.
Zack, G. W., W. E. Rogers, and S. A. Latt. 1977. “Automatic Measurement of Sister Chromatid Exchange Frequency.” The Journal of Histochemistry and Cytochemistry 25 (7): 741–53. https://doi.org/10.1177/25.7.70454.
