The purpose of this vignette is to learn how to estimate the trophic position of a species using stable isotope data (\(\delta^{13}C\) and \(\delta^{15}N\)). We can estimate trophic position using a two source model based on equations from Post (2002), Vander Zanden and Vadeboncoeur (2002), and Heuvel et al. (2024).
The equations for a two source model consist of the following:
\[ \alpha = \frac{(\delta^{13}C_c - \delta^{13}C_{b2})}{(\delta^{13}C_{b1}-\delta^{13}C_{b2})} \]
where \(\delta^{13}C_c\) is the \(\delta^{13}C\) value of the consumer, \(\delta^{13}C_{b1}\) is the mean \(\delta^{13}C\) value of the first baseline, and \(\delta^{13}C_{b2}\) is the mean \(\delta^{13}C\) value of the second baseline. For aquatic ecosystems, \(\delta^{13}C_{b1}\) is often from a benthic source and \(\delta^{13}C_{b2}\) from a pelagic source. Lastly, \(\alpha\) is the proportion of carbon that comes from the first source and should be bounded by 0 and 1. We will correct (i.e., scale) these values using an equation in Heuvel et al. (2024).
\[ \alpha_r = \frac{(\alpha - \alpha_{min})}{(\alpha_{max}-\alpha_{min})} \]
where \(\alpha_r\) is the corrected (i.e., scaled) \(\alpha\), \(\alpha\) is derived from above, \(\alpha_{min}\) is the minimum value of \(\alpha\) calculated above, and \(\alpha_{max}\) is the maximum value of \(\alpha\) calculated above. \(\alpha_r\) is then used in the trophic position equation below.
\[ \text{Trophic Position} = \lambda + \frac{\delta^{15}N_c - [(\delta^{15}N_{b1} \times \alpha_r) + (\delta^{15}N_{b2} \times (1 - \alpha_r))]}{\Delta N} \]
where \(\lambda\) is the trophic position of the baseline (e.g., 2), \(\delta^{15}N_c\) is the \(\delta^{15}N\) of the consumer, \(\delta^{15}N_{b1}\) is the mean \(\delta^{15}N\) of the first baseline (e.g., benthic), \(\delta^{15}N_{b2}\) is the mean \(\delta^{15}N\) of the second baseline (e.g., pelagic), \(\alpha_r\) is estimated above, and \(\Delta N\) is the trophic enrichment factor (e.g., 3.4).
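To make the arithmetic concrete, here is a minimal sketch in R that applies the three equations above to a single consumer; all values are illustrative and not from the package data.

# illustrative values, not from the package data
d13c_c  <- -23.0 # consumer d13C
d13c_b1 <- -26.0 # mean benthic baseline d13C
d13c_b2 <- -22.0 # mean pelagic baseline d13C
d15n_c  <- 16.0  # consumer d15N
n1      <- 8.0   # mean benthic baseline d15N
n2      <- 7.5   # mean pelagic baseline d15N

# proportion of carbon from the benthic source
alpha <- (d13c_c - d13c_b2) / (d13c_b1 - d13c_b2) # 0.25

# scale alpha using hypothetical min and max from the full data set
alpha_min <- 0.1
alpha_max <- 0.9
alpha_r <- (alpha - alpha_min) / (alpha_max - alpha_min) # 0.1875

# trophic position with lambda = 2 and dn = 3.4
tp <- 2 + (d15n_c - (n1 * alpha_r + n2 * (1 - alpha_r))) / 3.4
tp # ~4.47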
There is a variation of this model that uses a mixing model to allow a different trophic position for each baseline (\(\lambda\)). This variation replaces \(\lambda\) with the following:
\[ \lambda = (\lambda_1 \times \alpha_r) + (\lambda_2 \times (1 - \alpha_r)) \]
where \(\lambda_1\) is the trophic level of the first baseline (e.g., 2), \(\lambda_2\) is the trophic level of the second baseline (e.g., 2.5), and \(\alpha_r\) is from above. Only use this replacement equation for \(\lambda\) if you have baselines from two different trophic levels.
To use this model within a Bayesian framework, we need to calculate \(\alpha\), \(\alpha_{min}\), and \(\alpha_{max}\), which can be done using add_alpha(). We then use the rearranged equation for \(\alpha_r\):
\[ \alpha = \alpha_r \times (\alpha_{max} - \alpha_{min}) + \alpha_{min} \]
Estimates of \(\alpha_r\) are then used in the rearranged equation for trophic position below.
\[ \delta^{15}N_c = \Delta N \times (\text{Trophic Position} - \lambda) + \delta^{15}N_{b1} \times \alpha_r + \delta^{15}N_{b2} \times (1 - \alpha_r) \]
The function two_source_model() uses both of these rearranged equations. If using baselines from two different trophic levels, you can set the argument lambda to 2 to replace \(\lambda\) (l1) with the mixing model for \(\lambda\) above.
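As a sketch, assuming lambda is the argument name as described above, that call would look like:

# replace l1 with the mixing model for lambda; requires both l1 and l2 columns
two_source_model(lambda = 2)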
First we need to organize the data prior to running the model. To do this work we will use {dplyr} and {tidyr}, but we could also use {data.table}.
When running the model we will use {trps} and {brms}.
Once we have run the model we will use {bayesplot} to assess models and then extract posterior draws using {tidybayes}. Posterior distributions will be plotted using {ggplot2} and {ggdist} with colours provided by {viridis}.
First we load all the packages needed to carry out the analysis.
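A minimal setup chunk, loading the packages named above:

library(dplyr)     # data organization
library(tidyr)     # data tidying
library(trps)      # trophic position models
library(brms)      # Bayesian modelling
library(bayesplot) # model assessment
library(tidybayes) # extracting posterior draws
library(ggplot2)   # plotting
library(ggdist)    # plotting posterior distributions
library(viridis)   # colour palettes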
In {trps} we have several data sets that include stable isotope data (\(\delta^{13}C\) and \(\delta^{15}N\)) for a consumer, lake trout (Salvelinus namaycush); a benthic baseline, amphipods; and a pelagic baseline, dreissenids, for an ecoregion in Lake Ontario.
We check out each data set with the first being the consumer.
consumer_iso
#> # A tibble: 30 × 4
#> common_name ecoregion d13c d15n
#> <fct> <fct> <dbl> <dbl>
#> 1 Lake Trout Embayment -22.9 15.9
#> 2 Lake Trout Embayment -22.5 16.2
#> 3 Lake Trout Embayment -22.8 17.0
#> 4 Lake Trout Embayment -22.3 16.6
#> 5 Lake Trout Embayment -22.5 16.6
#> 6 Lake Trout Embayment -22.3 16.6
#> 7 Lake Trout Embayment -22.3 16.6
#> 8 Lake Trout Embayment -22.5 16.2
#> 9 Lake Trout Embayment -22.9 16.4
#> 10 Lake Trout Embayment -22.3 16.3
#> # ℹ 20 more rows
We can see that this data set contains the common_name of the consumer, the ecoregion samples were collected from, and \(\delta^{13}C\) (d13c) and \(\delta^{15}N\) (d15n).
Next we check out the benthic baseline data set.
baseline_1_iso
#> # A tibble: 14 × 5
#> common_name ecoregion d13c_b1 d15n_b1 id
#> <fct> <fct> <dbl> <dbl> <int>
#> 1 Amphipoda Embayment -26.2 8.44 1
#> 2 Amphipoda Embayment -26.6 8.77 2
#> 3 Amphipoda Embayment -26.0 8.05 3
#> 4 Amphipoda Embayment -22.1 13.6 4
#> 5 Amphipoda Embayment -24.3 6.99 5
#> 6 Amphipoda Embayment -22.1 7.95 6
#> 7 Amphipoda Embayment -24.7 7.37 7
#> 8 Amphipoda Embayment -26.6 6.93 8
#> 9 Amphipoda Embayment -24.6 6.97 9
#> 10 Amphipoda Embayment -22.1 7.95 10
#> 11 Amphipoda Embayment -24.7 7.37 11
#> 12 Amphipoda Embayment -22.1 7.95 12
#> 13 Amphipoda Embayment -24.7 7.37 13
#> 14 Amphipoda Embayment -26.9 10.2 14
We can see that this data set contains the common_name of the baseline, the ecoregion samples were collected from, and \(\delta^{13}C\) (d13c_b1) and \(\delta^{15}N\) (d15n_b1).
Next we check out the pelagic baseline data set.
baseline_2_iso
#> # A tibble: 12 × 4
#> common_name ecoregion d13c_b2 d15n_b2
#> <fct> <fct> <dbl> <dbl>
#> 1 Dreissenids Embayment -23.4 7.81
#> 2 Dreissenids Embayment -22.9 7.61
#> 3 Dreissenids Embayment -22.7 7.32
#> 4 Dreissenids Embayment -23.4 7.81
#> 5 Dreissenids Embayment -22.9 7.61
#> 6 Dreissenids Embayment -22.7 7.32
#> 7 Dreissenids Embayment -23.4 7.81
#> 8 Dreissenids Embayment -22.9 7.61
#> 9 Dreissenids Embayment -22.7 7.32
#> 10 Dreissenids Embayment -26.9 10.2
#> 11 Dreissenids Embayment -23.5 7.68
#> 12 Dreissenids Embayment -23.7 7.64
We can see that this data set contains the common_name of the baseline, the ecoregion samples were collected from, and \(\delta^{13}C\) (d13c_b2) and \(\delta^{15}N\) (d15n_b2).
Now that we understand the data, we need to combine all three data sets to estimate trophic position for our consumer.
To do this we first need to make an id column in each data set, which will allow us to join them together. We first arrange() the data by ecoregion and common_name. Next we group_by() the same variables, and add id for each grouping using row_number(). Always ungroup() the data.frame after using group_by(). Lastly, we use dplyr::select() to rearrange the order of the columns.
Let’s first add id to the consumer_iso data frame.
con_tsar <- consumer_iso %>%
arrange(ecoregion, common_name) %>%
group_by(ecoregion, common_name) %>%
mutate(
id = row_number()
) %>%
ungroup() %>%
dplyr::select(id, common_name:d15n)
You will notice that I have named this new object con_tsar; this is because we are modifying consumer_iso and should make a new object. I have continued with the same naming convention for objects below.
Next let’s add id to the baseline_1_iso data frame. For joining purposes we are going to drop common_name from this data frame.
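A sketch of this step, following the same pattern as con_tsar; the object name b1_tsar matches the name used in the join below.

b1_tsar <- baseline_1_iso %>%
  arrange(ecoregion, common_name) %>%
  group_by(ecoregion, common_name) %>%
  mutate(
    id = row_number()
  ) %>%
  ungroup() %>%
  dplyr::select(id, ecoregion, d13c_b1, d15n_b1) # drop common_name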
Next let’s add id to the baseline_2_iso data frame. For joining purposes we are going to drop common_name from this data frame.
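Again, a sketch following the same pattern; the object name b2_tsar matches the name used in the join below.

b2_tsar <- baseline_2_iso %>%
  arrange(ecoregion, common_name) %>%
  group_by(ecoregion, common_name) %>%
  mutate(
    id = row_number()
  ) %>%
  ungroup() %>%
  dplyr::select(id, ecoregion, d13c_b2, d15n_b2) # drop common_name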
Now that we have the consumer and baseline data sets in a consistent format, we can join them by "id" and "ecoregion" using left_join() from {dplyr}.
combined_iso_tsar <- con_tsar %>%
left_join(b1_tsar, by = c("id", "ecoregion")) %>%
left_join(b2_tsar, by = c("id", "ecoregion"))
We can see that we have successfully combined our consumer and baseline data. We need to do one last thing prior to analyzing the data: calculate the mean \(\delta^{13}C\) (c1 and c2) and \(\delta^{15}N\) (n1 and n2) for the baselines and add the constant \(\lambda\) (l1) to our data frame. We do this by using group_by() to group the data by our two grouping variables, then using mutate() and mean() to calculate the mean values.
Important note: to run the model successfully, columns need to be named d13c, c1, c2, d15n, n1, n2, and l1, with l2 needed if using two \(\lambda\)s.
We will also use add_alpha() to add \(\alpha\), \(\alpha_{min}\), and \(\alpha_{max}\) to the data frame.
combined_iso_tsar_1 <- combined_iso_tsar %>%
group_by(ecoregion, common_name) %>%
mutate(
c1 = mean(d13c_b1, na.rm = TRUE),
n1 = mean(d15n_b1, na.rm = TRUE),
c2 = mean(d13c_b2, na.rm = TRUE),
n2 = mean(d15n_b2, na.rm = TRUE),
l1 = 2
) %>%
ungroup() %>%
add_alpha()
Let’s view our combined data.
combined_iso_tsar_1
#> # A tibble: 30 × 17
#> id common_name ecoregion d13c d15n d13c_b1 d15n_b1 d13c_b2 d15n_b2 c1
#> <int> <fct> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 Lake Trout Embayment -22.9 15.9 -26.2 8.44 -23.4 7.81 -24.6
#> 2 2 Lake Trout Embayment -22.5 16.2 -26.6 8.77 -22.9 7.61 -24.6
#> 3 3 Lake Trout Embayment -22.8 17.0 -26.0 8.05 -22.7 7.32 -24.6
#> 4 4 Lake Trout Embayment -22.3 16.6 -22.1 13.6 -23.4 7.81 -24.6
#> 5 5 Lake Trout Embayment -22.5 16.6 -24.3 6.99 -22.9 7.61 -24.6
#> 6 6 Lake Trout Embayment -22.3 16.6 -22.1 7.95 -22.7 7.32 -24.6
#> 7 7 Lake Trout Embayment -22.3 16.6 -24.7 7.37 -23.4 7.81 -24.6
#> 8 8 Lake Trout Embayment -22.5 16.2 -26.6 6.93 -22.9 7.61 -24.6
#> 9 9 Lake Trout Embayment -22.9 16.4 -24.6 6.97 -22.7 7.32 -24.6
#> 10 10 Lake Trout Embayment -22.3 16.3 -22.1 7.95 -26.9 10.2 -24.6
#> # ℹ 20 more rows
#> # ℹ 7 more variables: n1 <dbl>, c2 <dbl>, n2 <dbl>, l1 <dbl>, alpha <dbl>,
#> # min_alpha <dbl>, max_alpha <dbl>
It is now ready to be analyzed!
We can now estimate trophic position for lake trout in an ecoregion of Lake Ontario.
There are a few things to know about running a Bayesian analysis; I suggest reading these resources:
Bayesian analyses rely on supplying uninformed or informed prior distributions for each parameter (coefficient; predictor) in the model. The default priors for a two source model are the following: \(\alpha_r\) is bounded by 0 and 1 and assumes an uninformed beta distribution (\(\alpha = 1\) and \(\beta = 1\)); \(\Delta N\) assumes a normal distribution (dn; \(\mu = 3.4\); \(\sigma = 0.25\)); trophic position assumes a uniform distribution (lower bound = 2 and upper bound = 10); and \(\sigma\) assumes a uniform distribution (lower bound = 0 and upper bound = 10). If informed priors are desired for \(\delta^{13}C_{b1}\) and \(\delta^{13}C_{b2}\) (c1 and c2; \(\mu = -21\) and \(-26\); \(\sigma = 1\)) and for \(\delta^{15}N_{b1}\) and \(\delta^{15}N_{b2}\) (n1 and n2; \(\mu = 8\) and \(9.5\); \(\sigma = 1\)), we can set the argument bp to TRUE in all two_source_ functions.
You can change these default priors using two_source_priors_params(); however, I would suggest becoming familiar with Bayesian analyses, your study species, and your system prior to adjusting these values.
It is important to always run the model with at least 2 chains. If the model does not converge, you can try adjusting the following:
The number of samples that are burned-in (discarded; in brm() this is controlled by the argument warmup).
The number of iterative samples retained (in brm() this is controlled by the argument iter).
The number of samples drawn (in brm()
this is
controlled by the argument thin
).
The adapt_delta value, using control = list(adapt_delta = 0.95).
When assessing the model we want \(\hat R\) to be 1 or within 0.05 of 1, which indicates that the variance among and within chains is equal (see the {rstan} documentation on \(\hat R\)); a high value for effective sample size (ESS); trace plots that look “grassy” or “caterpillar like”; and posterior distributions that look relatively normal.
We will use functions from {trps} that drop into a {brms} model. These functions are two_source_model(), which provides brm() the formula structure needed to run a two source model. Next, brm() needs the structure of the priors, which is supplied to the prior argument using two_source_priors(). Lastly, values for these priors are supplied through the stanvars argument using two_source_priors_params(). You can adjust the mean (\(\mu\)), variance (\(\sigma\)), or upper and lower bounds (lb and ub) for each prior of the model using two_source_priors_params(); however, only adjust priors if you have a good grasp of Bayesian frameworks and your study system and species.
Let’s run the model!
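A sketch of this call, assembled from the pieces described above; the chains, iter, and warmup values match the draws reported in the summary below.

model_output_tsar <- brm(
  formula = two_source_model(),           # formula structure for the two source model
  data = combined_iso_tsar_1,             # combined consumer and baseline data
  prior = two_source_priors(),            # prior structure
  stanvars = two_source_priors_params(),  # prior values
  chains = 2,
  iter = 4000,
  warmup = 1000
)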
Let’s view the summary of the model.
model_output_tsar
#> Family: MV(gaussian, gaussian)
#> Links: mu = identity; sigma = identity
#> mu = identity; sigma = identity
#> Formula: alpha ~ ar * (max_alpha - min_alpha) + min_alpha
#> ar ~ 1
#> d15n ~ dn * (tp - l1) + n1 * ar + n2 * (1 - ar)
#> ar ~ 1
#> tp ~ 1
#> dn ~ 1
#> Data: combined_iso_tsar_1 (Number of observations: 30)
#> Draws: 2 chains, each with iter = 4000; warmup = 1000; thin = 1;
#> total post-warmup draws = 6000
#>
#> Regression Coefficients:
#> Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
#> alpha_ar_Intercept 0.56 0.04 0.47 0.65 1.00 4098 3034
#> d15n_ar_Intercept 0.50 0.29 0.02 0.98 1.00 4809 3013
#> d15n_tp_Intercept 4.71 0.47 4.03 5.77 1.00 2784 2245
#> d15n_dn_Intercept 3.32 0.51 2.33 4.29 1.00 2802 2261
#>
#> Further Distributional Parameters:
#> Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
#> sigma_alpha 0.59 0.08 0.45 0.78 1.00 3936 3498
#> sigma_d15n 0.62 0.09 0.48 0.82 1.00 4143 3482
#>
#> Residual Correlations:
#> Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
#> rescor(alpha,d15n) -0.13 0.18 -0.45 0.24 1.00 4277 3790
#>
#> Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
#> and Tail_ESS are effective sample size measures, and Rhat is the potential
#> scale reduction factor on split chains (at convergence, Rhat = 1).
We can see that \(\hat R\) is 1, meaning that the variance among and within chains is equal (see the {rstan} documentation on \(\hat R\)), and that ESS is quite large. Overall, this means the model is converging and fitting appropriately.
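Beyond \(\hat R\) and ESS, we can inspect the trace plots mentioned earlier; a minimal way is the plot() method from {brms}:

# trace and density plots for each parameter; traces should look "grassy"
plot(model_output_tsar)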
We can check how well the model is predicting the \(\alpha\) of the consumer using pp_check() from {bayesplot}.
pp_check(model_output_tsar, resp = "alpha")
#> Using 10 posterior draws for ppc type 'dens_overlay' by default.
We can see that the posterior draws (\(y_{rep}\); light lines) are effectively modeling \(\alpha\) (\(y\); dark line).
Next we can check how well the model is predicting the \(\delta^{15}N\) of the consumer using pp_check() from {bayesplot}.
pp_check(model_output_tsar, resp = "d15n")
#> Using 10 posterior draws for ppc type 'dens_overlay' by default.
We can see that the posterior draws (\(y_{rep}\); light lines) are effectively modeling the \(\delta^{15}N\) of the consumer (\(y\); dark line).
Let’s again look at the summary output from the model.
model_output_tsar
#> Family: MV(gaussian, gaussian)
#> Links: mu = identity; sigma = identity
#> mu = identity; sigma = identity
#> Formula: alpha ~ ar * (max_alpha - min_alpha) + min_alpha
#> ar ~ 1
#> d15n ~ dn * (tp - l1) + n1 * ar + n2 * (1 - ar)
#> ar ~ 1
#> tp ~ 1
#> dn ~ 1
#> Data: combined_iso_tsar_1 (Number of observations: 30)
#> Draws: 2 chains, each with iter = 4000; warmup = 1000; thin = 1;
#> total post-warmup draws = 6000
#>
#> Regression Coefficients:
#> Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
#> alpha_ar_Intercept 0.56 0.04 0.47 0.65 1.00 4098 3034
#> d15n_ar_Intercept 0.50 0.29 0.02 0.98 1.00 4809 3013
#> d15n_tp_Intercept 4.71 0.47 4.03 5.77 1.00 2784 2245
#> d15n_dn_Intercept 3.32 0.51 2.33 4.29 1.00 2802 2261
#>
#> Further Distributional Parameters:
#> Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
#> sigma_alpha 0.59 0.08 0.45 0.78 1.00 3936 3498
#> sigma_d15n 0.62 0.09 0.48 0.82 1.00 4143 3482
#>
#> Residual Correlations:
#> Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
#> rescor(alpha,d15n) -0.13 0.18 -0.45 0.24 1.00 4277 3790
#>
#> Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
#> and Tail_ESS are effective sample size measures, and Rhat is the potential
#> scale reduction factor on split chains (at convergence, Rhat = 1).
We can see that \(\alpha_r\) is estimated to be 0.56 with a l-95% CI of 0.47 and a u-95% CI of 0.65. These values make sense, as they indicate that lake trout use both benthic and pelagic resources, which we know from previous work to be true.
Moving down to the trophic position model, we can see \(\Delta N\) is estimated to be 3.32 with a l-95% CI of 2.33 and a u-95% CI of 4.29. If we move down to trophic position (tp), we see trophic position is estimated to be 4.71 with a l-95% CI of 4.03 and a u-95% CI of 5.77.
We use functions from {tidybayes} to do this work. First we look at the names of the variables we want to extract using get_variables().
get_variables(model_output_tsar)
#> [1] "b_alpha_ar_Intercept" "b_d15n_ar_Intercept" "b_d15n_tp_Intercept"
#> [4] "b_d15n_dn_Intercept" "sigma_alpha" "sigma_d15n"
#> [7] "rescor__alpha__d15n" "lprior" "lp__"
#> [10] "accept_stat__" "stepsize__" "treedepth__"
#> [13] "n_leapfrog__" "divergent__" "energy__"
You will notice that "b_alpha_ar_Intercept"
and
"b_d15n_tp_Intercept"
are the names of the variable that we
are wanting to extract. We extract posterior draws using
gather_draws()
, and rename
"b_alpha_ar_Intercept"
to tp
and
"b_d13c_alpha_Intercept"
to ar
.
post_draws <- model_output_tsar %>%
gather_draws(b_alpha_ar_Intercept, b_d15n_tp_Intercept) %>%
mutate(
ecoregion = "Embayment",
common_name = "Lake Trout",
.variable = case_when(
.variable %in% "b_d15n_tp_Intercept" ~ "tp",
.variable %in% "b_alpha_ar_Intercept" ~ "ar"
)
) %>%
dplyr::select(common_name, ecoregion, .chain:.value)
Let’s view post_draws.
post_draws
#> # A tibble: 12,000 × 7
#> # Groups: .variable [2]
#> common_name ecoregion .chain .iteration .draw .variable .value
#> <chr> <chr> <int> <int> <int> <chr> <dbl>
#> 1 Lake Trout Embayment 1 1 1 ar 0.573
#> 2 Lake Trout Embayment 1 2 2 ar 0.524
#> 3 Lake Trout Embayment 1 3 3 ar 0.547
#> 4 Lake Trout Embayment 1 4 4 ar 0.588
#> 5 Lake Trout Embayment 1 5 5 ar 0.593
#> 6 Lake Trout Embayment 1 6 6 ar 0.559
#> 7 Lake Trout Embayment 1 7 7 ar 0.585
#> 8 Lake Trout Embayment 1 8 8 ar 0.574
#> 9 Lake Trout Embayment 1 9 9 ar 0.542
#> 10 Lake Trout Embayment 1 10 10 ar 0.557
#> # ℹ 11,990 more rows
We can see that this consists of seven variables:

ecoregion
common_name
.chain
.iteration (the sample number after burn-in)
.draw (the sample number from iter)
.variable (this will contain different variables depending on what is supplied to gather_draws())
.value (the estimated value)

Considering we are likely using this information for a paper or presentation, it is nice to be able to report the median and credible intervals (e.g., equal-tailed intervals; ETI). We can extract and export these values using gather_draws() and median_qi() from {tidybayes}.
We rename "b_d15n_tp_Intercept" to tp and "b_alpha_ar_Intercept" to ar, add the grouping columns, and round all numeric columns to two decimal places using mutate_if().
medians_ci <- model_output_tsar %>%
gather_draws(b_alpha_ar_Intercept,
b_d15n_tp_Intercept) %>%
median_qi() %>%
mutate(
ecoregion = "Embayment",
common_name = "Lake Trout",
.variable = case_when(
.variable %in% "b_d15n_tp_Intercept" ~ "tp",
.variable %in% "b_alpha_ar_Intercept" ~ "ar"
)
) %>%
mutate_if(is.numeric, round, digits = 2)
Let’s view the output.
medians_ci
#> # A tibble: 2 × 9
#> .variable .value .lower .upper .width .point .interval ecoregion common_name
#> <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr> <chr> <chr>
#> 1 ar 0.57 0.47 0.65 0.95 median qi Embayment Lake Trout
#> 2 tp 4.63 4.03 5.77 0.95 median qi Embayment Lake Trout
I like to use {openxlsx} to export these values into a table that I can use for presentations and papers. For this vignette I am not going to demonstrate how to do this, but please check out {openxlsx}.
Now that we have our posterior draws extracted we can plot them. To analyze a single species or group, I like using density plots.
For this example we first plot the density of the posterior draws using geom_density().
ggplot(data = post_draws, aes(x = .value)) +
  geom_density() +
  facet_wrap(~ .variable, scales = "free") +
  theme_bw(base_size = 15) +
  theme(
    panel.grid = element_blank(),
    strip.background = element_blank()
  ) +
  labs(
    x = "P(Estimate | X)",
    y = "Density"
  )
Next we plot it as a point interval plot using stat_pointinterval().
ggplot(data = post_draws, aes(x = common_name, y = .value)) +
  stat_pointinterval() +
  facet_wrap(~ .variable, scales = "free") +
  theme_bw(base_size = 15) +
  theme(
    panel.grid = element_blank(),
    strip.background = element_blank()
  ) +
  labs(
    x = "Species",
    y = "P(Estimate | X)"
  )
Congratulations, we have estimated the trophic position for lake trout! Again, you will notice that the estimates of \(\alpha_r\) make sense and that this model is estimating \(\alpha\) on the correct scale.
You can produce estimates of trophic position for more than one group (e.g., comparing trophic position among species or, in this case, among ecoregions) using the iterative processes demonstrated in estimate trophic position - one source - multiple groups.