Magnitude abundance fluctuations in minimal ecological models


Background

Continuing with the theme, I wrote some derivations a while back explaining how the magnitude of abundance fluctuations scales with mean abundance for the SLM. Jacopo Grilli recently pointed out to me that the same approach also applies to a competing model that reproduces the same stationary behavior. Below is a rewrite of, and addition to, that derivation.

The forms of the SDEs below have been altered from the original notes to maintain a consistent parameterization (Aguilar et al., 2025). Comparisons of this kind, between SDEs that capture different mesoscopic dynamics (e.g., different noise sources) while producing the same stationary behavior, are common.

Stochastic Logistic Model/Environmental Noise

We are interested in the following relationship of abundance ($x(t)$),

\[\left\langle | x(t + \delta t) - x(t)| \right \rangle \propto \left\langle x \right\rangle^{\gamma}\]

for some model-dependent $\gamma$. This relationship for different minimal models of ecological dynamics was obtained via simulations a few years back (Descheemaeker and De Buyl, 2020), but I could not find a derivation.

We start with two minimal models of ecology. First, the Stochastic Logistic Model (SLM) where deterministic dynamics are logistic and the noise is environmental (i.e., linear multiplicative):

\[\frac{dx}{dt} = k x(\mu - x) + D x \cdot \eta(t)\]

where $\eta(t)$ is Gaussian white noise with $\left \langle \eta(t) \right \rangle = 0$ and $\left \langle \eta(t) \eta(t') \right \rangle = \delta (t - t')$.

There is no straightforward solution to this SDE (Otunuga, 2021).
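Even without a closed-form time-dependent solution, the SLM is easy to integrate numerically. A minimal Euler–Maruyama sketch (parameter values are illustrative, and numpy is assumed) checks the stationary mean $\left\langle x \right\rangle = \mu - \frac{D^{2}}{2k}$ used below:

```python
import numpy as np

# Euler–Maruyama integration of the SLM:
#   dx = k*x*(mu - x)*dt + D*x*dW   (Ito convention)
# Parameter values are illustrative, not taken from any particular dataset.
rng = np.random.default_rng(0)
k, mu, D = 1.0, 10.0, 1.0
dt, n_steps, n_burn = 1e-3, 20_000, 5_000

x = np.full(500, mu)  # an ensemble of replicate trajectories
running = []
for step in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=x.size)
    x += k * x * (mu - x) * dt + D * x * dW
    x = np.clip(x, 1e-12, None)  # guard against rare negative excursions
    if step >= n_burn:
        running.append(x.mean())

x_bar = np.mean(running)
print(x_bar)  # should be close to mu - D**2/(2*k) = 9.5
```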

There are a number of ways to approach an SDE like this. It’s easy enough to look at the equation and guess the answer to our question, and we could also take the first-order behavior over a small interval of time. Instead, we’ll examine the limiting behavior. To do that, we first rescale the SDE by the stationary mean $\left <x \right > = \mu - \frac{D^{2}}{2k}$ and define $y \equiv \mathrm{ln} \left( \frac{x}{\left <x \right >} \right)$

Using Itô rules,

\[d(\mathrm{ln}\, x ) = \frac{dx}{x} - \frac{(dx)^{2}}{2x^{2}} \\ (dx)^{2} = D^{2}x^{2}dt\]

we plug in and get

\[\frac{dy}{dt} = k m \left ( 1 - e^{y} \right ) + D \eta(t) \approx - k m y + D \eta(t)\]

where we have defined $m \equiv \left <x \right >$. This is an Ornstein-Uhlenbeck process, with the following time-dependent moments

\[\left < y(\delta t) | y_{0} \right > = y_{0} e^{-km\delta t} \\ \sigma_{\delta t}^{2} = \frac{D^{2}}{2 k m} \left( 1 - e^{-2 km \delta t} \right)\]

where $y(0) \equiv y_{0}$

for the Gaussian propagator

\[P(y, \delta t | y_{0}) = \frac{1}{\sqrt{2 \pi \sigma_{\delta t}^{2}}} \cdot \mathrm{exp} \left [ - \frac{(y - \left < y(\delta t) | y_{0} \right > )^{2} }{2\sigma_{\delta t}^{2} } \right ]\]

with autocorrelation $\rho \equiv e^{-km \delta t}$. The propagator reduces to a zero-centered Gaussian as $\delta t \rightarrow \infty$, so we drop the time-dependent term in that limit.
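These OU moments can be verified by integrating the linearized SDE $dy = -kmy\,dt + D\,dW$ directly. A sketch with illustrative parameter values (numpy assumed):

```python
import math
import numpy as np

# Euler–Maruyama check of the OU moments
#   <y(dt) | y0> = y0 * exp(-k*m*dt)
#   var          = D**2 / (2*k*m) * (1 - exp(-2*k*m*dt))
# for dy = -k*m*y dt + D dW. Parameter values are illustrative.
rng = np.random.default_rng(1)
k, m, D = 1.0, 10.0, 1.0
y0, T = 0.5, 0.3
n_steps, n_paths = 600, 100_000

h = T / n_steps
y = np.full(n_paths, y0)  # all paths started at y0
for _ in range(n_steps):
    y += -k * m * y * h + D * math.sqrt(h) * rng.normal(0.0, 1.0, n_paths)

print(y.mean(), y0 * math.exp(-k * m * T))                           # conditional mean
print(y.var(), D**2 / (2 * k * m) * (1 - math.exp(-2 * k * m * T)))  # conditional variance
```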

We are going to use this result to derive $\left\langle \mid x(t + \delta t) - x(t) \mid \right\rangle$.

Define $y_{1} \equiv y(t)$ and $y_{2} \equiv y(t + \delta t)$.

\[\left\langle | x(t + \delta t) - x(t)| \right\rangle = \left\langle x\right\rangle \left\langle | e^{y_{2}} - e^{y_{1}} | \right\rangle\]

For a stationary OU process, $(y_{1}, y_{2})$ is a bivariate Gaussian random variable with the moments derived above. The two variances are equal, since the stationary variance does not depend on $y_{0}$, which means the covariance is just the autocorrelation multiplied by the variance. From there we calculate the conditional distribution, which is also Gaussian: the conditional mean is $(\sigma^{2} \rho) \frac{1}{\sigma^{2} } y = \rho y$ and the conditional variance is $\sigma^{2} - (\sigma^{2} \rho )\frac{1}{\sigma^{2} } (\sigma^{2} \rho) = \sigma^{2} (1 - \rho^{2})$. Conditioning on $y_{1} = y$, we obtain

\[|e^{y_{2}} - e^{y_{1}} | = e^{y} | e^{(y_{2} - y_{1})} -1 |\]

We are going to define the difference between our two OU variables as $Z \equiv y_{2} - y_{1} \mid (y_{1}=y) \sim \mathcal{N}( (\rho-1)y, \sigma^{2}(1 - \rho^{2}) )$. From there we get

\[\left\langle | e^{y_{2}} - e^{y_{1}} | \right\rangle = \left\langle e^{y} \left\langle |e^{Z} - 1 | \mid y \right\rangle \right\rangle_{y}\]

So we need to first compute the inner expectation and then average the product over $y$. We see that the difference changes sign at $Z=0$, so we can calculate the expectation as the sum of the truncated expectations. The notation is getting annoying, so let’s define $a \equiv (\rho-1)y$ and $b \equiv \sigma^{2}(1 - \rho^{2})$. We can now compute the truncated (unnormalized) expectations $\left\langle e^{Z} -1 \mid Z \geq 0 \right\rangle$ and $\left\langle 1 - e^{Z} \mid Z \leq 0 \right\rangle$, where $Z \sim \mathcal{N}(a,b)$.

\[\begin{aligned}[t] \int_{0}^{\infty} e^{z} P(z|a,b)\, dz &= \frac{1}{\sqrt{2 \pi b}} \int_{0}^{\infty} \mathrm{exp} \left ( z - \frac{(z-a)^{2}}{2b} \right ) dz \\ &= e^{a + b/2} \frac{1}{\sqrt{2 \pi b}} \int_{0}^{\infty} \mathrm{exp}\left ( - \frac{(z - (a+b))^{2} }{2b} \right ) dz \\ &= e^{a + b/2} \Phi\left ( \frac{ a + b}{\sqrt{b} } \right ) \end{aligned}\]

where we recognize a Gaussian integral: completing the square yields a shifted Gaussian distribution, whose integral is a CDF. Using this result and the symmetry properties of the CDF, we can derive our two terms

\[\begin{aligned}[t] \left\langle (e^{Z} -1) \mid Z \geq 0 \right\rangle &= \left\langle e^{Z} \mid Z \geq 0 \right\rangle - P(Z \geq 0) \\ &= e^{a + b/2} \Phi\left ( \frac{ a + b}{\sqrt{b} } \right ) - \Phi\left ( \frac{ a }{\sqrt{b} } \right ) \end{aligned}\] \[\begin{aligned}[t] \left\langle (1 - e^{Z}) \mid Z < 0 \right\rangle &= P(Z < 0) - \left\langle e^{Z} \mid Z < 0 \right\rangle \\ &= \Phi\left ( - \frac{ a }{\sqrt{b} } \right ) - e^{a + b/2} \Phi\left (- \frac{ a + b}{\sqrt{b} } \right ) \end{aligned}\]
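The truncated expectation can be spot-checked by Monte Carlo. A sketch with arbitrary test values for $a$ and $b$ (numpy assumed):

```python
import math
import numpy as np

# Monte Carlo check of the truncated expectation
#   <(e^Z - 1); Z >= 0> = e^{a + b/2} * Phi((a+b)/sqrt(b)) - Phi(a/sqrt(b))
# for Z ~ N(a, b). a and b are arbitrary test values.
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

a, b = -0.3, 0.4
closed = math.exp(a + b / 2) * Phi((a + b) / math.sqrt(b)) - Phi(a / math.sqrt(b))

rng = np.random.default_rng(2)
z = rng.normal(a, math.sqrt(b), 4_000_000)
mc = np.mean((np.exp(z) - 1.0) * (z >= 0))  # truncated, not conditional

print(closed, mc)  # the two estimates should agree to ~1e-3
```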

At last, we calculate the final expectation

\[\int_{-\infty}^{\infty} e^{y} \left[ e^{a+b/2} \left( 2\Phi\!\left(\frac{a+b}{\sqrt b}\right)-1 \right) + \left( 1-2\Phi\!\left(\frac{a}{\sqrt b}\right) \right) \right] P(y)\,dy\]

We break this integral into two parts. First

\[\begin{aligned}[t] I_{1} &\equiv \int_{-\infty}^{\infty} e^{y} e^{a+b/2} \left( 2\Phi\!\left(\frac{a+b}{\sqrt b}\right)-1 \right) P(y)\,dy \\ &= e^{b/2} \int_{-\infty}^{\infty} e^{\rho y} \left( 2\Phi\!\left(\frac{a+b}{\sqrt b}\right)-1 \right) P(y)\,dy \\ &= e^{b/2} \int_{-\infty}^{\infty} e^{\rho y} (2 \Phi(A + By) -1 ) P(y)\, dy \end{aligned}\]

having defined $A\equiv \sqrt{b}$ and $B \equiv \frac{\rho -1}{\sqrt{b}}$. We are going to use two identities: 1) exponential tilting and 2) the expectation of a Gaussian CDF where a parameter is Gaussian distributed.

First trick:

\[e^{\rho y} P(y) = e^{\frac{\rho^{2} \sigma^{2} }{2}} P_{\rho}(y)\]

where $P_{\rho}(y)$ is the shifted Gaussian density $y \sim \mathcal{N}(\rho\sigma^{2}, \sigma^{2})$. Now the second trick, applied to our shifted distribution:

\[\left\langle \Phi(A + By) \right\rangle = \Phi\left ( \frac{A + B\rho \sigma^{2} }{\sqrt{1+B^{2}\sigma^{2} }} \right )\]
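This CDF-averaging identity is easy to check numerically, using the fact that $\Phi(A+By) = P(G < A + By)$ for an independent standard normal $G$. A sketch with arbitrary test values (numpy assumed):

```python
import math
import numpy as np

# Monte Carlo check of the identity
#   <Phi(A + B*y)> = Phi((A + B*mu_y) / sqrt(1 + B^2 * sigma^2)),  y ~ N(mu_y, sigma^2),
# estimated via Phi(A + B*y) = P(G < A + B*y) for independent standard normal G.
# A, B, mu_y, and sigma2 are arbitrary test values.
rng = np.random.default_rng(7)
A, B, mu_y, sigma2 = 0.7, -0.4, 0.9, 0.5
n = 2_000_000

y = rng.normal(mu_y, math.sqrt(sigma2), n)
g = rng.normal(0.0, 1.0, n)
mc = np.mean(g < A + B * y)

closed = 0.5 * (1 + math.erf((A + B * mu_y) / math.sqrt(1 + B**2 * sigma2) / math.sqrt(2)))
print(mc, closed)  # the two should agree to ~1e-3
```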

With these two we obtain

\[I_{1} = e^{b/2} e^{\rho^{2} \sigma^{2} / 2} \left ( 2\Phi\left ( \frac{A + B\rho \sigma^{2} }{\sqrt{1+B^{2}\sigma^{2} }} \right ) -1 \right )\]

Now for the second integral, where we apply the same two tricks.

\[I_{2} \equiv \int_{-\infty}^{\infty} e^{y} \left( 1-2\Phi\!\left(\frac{a}{\sqrt b}\right) \right) P(y)\,dy \\ = e^{\sigma^{2} / 2} \left ( 1 - 2\Phi \left ( \frac{B \sigma^{2} }{\sqrt{1 + B^{2} \sigma^{2} }} \right ) \right )\]

And by plugging in $A$, $B$, and $b$, we obtain

\[I = I_{1} + I_{2} = 2 e^{\frac{\sigma^{2}}{2}} \left( 2 \Phi\left ( \sqrt{\frac{\sigma^{2} (1 - \rho)}{2}} \right ) - 1 \right)\]

We plug in our original definition to obtain

\[\begin{aligned}[t] \left\langle |x(t + \delta t) - x(t) | \right\rangle &= \left\langle x \right\rangle \left\langle | e^{y_{2}} - e^{y_{1}} | \right\rangle \\ &= 2 \left\langle x \right\rangle e^{\frac{\sigma^{2}}{2}} \mathrm{erf} \left ( \sqrt{\frac{\sigma^{2} (1 - \rho) }{4}} \right ) \end{aligned}\]

where we have converted the Gaussian CDF to the error function. This looks promising: for $\delta t \gg \frac{1}{k \left\langle x \right\rangle }$ we have $\rho \approx 0$, the error function saturates to a constant, and the expected magnitude of fluctuations scales isometrically with $\left\langle x \right\rangle$.
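The full closed form can be checked against a direct Monte Carlo estimate over a stationary OU pair. A sketch with arbitrary test values for $\sigma^{2}$ and $\rho$ (numpy assumed):

```python
import math
import numpy as np

# Monte Carlo check of the SLM result
#   <|e^{y2} - e^{y1}|> = 2 * exp(sigma2/2) * erf(sqrt(sigma2 * (1 - rho) / 4))
# for a stationary OU pair with variance sigma2 and autocorrelation rho.
# sigma2 and rho are arbitrary test values.
rng = np.random.default_rng(3)
sigma2, rho = 0.3, 0.5
n = 4_000_000

y1 = rng.normal(0.0, math.sqrt(sigma2), n)
y2 = rho * y1 + rng.normal(0.0, math.sqrt(sigma2 * (1 - rho**2)), n)  # exact bivariate pair
mc = np.mean(np.abs(np.exp(y2) - np.exp(y1)))

closed = 2 * math.exp(sigma2 / 2) * math.erf(math.sqrt(sigma2 * (1 - rho) / 4))
print(mc, closed)  # the two should agree to ~1e-3
```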

In the limit $\delta t \ll \frac{1}{k \left\langle x \right\rangle }$ we obtain

\[\left\langle |x(t + \delta t) - x(t) | \right\rangle \sim 2 \left\langle x \right\rangle \mathrm{exp}\left ( \frac{D^{2}}{4 k \left\langle x \right\rangle } \right ) \frac{D}{\sqrt{2 \pi}} \sqrt{\delta t} \\ \sim \sqrt{\frac{2}{\pi}} D \left\langle x \right\rangle \sqrt{\delta t}\]

where we have ignored exponential corrections. If Taylor’s Law holds then the CV of $x$ is constant, which implies $D\propto \sqrt{ \left\langle x \right\rangle}$, so the exponential term only drops out if $k \gg 1$. We end up with a similar scaling with $\left\langle x \right\rangle$, but now we can also examine the relationship as a function of the sampling increment, with $\sqrt{\delta t}$ scaling.
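The $\sqrt{\delta t}$ scaling can be read off the closed form itself: in the small-increment regime, quartering $\delta t$ should halve the predicted fluctuation magnitude. A sketch with illustrative parameter values:

```python
import math

# The closed-form SLM expression as a function of the sampling increment dt:
#   f(dt) = 2 * exp(s2/2) * erf(sqrt(s2 * (1 - exp(-k*m*dt)) / 4))
# For dt << 1/(k*m) this should scale as sqrt(dt), so quartering dt halves f.
# k, m, and D are illustrative values.
k, m, D = 1.0, 10.0, 1.0
s2 = D**2 / (2 * k * m)  # stationary variance of y

def f(dt):
    rho = math.exp(-k * m * dt)
    return 2 * math.exp(s2 / 2) * math.erf(math.sqrt(s2 * (1 - rho) / 4))

print(f(1e-4) / f(1e-4 / 4))  # ~2, confirming the sqrt(dt) scaling
```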

Birth-Death Model/Cox–Ingersoll–Ross/Demographic Noise

Our second model is

\[\frac{dx}{dt} = k (\mu - x) + D \sqrt{x} \eta(t)\]

The time-dependent distribution of this SDE has an exact solution (Azaele et al., 2006). Note that that derivation multiplies the delta-correlated noise autocorrelation by a factor of two; the expression below is written in that convention.

\[P(x(\delta t) \mid x_{0}) = \left ( \frac{k}{D^{2}} \right)^{\frac{k \mu}{D^{2}}} x^{\frac{k \mu}{D^{2}} -1} e^{-\frac{k x}{D^{2}}} \cdot \\ \frac{\left [ \left ( \frac{k}{D^{2}} \right )^{2} x_{0} x e^{-k \delta t} \right ]^{\frac{1}{2} -\frac{k \mu}{2 D^{2}}}}{ 1 - e^{ - k \delta t} } \\ \cdot \mathrm{exp}\left [ - \frac{ \frac{k}{D^{2}} (x + x_{0}) e^{-k \delta t} }{ 1 - e^{ - k \delta t} } \right] \\ \cdot I_{\frac{k \mu}{D^{2}} -1 } \left [ \frac{ \frac{2k}{D^{2}} \sqrt{x_{0}x e^{-k \delta t} } }{ 1 - e^{ - k \delta t} } \right ]\]

Allowing us to derive the magnitude of fluctuations as

\[\left\langle | x(t + \delta t) - x(t) | \right\rangle = \int_{0}^{\infty} P(x_{0}) \int_{0}^{\infty} | x_{1}(\delta t) - x_{0} | P(x_{1}(\delta t) \mid x_{0})\, dx_{1}\, dx_{0}\]

This integral is hard to evaluate due to the Bessel function in our conditional probability.

First attempt

Instead, let’s repeat our calculation for the SLM, this time using $y = \sqrt{\frac{x}{\left\langle x \right\rangle}}$ (for this model $\left\langle x \right\rangle = \mu$).

\[dy = \left[ -\frac{k}{2}y + \left(\frac{k}{2}-\frac{D^{2}}{8\mu}\right)\frac{1}{y} \right]dt + \frac{D}{2\sqrt{\mu}}\, \eta(t)\]

This reduces to an OU process when $D^{2} = 4k\mu$, a criterion equivalent to $\mathrm{CV}_{x}=\sqrt{2}$.

\[dy = -\frac{k}{2}ydt + \frac{D}{2\sqrt{\mu}}\, \eta(t)\]

The same time-dependent distribution previously introduced can be used again, with the following moments

\[\left < y(\delta t) | y_{0} \right > = y_{0} e^{-\frac{k}{2} \delta t} \\ \sigma_{\delta t}^{2} = \left( 1 - e^{-k \delta t} \right) \\ \rho = e^{-\frac{k}{2} \delta t}\]

which approaches a standard Gaussian for $\delta t \gg k^{-1}$. Much easier. As earlier, from our transformation it is clear we want to calculate

\[\left\langle \left| y_{2}^{2} - y_{1}^{2} \right| \right\rangle\]

We define the following standard Gaussian variables, obtained via an orthogonal transform:

\[u \equiv \frac{y_{2} - y_{1}}{\sqrt{2(1 - \rho)}} \\ v \equiv \frac{y_{2} + y_{1}}{\sqrt{2(1 + \rho)}}\]

Rearranging in terms of the $y$ variables, we get

\[y_{2}^{2} - y_{1}^{2} = 2\sqrt{1-\rho^{2}}\, uv\]

Our standard Gaussians are independent, and the expected absolute value of each is $\sqrt{\frac{2}{\pi}}$. From this result we obtain

\[\left\langle \left| y_{2}^{2} - y_{1}^{2} \right| \right\rangle = \frac{4}{\pi} \sqrt{1-\rho^{2}}\]

giving us

\[\left\langle \left| x(t + \delta t) - x(t) \right| \right\rangle = \left\langle x \right\rangle \frac{4}{\pi} \sqrt{1-e^{-k \delta t}} \\ \approx \left\langle x \right\rangle \frac{4}{\pi} \sqrt{k \delta t}\]

That was many steps, many of them unnecessary for determining how $\left \langle \mid \delta x \mid \right \rangle$ should scale with $\left \langle x \right \rangle$. But the result differs from the square-root scaling we expected!
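The constrained result can be checked via the exact squared-OU representation $x = \mu y^{2}$; this uses the same bivariate Gaussian construction as the derivation, so it validates the algebra rather than a simulation scheme. Parameter values are illustrative (numpy assumed):

```python
import math
import numpy as np

# Check of the constrained (D^2 = 4*k*mu) result
#   <|x(t+dt) - x(t)|> = <x> * (4/pi) * sqrt(1 - exp(-k*dt))
# using x = mu * y^2, where y is a stationary OU process with unit variance
# and autocorrelation exp(-k*dt/2). k, mu, and dt are illustrative values.
rng = np.random.default_rng(4)
k, mu, dt = 1.0, 10.0, 0.2
n = 4_000_000

rho = math.exp(-k * dt / 2)
y1 = rng.normal(0.0, 1.0, n)
y2 = rho * y1 + math.sqrt(1 - rho**2) * rng.normal(0.0, 1.0, n)  # exact bivariate pair
mc = mu * np.mean(np.abs(y2**2 - y1**2))

closed = mu * (4 / math.pi) * math.sqrt(1 - math.exp(-k * dt))
print(mc, closed)  # the two should agree to ~0.01
```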

Second attempt

The issue is our constraint $D^{2} = 4k\mu$. Relaxing it, we instead take the small-time-increment behavior directly from the SDE.

\[\delta x \approx D \sqrt{x} \sqrt{\delta t} \eta(t)\]

Conditioning on the initial condition, taking the half-normal expectation of the noise, and then averaging over the stationary distribution,

\[\left\langle \mid \delta x \mid \right\rangle \approx \sqrt{\frac{2}{\pi}}\, D \left\langle \sqrt{x } \right\rangle \sqrt{\delta t}\]

But we are still not done, as $\left\langle \sqrt{x } \right\rangle \neq \sqrt{\left\langle x \right\rangle}$. Thankfully, for the stationary gamma distribution of this model,

\[\left\langle \sqrt{x } \right\rangle = \sqrt{\langle x\rangle}\, \frac{ \Gamma\!\left(\frac{2k\mu}{D^{2}}+\frac{1}{2}\right) }{ \Gamma\!\left(\frac{2k\mu}{D^{2}}\right) \sqrt{\frac{2k\mu}{D^{2}}} }\]
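This gamma-distribution identity can be checked by sampling from the stationary distribution, with $\alpha \equiv \frac{2k\mu}{D^{2}}$ the shape parameter. Parameter values are illustrative (numpy assumed):

```python
import math
import numpy as np

# Check of the gamma-distribution identity
#   <sqrt(x)> = sqrt(<x>) * Gamma(alpha + 1/2) / (Gamma(alpha) * sqrt(alpha))
# with alpha = 2*k*mu/D**2 the stationary shape parameter.
# Parameter values are illustrative.
rng = np.random.default_rng(5)
k, mu, D = 1.0, 10.0, 2.0
alpha = 2 * k * mu / D**2  # shape; scale mu/alpha keeps <x> = mu

x = rng.gamma(alpha, mu / alpha, 2_000_000)
mc = np.mean(np.sqrt(x))
closed = math.sqrt(mu) * math.gamma(alpha + 0.5) / (math.gamma(alpha) * math.sqrt(alpha))
print(mc, closed)  # the two should agree to ~1e-3
```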

So we got what we were looking for: square-root scaling with abundance, up to a gamma-function correction. Not perfect, but it’s still nice to have a more formal, if not exact, result.

References

Aguilar, Javier, Miguel A. Muñoz, and Sandro Azaele. “The Limits of Inference in Complex Systems: When Stochastic Models Become Indistinguishable.” arXiv preprint arXiv:2509.24977 (2025).

Azaele, Sandro, et al. “Dynamical evolution of ecosystems.” Nature 444.7121 (2006): 926-928.

Descheemaeker, Lana, and Sophie De Buyl. “Stochastic logistic models reproduce experimental time series of microbial communities.” Elife 9 (2020): e55650.

Gardiner, Crispin. Stochastic methods. Vol. 4. Springer Berlin Heidelberg, 2009.

Otunuga, Olusegun Michael. “Time-dependent probability density function for general stochastic logistic population model with harvesting effort.” Physica A: Statistical Mechanics and its Applications 573 (2021): 125931.

Shoemaker, William Randolph. “An Interpretation, Survey, and Outlook of Microbial Macroecology.” EcoEvoRxiv. https://doi.org/10.32942/X2008N. (2026).