Magnitude abundance fluctuations in minimal models
Background
Continuing with the theme, I wrote some derivations a while back to explain how the magnitude of abundance fluctuations scales with mean abundance. Below are those notes, with the forms of the SDEs altered to maintain a consistent parameterization (Aguilar et al., 2025). Comparisons of SDEs that capture different mesoscopic dynamics (e.g., different noise sources) while producing the same stationary behavior are common.
Stochastic Logistic Model/Environmental Noise
We are interested in the following relationship of abundance ($x(t)$),
\[\left\langle | x(t + \delta t) - x(t)| \right \rangle \propto \left\langle x \right\rangle^{\gamma}\]for some model-dependent $\gamma$. This relationship for different minimal models of ecological dynamics was obtained via simulations a few years back (Descheemaeker and De Buyl, 2020), but I could not find a derivation.
We start with two minimal models of ecology. First, the Stochastic Logistic Model (SLM) where deterministic dynamics are logistic and the noise is environmental (i.e., linear multiplicative):
\[\frac{dx}{dt} = k x(\mu - x) + D x \cdot \eta(t)\]where $\eta(t)$ is a Gaussian white noise term with $\left \langle \eta(t) \right \rangle = 0$, $\left < \eta(t) \eta(t') \right > = \delta (t - t')$. There is no straightforward solution to this SDE (Otunuga, 2021). Instead, we’ll examine the limiting behavior. To do that, we first rescale $x$ by its stationary mean $\left <x \right > = \mu - \frac{D^{2}}{2k}$, defining $y \equiv \mathrm{ln} \left( \frac{x}{\left <x \right >} \right)$.
Using Itô rules,
\[d(\mathrm{ln}\, x ) = \frac{dx}{x} - \frac{(dx)^{2}}{2x^{2}} \\ (dx)^{2} = D^{2}x^{2}dt\]we plug in and get
\[\frac{dy}{dt} = k m \left ( 1 - e^{y} \right ) + D \eta(t) \approx - k m y + D \eta(t)\]where we have defined $m \equiv \left <x \right >$. This is an Ornstein-Uhlenbeck process, with the following time-dependent moments
\[\left < y(\delta t) | y_{0} \right > = y_{0} e^{-km\delta t} \\ \sigma_{\delta t}^{2} = \frac{D^{2}}{2 k m} \left( 1 - e^{-2 km \delta t} \right)\]where $y(0) \equiv y_{0}$
for the Gaussian propagator
\[P(y, \delta t | y_{0}) = \frac{1}{\sqrt{2 \pi \sigma_{\delta t}^{2}}} \cdot \mathrm{exp} \left [ - \frac{(y - \left < y(\delta t) | y_{0} \right > )^{2} }{2\sigma_{\delta t}^{2} } \right ]\]with autocorrelation $\rho \equiv e^{-mk \delta t}$. As $\delta t \rightarrow \infty$ this reduces to a zero-centered Gaussian with stationary variance $\sigma^{2} \equiv \frac{D^{2}}{2km}$, so we drop the time dependence in that limit.
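As a sanity check, these propagator moments can be verified by brute-force Euler–Maruyama integration of the linearized SDE $dy = -km\, y\, dt + D\, dW$, comparing the empirical mean and variance of $y(\delta t)$ against $y_{0} e^{-km\delta t}$ and $\frac{D^{2}}{2km}\left(1 - e^{-2km\delta t}\right)$. A minimal sketch; all parameter values are illustrative assumptions:

```python
import numpy as np

# Euler-Maruyama integration of the linearized SDE dy = -k*m*y dt + D dW,
# started from a fixed y0 and run out to a lag delta_t. Illustrative parameters.
rng = np.random.default_rng(2)
k, m, D = 1.0, 10.0, 1.0
theta = k * m                              # OU relaxation rate
y0 = 0.5
dt, n_inner, n_paths = 1e-3, 200, 100_000
delta_t = n_inner * dt

y = np.full(n_paths, y0)
for _ in range(n_inner):
    y += -theta * y * dt + D * rng.normal(0.0, np.sqrt(dt), n_paths)

mean_pred = y0 * np.exp(-theta * delta_t)
var_pred = D**2 / (2 * theta) * (1 - np.exp(-2 * theta * delta_t))
print(y.mean(), mean_pred)   # conditional mean vs. OU prediction
print(y.var(), var_pred)     # conditional variance vs. OU prediction
```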
We are going to use this result to derive $\left\langle | x(t + \delta t) - x(t) | \right\rangle$. Define $y_{1}\equiv y(t)$ and $y_{2}\equiv y(t + \delta t)$.
For a stationary OU process, $(y_{1}, y_{2})$ are bivariate Gaussian random variables with the moments derived above. The two variances are equal, since the stationary variance does not depend on $y_{0}$, which means the covariance is just the autocorrelation multiplied by the variance, $\rho \sigma^{2}$. From there we calculate the conditional distribution of the Gaussian, which is also Gaussian: the conditional mean is $(\sigma^{2} \rho) \frac{1}{\sigma^{2} } y = \rho y$ and the conditional variance is $\sigma^{2} - (\sigma^{2} \rho )\frac{1}{\sigma^{2} } (\sigma^{2} \rho) = \sigma^{2} (1 - \rho^{2})$. By conditioning on $y_{1} = y$ we obtain
\[|e^{y_{2}} - e^{y_{1}} | = e^{y} | e^{(y_{2} - y_{1})} -1 |\]We define the difference between our two OU variables as $Z \equiv y_{2} - y_{1} \mid (y_{1}=y) \sim \mathcal{N}\left( (\rho-1)y,\, \sigma^{2}(1 - \rho^{2}) \right)$.
So we need to first compute the inner expectation and then average the product over $y$. The difference $e^{Z} - 1$ changes sign at $Z=0$, so we can calculate the expectation of its absolute value as the sum of the truncated expectations. The notation is getting annoying, so let’s define $a \equiv (\rho-1)y$, $b \equiv \sigma^{2}(1 - \rho^{2})$. We can now compute $\left\langle e^{Z} -1 \mid Z \geq 0 \right\rangle$ and $\left\langle e^{Z} -1 \mid Z \leq 0 \right\rangle$ where $Z \sim \mathcal{N}(a,b)$.
Each truncated expectation (unnormalized, i.e., not divided by the probability of the truncation event) is a Gaussian integral, so we complete the square to obtain a shifted Gaussian distribution, the integral of which is the CDF. Using this result and the properties of the CDF, we can derive our terms
\[\begin{aligned}[t] \left\langle (e^{Z} -1) \mid Z \geq 0 \right\rangle &= \left\langle e^{Z} \mid Z \geq 0 \right\rangle - P(Z \geq 0) \\ &= e^{a + b/2} \Phi\left ( \frac{ a + b}{\sqrt{b} } \right ) - \Phi\left ( \frac{ a }{\sqrt{b} } \right ) \end{aligned}\] \[\begin{aligned}[t] \left\langle (e^{Z} -1) \mid Z < 0 \right\rangle &= P(Z < 0) - \left\langle e^{Z} \mid Z < 0 \right\rangle \\ &= \Phi\left ( - \frac{ a }{\sqrt{b} } \right ) - e^{a + b/2} \Phi\left (- \frac{ a + b}{\sqrt{b} } \right ) \end{aligned}\]At last, we calculate the final expectation
\[\int_{-\infty}^{\infty} e^{y} \left[ e^{a+b/2} \left( 2\Phi\!\left(\frac{a+b}{\sqrt b}\right)-1 \right) + \left( 1-2\Phi\!\left(\frac{a}{\sqrt b}\right) \right) \right] P(y)\,dy\]We break this integral into two parts. First
\[\begin{aligned}[t] I_{1} &\equiv \int_{-\infty}^{\infty} e^{y} e^{a+b/2} \left( 2\Phi\!\left(\frac{a+b}{\sqrt b}\right)-1 \right) P(y)\,dy \\ &= e^{b/2} \int_{-\infty}^{\infty} e^{\rho y} \left( 2\Phi\!\left(\frac{a+b}{\sqrt b}\right)-1 \right) P(y)\,dy \\ &= e^{b/2} \int_{-\infty}^{\infty} e^{\rho y} (2 \Phi(A + By) -1 ) P(y)\, dy \end{aligned}\]having defined $A\equiv \sqrt{b}$ and $B \equiv \frac{\rho -1}{\sqrt{b}}$. We are going to use two identities: 1) exponential tilting and 2) the expectation of a Gaussian CDF whose argument is Gaussian distributed.
First trick:
\[e^{\rho y} P(y) = e^{\frac{\rho^{2} \sigma^{2} }{2}} P_{\rho}(y)\]where $P_{\rho}(y)$ is the shifted Gaussian density with $y \sim \mathcal{N}(\rho\sigma^{2}, \sigma^{2})$. Now the second trick, applied under this shifted distribution:
\[\left\langle \Phi(A + By) \right\rangle = \Phi\left ( \frac{A + B\rho \sigma^{2} }{\sqrt{1+B^{2}\sigma^{2} }} \right )\]With these two we obtain
\[I_{1} = e^{b/2} e^{\rho^{2} \sigma^{2} / 2} \left ( 2\Phi\left ( \frac{A + B\rho \sigma^{2} }{\sqrt{1+B^{2}\sigma^{2} }} \right ) -1 \right )\]Now for the second integral, using the same two tricks (tilting with $e^{y}$ shifts the mean to $\sigma^{2}$):
\[I_{2} \equiv \int_{-\infty}^{\infty} e^{y} \left( 1-2\Phi\!\left(\frac{a}{\sqrt b}\right) \right) P(y)\,dy = e^{\sigma^{2} / 2} \left ( 1 - 2\Phi \left ( \frac{B \sigma^{2} }{\sqrt{1 + B^{2} \sigma^{2} }} \right ) \right )\]And by plugging in $A$, $B$, and $b$, we obtain
\[I = I_{1} + I_{2} = 2 e^{\frac{\sigma^{2}}{2}} \left( 2 \Phi\left ( \sqrt{\frac{\sigma^{2} (1 - \rho)}{2}} \right ) - 1 \right)\]We plug in our original definition to obtain
\[\begin{aligned}[t] \left\langle |x(t + \delta t) - x(t) | \right\rangle &= \left\langle x \right\rangle \left\langle | e^{y_{2}} - e^{y_{1}} | \right\rangle \\ &= 2 \left\langle x \right\rangle e^{\frac{\sigma^{2}}{2}} \mathrm{erf} \left ( \sqrt{\frac{\sigma^{2} (1 - \rho) }{4}} \right ) \end{aligned}\]where we have converted the Gaussian CDF to the error function. This looks promising: for $\delta t \gg \frac{1}{k \left\langle x \right\rangle }$ we have $\rho \approx 0$, the error function saturates to a constant, and the expected magnitude of fluctuations scales isometrically with $\left\langle x \right\rangle$.
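This closed form can be checked by Monte Carlo: sample the stationary bivariate pair by drawing $y_{1} \sim \mathcal{N}(0, \sigma^{2})$ and $y_{2} = \rho y_{1} + \sqrt{\sigma^{2}(1-\rho^{2})}\, \xi$, then compare the empirical $\left\langle | e^{y_{2}} - e^{y_{1}} | \right\rangle$ against the formula. A minimal sketch; the values of $\sigma^{2}$ and $\rho$ are illustrative assumptions:

```python
import math
import numpy as np

# Monte Carlo check of <|e^{y2} - e^{y1}|> = 2 exp(sigma^2/2) erf(sqrt(sigma^2 (1-rho)/4))
# for a stationary, zero-mean bivariate Gaussian (OU) pair. Illustrative parameters.
rng = np.random.default_rng(3)
sigma2, rho, n = 0.5, 0.3, 2_000_000

y1 = rng.normal(0.0, math.sqrt(sigma2), n)
y2 = rho * y1 + rng.normal(0.0, math.sqrt(sigma2 * (1.0 - rho**2)), n)

mc = np.abs(np.exp(y2) - np.exp(y1)).mean()
pred = 2.0 * math.exp(sigma2 / 2.0) * math.erf(math.sqrt(sigma2 * (1.0 - rho) / 4.0))
print(mc, pred)   # empirical vs. closed-form expectation
```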
In the limit $\delta t \ll \frac{1}{k \left\langle x \right\rangle }$ we obtain
\[\left\langle |x(t + \delta t) - x(t) | \right\rangle \sim 2 \left\langle x \right\rangle \mathrm{exp}\left ( \frac{D^{2}}{4 k \left\langle x \right\rangle } \right ) \frac{D}{\sqrt{2 \pi}} \sqrt{\delta t} \\ \sim \sqrt{\frac{2}{\pi}} D \left\langle x \right\rangle \sqrt{\delta t}\]where we have dropped the exponential correction. If Taylor’s Law holds then the CV of $x$ is constant, which implies $D\propto \sqrt{ \left\langle x \right\rangle}$, so the exponential term only drops out if $k \gg 1$. We end up with a similar scaling with $\left\langle x \right\rangle$, but now we can also examine the relationship as a function of the sampling increment, which scales as $\sqrt{\delta t}$.
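Before moving on, the SLM itself can be checked end to end by simulation. The sketch below integrates the SLM with Euler–Maruyama and compares the long-run average abundance against the stationary mean $\mu - \frac{D^{2}}{2k}$; all parameter values are illustrative assumptions:

```python
import numpy as np

# Euler-Maruyama integration of the SLM dx = k*x*(mu - x) dt + D*x dW (Ito),
# checking that the long-run average settles at the stationary mean mu - D^2/(2k).
# Illustrative parameters.
rng = np.random.default_rng(1)
k, mu, D = 1.0, 10.0, 1.0
dt, n_steps, n_paths = 1e-3, 20_000, 200

x = np.full(n_paths, mu)
samples = []
for step in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += k * x * (mu - x) * dt + D * x * dW
    x = np.maximum(x, 1e-12)                     # guard against negative excursions
    if step > n_steps // 2 and step % 100 == 0:  # sample well past relaxation
        samples.append(x.copy())

x_mean = np.concatenate(samples).mean()
print(x_mean, mu - D**2 / (2 * k))   # empirical vs. predicted stationary mean
```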
Birth-Death Model/Cox–Ingersoll–Ross/Demographic Noise
Our second model is
\[\frac{dx}{dt} = k (\mu - x) + D \sqrt{x} \eta(t)\]The time-dependent distribution of this SDE has an exact solution (Azaele et al., 2006). Note that that derivation defines the noise with an extra factor of two in the delta-correlated autocorrelation function, $\left\langle \eta(t)\eta(t') \right\rangle = 2\delta(t-t')$, which is why the exponents below read $\frac{k\mu}{D^{2}}$ rather than the $\frac{2k\mu}{D^{2}}$ of the convention used elsewhere in these notes.
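Under the convention used elsewhere in these notes, $\left\langle \eta(t)\eta(t') \right\rangle = \delta(t-t')$, this SDE relaxes to a $\mathrm{Gamma}\left(\frac{2k\mu}{D^{2}}, \frac{D^{2}}{2k}\right)$ stationary distribution with mean $\mu$ and variance $\frac{\mu D^{2}}{2k}$, which can be checked by simulation. A minimal Euler–Maruyama sketch; all parameter values are illustrative assumptions:

```python
import numpy as np

# Euler-Maruyama integration of dx = k*(mu - x) dt + D*sqrt(x) dW, checking the
# gamma stationary mean (mu) and variance (mu*D^2/(2k)). Illustrative parameters.
rng = np.random.default_rng(4)
k, mu, D = 1.0, 10.0, 1.0
dt, n_steps, n_paths = 1e-3, 20_000, 200

x = np.full(n_paths, mu)
samples = []
for step in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += k * (mu - x) * dt + D * np.sqrt(np.maximum(x, 0.0)) * dW
    x = np.abs(x)                                # reflecting guard at zero
    if step > n_steps // 2 and step % 100 == 0:  # sample well past relaxation
        samples.append(x.copy())

s = np.concatenate(samples)
print(s.mean(), mu)                      # stationary mean check
print(s.var(), mu * D**2 / (2 * k))      # stationary variance check
```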
\[P(x, \delta t | x_{0}) = \left ( \frac{k}{D^{2}} \right)^{\frac{k \mu}{D^{2}}} x^{\frac{k \mu}{D^{2}} -1} e^{-\frac{k x}{D^{2}}} \cdot \\ \frac{\left [ \left ( \frac{k}{D^{2}} \right )^{2} x_{0} x e^{-k \delta t} \right ]^{\frac{1}{2} -\frac{k \mu}{2 D^{2}}}}{ 1 - e^{ - k \delta t} } \\ \cdot \mathrm{exp}\left [ - \frac{ \frac{k}{D^{2}} (x + x_{0}) e^{-k \delta t} }{ 1 - e^{ - k \delta t} } \right] \\ \cdot I_{\frac{k \mu}{D^{2}} -1 } \left [ \frac{ \frac{2k}{D^{2}} \sqrt{x_{0}x e^{-k \delta t} } }{ 1 - e^{ - k \delta t} } \right ]\]Allowing us to derive the magnitude of fluctuations as
\[\left\langle | x(t + \delta t) - x(t) | \right\rangle = \int_{0}^{\infty} P(x_{0}) \int_{0}^{\infty} | x_{1} - x_{0} | P(x_{1}, \delta t | x_{0})\, dx_{1}\, dx_{0}\]This is hard to evaluate due to the Bessel function in our conditional probability. Instead, I’m going to assume a short time interval, where we have the increment
\[x( t + \delta t) - x(t) \approx \delta t k (\mu - x_{0}) + D \sqrt{x_{0}} (W_{t+\delta t} - W_{t})\]| conditioned on $x(t)=x_{0}$, we have $\delta x | x(t)=x_{0} \sim \mathcal{N}(\delta t k(\mu - x_{0}), \delta t D^{2} x_{0})$. We can take the short-time Gaussian approximation of the SDE to obtain (Gardiner, 2009) |
This result could also have been obtained by taking the small-$\delta t$ limit of the Bessel function in the conditional distribution, but this route is easier. We now average over the stationary distribution as the initial condition. Here we are dealing with a gamma distribution, $x \sim \mathrm{Gamma} \left ( \frac{2k\mu}{D^{2}}, \frac{D^{2}}{2k} \right )$, but we want the expectation of the square root. Using the “method of wikipedia,” we learn that we can treat this as a moment of a generalized gamma distribution, obtaining
\[\left\langle \sqrt{x} \right\rangle = \left ( \frac{D^{2}}{2k} \right)^{\frac{1}{2}} \frac{\Gamma \left ( \frac{2k\mu}{D^{2}} + \frac{1}{2} \right ) }{\Gamma \left ( \frac{2k\mu}{D^{2}} \right )}\] \[\left\langle |\delta x | \right\rangle = D \sqrt{\frac{2}{\pi}} \sqrt{\delta t} \left ( \frac{D^{2}}{2k} \right)^{\frac{1}{2}} \frac{\Gamma \left ( \frac{2k\mu}{D^{2}} + \frac{1}{2} \right ) }{\Gamma \left ( \frac{2k\mu}{D^{2}} \right )} + \mathcal{O}((\delta t)^{3/2} ) \\ = D \sqrt{\frac{2}{\pi}} \sqrt{\mu \delta t} \left( 1 - \frac{D^{2}}{16k\mu} + \mathcal{O}\left( \frac{D^{4}}{k^{2}\mu^{2}} \right) \right) + \mathcal{O}((\delta t)^{3/2} )\]If we repeat this small-$\delta t$ approximation for the SLM we obtain
\[\left\langle |\delta x | \right\rangle = D \sqrt{\frac{2}{\pi}} \sqrt{\delta t} \left ( \mu - \frac{D^{2}}{2k} \right ) + \mathcal{O}((\delta t)^{3/2} )\]Final comparison, at leading order in $\delta t$: \(\left\langle |\delta x | \right\rangle_{\mathrm{SLM}} = D \sqrt{\frac{2}{\pi}}\, \left\langle x \right\rangle \sqrt{\delta t} \\ \left\langle |\delta x | \right\rangle_{\mathrm{BDM}} = D \sqrt{\frac{2}{\pi}} \sqrt{ \left\langle x \right\rangle \delta t}\) So environmental noise yields fluctuations that scale linearly with mean abundance, while demographic noise yields fluctuations that scale with its square root.
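The two leading-order predictions can be checked with a single Euler–Maruyama step started from each model’s stationary distribution. For the BDM this is the gamma distribution above; for the SLM the stationary distribution is $\mathrm{Gamma}\left(\frac{2k\mu}{D^{2}} - 1, \frac{D^{2}}{2k}\right)$, with mean $\mu - \frac{D^{2}}{2k}$. A minimal sketch; all parameter values are illustrative assumptions:

```python
import math
import numpy as np

# One-step Monte Carlo check of the leading-order fluctuation magnitudes,
# with x0 drawn from each model's stationary gamma distribution.
# Illustrative parameters.
rng = np.random.default_rng(5)
k, mu, D, delta_t, n = 1.0, 10.0, 1.0, 1e-4, 1_000_000
scale = D**2 / (2 * k)                   # gamma scale parameter

# SLM: linear multiplicative (environmental) noise
x0 = rng.gamma(2 * k * mu / D**2 - 1, scale, n)
dx = k * x0 * (mu - x0) * delta_t + D * x0 * rng.normal(0.0, math.sqrt(delta_t), n)
slm = np.abs(dx).mean()
slm_pred = math.sqrt(2 / math.pi) * D * math.sqrt(delta_t) * (mu - D**2 / (2 * k))

# BDM: square-root (demographic) noise
x0 = rng.gamma(2 * k * mu / D**2, scale, n)
dx = k * (mu - x0) * delta_t + D * np.sqrt(x0) * rng.normal(0.0, math.sqrt(delta_t), n)
bdm = np.abs(dx).mean()
bdm_pred = math.sqrt(2 / math.pi) * D * math.sqrt(mu * delta_t)

print(slm, slm_pred)   # scales like <x> * sqrt(delta_t)
print(bdm, bdm_pred)   # scales like sqrt(<x> * delta_t)
```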
References
Aguilar, Javier, Miguel A. Muñoz, and Sandro Azaele. “The Limits of Inference in Complex Systems: When Stochastic Models Become Indistinguishable.” arXiv preprint arXiv:2509.24977 (2025).
Azaele, Sandro, et al. “Dynamical evolution of ecosystems.” Nature 444.7121 (2006): 926-928.
Descheemaeker, Lana, and Sophie De Buyl. “Stochastic logistic models reproduce experimental time series of microbial communities.” eLife 9 (2020): e55650.
Gardiner, Crispin. Stochastic methods. Vol. 4. Springer Berlin Heidelberg, 2009.
Otunuga, Olusegun Michael. “Time-dependent probability density function for general stochastic logistic population model with harvesting effort.” Physica A: Statistical Mechanics and its Applications 573 (2021): 125931.
Shoemaker, William Randolph. “An Interpretation, Survey, and Outlook of Microbial Macroecology.” EcoEvoRxiv. https://doi.org/10.32942/X2008N. (2026).
