Recurrence Plots and Cross Recurrence Plots

Dynamical Invariants Derived from Recurrence Plots

Correlation entropy and correlation dimension

The lengths of diagonal lines in an RP are directly related to the degree of determinism or predictability inherent to the system. Suppose that the states at times \(i\) and \(j\) are neighbouring, i.e. \(R_{i,j}=1\). If the system behaves predictably, similar situations will lead to a similar future, i.e. the probability for \(R_{i+1,j+1}=1\) is high. For perfectly predictable systems, this leads to infinitely long diagonal lines (as in the RP of the sine function). In contrast, if the system is stochastic, the probability for \(R_{i+1,j+1}=1\) is small and we find only single points or short lines. If the system is chaotic, initially neighbouring states diverge exponentially: the faster the divergence, i.e. the larger the Lyapunov exponent, the shorter the diagonals.
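This connection can be illustrated with a small numerical sketch (not part of the original text; it assumes a scalar time series in a one-dimensional phase space, without embedding). It builds recurrence plots for a sine wave and for white noise and compares the fraction of recurrence points lying on diagonal lines of at least length \(l_{\min}\), i.e. the common determinism measure DET; for simplicity the line of identity is not excluded:

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix R[i,j] = 1 iff |x_i - x_j| < eps (scalar series, no embedding)."""
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def determinism(R, lmin=2):
    """Fraction of recurrence points forming diagonal lines of length >= lmin (DET).
    Simplification: the main diagonal (line of identity) is not excluded here."""
    N = R.shape[0]
    lengths = []
    for k in range(-(N - 1), N):                      # walk over all diagonals of R
        run = 0
        for v in np.append(np.diagonal(R, k), 0):     # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run:
                    lengths.append(run)
                run = 0
    lengths = np.array(lengths)
    return lengths[lengths >= lmin].sum() / lengths.sum()

rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 400)
det_sine  = determinism(recurrence_plot(np.sin(t), 0.1))
det_noise = determinism(recurrence_plot(rng.standard_normal(400), 0.1))
print(det_sine, det_noise)   # the periodic signal yields far more diagonal structure
```

The periodic signal produces DET close to one, while for white noise most recurrence points remain isolated, in line with the prototypical RPs described below.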

Typology of recurrence plots
Prototypical examples of recurrence plots of (A) a stochastic system (white noise), (B) a fully deterministic, oscillating system (sine function), and (C) a chaotic system (Rössler oscillator in the chaotic regime).

At first, we recall the definition of the second order Rényi entropy (correlation entropy). Let us consider a trajectory \(\vec{x}(t)\) in a bounded \(d\)-dimensional phase space; the state of the system is measured at time intervals \(\tau\). Let \(\{1,2,\ldots,M(\varepsilon)\}\) be a partition of the attractor into boxes of size \(\varepsilon\). Then \(p(i_1,\ldots,i_l)\) denotes the joint probability that \(\vec{x}(t=\tau)\) is in box \(i_1\), \(\vec{x}(t=2\tau)\) is in box \(i_2\), …, and \(\vec{x}(t=l\tau)\) is in box \(i_l\). The 2nd order Rényi entropy is then defined as (Rényi, 1970; Grassberger & Procaccia, 1983) $$ K_2 = - \lim_{\tau \rightarrow 0} \ \lim_{\varepsilon \rightarrow 0} \ \lim_{l \rightarrow \infty} \frac{1}{l \tau} \ln \sum_{i_1,\ldots,i_l} p^2(i_1,\ldots,i_l). $$ Roughly speaking, this measure is directly related to the number of possible trajectories that the system can take for \(l\) time steps into the future. If the system is perfectly deterministic in the classical sense, there is only one possibility for the trajectory to evolve and hence \(K_2=0\). In contrast, for purely stochastic systems the number of possible future trajectories increases to infinity so fast that \(K_2 \rightarrow \infty\). Chaotic systems are characterised by a finite value of \(K_2\), as they belong to a category between purely deterministic and purely stochastic systems: the number of possible trajectories also diverges, but not as fast as in the stochastic case. The inverse of \(K_2\) has units of time and can be interpreted as the mean prediction time of the system.

The sum over the squared probabilities \(p^2(i_1,\ldots,i_l)\) can be approximated by the time average of the probability \(p_t(l)\) of finding a sequence of \(l\) points in boxes of size \(\varepsilon\) centred at the points \( \vec{x}(t), \ldots, \vec{x}(t+(l-1)\,\Delta t)\): $$ \frac{1}{N}\sum_{t=1}^N p_{i_1(t),\ldots,i_l(t+ (l-1) \Delta t)} \approx \frac{1}{N}\sum_{t=1}^N p_t(l). $$ Moreover, \(p_t(l)\) can be expressed by means of the recurrence matrix $$ p_t(l) = \lim_{N \to \infty}\frac{1}{N}\sum_{s=1}^N \prod_{k=0}^{l-1}R_{t+k,s+k}. $$ Using these relations, we find an estimator for the second order Rényi entropy by means of the RP (Thiel et al., 2003) $$ K_2(l)= -\frac{1}{l\,\Delta t} \ln \left( p^c(l) \right)=-\frac{1}{l\,\Delta t} \ln \left(\frac{1}{N^2}\sum_{t,s=1}^N \prod_{k=0}^{l-1} R_{t+k,s+k}\right), $$ where \(p^c(l)\) is the probability to find a diagonal of at least length \(l\) in the RP.
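A numerical sketch of this estimator may help (an illustration, not code from the source): \(p^c(l)\) is obtained by counting, along every diagonal of the recurrence matrix, the windows of \(l\) consecutive recurrence points, and \(K_2\) follows from the decay of \(\ln p^c(l)\) with \(l\). The test signal, an assumption of this sketch, is the logistic map \(x_{n+1}=4x_n(1-x_n)\), for which \(K_2=\ln 2\approx 0.69\) is known analytically (\(\Delta t = 1\), no embedding):

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix of a scalar series (no embedding)."""
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def p_c(R, l):
    """p^c(l): (1/N^2) times the number of all-ones windows of length l
    along the diagonals of the recurrence matrix."""
    N = R.shape[0]
    count = 0
    for k in range(-(N - 1), N):
        diag = np.diagonal(R, k)
        if diag.size >= l:
            w = np.convolve(diag, np.ones(l, dtype=int), mode='valid')
            count += int(np.sum(w == l))   # window sum == l means all entries are 1
    return count / N**2

# logistic map in the fully chaotic regime (K2 = ln 2)
x = np.empty(2000)
x[0] = 0.3
for n in range(1999):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])
R = recurrence_plot(x, 0.02)

# slope of ln p^c(l) versus l approximates -K2 (Delta t = 1)
ls = np.arange(2, 7)
logp = np.log([p_c(R, l) for l in ls])
k2 = -np.polyfit(ls, logp, 1)[0]
print(k2)   # expected to lie near ln 2 ~ 0.69, with finite-size bias
```

The finite threshold \(\varepsilon\) and finite series length introduce a bias, which is why the article below recommends inspecting the slope over a range of \(\varepsilon\).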

On the other hand, the \(l\)-dimensional correlation sum can be used to define \(K_2\) (Grassberger & Procaccia, 1983a). This definition of \(K_2\) can also be expressed by means of RPs and yields the following fundamental relationship (Thiel et al., 2003): $$ p^c(\varepsilon,l) \sim \varepsilon^{D_2}\, e^{-l\,\tau\,K_2(\varepsilon)}, $$ where \(D_2\) is the correlation dimension of the system under consideration (Grassberger & Procaccia, 1983). Therefore, in a logarithmic plot of \(p^c(l)\) versus \(l\), the slope of the lines corresponds to \(-K_2\tau\) for large \(l\) and is independent of \(\varepsilon\) over a rather large range in \(\varepsilon\).

If we plot the slope of these curves for large \(l\) as a function of \(\varepsilon\), a plateau can be found for chaotic systems. The value of this plateau determines \(K_2\). If the system is not chaotic, we have to consider the value of the slope at a sufficiently small \(\varepsilon\).

The relationship between \(K_2\) and RPs also allows us to estimate \(D_2\) from \(p^c(l)\). Writing this relationship for two different thresholds \(\varepsilon\) and \(\varepsilon + \Delta \varepsilon\) and dividing one by the other, we get $$ D_2(\varepsilon) = \frac{\ln\left(\frac{p^c(\varepsilon,l)}{p^c(\varepsilon+\Delta\varepsilon,l)}\right)}{ \ln\left(\frac{\varepsilon}{\varepsilon+\Delta\varepsilon}\right)}, $$ which is an estimator of the correlation dimension \(D_2\) (Grassberger, 1983).
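As a sketch of this two-threshold estimator (illustrative, not from the source), consider uniformly distributed noise on \([0,1]\), for which the correlation dimension in the one-dimensional phase space is \(D_2=1\). For \(l=1\), \(p^c(\varepsilon,1)\) reduces to the recurrence rate, so the estimator becomes:

```python
import numpy as np

def recurrence_rate(x, eps):
    """p^c(eps, 1): fraction of pairs (i, j) with |x_i - x_j| < eps."""
    R = np.abs(x[:, None] - x[None, :]) < eps
    return R.mean()

rng = np.random.default_rng(7)
x = rng.uniform(size=5000)          # uniform noise: D2 = 1 in the 1-d phase space

eps, deps = 0.01, 0.01
d2 = np.log(recurrence_rate(x, eps) / recurrence_rate(x, eps + deps)) \
     / np.log(eps / (eps + deps))
print(d2)   # close to the theoretical value D2 = 1
```

For a deterministic system one would use a diagonal length \(l>1\) and scan \(\varepsilon\), as in the Bernoulli-map example below.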

K2 and D2 calculated by means of RPs for the Bernoulli map.
\(K_2\) and \(D_2\) calculated exemplarily for the chaotic Bernoulli map, \(x_{n+1} = 2x_n \bmod 1\). (A) Total number of diagonal lines of at least length \(l\) in the RP of the Bernoulli map. Each histogram is computed for a different threshold \(\varepsilon\), from \(0.000436\) (bottom) to \(0.0247\) (top). \(10,000\) data points have been used for the computation. (B) Estimate of \(K_2\) as a function of \(\varepsilon\). We find \(K_2 = 0.6733\), which is in good accordance with the values found by others (e.g. Ott, 1993). (C) Estimate of \(D_2\) as a function of \(\varepsilon\). We obtain a value of \(0.9930\), which is also close to the theoretical value of \(D_2=1\).

Generalised mutual information (generalised redundancies)

The mutual information quantifies the amount of information that we obtain about one variable by measuring another. It has become a widely applied measure to quantify dependencies within or between time series (auto and cross mutual information). The time delayed generalised mutual information (redundancy) \(I_q(\tau)\) of a system \(\vec{x}_i\) is defined by (Rényi, 1970) $$ I_q^{\vec x}(\tau) = 2 H_q - H_q(\tau). $$ \(H_q\) is the \(q\)th-order Rényi entropy of \(\vec{x}_i\) and \(H_q(\tau)\) is the \(q\)th-order joint Rényi entropy of \(\vec{x}_i\) and \(\vec{x}_{i+\tau}\) $$ H_q=-\ln\sum\limits_{k}p_k^q,\qquad H_q(\tau)=-\ln\sum\limits_{k,l}p_{k,l}^q(\tau), $$ where \(p_k\) is the probability that the trajectory visits the \(k\)th box and \(p_{k,l}(\tau)\) is the joint probability that \(\vec{x}_i\) is in box \(k\) and \(\vec{x}_{i+\tau}\) is in box \(l\). Hence, for the case \(q=2\) we can use the recurrence matrix to estimate \(H_2\) $$ H_2=-\ln \left(\frac{1}{N^2}\sum_{i,j=1}^N R_{i,j}\right) $$ and \(H_2(\tau)\) $$ H_2(\tau) = -\ln \left(\frac{1}{N^2}\sum_{i,j=1}^N R_{i,j} R_{i+\tau,j+\tau}\right) = -\ln \left(\frac{1}{N^2}\sum_{i,j=1}^N JR^{\vec x, \vec x}_{i,j}(\tau)\right), $$

where \(JR_{i,j}(\tau)\) denotes the delayed joint recurrence matrix. Then, the second order generalised mutual information can be estimated by means of RPs (Thiel et al., 2003) $$ I_2^{\vec x}(\tau)= \ln \left(\frac{1}{N^2}\sum\limits_{i,j=1}^N JR_{i,j}^{\vec x, \vec x}(\tau)\right) - 2 \ln \left(\frac{1}{N^2} \sum\limits_{i,j=1}^N R_{i,j}\right). $$
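This estimator can be sketched numerically as follows (an illustration with assumed parameters, not the original implementation): the delayed joint recurrence matrix is the element-wise product of the recurrence matrix and its copy shifted by \(\tau\) along both axes. For a sine wave, \(I_2(\tau)\) is maximal when \(\tau\) equals the period, where the state recurs exactly, and smaller at a quarter period:

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix of a scalar series (no embedding)."""
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def i2_rp(x, eps, tau):
    """Second order mutual information I_2(tau) = 2*H_2 - H_2(tau),
    estimated from the RP and the delayed joint recurrence matrix JR."""
    R = recurrence_plot(x, eps)
    N = len(x) - tau
    Ra = R[:N, :N]                       # R_{i,j}
    Rb = R[tau:tau + N, tau:tau + N]     # R_{i+tau, j+tau}
    JR = Ra * Rb                         # JR_{i,j}(tau)
    h2     = -np.log(Ra.sum() / N**2)    # H_2
    h2_tau = -np.log(JR.sum() / N**2)    # H_2(tau)
    return 2 * h2 - h2_tau

T = 100                                  # period in samples (assumed)
x = np.sin(2 * np.pi * np.arange(1200) / T)
i2_period  = i2_rp(x, 0.1, T)            # full period: states recur exactly
i2_quarter = i2_rp(x, 0.1, T // 4)       # quarter period: much weaker dependence
print(i2_period > i2_quarter)
```

At \(\tau=T\) the joint recurrence matrix coincides with the recurrence matrix, so \(I_2\) reaches its maximum \(H_2\); for independent values it would approach zero.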


See also

Online demo: TOCSY:K2
Matlab code using the command-line RP software: Matlab script to calculate line length distributions of a recurrence plot

© 2000-2017, some rights reserved. The material of this web site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Germany License. You may use the text or figures, but you have to cite this source as well as N. Marwan, M. C. Romano, M. Thiel, J. Kurths: Recurrence Plots for the Analysis of Complex Systems, Physics Reports, 438(5-6), 237-329, 2007.
