We demonstrate the strength of a general discrete-time modeling framework in which model specification is reduced to the bare essentials. The approach is due to Kermack and McKendrick, dating from 1927. Despite the many citations, the true content of their work remains essentially unknown. Moreover, their continuous-time model takes the mathematical form of a renewal equation, and only a few experts can handle such equations numerically. Here, we establish that the discrete-time version is equally general and flexible and yet is very simple to parametrize on the basis of data and to implement computationally. This makes the framework highly suitable for studying control scenarios for epidemic diseases such as COVID-19.

The COVID-19 pandemic has led to numerous mathematical models for the spread of infection, the majority of which are large compartmental models that implicitly constrain the generation-time distribution. On the other hand, the continuous-time Kermack–McKendrick epidemic model of 1927 (KM27) allows an arbitrary generation-time distribution, but it suffers from the drawback that its numerical implementation is rather cumbersome. Here, we introduce a discrete-time version of KM27 that is as general and flexible, and yet is very easy to implement computationally. Thus, it promises to become a very powerful tool for exploring control scenarios for specific infectious diseases such as COVID-19. To demonstrate this potential, we investigate numerically how the incidence-peak size depends on model ingredients. We find that, with the same reproduction number and the same initial growth rate, compartmental models systematically predict lower peak sizes than models in which the latent and the infectious period have fixed duration.
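The discrete-time renewal structure described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the kernel `A` (expected infectiousness by day since infection, summing to the reproduction number), the population size, and the seeding are all assumed values for the example.

```python
import numpy as np

def simulate(A, S0, I0, T):
    """Discrete-time Kermack-McKendrick renewal equation:

        I[t] = (S[t-1] / N) * sum_tau A[tau] * I[t - tau]
        S[t] = S[t-1] - I[t]

    A[tau] is the expected infectiousness tau days after infection.
    """
    N = S0 + I0
    I = np.zeros(T)
    S = np.zeros(T)
    I[0], S[0] = I0, S0
    for t in range(1, T):
        force = sum(A[tau] * I[t - tau]
                    for tau in range(1, min(t, len(A) - 1) + 1))
        I[t] = (S[t - 1] / N) * force
        S[t] = S[t - 1] - I[t]
    return S, I

# Illustrative kernel: infectiousness spread evenly over days 2-6
# after infection, with reproduction number 2.5.
A = np.zeros(7)
A[2:7] = 2.5 / 5.0
S, I = simulate(A, S0=1.0e5 - 10, I0=10, T=300)
print("peak daily incidence:", I.max())
print("final epidemic size:", 1 - S[-1] / 1.0e5)
```

Note that an arbitrary generation-time distribution enters simply as the shape of `A`; no compartmental structure is implied.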

Early warning signals (EWS) of tipping points are vital to anticipate system collapse or other sudden shifts. However, existing generic early warning indicators designed to work across all systems do not provide information on the state that lies beyond the tipping point. Our results show how deep learning algorithms (artificial intelligence) can provide EWS of tipping points in real-world systems. The algorithm predicts certain qualitative aspects of the new state, and is also more sensitive and generates fewer false positives than generic indicators. We use theory about system behavior near tipping points so that the algorithm does not require data from the study system but instead learns from a universe of possible models.

Many natural systems exhibit tipping points where slowly changing environmental conditions spark a sudden shift to a new and sometimes very different state. As the tipping point is approached, the dynamics of complex and varied systems simplify down to a limited number of possible “normal forms” that determine qualitative aspects of the new state that lies beyond the tipping point, such as whether it will oscillate or be stable. In several of those forms, indicators like increasing lag-1 autocorrelation and variance provide generic early warning signals (EWS) of the tipping point by detecting how dynamics slow down near the transition. But they do not predict the nature of the new state. Here we develop a deep learning algorithm that provides EWS in systems it was not explicitly trained on, by exploiting information about normal forms and scaling behavior of dynamics near tipping points that are common to many dynamical systems. The algorithm provides EWS in 268 empirical and model time series from ecology, thermoacoustics, climatology, and epidemiology with much greater sensitivity and specificity than generic EWS. It can also predict the normal form that characterizes the oncoming tipping point, thus providing qualitative information on certain aspects of the new state. Such approaches can help humans better prepare for, or avoid, undesirable state transitions. The algorithm also illustrates how a universe of possible models can be mined to recognize naturally occurring tipping points.
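The two generic indicators named above, rolling-window variance and lag-1 autocorrelation, can be computed with a short routine. The sketch below (toy AR(1) data and an illustrative window length, unrelated to the paper's 268 time series) shows both indicators rising as persistence increases, mimicking critical slowing down:

```python
import numpy as np

def rolling_ews(x, window):
    """Rolling-window variance and lag-1 autocorrelation (generic EWS)."""
    var, ac1 = [], []
    for i in range(window, len(x) + 1):
        w = x[i - window:i] - x[i - window:i].mean()
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# Toy time series: an AR(1) process whose persistence phi ramps up,
# as happens when a system's restoring force weakens near a tipping point.
rng = np.random.default_rng(1)
T = 4000
phi = np.linspace(0.2, 0.97, T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi[t] * x[t - 1] + rng.standard_normal()

var, ac1 = rolling_ews(x, window=500)
print(f"variance: {var[0]:.2f} -> {var[-1]:.2f}")
print(f"lag-1 autocorrelation: {ac1[0]:.2f} -> {ac1[-1]:.2f}")
```

A deep learning classifier of the kind described in the abstract takes such windows as input but is trained on simulated trajectories of many models near normal-form bifurcations rather than on thresholds of these two statistics.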

Genetic screens have enumerated the genes that control the process of self-organization that converts a featureless fertilized egg into an embryo. Even the elementary steps may involve 10 genes, so models that attempt to represent each gene contain a plethora of unmeasured parameters. Mathematics has largely categorized the types of solutions that can arise from the equations governing gene networks. These representations are well suited to modern time-lapse imaging, where a limited number of genetic markers are followed in time. Models with minimal parameters that focus on the nonlinear regime from inception to pattern maturation simplify data fitting and provide an intuitive and transparent representation for the dynamics of development.

Embryonic development leads to the reproducible and ordered appearance of complexity from egg to adult. The successive differentiation of different cell types that elaborate this complexity results from the activity of gene networks and was likened by Waddington to a flow through a landscape in which valleys represent alternative fates. Geometric methods allow the formal representation of such landscapes and codify the types of behaviors that result from systems of differential equations. Results from Smale and coworkers imply that systems encompassing gene network models can be represented as potential gradients with a Riemannian metric, justifying the Waddington metaphor. Here, we extend this representation to include parameter dependence and enumerate all three-way cellular decisions realizable by tuning at most two parameters, which can be generalized to include spatial coordinates in a tissue. All diagrams of cell states vs. model parameters are thereby enumerated. We unify a number of standard models for spatial pattern formation by expressing them in potential form (i.e., as topographic elevation). Turing systems appear nonpotential, yet in suitable variables the dynamics are low dimensional and potential. A time-independent embedding recovers the original variables. Lateral inhibition is described by a saddle point with many unstable directions. A model for the patterning of the *Drosophila* eye appears as relaxation in a bistable potential. Geometric reasoning provides intuitive dynamic models for development that are well adapted to fit time-lapse data.
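The landscape picture admits a minimal worked example. Below, a one-dimensional gradient flow in the cusp potential V(x) = x⁴/4 − bx²/2 − ax illustrates how two control parameters organize a binary fate decision (three-way decisions require a second state dimension); the parameter values and initial conditions are illustrative, not fitted to any system in the paper.

```python
# Hedged sketch: gradient flow x' = -V'(x) in the cusp potential
# V(x) = x**4/4 - b*x**2/2 - a*x.  Parameters a, b and the initial
# conditions are illustrative values, not taken from the paper.

def settle(x0, a, b, dt=0.01, steps=5000):
    """Relax x' = -V'(x) = -x**3 + b*x + a to a stable fixed point."""
    x = x0
    for _ in range(steps):
        x += dt * (-x**3 + b * x + a)
    return x

# For b > 0 and a = 0 the landscape is bistable: the initial state
# (which valley the cell starts nearest) selects the fate.
print(round(settle(+0.1, a=0.0, b=1.0), 3))   # -> 1.0
print(round(settle(-0.1, a=0.0, b=1.0), 3))   # -> -1.0
# Tilting past the cusp boundary 27*a**2 = 4*b**3 removes a valley,
# so the same initial state now flows to the surviving fate:
print(settle(-0.1, a=0.5, b=1.0) > 0)         # True
```

Crossing the cusp boundary annihilates a valley in a saddle-node event, which is the elementary geometry by which a slowly tuned parameter (or spatial coordinate) redirects a cell's fate.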

Many systems involve more variables than can be reasonably simulated. Even when only some of these variables are of interest, they usually depend strongly on the other variables. Reduced order models of the relevant variables, which behave as those variables would in a full simulation, are of great interest. Many such models involve a “memory” term that is difficult to compute and can lead to instability if not properly approximated. We have developed a time-dependent renormalization approach to stabilize such models. We validate the approach on the inviscid Burgers equation. We use it to obtain a perturbative renormalization of the three-dimensional Euler equations of incompressible fluid flow including all the complex effects present in the dynamics.

While model order reduction is a promising approach in dealing with multiscale time-dependent systems that are too large or too expensive to simulate for long times, the resulting reduced order models can suffer from instabilities. We have recently developed a time-dependent renormalization approach to stabilize such reduced models. In the current work, we extend this framework by introducing a parameter that controls the time decay of the memory of such models and optimally select this parameter based on limited fully resolved simulations. First, we demonstrate our framework on the inviscid Burgers equation whose solution develops a finite-time singularity. Our renormalized reduced order models are stable and accurate for long times while using for their calibration only data from a full order simulation before the occurrence of the singularity. Furthermore, we apply this framework to the three-dimensional (3D) Euler equations of incompressible fluid flow, where the problem of finite-time singularity formation is still open and where brute force simulation is only feasible for short times. Our approach allows us to obtain a perturbatively renormalizable model which is stable for long times and includes all the complex effects present in the 3D Euler dynamics. We find that, in each application, the renormalization coefficients display algebraic decay with increasing resolution and that the parameter which controls the time decay of the memory is problem-dependent.

A simple model shows that COVID-19 infection driven by asymptomatic transmission on an urban, residential college campus can be controlled by instituting comprehensive public health protocols founded on surveillance testing and contact tracing. The model gives expressions for the number of infections expected as a function of these protocols and compares well with data from a large residential university for fall 2020.

A customized susceptible, exposed, infected, and recovered (SEIR) compartmental model is presented for describing the control of asymptomatic spread of COVID-19 infections on a residential, urban college campus embedded in a large urban community by using public health protocols, founded on surveillance testing, contact tracing, isolation, and quarantine. Analysis in the limit of low infection rates—a necessary condition for successful operation of the campus—yields expressions for controlling the infection and understanding the dynamics of infection spread. The number of expected cases on campus is proportional to the exogenous infection rate in the community and is decreased by more frequent testing and effective contact tracing. Simple expressions are presented for the dynamics of superspreader events and the impact of partial vaccination. The model results compare well with residential data from Boston University’s undergraduate population for fall 2020.
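The low-infection limit lends itself to a back-of-envelope sketch. The model below is an illustrative linearized balance with assumed parameter values, not the paper's calibrated model: exogenous introductions at rate `lam` and onward transmission at rate `beta` balance removal by recovery (`gamma`) plus surveillance testing at rate `tau = 1/test_interval`, giving a steady prevalence proportional to the exogenous rate and decreasing in testing frequency.

```python
# Illustrative linearized balance in the low-infection limit:
#     lam + beta * I = (gamma + tau) * I
# so the steady prevalence is I* = lam / (gamma + tau - beta).
# All parameter values below are assumptions for the example.

def steady_prevalence(lam, beta, gamma, test_interval_days):
    """Steady-state campus prevalence; valid while testing keeps
    the effective reproduction number beta / (gamma + tau) below 1."""
    tau = 1.0 / test_interval_days
    assert beta < gamma + tau, "testing must keep R_eff below 1"
    return lam / (gamma + tau - beta)

for interval in (14, 7, 3.5):
    print(f"test every {interval:>4} d:",
          round(steady_prevalence(lam=1.0, beta=0.15, gamma=0.1,
                                  test_interval_days=interval), 2))
```

The printed values fall as the testing interval shrinks, and doubling `lam` doubles the prevalence, matching the two qualitative claims in the abstract.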

In collective decision-making systems, such as committees and governments, many individuals follow others instead of evaluating the options on their own. Can a group settle on the option with higher merit when social learners prevail? Previous research has reached mixed conclusions because collective decisions emerge from a complex interaction of cognitive and social factors, which are rarely studied together. This paper develops a simple yet general mathematical framework to study this interaction and predicts a critical threshold for the proportion of social learners, above which an option may prevail regardless of its merit. The results suggest predictable limits to the proportion of social learners in collective situations from teamwork to democratic elections, beyond which the collective performance is affected negatively.

A key question concerning collective decisions is whether a social system can settle on the best available option when some members learn from others instead of evaluating the options on their own. This question is challenging to study, and previous research has reached mixed conclusions, because collective decision outcomes depend on the insufficiently understood complex system of cognitive strategies, task properties, and social influence processes. This study integrates these complex interactions in one general yet partially analytically tractable mathematical framework using a dynamical system model. In particular, it investigates how the interplay of the proportion of social learners, the relative merit of options, and the type of conformity response affect collective decision outcomes in a binary choice. The model predicts that, when the proportion of social learners exceeds a critical threshold, a bistable state appears in which the majority can end up favoring either the higher- or lower-merit option, depending on fluctuations and initial conditions. Below this threshold, the high-merit option is chosen by the majority. The critical threshold is determined by the conformity response function and the relative merits of the two options. The study helps reconcile disagreements about the effect of social learners on collective performance and proposes a mathematical framework that can be readily adapted to extensions investigating a wider variety of dynamics.
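A mean-field caricature shows how such bistability can arise. In the sketch below (functional forms and parameters are assumptions for illustration, not the paper's model), a fraction `p` of agents are social learners adopting option A with a conformist probability f(x) of the current adoption level x, while the rest choose A with merit-driven probability `q > 1/2`; fixed points of the update x ← (1−p)q + p·f(x) give the possible collective outcomes.

```python
import numpy as np

def fixed_points(p, q, steepness=8.0, grid=100001):
    """Locate fixed points of x <- (1 - p)*q + p*f(x) on [0, 1],
    where f is a sigmoidal (conformist) response.  Illustrative only."""
    f = lambda x: 1.0 / (1.0 + np.exp(-steepness * (x - 0.5)))
    x = np.linspace(0.0, 1.0, grid)
    g = (1 - p) * q + p * f(x) - x
    # sign changes of g locate the fixed points
    idx = np.where(np.sign(g[:-1]) != np.sign(g[1:]))[0]
    return x[idx]

# Few social learners: a single fixed point, the majority favors A.
print(fixed_points(p=0.3, q=0.7))
# Many social learners: three fixed points (two stable, one unstable),
# so either option can end up with the majority.
print(fixed_points(p=0.9, q=0.7))
```

The transition from one to three fixed points as `p` grows is the critical-threshold behavior the abstract describes; its location depends on the steepness of the conformity response and on `q`.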

Shortages of COVID-19 vaccines have hampered efforts to fight the current pandemic, leading experts to argue for delaying the second dose so as to provide earlier first-dose protection to twice as many people. We designed a model-based strategy for identifying the optimal second-dose delay, using the hospitalization rate as the key metric. While epistemic uncertainties apply to our modeling, we found that the optimal delay depends on first-dose efficacy and the vaccine's mechanism of action. For infection-blocking vaccines, the second dose can be delayed ≥8 weeks if the first-dose efficacy is ≥50%; for symptom-alleviating vaccines, the same delay is recommended only if the first-dose efficacy is ≥70%. These results suggest that delaying the second vaccine dose is a feasible option.

Slower-than-anticipated COVID-19 vaccine production and distribution have impaired efforts to curtail the current pandemic. The standard administration schedule for most COVID-19 vaccines currently approved is two doses administered 3 to 4 wk apart. To increase the number of individuals with partial protection, some governments are considering delaying the second vaccine dose. However, the delay duration must take into account crucial factors, such as the degree of protection conferred by a single dose, the anticipated vaccine supply pipeline, and the potential emergence of more virulent COVID-19 variants. To help guide decision-making, we propose here an optimization model based on extended susceptible, exposed, infectious, and removed (SEIR) dynamics that determines the optimal delay duration between the first and second COVID-19 vaccine doses. The model assumes lenient social distancing and uses intensive care unit (ICU) admission as a key metric while selecting the optimal duration between doses vs. the standard 4-wk delay. While epistemic uncertainties apply to the interpretation of simulation outputs, we found that the delay is dependent on the vaccine mechanism of action and first-dose efficacy. For infection-blocking vaccines with first-dose efficacy ≥50%, the model predicts that the second dose can be delayed by ≥8 wk (half of the maximal delay), whereas for symptom-alleviating vaccines, the same delay is recommended only if the first-dose efficacy is ≥70%. Our model predicts that a 12-wk second-dose delay of an infection-blocking vaccine with a first-dose efficacy ≥70% could reduce ICU admissions by 400 people per million over 200 d.
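The intuition behind the efficacy thresholds can be captured in a back-of-envelope calculation that is much cruder than the paper's SEIR optimization: with a fixed dose supply, giving first doses to twice as many people yields more population-level protection than completing two-dose courses whenever one-dose efficacy exceeds half of two-dose efficacy. The numbers below are illustrative.

```python
# Back-of-envelope dose-allocation comparison (NOT the paper's SEIR
# optimization): n doses protect either n/2 people at efficacy e2
# (two-dose course) or n people at efficacy e1 (delayed second dose).
# Delaying wins on expected protection whenever n*e1 > (n/2)*e2,
# i.e. 2*e1 > e2.

def prefer_delay(e1, e2):
    """True if per-dose protection favors delaying the second dose."""
    return 2 * e1 > e2

# With an assumed two-dose efficacy of 95%, the crossover sits near
# a one-dose efficacy of ~50%, echoing the threshold in the abstract.
print(prefer_delay(e1=0.50, e2=0.95))   # True: delay pays off
print(prefer_delay(e1=0.40, e2=0.95))   # False: complete the course
```

The full model refines this by tracking supply over time, waning, and the distinction between infection-blocking and symptom-alleviating mechanisms, which shifts the threshold upward for the latter.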

This paper sheds light on the phase-portrait structure of a “minimal model” of a large nonlinear system of many randomly interacting degrees of freedom equipped with a stability feedback mechanism. It thereby significantly extends the local stability analysis of large complex systems undertaken by Robert May in 1972. We show that the transition from stability to instability is characterized by an exponential explosion in the number of unstable equilibria, with a drastically reduced probability of finding truly locally attracting equilibria. At the same time, we demonstrate an abundance of equilibria with a large proportion of stable directions, which arguably can slow down the system dynamics for a sufficiently long time and induce aging effects.

We consider a nonlinear autonomous system of

Lacking any ability to store glucose, the mammalian brain relies on a constant glucose and oxygen supply via the cerebral vasculature. In the cortex, this supply is maintained by parallel arterioles and venules. Yet, mathematical modeling of both real and idealized cortical networks shows that, far from being perfused uniformly, the cortex is strewn with regions of very low flow. Increasing the number of perfusing vessels increases the number of low-flow spots. Minimizing the influence of low-flow spots sets an optimal arteriole–venule ratio that we find to be closely recapitulated in data from real mammalian cortices. Further, low-flow regions complicate the regulation of metabolite delivery with neuronal activity, leading to unintuitive changes in perfusion when penetrating vessels dilate.

The energy demands of neurons are met by a constant supply of glucose and oxygen via the cerebral vasculature. The cerebral cortex is perfused by dense, parallel arterioles and venules, consistently in imbalanced ratios. Whether and how arteriole–venule arrangement and ratio affect the efficiency of energy delivery to the cortex has remained an unanswered question. Here, we show by mathematical modeling and analysis of the mapped mouse sensory cortex that the perfusive efficiency of the network is predicted to be limited by low-flow regions produced between pairs of arterioles or pairs of venules. Increasing either arteriole or venule density decreases the size of these low-flow regions, but increases their number, setting an optimal ratio between arterioles and venules that matches closely that observed across mammalian cortical vasculature. Low-flow regions are reshaped in complex ways by changes in vascular conductance, creating geometric challenges for matching cortical perfusion with neuronal activity.
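A toy calculation makes the geometric point about low-flow regions. In the sketch below (two-dimensional potential flow with unit-strength point sources and sinks, a deliberate oversimplification of the paper's vascular network model), the flow speed vanishes by symmetry at the midpoint between two like vessels, whereas an arteriole–venule pair drives strong flow between them:

```python
# Illustrative 2-D potential-flow caricature (not the paper's model):
# penetrating arterioles as point sources, venules as point sinks,
# at complex positions in a perfused sheet.

def speed(z, sources, sinks):
    """Flow speed |dw/dz| at complex position z for unit-strength
    point sources and sinks (complex potential w = sum log terms)."""
    dwdz = sum(1.0 / (z - s) for s in sources) \
         - sum(1.0 / (z - s) for s in sinks)
    return abs(dwdz)

# Midpoint between two arterioles: the flows cancel -> low-flow spot.
print(speed(0j, sources=[-1 + 0j, 1 + 0j], sinks=[]))      # 0.0
# Midpoint between an arteriole and a venule: flows add -> fast flow.
print(speed(0j, sources=[-1 + 0j], sinks=[1 + 0j]))        # 2.0
```

Stagnation between like-signed vessels is why adding more arterioles (or venules) multiplies the number of low-flow spots even as it shrinks each one, which is the trade-off that sets the optimal arteriole–venule ratio.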
