Solving a basic New-Keynesian model in symbolic programming style with Julia and Python

1 Prerequisites

1.1 Motivations

This semester, we got a taste of the New-Keynesian models used in monetary economics. Along with them came the ugly algebra transformations. When my batchmates told me this would be the life of doing macroeconomics, I decided it was time to learn symbolic programming.

In this document, I am going to redo one of our assignments in class, solving a very basic New Keynesian model using only symbolic programming in Julia and Python.

1.2 The ecosystem for symbolic programming

The tools we’re exploring today are SymPy in Python (Meurer et al. 2017) and Symbolics.jl in Julia (Gowda et al. 2021). In Julia, one may also find SymPy.jl, which is essentially a wrapper around SymPy and should therefore offer feature parity.

What are the alternatives? Given the complexity of building a CAS (computer algebra system), there are only a few contenders worth noting. Mathematica has been around for a long time and is likely the one with the most features. Digging deeper, we find SageMath and Mathics, both trying to compete with Mathematica while staying open source, and both built upon SymPy and other Python packages. While SageMath is a more comprehensive system, Mathics focuses primarily on algebra and symbolic computation.

While it’s tempting to search for an omnipotent tool that satisfies every mathematical need, for me an accessible tool is more desirable than a full-featured one. At least, I think Paul Romer would agree. The math in economics is not crazily complex, and by leveraging the open-source ecosystem I can get up to speed quickly and get work done.

1.3 Installing packages

Installing packages in Julia is trivial. You can just load a package (e.g. using Symbolics) and Julia will offer to install it if it is not present. Here are the packages running on my Julia v1.8.3:

[23fbe1c1] Latexify v0.15.18
[d1185830] SymbolicUtils v0.19.11
[0c5d862f] Symbolics v4.14.0

Python doesn’t really have a good package-management story, which means it might be tricky to replicate the environment needed to run my code (hmm!). The usual solution is to set up a virtual environment with Pipenv or Poetry, but those are overkill for our purpose today, which is to quickly set up packages for an interactive session. Let’s just say that I installed SymPy on my system using our friendly tool pip:

pip install sympy

And the version I’m using:

pip show sympy
Name: sympy
Version: 1.11.1
Summary: Computer algebra system (CAS) in Python
Author: SymPy development team
License: BSD
Location: /home/hieuphay/.local/lib/python3.10/site-packages
Requires: mpmath

2 Importing packages

For Julia, we are importing Symbolics.jl. There are two experimental functions, solve_for() and coeff(), that are not exported by Symbolics.jl by default, so we need to import them explicitly.

using Symbolics
using Symbolics: solve_for, coeff #these are not exported by default

For Python, we can flood the environment with SymPy functions and classes. This is not something you want to do while working on a big Python project, but for small interactive sessions like this it should be fine:

from sympy import *

Latexify.jl is a nifty Julia package that converts symbolic expressions to LaTeX, so I’m using it for the purpose of creating this document.

using Latexify                    #To export results to latex

For Python I’m using the function pprint() to display expressions in unicode glyphs. If you are using SymPy in a Jupyter notebook or in other IPython interfaces, you may find that expressions are printed in unicode or LaTeX by default.
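As a minimal illustration of what pprint() does, here is a throwaway snippet of mine (not part of the assignment):

```python
from sympy import Integral, Symbol, pi, pprint, pretty, sqrt

x = Symbol("x")

# pprint() writes a 2-D unicode rendering of the expression to stdout
pprint(Integral(sqrt(1 / x), x))

# pretty() returns the same rendering as a string instead of printing it;
# e.g. the symbol pi comes back as the glyph "π"
print(pretty(pi, use_unicode=True))
```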

3 Deriving the system of stochastic difference equations that characterizes the dynamics of the economy

We start with the following equations. More explanation can be found in Galí (2008, chap. 3); I’m not going to repeat it here.

First, the Euler equation that describes the output gap \[ \tilde{y}_{t} = - \frac{1}{\sigma}(i_{t} - \mathbb{E}_{t}\pi_{t+1} - r_{t}^n) + \mathbb{E}_{t}\tilde{y}_{t+1} \tag{OG}\]

The money demand is written as \[ m_{t} = p_{t} + \tilde{y}_{t} - \eta i_{t} + y_{t}^n \tag{MD} \]

The Phillips curve is written as \[ \pi_{t} = \beta \mathbb{E}_{t}\pi_{t+1} + \kappa \tilde{y}_{t} \tag{PC} \]

In this exercise, the central bank operates by a monetary rule, which pins down the evolution of real balances:

\begin{gather*} \Delta m_{t} = \pi_{t} \\ \Leftrightarrow m_{t} - m_{t-1} = p_{t} - p_{t-1} \\ \Leftrightarrow m_{t} - p_{t} = m_{t-1} - p_{t-1} \\ \Leftrightarrow l_{t} = l_{t-1} \end{gather*}

With \( l_{t} = l_{t-1} \), the dynamics of real balances are trivial, so I’m going to skip them in the following analysis.
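The chain of equivalences above is pure rearrangement, and we can let SymPy confirm it. A quick sanity check (the symbol names here are my own):

```python
from sympy import simplify, symbols

m_t, m_tm1, p_t, p_tm1 = symbols("m_t m_tm1 p_t p_tm1")

# Monetary rule: money growth equals inflation, m_t - m_{t-1} = p_t - p_{t-1}
rule = (m_t - m_tm1) - (p_t - p_tm1)        # equals zero under the rule

# Real balances l_t = m_t - p_t; under the rule l_t - l_{t-1} must vanish
l_diff = (m_t - p_t) - (m_tm1 - p_tm1)

# The two expressions are identical up to rearrangement
assert simplify(l_diff - rule) == 0
```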

3.1 Expressing symbols and equations

Julia supports a wide range of unicode glyphs as input. For example, typing \pi[tab] gives us π, and \_t[tab] turns into the subscript ₜ. The result is easier to read and looks closer to mathematical notation, so let’s make the most of it:

# Define symbols for the variables
@variables ỹₜ πₜ mₜ pₜ iₜ lₜ
@variables 𝔼ỹ 𝔼π
@variables lₜ₋₁ rⁿ yⁿ
@variables β σ η κ

In SymPy it’s not as nice. You define symbols with symbols(), and the symbol names are going to be harder to decipher. There are certain gotchas that you need to know: for example, if you write y_tilde = symbols("ytilde"), the pretty printer will render it as ỹ when you print an expression containing y_tilde. As far as I know, the rules for these symbol mappings are not well documented, so if you want to look them up you need to get your hands dirty and consult the source code. There might be some benefit in separating a symbol’s name from the Python variable that holds it, but I think it’s marginal.
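To see that name mapping in action, here is a small probe (my own snippet, not from the assignment):

```python
from sympy import pretty, symbols

yt = symbols("ytilde")

# The pretty printer recognizes the "tilde" suffix and renders a
# combining tilde over the base name, so "ytilde" displays as ỹ
print(pretty(yt, use_unicode=True))
```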

# Define symbols for the variables
y_tilde, pi_t, m_t, p_t, i_t, l_t = symbols("ytilde pi_t m_t p_t i_t l_t")
l_tm1, r_n, y_n = symbols("l_tm1 r^n y^n")
E_y_tilde, E_pi = symbols("E_ytilde E_pi")
beta, sigma, eta, kappa = symbols("beta sigma eta kappa")

In Symbolics.jl, equations are defined with the tilde ~ symbol:

# Define the equations
OG = ỹₜ ~ -1/σ * (iₜ - 𝔼π - rⁿ) + 𝔼ỹ
MD = mₜ ~ pₜ + ỹₜ - η*iₜ + yⁿ
PC = πₜ ~ β*𝔼π + κ*ỹₜ

while in SymPy you use the Eq() constructor with two arguments:

# Define the equations
OG = Eq(y_tilde, -1/sigma*(i_t - E_pi - r_n) + E_y_tilde)
MD = Eq(m_t, p_t + y_tilde - eta*i_t + y_n)
PC = Eq(pi_t, beta*E_pi + kappa*y_tilde)

3.2 Rearrange & substitute equations

In this part, we want to:

  • Rearrange the money demand equation (MD) for \( i_{t} \) and substitute this \( i_{t} \) into the output gap equation (OG)
  • Rearrange again to get an expression with only \( \tilde{y}_{t} \) on the left-hand side. This is the dynamics for \( \tilde{y}_{t} \).
  • Substitute \( l_{t} \) for \( m_{t} - p_{t} \)
  • Substitute the last expression for \( \tilde{y}_{t} \) into the Phillips curve (PC) to get the dynamics for \( \pi_{t} \)

So, there are two important verbs here: to substitute, handled by substitute(), and to rearrange, handled by solve_for(). The solve_for() function in Julia is newly developed and currently only solves linear expressions, which is enough for what we are doing today.

# Dynamics for the output gap
dynamics_ỹ = substitute(OG, iₜ => solve_for(MD, iₜ))
dynamics_ỹ = substitute(dynamics_ỹ, mₜ => lₜ + pₜ)
dynamics_ỹ = ỹₜ ~ solve_for(dynamics_ỹ, ỹₜ) |> simplify

# Dynamics for inflation
dynamics_π = substitute(PC, ỹₜ => dynamics_ỹ.rhs)

latexify.([dynamics_ỹ, dynamics_π])

\begin{equation*} \textnormal{\~{y}}_t = \frac{l_t - y^n + r^n \eta + \eta \mathbb{E}\pi + \eta \sigma \mathbb{E}\textnormal{\~{y}}}{1 + \eta \sigma} \end{equation*}

\begin{equation*} \pi_t = \beta \mathbb{E}\pi + \kappa \frac{l_t - y^n + r^n \eta + \eta \mathbb{E}\pi + \eta \sigma \mathbb{E}\textnormal{\~{y}}}{1 + \eta \sigma} \end{equation*}

The functions are doing their jobs, but the results do not look very neat. Let’s clean them up a bit by replacing \( \frac{1}{1+\eta\sigma} \) with \( \Omega \):

@variables Ω

dynamics_ỹ = substitute(dynamics_ỹ, Dict((1 + η*σ) => (1/Ω)))
dynamics_π = substitute(dynamics_π, Dict((1 + η*σ) => (1/Ω)))

latexify.([dynamics_ỹ, dynamics_π])

\begin{equation*} \textnormal{\~{y}}_t = \Omega \left( l_t - y^n + r^n \eta + \eta \mathbb{E}\pi + \eta \sigma \mathbb{E}\textnormal{\~{y}} \right) \end{equation*}

\begin{equation*} \pi_t = \beta \mathbb{E}\pi + \Omega \kappa \left( l_t - y^n + r^n \eta + \eta \mathbb{E}\pi + \eta \sigma \mathbb{E}\textnormal{\~{y}} \right) \end{equation*}

The same steps can be carried out in SymPy. For rearranging equations we use solve(), and for substitution we can use the .subs() method. The SymPy algorithm feels a bit smarter, however, as it groups the variables more intelligently.
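Before applying them to the model, here are the two verbs on a throwaway equation (the names are mine):

```python
from sympy import Eq, solve, symbols

a, b, x, y = symbols("a b x y")

toy = Eq(y, a*x + b)

# rearrange: solve the equation for x (solve returns a list of solutions)
x_expr = solve(toy, x)[0]       # (y - b)/a

# substitute: replace y with 3*b, which simplifies the result to 2*b/a
x_at = x_expr.subs(y, 3*b)
```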

SymPy also provides a collect() function, which groups similar terms together. Here, I group the expressions by \( \mathbb{E}\tilde{y}_{t+1} \) (displayed as E_ỹ) and \( \mathbb{E}\pi_{t+1} \) (displayed as Eₚᵢ):

# Dynamics for the output gap
dynamics_y_tilde = OG.subs(i_t, solve(MD, i_t)[0].subs(m_t - p_t, l_t))
dynamics_y_tilde = Eq(y_tilde, collect(solve(dynamics_y_tilde, y_tilde)[0], (E_y_tilde, E_pi)))

# Dynamics for inflation
dynamics_pi = PC.subs(y_tilde, solve(dynamics_y_tilde, y_tilde)[0])
dynamics_pi = Eq(pi_t, collect(solve(dynamics_pi, pi_t)[0], (E_y_tilde, E_pi)))

# Substitute 1/(1 + η*σ) => Ω
Omega = symbols("Omega")
dynamics_y_tilde = dynamics_y_tilde.subs(1/(eta * sigma + 1), Omega)
dynamics_pi = dynamics_pi.subs(1/(eta * sigma + 1), Omega)

pprint([dynamics_y_tilde, dynamics_pi])
[ỹ = Ω⋅(Eₚᵢ⋅η + E_ỹ⋅η⋅σ + η⋅rⁿ + lₜ - yⁿ),
πₜ = Ω⋅(Eₚᵢ⋅(β⋅η⋅σ + β + η⋅κ) + E_ỹ⋅η⋅κ⋅σ + η⋅κ⋅rⁿ + κ⋅lₜ - κ⋅yⁿ)]

4 The economy’s response to exogenous shocks

4.1 Deriving the deviation equations

Our next task is to work with the following conjectures

\begin{gather*} \tilde{y}_{t} = \psi_{yz} \varepsilon^z_t + \psi_{yl}\hat{l}_{t} + \psi_{y\varepsilon}\varepsilon_{t}^{0} \\ \pi_{t} = \psi_{\pi z} \varepsilon^z_t + \psi_{\pi l}\hat{l}_{t} + \psi_{\pi \varepsilon}\varepsilon_{t}^{0} \end{gather*}

As \( \hat{l}_{t} = 0 \) for all \( t \), this can be simplified as:

\begin{gather*} \tilde{y}_{t} = \psi_{yz} \varepsilon^z_t + \psi_{y\varepsilon}\varepsilon_{t}^{0} \tag{C1} \\ \pi_{t} = \psi_{\pi z} \varepsilon^z_t + \psi_{\pi \varepsilon}\varepsilon_{t}^{0} \tag{C2} \end{gather*}

In expectations:

\begin{gather*} \mathbb{E}_{t}\tilde{y}_{t+1} = \psi_{yz}\rho_{z}\varepsilon_{t}^{z} \\ \mathbb{E}_{t}\pi_{t+1} = \psi_{\pi z}\rho_z \varepsilon_{t}^{z} \end{gather*}

Now, we substitute all of this into our dynamic system to get a new set of equations describing the deviations from the steady state.

In Julia:

# Conjecture
@variables ψ_yz ψ_πz ψ_ye ψ_πe
@variables  r̂ⁿ εᶻ ε⁰
@variables ρ_z

C1 = ỹₜ ~ ψ_yz*εᶻ + ψ_ye*ε⁰
C2 = πₜ ~ ψ_πz*εᶻ + ψ_πe*ε⁰

deviation_ỹ = substitute(dynamics_ỹ, Dict(
        lₜ     => 0,
        yⁿ     => 0,
        rⁿ     => (1 - ρ_z) * εᶻ,
        𝔼ỹ     => ψ_yz * ρ_z * εᶻ,
        𝔼π     => ψ_πz * ρ_z * εᶻ,
        C1.lhs => C1.rhs
))

deviation_π = substitute(dynamics_π, Dict(
        lₜ     => 0,
        yⁿ     => 0,
        rⁿ     => (1 - ρ_z) * εᶻ,
        𝔼ỹ     => ψ_yz * ρ_z * εᶻ,
        𝔼π     => ψ_πz * ρ_z * εᶻ,
        C2.lhs => C2.rhs
))

latexify.([deviation_ỹ, deviation_π])

\begin{equation*} \varepsilon^z \psi_{yz} + \varepsilon^0 \psi_{ye} = \Omega \left( \varepsilon^z \eta \left( 1 - \rho_{z} \right) + \varepsilon^z \eta \rho_{z} \psi_{{\pi}z} + \varepsilon^z \eta \rho_{z} \sigma \psi_{yz} \right) \end{equation*}

\begin{equation*} \varepsilon^0 \psi_{{\pi}e} + \varepsilon^z \psi_{{\pi}z} = \Omega \kappa \left( \varepsilon^z \eta \left( 1 - \rho_{z} \right) + \varepsilon^z \eta \rho_{z} \psi_{{\pi}z} + \varepsilon^z \eta \rho_{z} \sigma \psi_{yz} \right) + \beta \varepsilon^z \rho_{z} \psi_{{\pi}z} \end{equation*}

In Python:

# Conjecture
psi_yz, psi_piz, psi_ye, psi_pie = symbols("psi_yz psi_piz psi_ye psi_pie")
r_hat, epsilon_z, epsilon_0 = symbols("rhat^n epsilon^{z} epsilon^0")
rho_z = symbols("rho_z")

C1 = Eq(y_tilde, psi_yz*epsilon_z + psi_ye*epsilon_0)
C2 = Eq(pi_t, psi_piz*epsilon_z + psi_pie*epsilon_0)

deviation_y_tilde = dynamics_y_tilde.subs(l_t, 0) \
                                    .subs(y_n, 0) \
                                    .subs(r_n, (1 - rho_z)*epsilon_z) \
                                    .subs(E_y_tilde, psi_yz*rho_z*epsilon_z) \
                                    .subs(E_pi, psi_piz*rho_z*epsilon_z) \
                                    .subs(C1.lhs, C1.rhs)

deviation_pi = dynamics_pi.subs(l_t, 0) \
                          .subs(y_n, 0) \
                          .subs(r_n, (1 - rho_z)*epsilon_z) \
                          .subs(E_y_tilde, psi_yz*rho_z*epsilon_z) \
                          .subs(E_pi, psi_piz*rho_z*epsilon_z) \
                          .subs(C2.lhs, C2.rhs)

pprint([deviation_y_tilde, deviation_pi])
[ε⁰⋅ψ_ye + ε__{z}⋅ψ_yz = Ω⋅(ε__{z}⋅η⋅ψ_piz⋅ρ_z
                       + ε__{z}⋅η⋅ψ_yz⋅ρ_z⋅σ + ε__{z}⋅η⋅(1 - ρ_z)),
ε⁰⋅ψₚᵢₑ + ε__{z}⋅ψ_piz = Ω⋅κ⋅(ε__{z}⋅η⋅ψ_piz⋅ρ_z + ε__{z}⋅η⋅ψ_yz⋅ρ_z⋅σ
                       + ε__{z}⋅η⋅(1 - ρ_z)) + β⋅ε__{z}⋅ψ_piz⋅ρ_z]

4.2 Method of undetermined coefficients

SymPy already provides the function solve_undetermined_coeffs(), which matches the coefficients for us, given that we supply the corresponding terms (\( \varepsilon_{t}^z \) and \( \varepsilon_{t}^0 \)):

# Undetermined coefficients
coeff_y = solve_undetermined_coeffs(deviation_y_tilde, [psi_yz, psi_ye], [epsilon_z, epsilon_0])
coeff_pi = solve_undetermined_coeffs(deviation_pi, [psi_piz, psi_pie], [epsilon_z, epsilon_0])

# convert the last output to equations
coeff_eq = [Eq(k,v) for k, v in (coeff_y | coeff_pi).items()]

# solve the matching coefficients
sol = solve(coeff_eq, [psi_yz, psi_piz, psi_ye, psi_pie])
⎪                               -Ω⋅η⋅κ⋅ρ_z + Ω⋅η⋅κ
⎨ψₚᵢₑ: 0, ψ_piz: ────────────────────────────────────────────────,
⎪                         2
⎩                Ω⋅β⋅η⋅ρ_z ⋅σ - Ω⋅η⋅κ⋅ρ_z - Ω⋅η⋅ρ_z⋅σ - β⋅ρ_z + 1
                               2                                 ⎫
                      Ω⋅β⋅η⋅ρ_z  - Ω⋅β⋅η⋅ρ_z - Ω⋅η⋅ρ_z + Ω⋅η     ⎪
  ψ_ye: 0, ψ_yz: ────────────────────────────────────────────────⎬
                          2                                      ⎪
                 Ω⋅β⋅η⋅ρ_z ⋅σ - Ω⋅η⋅κ⋅ρ_z - Ω⋅η⋅ρ_z⋅σ - β⋅ρ_z + 1⎭
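To see what solve_undetermined_coeffs() does in isolation, here is a toy call (my own, not from the assignment):

```python
from sympy import Eq, solve_undetermined_coeffs, symbols

a, b, x = symbols("a b x")

# Find a and b such that a*x + b == 2*x + 3 holds identically in x
matched = solve_undetermined_coeffs(Eq(a*x + b, 2*x + 3), [a, b], x)
```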

For Symbolics.jl, we need some elbow grease to write a function that matches the coefficients ourselves. Thanks to Julia’s expressive syntax, the task turns out to be pretty simple:

"""Match the coefficients for symbol `sym` in equation `eq`"""
match_coeffs(eq::Equation, sym::Num) = coeff(expand(eq.rhs - eq.lhs), sym)

matched_coeffs = [match_coeffs(deviation_ỹ, εᶻ),
                  match_coeffs(deviation_π, εᶻ),
                  match_coeffs(deviation_ỹ, ε⁰),
                  match_coeffs(deviation_π, ε⁰)]

sol = solve_for(matched_coeffs, [ψ_yz, ψ_πz, ψ_ye, ψ_πe]) .|> simplify


\begin{equation*} \psi_{yz} = \frac{\Omega \eta - \Omega \eta \rho_{z} + \rho_{z}^{2} \Omega \beta \eta - \Omega \beta \eta \rho_{z}}{1 - \beta \rho_{z} - \Omega \eta \kappa \rho_{z} - \Omega \eta \rho_{z} \sigma + \rho_{z}^{2} \Omega \beta \eta \sigma} \end{equation*}

\begin{equation*} \psi_{\pi z} = \frac{\Omega \eta \kappa - \Omega \eta \kappa \rho_{z}}{1 - \beta \rho_{z} - \Omega \eta \kappa \rho_{z} - \Omega \eta \rho_{z} \sigma + \rho_{z}^{2} \Omega \beta \eta \sigma} \end{equation*}

\begin{equation*} \psi_{ye} = 0 \end{equation*}

\begin{equation*} \psi_{\pi e} = 0 \end{equation*}
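Incidentally, the coefficient-matching trick behind match_coeffs() is just as terse in SymPy, should you want it there too. A sketch with throwaway symbols:

```python
from sympy import Eq, expand, solve, symbols

a, b, x, y = symbols("a b x y")

def match_coeffs(eq, sym, n=1):
    """Coefficient of sym**n in (rhs - lhs) of equation eq."""
    return expand(eq.rhs - eq.lhs).coeff(sym, n)

# Toy identity in x: y*x + b == (a + 1)*x + 2*b must hold for all x
toy = Eq(y*x + b, (a + 1)*x + 2*b)

# Match the x coefficients and the constant terms, then solve the system
sol = solve([match_coeffs(toy, x, 1), match_coeffs(toy, x, 0)], [y, b])
```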

5 Study the stability of the system

Our systems of dynamics for \( \tilde{y}_{t} \) and \( \pi_{t} \) can be written in matrix notation

\begin{equation*} \begin{bmatrix} \tilde{y}_t \\ \pi_t \end{bmatrix} = \mathbf{A}_{T} \begin{bmatrix} \mathbb{E}_{t} \tilde{y}_{t+1} \\ \mathbb{E}_{t} \pi_{t+1} \end{bmatrix} + \mathbf{B}_{T} u_t \,, \end{equation*}

This system is stable if and only if the matrix \( \mathbf{A}_T \) has both eigenvalues inside the unit circle. Following Bullard and Mitra (2002; LaSalle 1986, 28), as a result of the Schur-Cohn criterion, the characteristic polynomial of \( \mathbf{A}_{T} \) has roots inside the unit circle iff: \[ \lvert \operatorname{det}(\mathbf{A}_T) \rvert < 1\,, \] and \[ \lvert - \operatorname{trace}(\mathbf{A}_T) \rvert < 1 + \operatorname{det}(\mathbf{A}_T) \,. \]

So, our computation task is to extract \( \mathbf{A}_T \), and compute its determinant and trace.
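The two conditions are simple enough to sketch as a small numeric helper, written entry by entry for the 2×2 case (my own code, with made-up matrices for illustration):

```python
def schur_cohn_stable(a11, a12, a21, a22):
    """True iff both eigenvalues of [[a11, a12], [a21, a22]]
    lie strictly inside the unit circle."""
    det = a11 * a22 - a12 * a21
    tr = a11 + a22
    return abs(det) < 1 and abs(tr) < 1 + det

# Upper-triangular examples, so the eigenvalues are the diagonal entries
assert schur_cohn_stable(0.5, 0.1, 0.0, 0.4)          # eigenvalues 0.5, 0.4
assert not schur_cohn_stable(1.2, 0.0, 0.0, 0.5)      # eigenvalue 1.2 outside
```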

5.1 Extracting the coefficient matrix

SymPy once again provides us with a ready-made function, linear_eq_to_matrix(), to grab the matrix, from which the determinant and the trace can be computed easily:

# Matrix traces and determinant
A_T = linear_eq_to_matrix([dynamics_y_tilde, dynamics_pi], [E_y_tilde, E_pi])[0]
A_T = A_T.subs(Omega, 1/(eta * sigma + 1)) #Substitute Ω back

A_trace = A_T.trace().simplify()
A_det = A_T.det().simplify()

pprint([A_trace, A_det])
⎡-β⋅η⋅σ - β - η⋅κ - η⋅σ   β⋅η⋅σ ⎤
⎢──────────────────────, ───────⎥
⎣       η⋅σ + 1          η⋅σ + 1⎦
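A note on that output: linear_eq_to_matrix() reads each Eq as lhs − rhs = 0, so the coefficients on the right-hand side come out with flipped signs; the trace printed above is therefore the negative of the trace of \( \mathbf{A}_T \) itself, while the determinant of a 2×2 matrix is unchanged by negating it. A toy call makes the convention visible (the symbols are mine):

```python
from sympy import Eq, Matrix, linear_eq_to_matrix, symbols

u, v, x, y = symbols("u v x y")

# Each Eq is read as lhs - rhs = 0, so right-hand-side coefficients flip sign
A, rhs = linear_eq_to_matrix([Eq(u, 2*x + 3*y), Eq(v, x - y)], [x, y])
```

The flip is harmless here: the determinant is unaffected, and the analysis below checks both signs of the trace anyway.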

With Symbolics.jl, we once again need to roll up our sleeves and write linear_eq_to_matrix() ourselves:

"""Create a coefficient matrix given a set of equations and an array of symbols"""
function linear_eq_to_matrix(eqs::Array{Equation}, syms::Array{Num})
    hcat([match_coeffs.(eqs, sym) for sym in syms]...)
end

A_T = linear_eq_to_matrix([dynamics_ỹ, dynamics_π], [𝔼ỹ, 𝔼π])
A_T = substitute.(A_T, Ω => 1/(1+η*σ))
A_T = simplify.(A_T, expand=true)


\begin{equation*} \left[ \begin{array}{cc} \frac{\eta \sigma}{1 + \eta \sigma} & \frac{\eta}{1 + \eta \sigma} \\ \frac{\eta \kappa \sigma}{1 + \eta \sigma} & \frac{\beta + \eta \kappa + \beta \eta \sigma}{1 + \eta \sigma} \\ \end{array} \right] \end{equation*}

So far so good. But then I encountered errors when computing the trace and the determinant of the matrix with tr(A_T) and det(A_T). This is a bit of a letdown, because I expected that, with all the multiple-dispatch goodness, Julia would handle these operations out of the box. In the end, I had to compute the trace and determinant manually:

A_det = simplify(A_T[1,1]*A_T[2,2] - A_T[1,2]*A_T[2,1], expand=true)
A_trace = simplify(A_T[1,1] + A_T[2,2], expand=true)

latexify.([A_trace, A_det])

\begin{equation*} \operatorname{trace}(\mathbf{A}_{T}) = \frac{\beta + \eta \kappa + \eta \sigma + \beta \eta \sigma}{1 + \eta \sigma} \end{equation*}

\begin{equation*} \operatorname{det}(\mathbf{A}_{T}) = \frac{\beta \eta \sigma}{1.0 + \eta \sigma} \end{equation*}

5.2 Analyze the conditions

I intended to stop here, because there is no clean way to reduce these inequality conditions involving absolute values. But since I feel SymPy has “won” the comparison up to now, I’m finishing off this task with it.

The first condition can be written as:

pprint(Abs(A_det) < 1)
│ β⋅η⋅σ │
│───────│ < 1
│η⋅σ + 1│

This holds trivially given \( \beta < 1 \), so we care mostly about the second condition:

pprint(reduce_inequalities([-A_trace < A_det + 1], [kappa]))
pprint(reduce_inequalities([A_trace < A_det + 1], [kappa]))
 -η⋅κ     -(1 - β)
─────── > ─────────
η⋅σ + 1    η⋅σ + 1

 -η⋅κ     -(-2⋅β⋅η⋅σ - β - 2⋅η⋅σ - 1)
─────── < ────────────────────────────
η⋅σ + 1             η⋅σ + 1

Assuming that \( \eta\sigma + 1 > 0 \), this is equivalent to \( -(2\eta\sigma + 1)(1+ \beta) < \eta\kappa < (1-\beta) \).
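As a final sanity check, we can brute-force this numerically. With the trace and determinant derived earlier, and positive parameter values (so the lower bound is automatic), stability should coincide exactly with \( \eta\kappa < 1 - \beta \). A throwaway check of my own:

```python
from itertools import product

def stable(beta, sigma, eta, kappa):
    # trace and determinant of A_T, taken from the derivation above
    tr = (beta + eta*kappa + eta*sigma + beta*eta*sigma) / (1 + eta*sigma)
    det = beta * eta * sigma / (1 + eta*sigma)
    return abs(det) < 1 and abs(tr) < 1 + det

# For positive parameters, stability should coincide with eta*kappa < 1 - beta
grid = [0.2, 0.7, 1.5, 3.0]
for beta, sigma, eta, kappa in product([0.5, 0.9], grid, grid, grid):
    assert stable(beta, sigma, eta, kappa) == (eta*kappa < 1 - beta)
```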

6 Taking stock

While programming in Julia is a joy, and the resulting code looks more modern and elegant than its Python counterpart, I feel that SymPy is generally more mature than Symbolics.jl. SymPy handles expressions in a smarter way and provides good out-of-the-box procedures for almost every operation I needed. When time is a constraint, I will be using SymPy in my work for now, while keenly observing Symbolics.jl’s development.


Bullard, James, and Kaushik Mitra. 2002. “Learning About Monetary Policy Rules.” Journal of Monetary Economics 49 (6): 1105–29.
Galí, Jordi. 2008. Monetary Policy, Inflation, and the Business Cycle: An Introduction to the New Keynesian Framework. Princeton, N.J: Princeton University Press.
Gowda, Shashi, Yingbo Ma, Alessandro Cheli, Maja Gwozdz, Viral B Shah, Alan Edelman, and Christopher Rackauckas. 2021. “High-Performance Symbolic-Numerics via Multiple Dispatch.”
LaSalle, J. P. 1986. The Stability and Control of Discrete Processes. Softcover reprint of the original 1st ed. 1986 edition. New York: Springer.
Meurer, Aaron, Christopher P. Smith, Mateusz Paprocki, Ondřej Čertík, Sergey B. Kirpichev, Matthew Rocklin, AmiT Kumar, et al. 2017. “SymPy: Symbolic Computing in Python.” PeerJ Computer Science 3 (January): e103.