Chapter 19. Building A Full-Stack Asset-Allocation App

At this point, we’ve traveled quite a distance through Rust’s ecosystem - from humble beginnings printing “Hello, world!” to building libraries, testing them, and optimizing for real-world performance. Along the way we learned to think the Rust way: ownership, borrowing, lifetimes, concurrency, error handling, traits, and beyond. We explored how to organize projects, how to write clean Rust, and how to benchmark code until it runs like a hot knife through butter.

All of that now converges here.

In this chapter, we tackle the final project of the book: a complete asset-allocation web application that unites every major skill we’ve developed so far. The app will combine simulation, allocation, and visualization - powered by Rust end-to-end, with a modern web interface built using React/Next.js.

Think of it as the grand synthesis of everything we’ve built:

  • Our early focus on fundamentals (Chapters 1-4) gave us the control and precision that Rust demands.
  • The mid-book work on structs, enums, traits, and modules (Chapters 5-10) taught us how to model complex systems safely and expressively.
  • The concurrency and performance tools (Chapters 15-18) showed us how to squeeze every ounce of efficiency out of the hardware.
  • And in Chapter 13, we built cutup - a portfolio-allocation library implementing core financial algorithms like Equal Weighting, Mean-Variance Optimization, and Hierarchical Risk Parity.

Now, we’re going to tie it all together.

The final app will simulate assets, compute allocations across strategies, and deliver a rich interactive experience - all without traditional servers. Prices will be generated by a Rust-based Monte Carlo simulation deployed as a serverless function. Those simulated prices will flow directly into cutup, where we’ll compute allocations and return results to a Next.js frontend for real-time visualization.

By the end, we’ll have something resembling a production-grade research tool: Rust performance under the hood, TypeScript polish on the surface, and Vercel’s serverless platform gluing it all together.

This chapter lays the foundation: in Part 1 we’ll build the Rust simulation engine, expose it through an API, and wire up a minimal frontend to call it. In the following parts, we’ll integrate the allocation logic, refine the data model, and turn the whole thing into a living, breathing portfolio app.

It’s the culmination of the journey - a full-stack Rust project that proves why we learned all this in the first place.

"An approximate answer to the right question is worth a great deal more than a precise answer to the wrong one." - John Tukey

Part 1 - Our first Rust-powered web app

Our journey begins where speed, safety, and simulation collide. In this first part of the full-stack project, we’ll build a Rust-powered backend that runs high-performance Monte Carlo simulations for synthetic asset prices. We’ll deploy that backend as a serverless function and connect it to a lightweight React (Next.js) frontend that visualizes the results.

This isn’t just another backend tutorial. It’s an exercise in design philosophy: Rust handles the heavy computation, while React handles interaction and visualization. Together, they form a complete research tool that models how assets behave under uncertainty - paving the way for our upcoming portfolio allocation engine built on top of cutup.

The goal - high-performance, serverless compute

Monte Carlo simulations are a cornerstone of quantitative finance. They model randomness, volatility, and probability distributions over time - perfect for estimating potential asset paths and outcomes. But they’re computationally expensive. Running thousands of iterations with random draws, drifts, and shocks can overwhelm traditional serverless environments.

Enter Rust.
We use Rust to handle the heavy numerical work: a compiled, memory-safe, zero-cost abstraction machine. On top of that, we deploy via Vercel’s Rust runtime, which lets us serve those compiled binaries as API endpoints - no container orchestration, no persistent servers, no extra complexity.

The system will have three key parts:

  • Rust backend: a serverless API endpoint that runs our Monte Carlo engine.
  • Next.js frontend: a React interface with sliders for parameters and charts for results.
  • Vercel as glue: a seamless deployment platform where both the Rust API and React app live side-by-side.
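
Schematically, each request flows one way through the stack:

Browser (React/Next.js UI)
    │  GET /api/simulate?samples=…&size=…
    ▼
Vercel serverless function (compiled Rust binary)
    │  JSON response: { "results": [[…], […], …] }
    ▼
Chart rendering in the browser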

The complete setup - building the repo from scratch

Now that we understand the architecture, let’s walk through building the full project - from nothing to a working Rust + Next.js simulation app deployed on Vercel. What follows is effectively a step-by-step reconstruction of the vercel-rust-runtime repository.

Project initialization

Start by creating a new directory and initializing both your Rust and Next.js projects:

mkdir vercel-rust-runtime && cd vercel-rust-runtime
npx create-next-app@latest . --typescript
cargo init --lib

This gives us a Rust crate (src/lib.rs) and a working React app in the same folder.
We’ll use the Rust code for serverless compute and the Next.js app as our frontend.

💡 Vercel automatically detects Rust binaries inside /api and builds them using its Rust runtime. That means you don’t need a separate build step or container config - just place your Rust binary in that folder and deploy.
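
Depending on how your project is configured, you may also need a vercel.json that points the files in /api at the community Rust runtime. A minimal sketch (check the vercel-rust README for the current version string):

{
  "functions": {
    "api/**/*.rs": {
      "runtime": "vercel-rust@4.0.9"
    }
  }
}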

File structure

Your finished structure should look like this:

vercel-rust-runtime/
├── api/
│   └── simulate.rs       # Rust serverless function
├── src/
│   └── lib.rs            # Monte Carlo logic
├── Cargo.toml            # Rust dependencies + binaries
├── app/
│   └── page.tsx          # React frontend
├── package.json
└── tsconfig.json

The /api/simulate.rs file is where Vercel will look for the Rust entry point, and src/lib.rs holds your reusable simulation functions.

Configuring Cargo

In your Cargo.toml, define the project and dependencies:

[package]
name = "tsmc-rust"
version = "0.1.0"
edition = "2021"

[dependencies]
rand = "0.8.5"
rand_distr = "0.4.3"
serde_json = { version = "1.0.117", features = ["raw_value"] }
tokio = { version = "1.37.0", features = ["macros"] }
url = "2.5"
vercel_runtime = "1.1.3"

[[bin]]
name = "test"
path = "api/simulate.rs"

That last block tells Cargo to compile the Rust file in /api/simulate.rs into a deployable binary named test.

Implementing the simulation logic

Open src/lib.rs and add the core simulation code. This file is pure Rust - no async, no web I/O, just math.

use rand_distr::{Distribution, Normal};
use rand::thread_rng;

pub fn generate_number_series(size: usize) -> Vec<f32> {
    let normal = Normal::new(0.0, 1.0).unwrap();
    let mut rng = thread_rng();
    (0..size).map(|_| normal.sample(&mut rng) as f32).collect()
}

fn calculate_drift_and_shock(mu: &f32, sigma: &f32, dt: &f32, shock: &f32) -> f32 {
    // precise form of the GBM step
    let drift = (mu - (sigma.powi(2) / 2.0)) * dt;
    let shock_val = sigma * shock * dt.sqrt();
    (drift + shock_val).exp()
}

pub fn monte_carlo_series(
    starting_value: f32,
    mu: f32,
    sigma: f32,
    dt: f32,
    generated_shocks: Vec<f32>,
) -> Vec<f32> {
    let mut results: Vec<f32> = Vec::with_capacity(generated_shocks.len() + 1);
    results.push(starting_value);

    for (i, shock) in generated_shocks.iter().enumerate() {
        let previous_value = results[i];
        let new_value = previous_value * calculate_drift_and_shock(&mu, &sigma, &dt, shock);
        results.push(new_value);
    }
    results
}

This small module defines the three core functions:

  • generate_number_series - produces standard-normal shocks.
  • monte_carlo_series - evolves prices forward using drift + diffusion terms.
  • calculate_drift_and_shock - computes the multiplicative brownian motion factor for each step.
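
For reference, the step implemented by calculate_drift_and_shock is the exact one-increment solution of geometric Brownian motion:

S_{t+Δt} = S_t · exp[ (μ − σ²/2)·Δt + σ·√Δt·Z ],   where Z ~ N(0, 1)

so each simulated path is a product of independent lognormal factors - the canonical GBM model used throughout quantitative finance.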

We can easily test this locally with cargo test once the functions are in place. Here are some basic tests you can add to src/lib.rs:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_calculates_drift_and_shock() {
        let mut calculated: f32 = calculate_drift_and_shock(&0.0, &0.0, &(1.0 / 252.0), &0.0);
        assert_eq!(calculated, 1.0);

        calculated = calculate_drift_and_shock(&1.0, &0.0, &(1.0 / 252.0), &0.0);

        assert!(calculated > 1.003);
        assert!(calculated < 1.004);
    }

    #[test]
    fn it_generates_numbers() {
        let v: Vec<f32> = generate_number_series(10);
        assert_eq!(v.len(), 10);
    }
    #[test]
    fn it_generates_monte_carlo_series() {
        let size = 10;
        let sigma: f32 = 0.015;
        let mu: f32 = -0.002;
        let dt: f32 = 1.0 / 252.0;
        let starting_value: f32 = 50.0;
        let random_shocks: Vec<f32> = generate_number_series(size);
        let mc = monte_carlo_series(starting_value, mu, sigma, dt, random_shocks);
        assert_eq!(mc.len(), size + 1);
        assert_ne!(mc[0], mc[1]);
    }
}

(Optional) CLI entrypoint - main.rs

For quick local runs (outside serverless), add a small CLI that wraps the library:

  • Parses inputs with clap (size, sigma, mu, dt, starting_value).
  • Converts dt via a Frequency enum (Daily/Weekly/Monthly).
  • Generates shocks and prints one Monte Carlo path to stdout (JSON-friendly).
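
Note that this entrypoint relies on clap with its derive feature, which isn't in the dependency list above - add it to Cargo.toml first:

[dependencies]
clap = { version = "4.5", features = ["derive"] }
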
use clap::{Parser, ValueEnum};
use tsmc_rust;

#[derive(Parser)]
struct MCData {
    #[arg(long)]
    size: usize,
    #[arg(long)]
    sigma: f32,
    #[arg(long)]
    mu: f32,
    #[arg(long, value_enum)]
    dt: Frequency,
    #[arg(long)]
    starting_value: f32,
}

#[derive(Debug, Clone, ValueEnum)]
enum Frequency { Daily, Weekly, Monthly }

fn get_dt_from_frequency(f: Frequency) -> f32 {
    match f {
        Frequency::Daily => 1.0 / 252.0,
        Frequency::Weekly => 1.0 / 52.0,
        Frequency::Monthly => 1.0 / 12.0,
    }
}

fn main() {
    let args = MCData::parse();
    let dt = get_dt_from_frequency(args.dt);
    let shocks = tsmc_rust::generate_number_series(args.size);
    let mc = tsmc_rust::monte_carlo_series(args.starting_value, args.mu, args.sigma, dt, shocks);
    println!("{:?}", mc); // TODO: print JSON if desired
}

#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn cli_dt_conversion_works() {
        assert!(get_dt_from_frequency(Frequency::Daily) > 0.003);
        assert!(get_dt_from_frequency(Frequency::Daily) < 0.004);
        assert_eq!(get_dt_from_frequency(Frequency::Weekly), 1.0 / 52.0);
        assert_eq!(get_dt_from_frequency(Frequency::Monthly), 1.0 / 12.0);
    }
}

Run it locally

cargo run -- \
  --size 252 \
  --sigma 0.015 \
  --mu 0.001 \
  --dt daily \
  --starting-value 100

Tip: swap println!("{:?}", mc); for structured output later:

println!("{}", serde_json::to_string(&mc).unwrap());

Writing the serverless handler

Now, create api/simulate.rs. This will import your library and handle web requests.

use url::Url;
use serde_json::json;
use std::collections::HashMap;
use tsmc_rust;
use vercel_runtime::{run, Body, Error, Request, Response, StatusCode};

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(handler).await
}

pub async fn handler(req: Request) -> Result<Response<Body>, Error> {
    let url = Url::parse(&req.uri().to_string())?;

    // read url query params
    let query_params = url
        .query_pairs()
        .into_owned()
        .collect::<HashMap<String, String>>();
    let samples: usize = query_params
        .get("samples")
        .and_then(|s| s.parse().ok())
        .unwrap_or(10);
    let size: usize = query_params
        .get("size")
        .and_then(|s| s.parse().ok())
        .unwrap_or(100);
    let starting_value: f32 = query_params
        .get("starting_value")
        .and_then(|s| s.parse().ok())
        .unwrap_or(50.0);
    let mu: f32 = query_params
        .get("mu")
        .and_then(|s| s.parse().ok())
        .unwrap_or(0.001);
    let sigma: f32 = query_params
        .get("sigma")
        .and_then(|s| s.parse().ok())
        .unwrap_or(0.015);
    let dt: f32 = query_params
        .get("dt")
        .and_then(|s| s.parse().ok())
        .unwrap_or(1.0 / 252.0);

    let mut results: Vec<Vec<f32>> = Vec::with_capacity(samples);
    for _i in 0..samples {
        let random_shocks: Vec<f32> = tsmc_rust::generate_number_series(size);

        let mc = tsmc_rust::monte_carlo_series(starting_value, mu, sigma, dt, random_shocks);
        results.push(mc);
    }
    Ok(Response::builder()
        .status(StatusCode::OK)
        .header("Content-Type", "application/json")
        .body(
            json!({ "message": "Rust is the best!", "results": results })
                .to_string()
                .into(),
        )?)
}

Note the main function at the top calling run(handler): this hooks your async handler into Vercel’s runtime.

Notice that we parse query parameters for samples, size, starting_value, mu, sigma, and dt - all of which control the Monte Carlo simulation - falling back to sensible defaults when a parameter is missing or malformed (for example, /api/simulate?samples=5 runs five paths with size=100, starting_value=50.0, mu=0.001, sigma=0.015, and dt=1/252). We then run the specified number of simulations and return the results as JSON.

Connecting this back to earlier chapters

If this pattern feels familiar, it should - we’ve already laid the groundwork for it in a few key chapters earlier in the book.

Back in Chapter 9 (“Errors”), we explored the difference between recoverable and unrecoverable errors and learned how Rust encodes that difference in its type system. There, we dug into how functions that can fail - like reading a file, parsing data, or sending a network request - return a Result<T, E> instead of simply panicking when something goes wrong. You’ll recall that we often used the ? operator to propagate those errors upward when we wanted our caller to handle them, or we used methods like .unwrap() or .expect() when we were confident the result would always exist.

In contrast, this Monte Carlo serverless endpoint is an example of a graceful fallback. Here, failure isn’t catastrophic - if the client doesn’t provide a parameter, or if it’s malformed, the program doesn’t need to crash or bubble up an error. It just needs to move on with a sensible default. That’s why, instead of keeping the full Result type (which carries error information), we use .ok() to convert it into an Option. We’re explicitly saying: “I don’t need to know why this failed - only whether it did.”

Then, in Chapter 10 (“Reference lifetimes, generics, and traits”), we saw how Rust’s composability and strong typing let us express intent clearly with functions that return precise types like Option<T>. That demonstrated how returning an Option signals “a value might be missing,” while Result signals “a computation might fail.” Our use of .get("sigma") returning Option<&String> fits perfectly into that model - the compiler forces us to think about what happens when something isn’t there.

And way back in Chapter 3 (“Normal Programming Stuff”) we learned that expressions in Rust evaluate to values and can be chained. That’s exactly what’s happening here: each step (get, and_then, ok, unwrap_or) produces a new expression built on top of the previous one, all without temporary variables or unsafe assumptions. It’s expressive and type-safe - the kind of composability we admired back when we first met Rust’s functional side.

So, when we look at a short snippet like:

let sigma: f32 = query_params
    .get("sigma")
    .and_then(|s| s.parse().ok())
    .unwrap_or(0.015);

it’s not just convenient shorthand - it’s the culmination of everything we’ve already covered.

This is what idiomatic Rust looks like in practice - compact, clear, and explicit about what can go wrong. Instead of ignoring potential issues or catching them at runtime, we encode them directly in the type system and handle them deliberately. In a simulation engine like ours, where parameters can vary wildly depending on the user or test scenario, this approach isn’t just elegant - it’s the difference between a resilient application and a brittle one.

Testing the serverless function

The easiest way to exercise the function locally is Vercel’s CLI, which builds the Rust binary and serves it alongside the frontend:

vercel dev

Then hit:

http://localhost:3000/api/simulate?samples=25&size=252&mu=0.001&sigma=0.015

or, once deployed on Vercel, your production endpoint:

https://<your-app>.vercel.app/api/simulate?samples=25&size=252&mu=0.001&sigma=0.015

Note that cargo run --bin test won’t serve HTTP on its own - the compiled handler expects Vercel’s runtime event loop, which vercel dev provides for you.

Connecting the React frontend - full implementation

In the following section, we'll build a frontend that calls the Rust endpoint, draws the results, and gives you interactive controls. Though we'll refactor and add to this later, it's a good start. In general, it's good to build piece-by-piece in this fashion: your author believes it keeps momentum while you develop, yields early design checks (was my API designed correctly for the caller?), and puts wind in your sails to build more.

A little bit about React

This book is obviously about Rust - not TypeScript/JavaScript or React. In the frontend sections here I'll ask you to "just trust me" and implement things as shown. React has a lot going on under the hood, and if you are not a frontend engineer, peeling back that onion would only distract from our ultimate goal: a real-world working app that, without Rust, wouldn't be possible.

And for those of you who'd like to dig deeper, I recommend the React docs as a starting place. React + Rust is an incredibly useful tool combination to have in your toolbox.

Install dependencies

npm install next react react-dom react-chartjs-2 chart.js

If you’re using Tailwind (the examples below include Tailwind classes), set it up or remove the classes. See the Tailwind note at the end of this section.

Add a small URL helper

Create utils/build-url.ts:

// https://github.com/vercel/next.js/discussions/16429#discussioncomment-7379305
export function getBaseUrl() {
  const custom = process.env.NEXT_PUBLIC_SITE_URL
  const vercel = process.env.NEXT_PUBLIC_VERCEL_URL
  const vercelProd = process.env.NEXT_PUBLIC_VERCEL_PROJECT_PRODUCTION_URL
  const isProd = process.env.NEXT_PUBLIC_VERCEL_ENV === 'production'
  if (isProd) return `https://${vercelProd}`
  else if (custom) return custom
  else if (vercel) return `https://${vercel}`
  else return 'http://localhost:3000'
}

export function buildUrl(path: string) {
  return getBaseUrl() + path
}

Components

Create components/LineChart.tsx:

'use client'
import React from 'react'
import { Line } from 'react-chartjs-2'
import {
  Chart as ChartJS,
  CategoryScale,
  LinearScale,
  PointElement,
  LineElement,
  Title,
  Tooltip,
  Legend,
  Filler,
} from 'chart.js'

ChartJS.register(
  CategoryScale,
  LinearScale,
  PointElement,
  LineElement,
  Title,
  Tooltip,
  Legend,
  Filler,
)

function LineChart({ data }: { data: { results?: number[][] } }) {
  if (!data?.results || data.results.length === 0) return null

  const chartData = {
    labels: data.results[0].map((_, index) => index),
    datasets: data.results.map((series, index) => ({
      label: ``,
      data: series,
      borderColor: `hsl(${(index * 60) % 360}, 70%, 50%)`,
      backgroundColor: `hsl(${(index * 60) % 360}, 70%, 30%)`,
    })),
  }

  const options = {
    responsive: true,
    plugins: {
      legend: {
        display: data.results.length < 50,
        position: 'top' as const,
        labels: { useBorderRadius: true, borderRadius: 5 },
        onClick: (e: any, legendItem: any, legend: any) => {
          const index = legendItem.datasetIndex
          const ci = legend.chart
          const meta = ci.getDatasetMeta(index)
          const dimColor = 'rgba(128, 128, 128, 0.3)'
          if (ci.isDatasetVisible(index)) {
            ci.hide(index)
            legendItem.hidden = true
          } else {
            ci.show(index)
            legendItem.hidden = false
          }
          legend.legendItems.forEach((item: any, idx: number) => {
            const itemMeta = ci.getDatasetMeta(idx)
            item.fillStyle = itemMeta.hidden
              ? dimColor
              : itemMeta.controller.getDataset().backgroundColor
          })
        },
      },
      title: { display: true, text: 'Monte Carlo Simulations' },
      tooltip: { usePointStyle: true },
    },
    elements: {
      line: { tension: 0.3 },
      point: { radius: 0, hitRadius: 30 },
    },
  }

  // @ts-ignore
  return <Line data={chartData} options={options} />
}

export default React.memo(LineChart)

Create components/Slider.tsx:

'use client'
import { useRef } from 'react'

function getSliderLabelText(
  labelText?: string,
  currentValue?: string | number,
  divisor?: number,
) {
  if (!labelText) return 'Range steps'
  return divisor
    ? `${labelText} - ${Number(currentValue) / divisor}%`
    : `${labelText} - ${currentValue}`
}

export default function Slider({
  id,
  labelText,
  min,
  max,
  value,
  step,
  onValueChange,
  divisor,
}: {
  id?: string
  labelText?: string
  min?: number | string
  max?: number | string
  value?: number | string
  step?: number | string
  onValueChange: (value: string) => void
  divisor?: number
}) {
  const inputRef = useRef<HTMLInputElement>(null)

  const handleChange = () => {
    if (inputRef.current) {
      const currentValue = inputRef.current.value
      onValueChange(currentValue)
    }
  }

  const labelDisplayText = getSliderLabelText(
    labelText,
    inputRef.current?.value ?? value,
    divisor,
  )

  return (
    <div>
      <label className="mb-2 block text-sm font-medium text-gray-900 dark:text-white">
        {labelDisplayText}
      </label>
      <input
        ref={inputRef}
        id={id || 'steps-range'}
        type="range"
        min={min || '10'}
        max={max || '1000'}
        defaultValue={value || '10'}
        step={step || '10'}
        className="h-2 w-full cursor-pointer appearance-none rounded-lg bg-gray-200 dark:bg-gray-700"
        onInput={handleChange}
      />
    </div>
  )
}

App layout and page

Replace app/layout.tsx:

import type { Metadata } from 'next'
import { Inter } from 'next/font/google'
import './globals.css'

const inter = Inter({ subsets: ['latin'] })

export const metadata: Metadata = {
  title: 'Create Next App',
  description: 'Generated by create next app',
}

export default function RootLayout({
  children,
}: Readonly<{ children: React.ReactNode }>) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  )
}

Replace app/page.tsx:

'use client'

import { useState, useEffect, useCallback } from 'react'
import LineChart from '@/components/LineChart'
import { buildUrl } from '@/utils/build-url'
import Slider from '@/components/Slider'

const DEFAULT_NUM_SIMULATIONS = '10'
const DEFAULT_MU = '50'
const DEFAULT_SIGMA = '150'
const DEFAULT_STARTING_VALUE = '50'
const DEFAULT_NUM_DAYS = '30'

type ChartData = {
  results?: number[][]
}

const getBackendData = (url: string) =>
  fetch(url)
    .then((res) => {
      if (!res.ok) throw new Error('Network response was not ok')
      return res.json()
    })
    .catch((err) => console.error(err))

export default function Home() {
  const [data, setData] = useState<ChartData>({ results: undefined })
  const [numSimulations, setNumSimulations] = useState(DEFAULT_NUM_SIMULATIONS)
  const [numDays, setNumDays] = useState(DEFAULT_NUM_DAYS)
  const [mu, setMu] = useState(DEFAULT_MU)
  const [sigma, setSigma] = useState(DEFAULT_SIGMA)
  const [startingValue, setStartingValue] = useState(DEFAULT_STARTING_VALUE)

  const handleRefresh = useCallback(() => {
    setMu(DEFAULT_MU)
    setSigma(DEFAULT_SIGMA)
    setNumDays(DEFAULT_NUM_DAYS)
    setStartingValue(DEFAULT_STARTING_VALUE)
    setNumSimulations(DEFAULT_NUM_SIMULATIONS)
  }, [])

  useEffect(() => {
    const url = buildUrl(
      `/api/simulate?samples=${numSimulations}&size=${numDays}&mu=${Number(mu) / 10000.0}&sigma=${Number(sigma) / 10000.0}&starting_value=${startingValue}`,
    )
    getBackendData(url)
      .then((data) => setData(data))
      .catch((err) => console.error(err))
  }, [numSimulations, numDays, mu, sigma, startingValue])

  return (
    <main className="flex min-h-screen flex-col items-center justify-between p-2 pt-8 md:p-24">
      <h1 className="text-center text-4xl font-bold tracking-tight text-gray-900 sm:text-6xl dark:text-white">
        {`The simulations below are running in Vercel's Rust runtime`}
      </h1>

      <div className="lg:mx-4">
        <p className="mt-6 text-left text-lg leading-8 text-gray-600 dark:text-gray-200">
          Using Rust on the server allows for performant, low-overhead, and
          memory-safe compute-intensive applications, such as Monte Carlo
          simulations.
        </p>
        <p className="mt-6 text-left text-lg leading-6 text-gray-600 dark:text-gray-200">
          {`Try out the different parameters below to see how quickly these are
          calculated and rendered. `}
          <span className="italic">{`It's insane!`}</span>
        </p>
      </div>

      <LineChart data={data} />

      <div className="grid grid-cols-1 gap-2 pt-4 pb-8 md:grid-cols-2 md:gap-4 lg:grid-cols-4">
        <Slider
          id="num-simulations-slider"
          labelText="Number of simulations"
          step="20"
          max="500"
          min="20"
          value={numSimulations}
          onValueChange={setNumSimulations}
        />
        <Slider
          id="num-days-slider"
          labelText="Number of days"
          step="10"
          max="100"
          min="10"
          value={numDays}
          onValueChange={setNumDays}
        />
        <Slider
          id="mu-slider"
          labelText="μ"
          step="10"
          max="500"
          min="10"
          value={mu}
          onValueChange={setMu}
          divisor={100}
        />
        <Slider
          id="vol-slider"
          labelText="σ"
          step="10"
          max="5000"
          min="10"
          value={sigma}
          onValueChange={setSigma}
          divisor={100}
        />
        <Slider
          id="starting-value-slider"
          labelText="Starting value"
          step="1"
          max="100"
          min="1"
          value={startingValue}
          onValueChange={setStartingValue}
        />
      </div>

      <div className="pt-4 pb-4">
        <button
          className="rounded-md px-3 py-2 transition duration-300 ease-in-out active:bg-black active:text-black active:text-white dark:border dark:border-gray-50 hover:dark:bg-gray-800 dark:active:bg-gray-700 dark:active:text-gray-200"
          onClick={handleRefresh}
        >
          Refresh data
        </button>
      </div>
    </main>
  )
}

Make @ imports work

Add a base URL and path alias in tsconfig.json:

{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["*"]
    }
  }
}

Environment variables (optional)

Create .env.local if you want custom base URLs in different environments:

NEXT_PUBLIC_VERCEL_ENV=development
# NEXT_PUBLIC_SITE_URL=https://your-custom-domain.com
# NEXT_PUBLIC_VERCEL_URL=your-preview-url.vercel.app
# NEXT_PUBLIC_VERCEL_PROJECT_PRODUCTION_URL=your-prod-url.vercel.app

Tailwind CSS (optional styling)

If you keep the Tailwind classes above, install and initialize Tailwind:

npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init -p

Edit tailwind.config.js:

module.exports = {
  content: [
    './app/**/*.{ts,tsx}',
    './components/**/*.{ts,tsx}',
    './pages/**/*.{ts,tsx}',
  ],
  theme: { extend: {} },
  plugins: [],
}

Import Tailwind in app/globals.css:

@tailwind base;
@tailwind components;
@tailwind utilities;

Run it

npm run dev
# open http://localhost:3000

You now have a complete, interactive frontend calling your Rust/Vercel simulation endpoint - with sliders for parameters and a chart for results. This is everything you need to mirror the repo’s frontend, minus advanced optimizations.

Deploying to Vercel

Finally, push to GitHub and deploy with Vercel:

git add .
git commit -m "Initial commit"
git push origin main

Vercel automatically detects your Next.js app and Rust binary. It will compile the Rust code in /api during the build step and serve it alongside your frontend.

Once deployed, your production endpoint will look something like:

https://vercel-rust-runtime.vercel.app/api/simulate?samples=50&size=252

and will return JSON such as:

{
  "results": [
    [50.1, 50.2, 49.8, 50.6, ...],
    [50.1, 49.9, 50.3, 50.5, ...]
  ]
}

The payoff

You’ve just built a full-stack Rust + React web app - capable of running thousands of simulations per request, safely and serverlessly.

This same structure will underpin our asset-allocation engine. The synthetic prices from your Rust Monte Carlo model will soon flow into cutup, where we’ll apply real portfolio allocation methods - Equal Weighting, HRP, and MVO - to those simulated returns.

It’s the perfect union: Rust’s computational power meeting React’s user experience, all deployed effortlessly on Vercel.

Part 2 - Integrating portfolio allocation

Now that our Monte Carlo simulation is humming along beautifully, it’s time to turn those synthetic prices into something meaningful. In finance, raw price data is just potential energy. What gives it purpose is how we allocate - how we assign weight across assets, strategies, and time.

In this part, we’ll take the simulated asset prices generated by our Rust backend and feed them into real portfolio allocation algorithms using the cutup library we built back in Chapter 13. This is where quantitative finance meets practical Rust: transforming time series data into actionable portfolio weights elegantly and efficiently.

Later, in Part 3, we'll combine these two, add more interactivity, and polish the app into a complete full stack deployed application with CI/CD.

“It’s not what you own that matters; it’s how much of it you own.” - anonymous quant proverb (probably said after a drawdown)

Building the new backend route

The next step is to extend our backend with a new serverless function dedicated to portfolio allocation. We’ll call it /api/allocate. This endpoint will accept arrays of simulated prices, run the allocation logic, and return a set of normalized weights - one for each asset.
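
Concretely, the request and response bodies will look like this (the shapes we implement below; numbers are illustrative):

POST /api/allocate
{
  "prices": [
    [100.0, 101.0, 102.0],
    [200.0, 199.0, 198.0]
  ],
  "strategy": "ew"
}

returns:

{
  "strategy": "ew",
  "weights": [0.5, 0.5],
  "sum": 1.0
}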

In Vercel’s Rust runtime, every .rs file inside /api is compiled as its own binary. So just as we had api/simulate.rs for simulation, we’ll now create api/allocate.rs for allocation and register it in our Cargo.toml, in addition to adding the dependencies we need:

[dependencies]
# ...
cutup = "0.1.4"
nalgebra = "0.33"

[[bin]]
name = "allocate"
path = "api/allocate.rs"

Writing the handler

Let’s begin with the serverless entrypoint. We’ll use the cutup crate (the one you wrote earlier) to handle allocation; its core types - PortfolioAllocator and MvoConfig - will come into play when we expand the library in a moment.

Here’s a clean version of api/allocate.rs using our new helper functions from lib.rs:

use serde_json::{json, Value};
use vercel_runtime::{run, Body, Error, Request, Response, StatusCode};
use tsmc_rust::allocate_from_json;

pub async fn handler(req: Request) -> Result<Response<Body>, Error> {
    if req.method() != "POST" {
        return Ok(Response::builder()
            .status(StatusCode::METHOD_NOT_ALLOWED)
            .header("Content-Type", "application/json")
            .body(json!({"error": "Use POST with JSON body"}).to_string().into())?);
    }

    // Read and parse request body
    let body_str = match req.body() {
        Body::Text(t) => t.clone(),
        Body::Binary(b) => String::from_utf8_lossy(b).to_string(),
        Body::Empty => String::new(),
    };
    let parsed: Value = match serde_json::from_str(&body_str) {
        Ok(v) => v,
        Err(e) => {
            return Ok(Response::builder()
                .status(StatusCode::BAD_REQUEST)
                .header("Content-Type", "application/json")
                .body(
                    json!({"error": "Invalid JSON", "details": e.to_string()})
                        .to_string()
                        .into(),
                )?)
        }
    };

    // Run allocation
    match allocate_from_json(&parsed) {
        Ok(weights) => {
            let strategy = parsed
                .get("strategy")
                .and_then(|s| s.as_str())
                .unwrap_or("mvo")
                .to_string();

            let response = json!({
                "strategy": strategy,
                "weights": weights,
                "sum": weights.iter().sum::<f64>()
            });

            Ok(Response::builder()
                .status(StatusCode::OK)
                .header("Content-Type", "application/json")
                .body(response.to_string().into())?)
        }
        Err(e) => Ok(Response::builder()
            .status(StatusCode::BAD_REQUEST)
            .header("Content-Type", "application/json")
            .body(json!({"error": e}).to_string().into())?),
    }
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(handler).await
}

This handler is minimal by design - it just validates the input, calls a library function, and formats the response. The heavy lifting now lives in our library, where we'll add allocate_from_json and related helpers next.


Expanding the library

We’ll expand src/lib.rs with new helpers to handle the core logic behind /api/allocate. These will parse input, convert that input into numerical form, and run the given allocation strategy.

Add these near the bottom of your lib.rs:

use nalgebra::DMatrix;
use serde_json::Value;
use cutup::{PortfolioAllocator, MvoConfig};

/// Convert a nested JSON prices array into a DMatrix<f64>
pub fn json_to_price_matrix(v: &Value) -> Result<DMatrix<f64>, String> {
    let prices = v.get("prices").and_then(|p| p.as_array()).ok_or("Missing 'prices' array")?;
    let cols = prices.len();
    if cols == 0 {
        return Err("Prices array is empty".to_string());
    }
    let rows = prices[0].as_array().ok_or("Each asset must be an array")?.len();

    let mut flat = Vec::with_capacity(rows * cols);
    for t in 0..rows {
        for j in 0..cols {
            let val = prices[j].as_array().unwrap()[t]
                .as_f64()
                .ok_or("Prices must be numeric")?;
            flat.push(val);
        }
    }
    Ok(DMatrix::from_row_slice(rows, cols, &flat))
}

/// Compute portfolio weights for a given price matrix and strategy
pub fn allocate_from_prices(
    price_matrix: DMatrix<f64>,
    strategy: &str,
    mvo_config: Option<MvoConfig>,
) -> Result<Vec<f64>, String> {
    let allocator = PortfolioAllocator::new(price_matrix);
    let weights_map = match strategy.to_lowercase().as_str() {
        "ew" | "equal" => allocator.ew_allocation(),
        "hrp" => allocator.hrp_allocation(),
        "mvo" | _ => {
            if let Some(cfg) = mvo_config {
                allocator.mvo_allocation_with_config(&cfg)
            } else {
                allocator.mvo_allocation()
            }
        }
    };

    let cols = weights_map.len();
    let mut weights = vec![0.0_f64; cols];
    for (idx, w) in weights_map {
        if idx < cols {
            weights[idx] = w;
        }
    }
    Ok(weights)
}

/// Convenience function: takes full JSON, returns weights
pub fn allocate_from_json(v: &Value) -> Result<Vec<f64>, String> {
    let matrix = json_to_price_matrix(v)?;
    let strategy = v.get("strategy").and_then(|s| s.as_str()).unwrap_or("mvo").to_string();
    let mvo_config = if let Some(cfg) = v.get("mvo") {
        Some(MvoConfig {
            regularization: cfg.get("regularization").and_then(|x| x.as_f64()),
            shrinkage: cfg.get("shrinkage").and_then(|x| x.as_f64()),
        })
    } else {
        None
    };
    allocate_from_prices(matrix, &strategy, mvo_config)
}

Now our library cleanly encapsulates three reusable layers:

  1. Parsing layer: JSON → matrix (json_to_price_matrix)
  2. Computation layer: matrix → weights (allocate_from_prices)
  3. Integration layer: end-to-end (allocate_from_json)

That’s the same compositional layering philosophy we’ve been practicing all book long - clear inputs, single-responsibility functions, and reusable, testable units.

Adding unit tests

To confirm that everything works, add a few basic tests right inside lib.rs:

#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[test]
    fn it_converts_json_to_matrix() {
        let v = json!({
            "prices": [
                [100.0, 101.0, 102.0],
                [200.0, 199.0, 198.0]
            ]
        });
        let m = json_to_price_matrix(&v).unwrap();
        assert_eq!(m.ncols(), 2);
        assert_eq!(m.nrows(), 3);
    }

    #[test]
    fn it_allocates_equal_weight() {
        let v = json!({
            "prices": [
                [100.0, 101.0, 102.0],
                [200.0, 199.0, 198.0]
            ],
            "strategy": "ew"
        });
        let weights = allocate_from_json(&v).unwrap();
        let sum: f64 = weights.iter().sum();
        assert!((sum - 1.0).abs() < 1e-6);
    }
}

Now run cargo test to ensure everything passes:

cargo test

You'll get something like this:

running 5 tests
test tests::it_calculates_drift_and_shock ... ok
test tests::it_converts_json_to_matrix ... ok
test tests::it_allocates_equal_weight ... ok
test tests::it_generates_numbers ... ok
test tests::it_generates_monte_carlo_series ... ok

test result: ok. 5 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests api/allocate.rs (target/debug/deps/allocate-fea29df9cae34d44)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests api/simulate.rs (target/debug/deps/test-e97e71001768ef97)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests src/main.rs (target/debug/deps/tsmc_rust-3625809802e9c5b9)

running 1 test
test tests::cli_dt_conversion_works ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests tsmc_rust

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

How it all fits together

At this point, you have two serverless Rust endpoints living side-by-side:

Route            Function                 Purpose
/api/simulate    Monte Carlo simulator    Generate synthetic prices
/api/allocate    Portfolio allocator      Compute portfolio weights

Each one compiles into its own binary, is deployed automatically by Vercel, and exposes a simple HTTP interface. You can now send data from one to the other seamlessly - just like the frontend does when the user clicks “Allocate.”
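
For example, assuming vercel dev is serving on port 3000 and you have jq installed, you can pipe one endpoint straight into the other from your shell:

# Simulate 3 assets of 30 steps each, reshape the JSON, and POST it to the allocator.
curl -s "http://localhost:3000/api/simulate?samples=3&size=30" \
  | jq '{prices: .results, strategy: "hrp"}' \
  | curl -s -X POST "http://localhost:3000/api/allocate" \
      -H "Content-Type: application/json" \
      --data-binary @-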

And perhaps most importantly: all your compute, from price generation to allocation, is pure Rust - high performance, memory safe, and mathematically rigorous.

What we've built

With both endpoints running, you’ve now built the computational heart of a full-stack quantitative application. The backend simulates stochastic asset paths, allocates weights across strategies, and exposes both via elegant, stateless APIs.

Now let's add the frontend components to call /api/allocate, visualize the weights, and let users choose allocation strategies interactively.

Building the frontend allocation interface

Now that our backend can simulate prices and compute portfolio allocations, it’s time to complete the loop. In this section, we’ll expand the frontend to:

  1. Let the user select which allocation strategy to apply - Equal Weight, Mean-Variance Optimization, or Hierarchical Risk Parity.
  2. Add an “Allocate” button that sends the simulated prices to our new /api/allocate endpoint.
  3. Display a pie chart showing the resulting portfolio weights.

We’ll keep our design philosophy consistent: React for interactivity, Rust for computation, and Vercel as the seamless bridge between them.

Add the dropdown selector and allocate button

We’ll start in app/page.tsx.
Right below our simulation chart, we’ll add a simple dropdown and a button. These let users choose the allocation type and trigger a POST request to /api/allocate.

Here’s the new section you’ll add (simplified to show just the new parts):

// inside the Home() component, after the <LineChart />

<div className="mt-8 flex flex-col items-center gap-4 md:flex-row">
  <label
    htmlFor="allocation-type"
    className="text-lg font-medium text-gray-900 dark:text-gray-200"
  >
    Allocation Type:
  </label>
  <select
    id="allocation-type"
    value={allocationType}
    onChange={(e) => setAllocationType(e.target.value)}
    className="rounded-md border border-gray-300 bg-white p-2 dark:border-gray-700 dark:bg-gray-900 dark:text-gray-100"
  >
    <option value="ew">Equal Weight</option>
    <option value="mvo">Mean-Variance Optimization</option>
    <option value="hrp">Hierarchical Risk Parity</option>
  </select>
  <button
    onClick={handleAllocate}
    className="rounded-md border bg-blue-600 px-3 py-2 text-white transition hover:bg-blue-700 dark:border-gray-50 dark:bg-gray-800"
  >
    Allocate
  </button>
</div>

To support that UI, we’ll add two new state hooks at the top of the component:

const [allocationType, setAllocationType] = useState('ew')
const [allocation, setAllocation] = useState<AllocationData | null>(null)
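
TypeScript will also want a type for the allocation response - a small definition matching the JSON our Rust handler returns:

type AllocationData = {
  strategy: string
  weights: number[]
  sum: number
}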

and define our allocation handler:

// optional override lets callers force a strategy (used by the EW bootstrap later)
const handleAllocate = async (strategy: string = allocationType) => {
  if (!data.results || data.results.length === 0) return

  const body = JSON.stringify({
    prices: data.results,
    strategy,
  })

  try {
    const res = await fetch(buildUrl('/api/allocate'), {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body,
    })
    if (!res.ok) throw new Error(`HTTP ${res.status}`)
    const result = await res.json()
    setAllocation(result)
  } catch (err) {
    console.error('Allocation error:', err)
  }
}

This function collects the simulated prices from our /api/simulate call and sends them to the allocation endpoint. Once the weights come back, we store them in allocation for display.

Add the pie chart visualization

We’ll use the same charting stack we’ve been using - react-chartjs-2 and chart.js.
Import the Pie chart near the top of the file:

import { Pie } from 'react-chartjs-2'
import { Chart as ChartJS, ArcElement, Tooltip, Legend, Title } from 'chart.js'

ChartJS.register(ArcElement, Tooltip, Legend, Title)

Now, below our dropdown and button, we render a new <Pie /> chart when allocation data exists:

{
  allocation && (
    <div className="mt-8 w-full max-w-md">
      <h2 className="mb-4 text-center text-2xl font-semibold text-gray-900 dark:text-white">
        Allocation ({allocation.strategy.toUpperCase()})
      </h2>
      <Pie
        data={{
          labels: allocation.weights.map((_, i) => `Asset ${i + 1}`),
          datasets: [
            {
              data: allocation.weights.map((w) => (w * 100).toFixed(2)),
              backgroundColor: allocation.weights.map(
                (_, i) => `hsl(${(i * 60) % 360}, 70%, 50%)`,
              ),
            },
          ],
        }}
        options={{
          plugins: {
            legend: { position: 'right' as const },
            title: {
              display: true,
              text: 'Portfolio Weights (%)',
            },
          },
        }}
      />
      <p className="mt-4 text-center text-gray-600 dark:text-gray-300">
        Total: {(allocation.sum * 100).toFixed(2)}%
      </p>
    </div>
  )
}

This gives us a clear, visual sense of how the algorithm has distributed weights across our assets - whether evenly for Equal Weighting, clustered for HRP, or skewed toward optimal variance for MVO.

Wire everything together

Now your page includes:

  • A Monte Carlo chart of simulated prices.
  • A dropdown to choose allocation strategy.
  • An Allocate button that sends data to the backend.
  • A pie chart to visualize results.

The overall flow is pretty simple:

  1. The user adjusts simulation parameters and runs the Rust-powered /api/simulate endpoint.
  2. The React frontend displays synthetic price paths.
  3. The user picks an allocation strategy and hits “Allocate.”
  4. The prices are sent to /api/allocate, which runs Rust allocation logic via cutup.
  5. The results (weights) are rendered as a pie chart instantly.

Verifying everything

Run it locally:

vercel dev

Then open:

http://localhost:3000

Move the sliders, hit Allocate, and you’ll see your simulated assets converted into portfolio weights - all in real time, all powered by Rust.

The full stack picture

At this point, your frontend and backend form a complete research workflow:

Layer                   Role          Purpose
Rust (/api/simulate)    Simulation    Generate stochastic asset prices
Rust (/api/allocate)    Allocation    Compute portfolio weights
React (Next.js)         UI            Parameter control, visualization
Vercel                  Platform      Compiles and serves both seamlessly

Now we've got a working asset allocation environment that runs fully serverless, driven by Rust and React.

In our next section we’ll take this even further - combining live simulation and allocation updates, adding real-time visual comparisons between strategies, and turning this simple web app into a fully interactive portfolio research studio.

Part 3 - More interactivity and polish

Now that we’ve built a fully functioning simulation engine and allocator on the backend, it’s time to make the frontend and backend sing together. This step is where our Rust code meets usability - users move sliders and dropdowns, and dynamic charts respond almost instantly as our Rust-powered backend recomputes prices and allocations. In the process, we’ll add some polish to keep the code maintainable and clean.

Frontend updates

Our goal now is simple: when the user loads the page, the frontend automatically simulates a set of asset paths, calls the /api/allocate endpoint with those prices using an equal-weighted (EW) strategy by default, and displays both sets of results side by side. From there, changing parameters like volatility or number of simulations will automatically re-run both the simulation and allocation steps without manual intervention.

Real-time interactivity

The top-level React component (app/page.tsx) orchestrates the flow between simulation and allocation. When the page first loads, it immediately fetches simulated price data from our Rust backend, as we wrote in Part 1:

const url = buildUrl(
  `/api/simulate?samples=${numSimulations}&size=${numDays}&mu=${
    Number(mu) / 10000.0
  }&sigma=${Number(sigma) / 10000.0}&starting_value=${startingValue}`,
)
const simData = await getBackendData(url)
setData(simData)

Once these prices are returned, we trigger an automatic Equal Weight allocation. This ensures that the app always begins with a meaningful baseline portfolio, so users can immediately see the difference between unallocated simulations (on the left) and portfolio-level returns (on the right).

There's a mathematical reason for choosing Equal Weight here: every asset in our GBM model is drawn from the same distribution, so no asset is predictably better than another, and equal weighting captures the full diversification benefit across large sample sizes. This gives users a solid starting point for comparison.

Automatic allocation logic

Every time the backend simulation completes - whether at first load or after parameter changes - the app checks whether there are existing allocation strategies (stored in React state via portfolioPaths).

If so, it recomputes their weights using the latest simulated data. This makes the system reactive: change a slider, and both your stochastic simulations and portfolio allocations update in sync.

// Immediately bootstrap a default EW allocation once simulations are available
useEffect(() => {
  // only if simulations are ready, we have no paths yet, and we haven't bootstrapped
  if (!data.results || data.results.length === 0) return
  if (portfolioPathsRef.current.length > 0) return
  if (bootstrappedRef.current) return
  ;(async () => {
    try {
      const body = JSON.stringify({
        prices: data.results,
        strategy: 'ew',
      })

      const res = await fetch(buildUrl('/api/allocate'), {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body,
      })
      if (!res.ok) throw new Error(`HTTP ${res.status}`)
      const result = await res.json()
      setAllocation(result)

      const values = computePortfolioValue(data.results!, result.weights)

      // deterministic-ish color for EW, and stable on re-allocations
      const colorFor = (strategy: string) => {
        const found = portfolioPathsRef.current.find((p) =>
          p.label.toLowerCase().startsWith(strategy.toLowerCase()),
        )
        if (found) return found.color
        const hash = Array.from(strategy).reduce(
          (a, c) => a + c.charCodeAt(0),
          0,
        )
        return `hsl(${(hash * 37) % 360}, 70%, 50%)`
      }

      setPortfolioPaths([
        {
          label: result.strategy.toUpperCase(), // "EW"
          values,
          weights: result.weights,
          color: colorFor(result.strategy),
        },
      ])

      bootstrappedRef.current = true // prevent re-bootstrapping on minor state changes
    } catch (e) {
      console.error('Default EW bootstrap failed:', e)
    }
  })()
}, [data.results])

This approach keeps the user interface live and consistent - allocations never drift out of sync with simulated prices. And since the effect only depends on new price data (data.results), we avoid an infinite update loop between simulation and allocation.
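
One helper the snippet above assumes is computePortfolioValue, which isn't shown elsewhere in this part. Here's a minimal sketch, assuming each entry of results is one asset's simulated price path and that weights are index-aligned with those assets:

// utils/portfolio.ts (hypothetical location)
// Weighted sum of asset prices at each time step.
export function computePortfolioValue(
  prices: number[][],
  weights: number[],
): number[] {
  const steps = prices[0]?.length ?? 0
  const values: number[] = []
  for (let t = 0; t < steps; t++) {
    let total = 0
    for (let j = 0; j < prices.length; j++) {
      total += (weights[j] ?? 0) * prices[j][t]
    }
    values.push(total)
  }
  return values
}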

Default equal weight initialization

To make the experience feel complete right from the start, the frontend automatically runs an Equal Weight allocation as soon as the first simulation finishes. This is done by invoking the same handleAllocate logic, but pre-selecting the "ew" strategy internally that runs only when we have nothing in the portfolio paths array.

useEffect(() => {
  if (!data.results || !data.results.length) return
  if (portfolioPathsRef.current.length === 0) {
    handleAllocate('ew')
  }
}, [data.results])

That way, when the app first renders, you immediately see:

  • On the left: the stochastic price paths generated by the backend
  • On the right: the corresponding simulated portfolio return curve, assuming an equal-weighted allocation across assets

This small addition gives the user an immediate, informative visual - no clicks required.

Allocation selection and performance visualization

From there, the UI provides a simple dropdown for allocation types:

<select
  id="allocation-type"
  value={allocationType}
  onChange={(e) => setAllocationType(e.target.value)}
>
  <option value="ew">Equal Weight</option>
  <option value="mvo">Mean-Variance Optimization</option>
  <option value="hrp">Hierarchical Risk Parity</option>
</select>

Clicking Allocate runs the same handleAllocate routine but with the user’s selected strategy. Each allocation produces:

  • A new line on the portfolio performance chart
  • A mini pie chart summarizing that strategy’s weights

If an allocation type already exists (e.g. EW), its results are simply updated in place rather than duplicated. This keeps the visualization clean and avoids clutter.
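
That update-in-place behavior comes down to a functional state update - a sketch, assuming each portfolioPaths entry is keyed by its strategy label:

setPortfolioPaths((paths) => {
  const entry = {
    label: result.strategy.toUpperCase(),
    values,
    weights: result.weights,
    color: colorFor(result.strategy),
  }
  const idx = paths.findIndex((p) => p.label === entry.label)
  // Replace the existing strategy's path, or append a new one.
  if (idx >= 0) {
    const next = [...paths]
    next[idx] = entry
    return next
  }
  return [...paths, entry]
})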

Unified two-panel layout

The visual layout presents both perspectives - market-level simulations and portfolio-level outcomes - side by side on larger screens, and stacked vertically on smaller ones:

<div className="mt-12 grid w-full grid-cols-1 gap-8 lg:grid-cols-2">
  <div> {/* Simulated Prices */} </div>
  <div> {/* Portfolio Returns and Pies */} </div>
</div>

This grid-based structure makes it easy to compare how random market fluctuations translate into total portfolio performance.

A live experiment in quantitative intuition

By connecting all these elements - sliders, backend simulations, and dynamic allocations - the frontend now serves as an interactive quantitative sandbox.

Users can experiment freely:

  • Increase σ to see diversification effects collapse
  • Boost μ to watch expected returns dominate volatility drag
  • Compare EW, MVO, and HRP to visualize when sophisticated optimizers actually outperform (or underperform) naive diversification

Every change updates the system in real time, turning what was once static math into something tangible, visual, and exploratory.

Up next, we’ll clean up the backend a bit more, optimize performance, and prepare everything for deployment. But at this point, you have a fully functioning, interactive Rust + React application that brings Monte Carlo simulations and portfolio allocation to life.

Backend cleanup

To finalize our full-stack application, we’ll make a few small adjustments to the backend code to ensure everything is clean, efficient, and ready for production deployment. This refactor focuses not on changing what the library does, but on making how it does it clearer, safer, and easier to reason about.

Explicit structure

Let's start by organizing the file into clear sections:

// =======================
// Simulation Functions
// =======================
// ...
// =======================
// Allocation Functions
// =======================

Move the monte_carlo_series, calculate_drift_and_shock and generate_number_series functions under the Simulation Functions comment. Put the remaining functions - json_to_price_matrix, allocate_from_prices, and allocate_from_json - under the Allocation Functions comment.

This isn’t just aesthetic - it creates a mental boundary between two conceptual domains:

  • Simulation: numerical randomness and GBM mechanics
  • Allocation: portfolio math and optimization

Rust doesn’t care about these boundaries, but humans do.
One of the easiest ways to make your codebase scale gracefully is to separate concerns with visual and logical structure.

Safer distribution initialization

Now let's change some unwrapping to safer error handling that will explicitly fail if there's a problem.

Old:

let normal = Normal::new(0.0, 1.0).unwrap();

New:

let normal = Normal::new(0.0, 1.0).expect("Failed to create normal distribution");

In Rust, unwrap() is the “YOLO” of error handling - convenient, but it panics with zero context if something goes wrong. Replacing it with expect() provides a meaningful message, turning a potential runtime crash into a debuggable failure.

This doesn’t affect performance or behavior, but it dramatically improves traceability. If your Monte Carlo simulation ever panics, you’ll know why.

Functional simplicity in GBM

Next, we rewrite the GBM step to emphasize what it represents, rather than the cryptic divide-by-2.0 from before:

let drift = (mu - 0.5 * sigma.powi(2)) * dt;
let shock_val = sigma * shock * dt.sqrt();
(drift + shock_val).exp()

This isolates the stochastic and deterministic components of the process and expresses the mathematics in its canonical form - the exponential of drift plus diffusion. It’s both mathematically faithful and more readable. A good Rust function should communicate what it does; this one tells the story of drift and diffusion in the form the quant industry treats as standard.

Using .last() for safety

We need to update some of our looping constructs to avoid manual indexing in monte_carlo_series.

Old pattern:

for (i, shock) in generated_shocks.iter().enumerate() {
    let previous_value = results[i];
    ...
}

New pattern:

for shock in generated_shocks.iter() {
    let last = *results.last().expect("results is never empty");
    ...
}

The old version used an index lookup into the results vector. That’s fine, but indexing in Rust is bounds-checked - and unnecessarily manual. The new version uses .last() to read the most recent value safely, without juggling indices. We covered this in Chapter 14, in case you want a refresher.
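
Putting the pieces together, the refactored function (a sketch consistent with the changes above) reads:

pub fn monte_carlo_series(
    starting_value: f32,
    mu: f32,
    sigma: f32,
    dt: f32,
    generated_shocks: Vec<f32>,
) -> Vec<f32> {
    let mut results: Vec<f32> = Vec::with_capacity(generated_shocks.len() + 1);
    results.push(starting_value);

    for shock in generated_shocks.iter() {
        // `results` always holds at least the starting value, so `.last()` cannot fail
        let last = *results.last().expect("results is never empty");
        results.push(last * calculate_drift_and_shock(&mu, &sigma, &dt, shock));
    }
    results
}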

Flattening JSON matrices with intent

In our json_to_price_matrix function we want to keep things logically identical but restructure for clarity and error hygiene. Instead of several unstructured unwrap() calls, we now validate that:

  • the array exists,
  • all assets have equal length, and
  • every value is numeric.

And we then flatten with deliberate order:

for t in 0..rows {
    for j in 0..cols {
        let val = prices[j].as_array().unwrap()[t]
            .as_f64()
            .ok_or("Prices must be numeric")?;
        flat.push(val);
    }
}

That nesting order (time outer, asset inner) is important - it matches how nalgebra::DMatrix::from_row_slice expects data: row-major format. These details matter. They’re what make the difference between “the function runs” and “the math is right.”

Your new function should look like this:

/// Converts a JSON object containing `"prices": [[...], [...]]` into a numeric matrix.
pub fn json_to_price_matrix(v: &Value) -> Result<DMatrix<f64>, String> {
    let prices = v
        .get("prices")
        .and_then(|p| p.as_array())
        .ok_or("Missing 'prices' array")?;

    if prices.is_empty() {
        return Err("Prices array is empty".into());
    }

    let rows = prices[0]
        .as_array()
        .ok_or("Each asset must be an array of floats")?
        .len();
    if rows == 0 {
        return Err("Each asset must contain at least one value".into());
    }

    // Validate equal lengths
    for (i, series) in prices.iter().enumerate() {
        let s = series
            .as_array()
            .ok_or(format!("Asset {} must be an array", i))?;
        if s.len() != rows {
            return Err("All asset series must have equal length".into());
        }
    }

    // Flatten into row-major order
    let cols = prices.len();
    let mut flat = Vec::with_capacity(rows * cols);
    for t in 0..rows {
        for j in 0..cols {
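        // Safe to unwrap: each series was validated as an array above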
            let val = prices[j].as_array().unwrap()[t]
                .as_f64()
                .ok_or("Prices must be numeric")?;
            flat.push(val);
        }
    }

    Ok(DMatrix::from_row_slice(rows, cols, &flat))
}

Cleaner pattern matching in allocations

In allocate_from_prices, we improved the strategy selection logic to avoid nested if let constructs.

Old:

match strategy.to_lowercase().as_str() {
    "ew" | "equal" => allocator.ew_allocation(),
    "hrp" => allocator.hrp_allocation(),
    "mvo" | _ => { ... }
}

New:

"mvo" | _ => match mvo_config {
    Some(cfg) => allocator.mvo_allocation_with_config(&cfg),
    None => allocator.mvo_allocation(),
},

As we covered in Chapter 17, Rust encourages pattern matching as control flow, not as error suppression. Here we replaced nested if let Some(cfg) constructs with a direct match block, giving both branches equal visual weight and avoiding unnecessary nesting.

The result reads like a declarative policy: if you gave me a config, I’ll use it; otherwise, I’ll use defaults.

Reduced duplication and defensive bounds

We made more improvements in allocate_from_prices - specifically in how we build the final weights vector - by adding this tiny block:

for (idx, w) in weights_map {
    if idx < cols {
        weights[idx] = w;
    }
}

This guards against the (admittedly rare) case of an allocator returning an index larger than expected. It's defensive programming done right: simple, explicit, cheap.

Here's your final version of allocate_from_prices:

/// Allocates portfolio weights for a given strategy ("ew", "hrp", or "mvo").
pub fn allocate_from_prices(
    price_matrix: DMatrix<f64>,
    strategy: &str,
    mvo_config: Option<MvoConfig>,
) -> Result<Vec<f64>, String> {
    let allocator = PortfolioAllocator::new(price_matrix);

    let weights_map = match strategy.to_lowercase().as_str() {
        "ew" | "equal" => allocator.ew_allocation(),
        "hrp" => allocator.hrp_allocation(),
        "mvo" | _ => match mvo_config {
            Some(cfg) => allocator.mvo_allocation_with_config(&cfg),
            None => allocator.mvo_allocation(),
        },
    };

    // convert HashMap<usize, f64> → ordered Vec<f64>
    let cols = weights_map.len();
    let mut weights = vec![0.0; cols];
    for (idx, w) in weights_map {
        if idx < cols {
            weights[idx] = w;
        }
    }
    Ok(weights)
}
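To exercise it end to end, a call might look like this (the asset values are invented for illustration; equal weighting should return weights summing to 1, as our tests below assert):

use nalgebra::DMatrix;

// 3 time steps (rows) x 2 assets (cols), flattened row-major
let price_matrix = DMatrix::from_row_slice(3, 2, &[
    100.0, 200.0,
    101.0, 199.0,
    102.0, 198.0,
]);
let weights = allocate_from_prices(price_matrix, "ew", None)
    .expect("allocation failed");
assert!((weights.iter().sum::<f64>() - 1.0).abs() < 1e-6);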

More expressive test names and scope

We also updated the test names so that every test now says what it proves:

#[test]
fn it_runs_monte_carlo_simulation() { ... }

#[test]
fn it_allocates_equal_weight() { ... }

Tests are executable documentation. They should read like examples in your book - “Here’s what this function is supposed to do.” When you write tests that read like English, future you will thank you.

We also grouped all tests under #[cfg(test)] - so they compile only when you run cargo test, never into your debug or release binaries. That keeps the production binary smaller and focused.

Here are your tests in their final form:

/// =======================
/// Tests
/// =======================
#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[test]
    fn it_calculates_drift_and_shock() {
        let calc = calculate_drift_and_shock(&0.0, &0.0, &(1.0 / 252.0), &0.0);
        assert_eq!(calc, 1.0);

        let calc = calculate_drift_and_shock(&1.0, &0.0, &(1.0 / 252.0), &0.0);
        assert!(calc > 1.003 && calc < 1.004);
    }

    #[test]
    fn it_generates_random_numbers() {
        let series = generate_number_series(10);
        assert_eq!(series.len(), 10);
    }

    #[test]
    fn it_runs_monte_carlo_simulation() {
        let shocks = generate_number_series(10);
        let mc = monte_carlo_series(50.0, -0.002, 0.015, 1.0 / 252.0, shocks);
        assert_eq!(mc.len(), 11);
    }

    #[test]
    fn it_converts_json_to_matrix() {
        let v = json!({
            "prices": [[100.0, 101.0, 102.0], [200.0, 199.0, 198.0]]
        });
        let m = json_to_price_matrix(&v).unwrap();
        assert_eq!(m.ncols(), 2);
        assert_eq!(m.nrows(), 3);
        assert!((m[(0, 0)] - 100.0).abs() < 1e-8);
    }

    #[test]
    fn it_allocates_equal_weight() {
        let v = json!({
            "prices": [[100.0, 101.0, 102.0], [200.0, 199.0, 198.0]],
            "strategy": "ew"
        });
        let w = allocate_from_json(&v).unwrap();
        assert_eq!(w.len(), 2);
        assert!((w.iter().sum::<f64>() - 1.0).abs() < 1e-6);
    }

    #[test]
    fn it_allocates_mvo() {
        let v = json!({
            "prices": [[100.0, 101.0, 102.0], [90.0, 91.0, 92.0]],
            "strategy": "mvo"
        });
        let w = allocate_from_json(&v).unwrap();
        assert_eq!(w.len(), 2);
    }
}

Consistent Result error strings

Across all helper functions, we should replace bare strings like "Missing prices" with more descriptive, consistent ones:

.ok_or("Missing 'prices' array")?
return Err("Each asset must contain at least one value".into());

This matters for downstream error handling. When your backend sends a JSON response, a uniform tone and structure in your error strings make debugging from the frontend far easier. This exercise is left to the reader, but the principle is clear: consistency breeds clarity. Every .ok_or() becomes a breadcrumb back to what went wrong - clean, idiomatic, and human-readable.
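As a hypothetical illustration of the payoff, a handler could forward these strings straight to the client (the handler shape here is invented, not part of our library):

use serde_json::{json, Value};

fn respond(body: &Value) -> Value {
    match allocate_from_json(body) {
        Ok(weights) => json!({ "weights": weights }),
        Err(msg) => json!({ "error": msg }), // e.g. "Missing 'prices' array"
    }
}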

A comment on readability

At no point did we change any public function signatures. allocate_from_json, allocate_from_prices, json_to_price_matrix, monte_carlo_series, and generate_number_series all behave exactly as before. This is a key principle of refactoring: don’t change what the code does; change how it says it.

But each now reads more like a guided explanation than an implementation detail. This is the hallmark of mature Rust: clarity that costs nothing in correctness.

Refactoring in summary

Refactoring is often mistaken for optimization. It isn’t. It’s a philosophical statement - that our code should reflect our understanding, not obscure it.

When we rewrote this library, we didn’t make it faster. We made it clearer, safer, and closer to the mathematics it embodies.
And in doing so, we demonstrated one of Rust’s quietest powers: the ability to write software that is both precise and beautiful.

“Programs must be written for people to read, and only incidentally for machines to execute.” - Harold Abelson & Gerald Jay Sussman in "Structure and Interpretation of Computer Programs"
