Evidence levels · 1 to 5.
Every intervention and claim on this site carries an evidence weight. The one I use is the Oxford Centre for Evidence-Based Medicine (OCEBM) hierarchy — 1 at the top, 5 at the bottom. It's what clinicians use to weigh studies against each other.
Peakspan itself sits at Level 5. This site is a single-subject case report. That is the lowest rung of the hierarchy. Nothing I publish here constitutes evidence in the sense that higher-level research does. I use this scale to rate the inputs I rely on (guidelines, trials, mechanistic claims), not to elevate the outputs of this project. The outputs stay Level 5 forever — that's structural, not a target to climb.
Two things follow from sitting at Level 5:
- Any lifestyle or supplement protocol I describe is a data point, not a recommendation. If I want to argue that an intervention is generally worth trying, I borrow the argument from Levels 1–3 and cite it inline. My experience adds flavour; it never moves the evidence bar.
- Calibrate trust accordingly. Drug decisions (Ezetimib, B-complex) reference Level 1–2 evidence. The 2025 gut protocol was held together partly by Level 2 (zinc-L-carnosine, curcumin) and partly by Level 5 (L-glutamine, the specific stack order) — which is exactly why I can say the stack worked for me but not claim the stack works.
Cadence · half-yearly, with one annual deep dive.
Too-frequent measurement is noise. Too-infrequent measurement is hope. The compromise I've settled on:
- Annual: full YEARS check-up — complete lab panel, CPET for VO₂max, body-comp, HRV profile, grip, jump, audio/visual, skin scan. This is the one where the whole picture comes together. January each year.
- Half-yearly drops (H1 & H2): two public drops per calendar year. H1 wraps the annual data; H2, published around July, is a mid-year retest on whatever moved, plus any new interventions on a ~6-month clock. Published within 30 days of the relevant draw.
- Continuous: body weight, resting HR, HRV, sleep stages, workout load — wearable-tracked daily, summarised in each drop.
- Ad hoc: when symptoms change (e.g. the Sept 2025 gut retest after the protocol), outside the schedule.
One hard rule: no drop is skipped because the numbers are bad. A missed drop would be the loudest possible signal that the ledger isn't actually open.
Intervention rules.
- 01
Single-variable where possible — and annotated where not.
The gut protocol in 2025 was a stack. I could not tell you which component drove calprotectin from 434 to 9. That's a confound. I say so. Where I can isolate (e.g. Ezetimib starting alone in 2026), I do.
- 02
Minimum 12 weeks before retest.
Most clinical biomarkers take a full cell-turnover cycle to reflect a real change. Retesting sooner usually measures noise, which is part of why the public drop cadence is half-yearly, not quarterly. Exception: acute inflammation or symptomatic problems, where I retest earlier and say so.
- 03
Pre-declared target + stop criterion.
Every intervention gets a specified target (e.g. ApoB ≤70, homocysteine <10) and a stop criterion (no movement after two retests → reassess). No moving goalposts.
- 04
Null results get the same word count as wins.
If an intervention doesn't move a marker, I write about it with the same detail. Publication bias is the most common way n=1 data goes wrong.
- 05
Physician in the loop.
Dr. Alexandru Ardelean (YEARS) is my primary. Prescription-level interventions (e.g. Ezetimib) are his decision, not mine. Lifestyle and OTC supplement changes I own, but I'm not freelancing on pharmacology.
- 06
No new intervention in the month before a YEARS check-up.
Otherwise I pollute the comparison that's supposed to anchor the drop.
Data sources.
Everything below the dashboard tiles comes from one of these sources; if a number has no citation next to it, assume it's from one of them. Raw PDFs available on request: niko@nikohems.de.
Primary clinic · annual
Dr. Alexandru Ardelean, internal medicine. Full annual check-up: CPET, plethysmography, HRV, body-comp, skin scan, audio, vision, grip, jump, cognitive battery, and full labs via Bioscientia.
Half-yearly labs
Standing orders through YEARS. Ingelheim runs the stool / microbiome panels (calprotectin, α1-antitrypsin, SCFAs, diversity). Berlin runs the blood work.
US draws (2024)
Three draws Oct–Dec 2024 while living in California. LC-MS testosterone, 25-OH vitamin D, hs-CRP, full metabolic panel. Useful for the trans-continental baseline bridging June 2024 (aware.app) to Feb 2025 (YEARS).
Early baseline
June 2024 baseline draw, pre-Berkeley. Useful context for the 2024 → 2026 drift, but a different assay family from Bioscientia; treated carefully in direct comparisons.
Wearable · continuous
Resting HR, HRV (rMSSD), sleep stages, training load. Not clinical-grade, but the trend is honest, and the morning HRV reading drives daily training decisions.
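rMSSD itself is simple arithmetic over the beat-to-beat (RR) intervals the wearable records. A minimal sketch (function name and the sample intervals are illustrative, not from my data):

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences between RR intervals (ms).

    The standard short-term HRV metric: larger values generally reflect
    more parasympathetic activity at rest.
    """
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Five consecutive beat-to-beat intervals in milliseconds (made-up example)
print(round(rmssd([812, 845, 790, 830, 805]), 1))  # → 39.8
```

Because it squares the successive differences, a single ectopic beat can dominate the value, which is one reason a morning trend matters more than any single reading.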
Gym & training log
Every session, every set, every RPE, since mid-2025. Provides the lift-side picture the clinic tests can't capture.
What's in the panel — and what isn't.
Tracked half-yearly or annually
- Lipid panel: total, LDL-C, HDL-C, non-HDL-C, triglycerides, ApoB, Lp(a), remnant cholesterol
- Inflammation: hs-CRP, NT-proBNP, ferritin (contextualised against haematology)
- Gut: calprotectin, α1-antitrypsin, sIgA, microbiome diversity (annual), histamine
- Metabolic: fasting glucose, HbA1c, fasting insulin, HOMA-IR, OGTT (annual)
- Thyroid & endocrine: TSH, fT3, fT4, TPO antibodies (TPO-Ab), Tg antibodies (Tg-Ab), IGF-1, cortisol, DHEA-S, total & free testosterone, SHBG, oestradiol
- One-carbon + vitamins: homocysteine, active B12 (holotranscobalamin), folate, 25-OH vitamin D, magnesium, zinc, selenium, Omega-3 Index, Vitamin B6 (PLP)
- Renal: creatinine, cystatin C, eGFR, ACR
- Fitness: CPET (VO₂max, VT1/VT2), grip R/L, countermovement jump, body composition (BIA + skinfolds), spirometry / plethysmography
- Neurocognitive battery (YEARS annual)
- HRV profile (clinic + wearable)
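A few of the panel entries are derived rather than measured directly. The standard formulas, as a sketch (values are illustrative; units assumed mg/dL for lipids and glucose, µU/mL for insulin):

```python
def non_hdl(total_c, hdl_c):
    # Non-HDL-C: all atherogenic cholesterol in one number (mg/dL)
    return total_c - hdl_c

def remnant_cholesterol(total_c, ldl_c, hdl_c):
    # Remnant cholesterol: what's left after subtracting LDL-C and HDL-C (mg/dL)
    return total_c - ldl_c - hdl_c

def homa_ir(fasting_glucose_mgdl, fasting_insulin_uu_ml):
    # HOMA-IR = fasting glucose (mg/dL) x fasting insulin (µU/mL) / 405
    return fasting_glucose_mgdl * fasting_insulin_uu_ml / 405

print(non_hdl(180, 55))                   # → 125
print(remnant_cholesterol(180, 105, 55))  # → 20
print(round(homa_ir(90, 6.0), 2))         # → 1.33
```

The 405 divisor in HOMA-IR assumes US units; labs reporting glucose in mmol/L use 22.5 instead.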
Not (yet) tracked
- CGM · planned for a future drop, not standing
- Epigenetic age clocks · intentionally excluded until between-lab reproducibility improves
- Continuous BP · on the backlog
- Polysomnography · the Eight Sleep data is the current proxy
Honest limitations.
A public-facing health experiment carries some built-in biases. The ones I can name:
- Publication bias, in reverse. When a wellness space full of silver-bullet claims meets a single person with recovery data, there's a pull to over-dramatise the wins. I try to weigh wins and non-movers equally. Every drop has a "what didn't move" section.
- Observer effects. Knowing a retest is coming changes behaviour. I try to minimise it by keeping the retest window short and the intervention window long.
- Single-subject. Everything in this experiment is n=1. A number moving for me says nothing about what will happen for you.
- Assay drift. Different labs, different reference ranges, different analytic methods — especially between US (Quest, mass spec) and Germany (Bioscientia). I note the lab alongside the value; direct comparisons within one lab only.
- Confounds. My diet changed. My sleep changed. My stress changed. My training changed. Any of those can move most of the markers here. Ascribing a change to a single pill or protocol is frequently wrong and I try to say so.