
Supervised Machine Learning: Regression and Classification
Train/Validation/Test and Cross-Validation Strategies


Design robust evaluation schemes and prevent leakage with correct resampling and learning curves.


Time Series Split Strategies — Because Time Hates Shuffling

"If your cross‑validation shuffles time like a deck of cards, your model will be auditioning for a soap opera called ‘Future Leaks’."

You already learned about grouped/blocked CV and stratified K‑fold — excellent. Now let’s take that knowledge and make time behave. Time series data is a different beast: order matters, leakage is subtle but deadly, and seasonality & concept drift love to sabotage your lovely test score. This section assumes you did good EDA work (you looked for distribution shifts, seasonality, and gaps — right?), and builds on grouped/blocked ideas for time‑aware scenarios.


Why ordinary K‑fold fails for time

  • Standard k‑fold assigns rows to folds without regard to order (and often shuffles them outright). For time series, that’s basically training on tomorrow and testing on yesterday — illegal in ML court.
  • Leakage: future information seeps into training; model overfits transient patterns; evaluation is optimistic.

So instead we use time‑aware splits that respect chronology. Think of splits as controlled experiments where the future is hidden from the past.


Common time series splitting strategies (and when to use them)

1) Rolling‑Origin (Expanding Window) — the classic backtest

What it does: Start with a training window [t0, t1], test on (t1, t2]; then expand the training window to include the previous test and test on the next slice. Repeat.

  • Strengths: Mimics real forecasting (you retrain as new data arrives). Great for detecting overfitting across time.
  • Weaknesses: Older data accumulates (may be stale). Computation increases with retraining.

Example (visual):

Train: [T0 ---------- T1]
Test: (T1 ----- T2]

Then

Train: [T0 ---------------- T2]
Test: (T2 ---- T3]

Use if: you believe more historical data is useful and want to simulate production retraining.
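The expanding-window backtest above can be sketched as a small index generator — a minimal sketch, where the function name and the sizes (100 samples, 4 rounds of 10) are illustrative assumptions:

```python
import numpy as np

def expanding_window_splits(n_samples, n_splits, test_size):
    """Yield (train_idx, test_idx) pairs; each training window
    expands to cover everything before its test slice."""
    for i in range(n_splits):
        test_start = n_samples - (n_splits - i) * test_size
        yield (np.arange(test_start),
               np.arange(test_start, test_start + test_size))

splits = list(expanding_window_splits(n_samples=100, n_splits=4, test_size=10))
# first round trains on 60 points, last on 90 -- the window grows
```

Note that every training window ends exactly where its test slice begins: chronology is preserved by construction, with no shuffling anywhere.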


2) Sliding Window (Rolling Window) — keep the memory short

What it does: Keep training window size fixed; slide forward: train on [t0, t1], test on (t1, t2]; then train on [t0+Δ, t1+Δ], test on (t1+Δ, t2+Δ].

  • Strengths: Handles nonstationarity by forgetting ancient data.
  • Weaknesses: Risk of discarding important long‑term patterns.

Use if: concept drift is expected and older observations are less relevant.
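A sliding window differs from the expanding version by one detail: the training start moves too. A minimal sketch (the window sizes and step are illustrative):

```python
import numpy as np

def sliding_window_splits(n_samples, train_size, test_size, step):
    """Fixed-size training window that slides forward in time,
    forgetting observations older than `train_size` steps."""
    start = 0
    while start + train_size + test_size <= n_samples:
        yield (np.arange(start, start + train_size),
               np.arange(start + train_size, start + train_size + test_size))
        start += step

splits = list(sliding_window_splits(100, train_size=50, test_size=10, step=10))
# every fold trains on exactly 50 points; stale data falls out of the window
```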


3) Blocked CV & Purging — when labels leak across windows

We already saw blocked CV for grouped data. For time series, you often need purging and embargo to prevent leakage from overlapping observation/label periods.

  • Purging: Remove from training any rows whose feature/label window overlaps the test period. Common in finance, where a trade's features and label span a stretch of time that can straddle the split.
  • Embargo: Leave a buffer of time between the test window and nearby training data, so that serially correlated neighbors can't leak information across the boundary.

Quote for memory:

"Purging is Marie Kondo for leaked features — throw away anything that sparks future knowledge."

Use if: features are constructed from windows (lookbacks, rolling aggregates) or when events have influence that straddles splits.


4) TimeSeriesSplit (sklearn) — basic implementation

scikit‑learn provides TimeSeriesSplit, which implements expanding‑window CV. It's a good starting point — its gap parameter even gives you a simple embargo — but it doesn't support purging or seasonality‑aware folds.

Example (scikit‑learn style):

from sklearn.model_selection import TimeSeriesSplit

# 5 expanding-window folds; set gap > 0 for an embargo-like buffer
tscv = TimeSeriesSplit(n_splits=5, gap=0)
for train_idx, test_idx in tscv.split(X):
    X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
    y_train, y_test = y.iloc[train_idx], y.iloc[test_idx]
    # fit on the past slice, evaluate on the future slice

Use sklearn's if your problem is straightforward and features are pointwise (no overlap). Otherwise customize.


5) Seasonality‑aware splits — don’t mix seasons like soup

If your target has strong seasons (e.g., retail weekly cycle, yearly patterns), ensure test folds include full seasonal cycles. For instance, test on whole weeks/months rather than random days.

Strategy: Align folds on natural periods (weeks, quarters), or use stratified time windows so each fold sees a similar mix of seasons.

Why: You avoid misleadingly optimistic performance where the model trained on winter suddenly predicts summer without learning seasonality.
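One way to align folds on natural periods is to cut on calendar boundaries rather than row counts — a sketch using pandas, where the dates and the 18/3-month split are hypothetical:

```python
import pandas as pd

# Hypothetical daily series spanning two full years; fold boundaries
# land on whole months so each test fold contains complete cycles.
dates = pd.date_range("2023-01-01", "2024-12-31", freq="D")
months = dates.to_period("M")
unique_months = months.unique()

# Train on the first 18 whole months, test on the next 3.
train_mask = months.isin(unique_months[:18])
test_mask = months.isin(unique_months[18:21])
```

Because the masks are defined on periods rather than row positions, no fold ever starts or ends mid-month, and train/test never overlap.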


6) Panel/Multiple Series (Grouped Time Series) — combine with grouped/blocked CV

If you have many time series (customers, sensors), treat it like grouped CV but time‑aware: you might hold out certain entities entirely for testing, or you do time splits per group then aggregate.

Options:

  • Holdout entities (test on new users/products) — checks generalization across entities.
  • Time splits within entities — checks temporal generalization.
  • Mix: Test on new time ranges for held‑out groups.

Use when: your model must generalize across both time and cross‑sectional variation.
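The "mix" option above can be sketched with two boolean filters on a toy panel — the sensor names, sizes, and cutoff below are all made up for illustration:

```python
import numpy as np
import pandas as pd

# Toy panel: 4 sensors ("a".."d"), 50 time steps each
df = pd.DataFrame({
    "sensor": np.repeat(list("abcd"), 50),
    "t": np.tile(np.arange(50), 4),
})

holdout = {"d"}   # entities never seen in training
cutoff = 40       # time boundary: train only on t < 40

# Train on the past of known entities; test on the
# future of a held-out entity (the "mix" option).
train = df[~df["sensor"].isin(holdout) & (df["t"] < cutoff)]
test = df[df["sensor"].isin(holdout) & (df["t"] >= cutoff)]
```

This single split stresses both axes at once: the test entity is new and the test period is in the future.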


A compact decision table

Scenario                              Recommended split            Notes
Forecasting a single time series      Rolling‑origin / expanding   Simulates production retraining
Nonstationary series                  Sliding window               Forgets stale data
Overlapping features / leakage risk   Purged / embargoed CV        Removes overlap zones
Strong seasonality                    Seasonally aligned folds     Keeps cycles intact
Many series (panel)                   Grouped time + time split    Combines grouped/blocked CV ideas

Practical checklist — Don’t blow the evaluation

  1. During EDA, check autocorrelation and label‑feature windows — that tells you if purging is needed.
  2. Choose fold lengths that preserve seasonality and realistic training sizes.
  3. Implement embargo/purging when features aggregate over future or overlapping windows.
  4. Report results over multiple backtests (different start dates) to assess variance.
  5. Use nested CV or sequential hyperparameter tuning: hyperparameters must be tuned only on past data relative to each test fold.

Mini implementation recipes

  • Quick expanding window: sklearn TimeSeriesSplit — fine for pointwise features.
  • If features overlap labels, implement purging: when creating train indices, drop rows whose observation window overlaps test label range.
  • For embargo: after each test slice, remove an embargo interval (e.g., 1% of dataset time) from training indices.

Pseudocode for purging + embargo (conceptual):

for each test_window in walk_forward_test_windows:
    train = all rows with timestamp < test_window.start          # past only
    drop train rows whose feature/label window extends
        into test_window                                          # purge
    drop train rows within embargo_length of test_window.start    # embargo
    fit on train, evaluate on test_window
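The same recipe as a runnable sketch — the array names, window boundaries, and the fixed per-row `lookahead` are illustrative assumptions, not a library API:

```python
import numpy as np

def purged_walk_forward(times, test_windows, lookahead, embargo):
    """Walk-forward splits with purging and an embargo buffer.
    Each row's features/labels use information up to
    times[i] + lookahead; embargo pads the train/test boundary."""
    for test_start, test_end in test_windows:
        in_test = (times >= test_start) & (times < test_end)
        in_train = times < test_start                    # past only
        in_train &= (times + lookahead) < test_start     # purge overlap
        in_train &= times < test_start - embargo         # embargo buffer
        yield np.where(in_train)[0], np.where(in_test)[0]

times = np.arange(100.0)
splits = list(purged_walk_forward(times, [(60, 70), (70, 80)],
                                  lookahead=3.0, embargo=2.0))
```

With `lookahead=3`, any row whose information window reaches into the test slice is purged, so the last surviving training row sits a few steps before each test window — exactly the gap the quote about Marie Kondo is asking for.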

Gotchas & tricky questions

  • What if timestamps are irregular? Resample or align to meaningful periods before splitting.
  • What about label leakage through exogenous seasonal indicators? If you engineer a future‑aware feature (e.g., planned promotion), ensure it’s available at prediction time.
  • How many splits? Enough to estimate variance but not so many that test windows become tiny and unrealistic.

Ask yourself: "Would this setup mimic how my model will be used in production?" If the answer is no, redo the split.


Final thoughts (synthesis)

Time‑aware CV is less a single tool and more a philosophy: respect chronology, be paranoid about leakage, and align validation with production use. Build on grouped/blocked ideas when multiple entities exist, and bring purging/embargo to the party when features straddle time. Use rolling or sliding windows depending on whether memory helps or hurts. And always, always let EDA guide your split decisions — distribution shifts, seasonality, and autocorrelation are your north stars.

"Good backtests are honest; they don’t flatter the model. If your CV makes the model look like a hero, check for leaks — the villain is probably laundered information."


Key takeaways:

  • Never randomly shuffle time. Ever.
  • Choose expanding vs sliding windows based on stationarity and production retraining strategy.
  • Use purging and embargo to avoid subtle label leakage.
  • Align folds with seasons, and merge grouped CV when multiple series exist.

Practical tip: Start simple with sklearn's TimeSeriesSplit, then add purging/embargo and seasonal alignment as your EDA reveals risks.
