Different Ways to Normalize Time Series
How Scaling Choices Shape Your Models
Time series data has one job: describe how something changes over time. But raw values rarely play nice. Units vary. Magnitudes drift. Outliers throw elbows. Before modeling or comparing signals, you usually need to normalize the data. The trick is choosing the right method.
This guide walks through the most common ways to normalize time series, explains when each one shines, and gives practical Python examples you can drop straight into your workflow.
Note: Unlike previous articles, I’ve chosen to write this one in a format that goes straight to the point. I will show pros and cons directly with Python code; it would take ages if every method had its history explained.
However, I am starting a new series of articles on Quantitative Research, where I will present these concepts in more depth. Stay tuned!
Why Normalization Matters
Normalization is more than cosmetics. It affects:
Model stability
Training speed
Similarity comparisons
Anomaly detection sensitivity
Feature importance
Interpretability
For time series, the stakes are even higher because the order of data points matters. A poorly chosen normalization can flatten a trend, exaggerate noise, erase seasonality, or distort what your model believes is important.
Let’s walk through the main options.
Min–Max Scaling - A simple scaling that preserves shape
Min–max scaling transforms your time series so its values sit between 0 and 1 (or any range you choose).
Formula:
x_scaled = (x - min) / (max - min)
Best for
Models that assume bounded inputs
Visualization
Comparing signals with similar ranges
Neural networks
Weaknesses
Sensitive to outliers
Future values outside the observed range break the scale
Python Example
import numpy as np
ts = np.array([12, 15, 14, 20, 18, 25])
# Rescale to [0, 1] using the observed minimum and maximum
min_val = ts.min()
max_val = ts.max()
scaled = (ts - min_val) / (max_val - min_val)
print(scaled)
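A quick illustration of the out-of-range weakness (the split point here is arbitrary): a value that arrives after min and max were computed can land outside [0, 1].
import numpy as np
history = np.array([12, 15, 14, 20, 18])  # data available when min/max were computed
min_val, max_val = history.min(), history.max()
new_value = 25  # arrives later, above the observed maximum
scaled = (new_value - min_val) / (max_val - min_val)
print(scaled)  # 1.625 -> outside [0, 1]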
Z-Score Standardization - Centering the data so variation matters more than magnitude
Z-score standardization shifts the series to mean 0 and standard deviation 1.
Formula:
x_scaled = (x - mean) / std
Best for
Most statistical models
Anywhere you want variation to matter
Machine learning pipelines
Features with different units
Weaknesses
Outliers distort mean and standard deviation
If the distribution changes over time, the scaling must be updated
Python Example
import numpy as np
ts = np.array([12, 15, 14, 20, 18, 25])
# Center on the mean and scale by the standard deviation
mean = ts.mean()
std = ts.std()
z = (ts - mean) / std
print(z)
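One caveat for machine learning pipelines: fit the statistics on the training split only and reuse them on later data, otherwise the test set leaks into the scaling. Here is a minimal sketch with scikit-learn’s StandardScaler (the 4/2 split is arbitrary):
import numpy as np
from sklearn.preprocessing import StandardScaler
ts = np.array([12, 15, 14, 20, 18, 25]).reshape(-1, 1)
train, test = ts[:4], ts[4:]
scaler = StandardScaler()
scaler.fit(train)  # statistics come from the training split only
print(scaler.transform(test).flatten())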
Robust Scaling - A tougher version of Z-score that ignores outliers
Robust scaling uses the median and interquartile range instead of mean and standard deviation.
Formula:
x_scaled = (x - median) / IQR
Best for
Data with outliers
Finance series with spikes
Sensor readings that sometimes fail
Real-world messy data
Weaknesses
If the distribution is clean, it can under-represent real variation
Python Example using scikit-learn
import numpy as np
from sklearn.preprocessing import RobustScaler
ts = np.array([12, 15, 14, 20, 18, 25]).reshape(-1, 1)
# RobustScaler centers on the median and scales by the IQR
scaler = RobustScaler()
scaled = scaler.fit_transform(ts)
print(scaled.flatten())
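To make the outlier point concrete, here is a small comparison (the spike of 500 is made up for illustration): a single spike inflates the standard deviation and squashes every z-score together, while the median and IQR barely move.
import numpy as np
ts = np.array([12, 15, 14, 20, 18, 500])  # one artificial spike
z = (ts - ts.mean()) / ts.std()
robust = (ts - np.median(ts)) / (np.percentile(ts, 75) - np.percentile(ts, 25))
print(z[:5])       # normal points are squashed close together
print(robust[:5])  # normal points keep a readable spread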
Log Scaling - Useful when big values dominate the story
Taking the logarithm compresses large numbers and spreads out small ones.
Formula:
x_scaled = log(x + c)
c is often set to 1 to avoid log(0).
Best for
Exponential growth
Financial time series
Data with long-tail distributions
Stabilizing variance
Weaknesses
Can’t handle negative values without shifts
Interpretation becomes less intuitive
Python Example
import numpy as np
ts = np.array([5, 8, 12, 200, 350, 800])
# The +1 shift guards against log(0)
log_scaled = np.log(ts + 1)
print(log_scaled)
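If your series contains negative values, a plain log will fail. One common workaround, sketched here, is a signed log: keep the sign and compress the magnitude.
import numpy as np
ts = np.array([-120, -5, 0, 8, 300])
# sign(x) * log(1 + |x|) handles negatives and zero
signed_log = np.sign(ts) * np.log1p(np.abs(ts))
print(signed_log)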
Rolling Normalization - Dynamic scaling that adapts over time
Instead of scaling with global statistics, you normalize within a moving window.
Example:
Use the last 30 days to compute mean and standard deviation
Scale today’s value relative to that rolling window
Best for
Non-stationary time series
Drift detection
Online learning
Signals where “normal” changes over time
Weaknesses
Computationally heavier
Results differ based on window size
Python Example
import pandas as pd
ts = pd.Series([12, 14, 16, 30, 28, 26, 40, 42])
# Each point is scaled against its own 3-step window;
# the first window-1 values are NaN until the window fills
rolling_mean = ts.rolling(window=3).mean()
rolling_std = ts.rolling(window=3).std()
z_rolling = (ts - rolling_mean) / rolling_std
print(z_rolling)
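One caveat: the window above includes the current point in its own statistics. For online use you usually want each value scaled against the window that ended just before it. A sketch of that variant (same toy window of 3; in practice something like the last 30 days):
import pandas as pd
ts = pd.Series([12, 14, 16, 30, 28, 26, 40, 42])
# shift(1) excludes the current point from its own window
rolling_mean = ts.rolling(window=3).mean().shift(1)
rolling_std = ts.rolling(window=3).std().shift(1)
z_rolling = (ts - rolling_mean) / rolling_std
print(z_rolling)  # the first values are NaN until the shifted window fills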
Max-Abs Scaling - Scaling for signals that cross zero
Max-Abs scaling keeps the sign and scales values by the largest absolute value.
Formula:
x_scaled = x / max(abs(x))
Best for
Sparse signals
Waveforms
Signed sensor data
Machine learning models expecting inputs in [-1, 1]
Weaknesses
Sensitive to a single large value
Python Example
import numpy as np
ts = np.array([-4, -2, 0, 3, 5])
# Divide by the largest absolute value; signs are preserved
scaled = ts / np.max(np.abs(ts))
print(scaled)
Quantile or Rank Transformation - Turning arbitrary shapes into smooth, uniform distributions
This method transforms each value according to its percentile.
Best for
Strongly skewed data
When you want identical marginal distributions across features
Preparing inputs for models sensitive to non-normal shapes
Weaknesses
Destroys distance relationships
Flattens patterns if used carelessly
Python Example
import numpy as np
from sklearn.preprocessing import QuantileTransformer
ts = np.array([2, 2, 3, 10, 50, 200]).reshape(-1, 1)
# n_quantiles must not exceed the number of samples
qt = QuantileTransformer(n_quantiles=len(ts), output_distribution='uniform')
scaled = qt.fit_transform(ts)
print(scaled.flatten())
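The rank flavour from the heading needs no fitted quantile grid; one simple way is the percentile rank built into pandas:
import pandas as pd
ts = pd.Series([2, 2, 3, 10, 50, 200])
# pct=True maps ranks into (0, 1]; ties share the average rank
ranked = ts.rank(pct=True)
print(ranked.values)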
Seasonal Normalization - Let each season define its own baseline
If your data has hourly, daily, weekly, or annual patterns, you can normalize separately for each period. Example: normalize all Monday values using Monday statistics.
Best for
Strong seasonality
Demand forecasting
Energy load prediction
Website traffic patterns
Weaknesses
Assumes seasonal patterns are stable
Needs enough data per season
Python Example
import pandas as pd
df = pd.DataFrame({
    "value": [20, 22, 25, 30, 18, 40, 42],
    "day": ["Mon", "Tue", "Wed", "Thu", "Fri", "Mon", "Tue"]
})
# Normalize each value against the statistics of its own weekday;
# days with a single observation yield NaN because std is undefined
seasonal = df.groupby("day").transform(
    lambda x: (x - x.mean()) / x.std()
)
print(seasonal)
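To normalize new observations as they arrive, store the per-season statistics from history and reuse them, as in this sketch (the incoming Monday value of 35 is made up):
import pandas as pd
df = pd.DataFrame({
    "value": [20, 22, 25, 30, 18, 40, 42],
    "day": ["Mon", "Tue", "Wed", "Thu", "Fri", "Mon", "Tue"]
})
# Per-day baselines computed once on history
stats = df.groupby("day")["value"].agg(["mean", "std"])
new_value, new_day = 35, "Mon"
z = (new_value - stats.loc[new_day, "mean"]) / stats.loc[new_day, "std"]
print(z)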
Normalization is a small step that changes everything downstream. The wrong method hides patterns. The right one reveals them. The best normalization is the one that respects the story your data is trying to tell.
Do you want to master Deep Learning techniques tailored for time series, trading, and market analysis🔥? My book breaks it all down, from basic machine learning to complex multi-period LSTM forecasting, while covering concepts such as fractional differentiation and forecasting thresholds. Get your copy here 📖!