Why most strategy backtests are wrong — and how to avoid the same mistakes.
Backtesting is one of the most powerful tools available to traders. With the right data and a well-constructed strategy, it is possible to evaluate trading ideas across thousands of historical market conditions in a matter of seconds. However, many backtests are unintentionally flawed. This guide covers the eight fundamental rules that separate honest, realistic backtests from misleading ones.
Platforms such as TradingView make backtesting extremely accessible. With Pine Script strategies, traders can simulate entries, exits, and position management on historical data to determine whether a trading idea has the potential to be a profitable strategy.
However, there is a major problem. Many backtests are unintentionally flawed. A strategy might show exceptional performance historically — high win rates, low drawdowns, and steady equity growth — yet fail almost immediately when traded live.
This disconnect usually occurs because the strategy violates one or more fundamental rules of honest and realistic backtesting.
What is honest (realistic) backtesting?
Honest backtesting does not attempt to "prove" a strategy will work in the future. Instead, it answers a narrower and more useful question: given a specific set of rules, did this idea show a positive expectancy on a meaningful sample of historical data under execution assumptions that resemble reality?
A good backtest is a filter. It helps you reject fragile ideas early, identify the market conditions a system needs, and estimate what the strategy might feel like — trade frequency, drawdown shape, losing streaks, and so on.
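The "positive expectancy" question can be made concrete. A minimal sketch in illustrative Python (not part of any TradingView workflow, and with a hypothetical trade list):

```python
def expectancy(trades):
    """Average result per trade: the number a backtest is really estimating."""
    wins = [t for t in trades if t > 0]
    losses = [t for t in trades if t <= 0]
    win_rate = len(wins) / len(trades)
    avg_win = sum(wins) / len(wins) if wins else 0.0
    avg_loss = sum(losses) / len(losses) if losses else 0.0
    return win_rate * avg_win + (1 - win_rate) * avg_loss

# Hypothetical trade results in account currency
print(expectancy([120, -80, 95, -60, 110, -75, 130, -90]))  # 18.75
```

A positive number here says only that this sample, under these assumptions, made money on average; the rest of the rules below determine whether that number can be trusted.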
Why TradingView is unique (and why it can be tricky)
TradingView is unique because it combines three backtesting tools in one place: Pine Script strategies, the Strategy Tester, and Bar Replay.
That power comes with gotchas. Different symbols and brokers can have different data quality, sessions, spreads, and execution characteristics. If you want a concrete example of how broker constraints distort otherwise clean-looking research, the How I Build a TradingView Strategy That Matches My Broker's Constraints guide is a useful companion. Pine Script also has options that change how and when code executes (historical vs realtime, bar close vs intrabar), and community scripts may use shortcuts that inflate results.
Flawed backtests tend to give a strategy unfair advantages: access to future data, perfect execution without slippage, indicators that repaint past signals, multiple entries triggered from a single event, and position sizing that assumes unrealistic risk tolerance. When these factors are present, the strategy may look extremely profitable in historical testing yet behave completely differently in real market conditions.
One of the most serious mistakes that can occur in Pine Script strategies is the accidental use of future data. This problem most often appears when requesting higher timeframe data using the request.security() function.
In TradingView, the lookahead parameter controls whether the script can access the final value of a higher timeframe bar before it has actually closed.
//@version=6
// This gives the strategy knowledge of the future!
htfClose = request.security(syminfo.tickerid, "60", close, lookahead = barmerge.lookahead_on)
When lookahead_on is enabled, the strategy can see the completed value of a higher timeframe candle even though that candle has not finished forming yet in real market conditions. This effectively gives the strategy knowledge of the future.
Imagine a 1-minute strategy referencing hourly data. If lookahead is enabled, the strategy might enter trades knowing exactly how the hourly candle will close — information that is impossible to know in real trading. This produces signals that appear extremely accurate in backtests but cannot exist in live markets.
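The effect is easy to demonstrate outside Pine. In this hypothetical Python sketch, a signal that peeks at the next bar's close scores perfectly on a random walk that contains no edge at all, while an otherwise identical signal restricted to closed bars does no better than a coin flip:

```python
import random

random.seed(42)
# A hypothetical random walk: by construction there is no edge to find
closes = [100.0]
for _ in range(500):
    closes.append(closes[-1] + random.gauss(0, 1))

def accuracy(signal):
    """Fraction of bars where the signal called the next bar's direction."""
    hits = total = 0
    for i in range(1, len(closes) - 1):
        direction = closes[i + 1] > closes[i]
        hits += signal(i) == direction
        total += 1
    return hits / total

def peeking(i):
    return closes[i + 1] > closes[i]   # reads the future bar - impossible live

def causal(i):
    return closes[i] > closes[i - 1]   # reads only bars that have closed

print(accuracy(peeking))  # 1.0 - a perfect "edge" that is pure data leakage
print(accuracy(causal))   # roughly 0.5, as a random walk deserves
```

Any backtest whose signals behave like `peeking` will look brilliant historically for exactly this reason.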
The correct approach is always to disable lookahead:
//@version=6
htfClose = request.security(syminfo.tickerid, "60", close, lookahead = barmerge.lookahead_off)
This ensures that higher timeframe values only update after the candle has fully closed, replicating real-world conditions accurately. One way to confirm whether lookahead_on is enabled is to use the bar replay feature to see if signals get repainted after the bar has closed.
Markets are dynamic environments where prices fluctuate constantly during the formation of each and every candle. Entry conditions that appear mid-bar may disappear before the candle closes. If a strategy executes trades during this formation period, it may act on signals that never truly existed in the final candle data.
There are two common ways to ensure that a strategy only acts on confirmed bar data: gate the entry logic with barstate.isconfirmed, or keep calc_on_every_tick = false in the strategy() declaration so that realtime orders are only evaluated at bar close. The first approach looks like this:
//@version=6
strategy("Confirmed Bars Only")
rsi = ta.rsi(close, 14)
longCondition = rsi > 70
if barstate.isconfirmed and longCondition
    strategy.entry("Long", strategy.long)
Using barstate.isconfirmed ensures that the script only evaluates the longCondition value after the candle has fully closed. Without this safeguard, strategies may behave very differently in live markets compared to backtests because intrabar fluctuations are not replicated perfectly in historical testing.
Pyramiding refers to opening multiple positions in the same direction. In Pine Script, the pyramiding argument of the strategy() declaration defaults to 0, but the limit can be raised in code or in the strategy's Properties panel, so it is worth setting it explicitly.
If pyramiding is allowed, a strategy might open multiple trades during a single trend. For example, a breakout condition might remain true for several candles, and without restrictions the strategy could enter a new long position on each of them. This multiplies profits artificially and produces unrealistic equity curves.
//@version=6
strategy(
  "My Strategy",
  pyramiding = 0)
Most real-world traders operate with a single position at a time unless they are intentionally scaling into trades.
Many strategies are backtested with zero transaction costs. This creates the illusion of perfect execution. In the real world, every trade incurs costs such as broker commissions, swaps, spreads, and slippage — particularly during volatile conditions.
Even small trading costs can dramatically affect strategy profitability.
//@version=6
strategy(
  "My Strategy",
  commission_type = strategy.commission.percent,
  commission_value = 0.01,
  slippage = 1)  // slippage is measured in ticks
Consider a strategy that targets small profits per trade, such as a scalping system. If each trade aims to capture only a few points, trading costs can quickly eliminate any statistical edge, so it is imperative to factor your expected costs into the strategy before trusting its results. This is also why Strategy Order Types Explained matters: entry method and execution friction are tightly linked.
If you are a swing trader and are holding positions for many days, it's absolutely essential that you understand the cost to hold positions each day and over the weekend. It might make the difference between a profitable and unprofitable trading strategy. Additionally, not every broker charges the same amount!
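A rough cost model makes the point. The Python below is purely illustrative and every figure in it (point value, commission, slippage, swap) is invented for the example:

```python
def net_profit(gross_points, point_value, commission, slippage_points,
               swap_per_day, days_held):
    """Gross trade result minus round-trip frictions (all inputs hypothetical)."""
    slippage_cost = slippage_points * point_value
    holding_cost = swap_per_day * days_held
    return gross_points * point_value - commission - slippage_cost - holding_cost

# A scalp capturing 5 points on a $1-per-point position, with $2 commission
# and 1 point of slippage: more than half of the gross edge is gone.
print(round(net_profit(5, 1.0, 2.0, 1, 0.0, 0), 2))   # 2.0
# The same gross move held for 4 days with a $0.80/day swap turns negative.
print(round(net_profit(5, 1.0, 2.0, 1, 0.8, 4), 2))   # -1.2
```

Running your own broker's numbers through a model like this before trusting a backtest is cheap insurance.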
Another common issue occurs when strategies repeatedly trigger entries from a single signal. If a breakout condition remains true for multiple candles, the strategy may enter again on each bar.
//@version=6
// longCondition is the strategy's usual entry signal
if longCondition and strategy.opentrades == 0
    strategy.entry("Long", strategy.long)
This simple safeguard ensures that only one position is open at any time and prevents the strategy from artificially amplifying results through repeated entries.
Repainting indicators are tools that modify historical signals after new data becomes available. Examples include ZigZag indicators, some divergence tools, and certain pivot calculations.
These indicators may appear extremely accurate historically because they adjust past signals using information that was not available at the time.
//@version=6
pivotHigh = ta.pivothigh(high, 3, 3)
// pivotHigh only becomes non-na three bars after the pivot forms,
// so once it is confirmed the value can no longer repaint
pivotConfirmed = not na(pivotHigh)
Ensuring signals only trigger after confirmation prevents strategies from relying on signals that would not have existed in real time.
Position sizing is a critical yet often overlooked element of backtesting. Many beginner strategies allocate the entire account balance to every trade. While this may produce impressive equity curves, it ignores the principles of risk management used by professional traders.
Most professional systems risk a fixed portion of capital per trade, often between 0.5% and 2%. For a deeper treatment of fixed-risk sizing, ATR-based stops, and expectancy, see Risk Management & Position Sizing and ATR Position Sizing.
//@version=6
riskAmount   = 200                              // fixed amount risked per trade
entryPrice   = close
stopPrice    = close - 2 * ta.atr(14)           // example stop: 2 x ATR below entry
stopDistance = math.abs(entryPrice - stopPrice)
positionSize = riskAmount / stopDistance
This ensures that each trade risks a consistent and controlled amount of capital, allowing the strategy to survive losing streaks and maintain long-term stability.
When these eight principles are applied together, they dramatically improve the reliability of backtesting results. Strategies become more conservative but far more realistic. Instead of producing exaggerated performance metrics, they begin to reflect behaviour that could realistically occur in live trading.
Ignoring these rules can easily produce strategies that appear highly profitable but collapse immediately when deployed in real markets.
Backtesting should be treated as a scientific process rather than a simple experiment. Reliable strategies require careful attention to data integrity, execution assumptions, and risk management.
By following the eight rules outlined in this article, you ensure that your TradingView strategies reflect conditions you could actually trade, rather than artefacts of the testing environment.
These principles form the foundation of professional strategy development and provide confidence that your backtests reflect reality rather than an illusion of profitability. Once those structural basics are in place, the next layer is regime testing in Why Backtesting Through Major Market Events Matters.
The seven rules above address execution integrity. Rule 8 addresses something equally dangerous: curve-fitting.
A strategy developed and optimised on a single data set will tend to overfit that data. The parameters are not capturing a genuine edge — they are capturing the noise in that specific history. The result looks impressive on paper and collapses the moment it encounters new data.
The solution is out-of-sample testing. Divide your historical data into two segments: an in-sample period used to build and optimise the strategy, and an out-of-sample period that is touched only once, for validation.
If the strategy performs reasonably well on both periods, that is meaningful. If it excels on in-sample data but fails on out-of-sample data, the strategy is overfit regardless of how clean the equity curve looks.
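As a sketch, the split can be as simple as cutting the trade history at a fixed fraction. The Python below is illustrative; the returns and the 70/30 split are hypothetical choices:

```python
def split_in_out(returns, in_fraction=0.7):
    """Cut a trade history into an in-sample and an out-of-sample segment."""
    cut = int(len(returns) * in_fraction)
    return returns[:cut], returns[cut:]

def avg(xs):
    return sum(xs) / len(xs)

# Hypothetical per-trade returns, in chronological order
returns = [0.4, -0.2, 0.5, 0.1, -0.3, 0.6, -0.1, 0.2, -0.4, 0.3]
in_sample, out_sample = split_in_out(returns)
print(round(avg(in_sample), 3), round(avg(out_sample), 3))
# Optimise only on in_sample; out_sample is touched once, for the verdict.
```

A strategy that is strongly profitable in-sample but flat or negative out-of-sample has been fitted, not discovered.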
A more rigorous version of this is walk-forward optimisation — rolling the in-sample and out-of-sample windows forward through time and checking that the strategy continues to perform on each new unseen period. Strategies that survive walk-forward testing have passed a much harder filter than a simple one-period backtest.
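The rolling procedure can be sketched in a few lines of Python. The window lengths, the threshold "optimisation", and the data below are all hypothetical placeholders for a real parameter search:

```python
def walk_forward(returns, train_len, test_len, pick_params, evaluate):
    """Roll paired train/test windows through the data, optimising only
    on the train window and scoring only on the unseen test window."""
    scores = []
    start = 0
    while start + train_len + test_len <= len(returns):
        train = returns[start:start + train_len]
        test = returns[start + train_len:start + train_len + test_len]
        params = pick_params(train)            # "optimise" on in-sample data
        scores.append(evaluate(params, test))  # judge on out-of-sample data
        start += test_len                      # roll both windows forward
    return scores

# Hypothetical stand-ins: "optimise" a threshold as the train-window mean,
# then measure how often the test window beats that threshold.
data = [0.1, 0.3, -0.2, 0.4, 0.0, 0.2, -0.1, 0.5, 0.1, -0.3, 0.2, 0.4]
oos = walk_forward(
    data, train_len=4, test_len=2,
    pick_params=lambda train: sum(train) / len(train),
    evaluate=lambda thr, test: sum(r > thr for r in test) / len(test))
print(oos)  # one out-of-sample score per rolled window
```

What matters is the shape of the procedure: every score in the output was earned on data the "optimiser" never saw.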
A strategy that was optimised on the same data it is evaluated on is not a backtest — it is a curve fit. Out-of-sample testing is the minimum bar for taking a strategy seriously. Walk-forward testing is the gold standard.
A related trap: a strategy backtested only on EURUSD from 2020 to 2023 has not been tested — it has been tailored. That period had specific volatility characteristics, specific trend behaviour, and specific macro conditions. A strategy that works only there is not proven; it is fitted to that environment.
Before treating any strategy as having a genuine edge, test it across multiple instruments, multiple timeframes, and multiple market regimes (trending, ranging, calm, and volatile).
If the strategy only works on one instrument in one era, the honest conclusion is that the edge has not been demonstrated. It may still be real, but the sample size is too narrow to know.
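One way to organise such a check, sketched in Python with invented numbers, is to demand that the same rules clear a minimum bar on every instrument and era tested:

```python
# Hypothetical per-trade returns of the same rules run on several
# instruments and eras (in practice, exported from the Strategy Tester).
results = {
    ("EURUSD", "2020-2023"): [0.5, 0.4, -0.1, 0.6],
    ("GBPUSD", "2020-2023"): [0.1, -0.2, 0.0, -0.1],
    ("EURUSD", "2016-2019"): [-0.3, 0.2, -0.2, 0.1],
}

def profitable(trades):
    """A deliberately low bar: did the rules make money at all?"""
    return sum(trades) > 0

# The edge is only demonstrated if it clears the bar everywhere it is tested
robust = all(profitable(trades) for trades in results.values())
print(robust)  # False here: profits on one instrument in one era only
```

The bar can be stricter (positive expectancy, tolerable drawdown), but the structure stays the same: one verdict per instrument-era pair, and all of them must pass.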
Even a well-constructed backtest with out-of-sample validation is still a single path through history. The sequence of wins and losses you observed is just one possible ordering of those trades. A different ordering — which is equally possible — might produce a very different drawdown profile.
Monte Carlo simulation addresses this by randomly reshuffling the trade sequence hundreds or thousands of times and calculating the distribution of possible outcomes. This gives you a realistic range of maximum drawdowns, losing-streak lengths, and final equity results.
Most traders never run this. Those who do discover that their "acceptable" drawdown from the backtest often sits near the bottom of the probability distribution — meaning it was one of the better outcomes, not a representative one.
This analysis can be run in external tools such as Python or Excel using the trade list exported from the Strategy Tester. The output should inform your position sizing decisions, not just your entry logic.
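A minimal Monte Carlo reshuffle can be written in a few lines of Python (the trade results below are hypothetical):

```python
import random

def max_drawdown(trades):
    """Largest peak-to-trough fall of the cumulative equity curve."""
    equity = peak = 0.0
    worst = 0.0
    for t in trades:
        equity += t
        peak = max(peak, equity)
        worst = min(worst, equity - peak)
    return -worst

def monte_carlo_drawdowns(trades, runs=1000, seed=1):
    """Reshuffle the trade sequence many times and collect the drawdown
    each alternative ordering would have produced."""
    rng = random.Random(seed)
    draws = []
    for _ in range(runs):
        shuffled = trades[:]
        rng.shuffle(shuffled)
        draws.append(max_drawdown(shuffled))
    return sorted(draws)

# Hypothetical trade results from a backtest
trades = [120, -80, 95, -60, 110, -75, 130, -90, 85, -70]
draws = monte_carlo_drawdowns(trades)
print(max_drawdown(trades))           # the single path history happened to take
print(draws[len(draws) // 2])         # median reshuffled drawdown
print(draws[int(len(draws) * 0.95)])  # 95th percentile: plan sizing around this
```

If the drawdown of the historical ordering sits well below the median of the reshuffled distribution, the backtest got lucky with its sequencing, and position sizes should be set against the upper percentiles instead.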