Most financial-market categories take decades to find their place in the global capital stack. Equities took 200 years from the Dutch East India Company to the modern listed exchange. Futures took roughly 150 years from Chicago grain pits to global rates and energy curves. Crypto took 15 years from Bitcoin's white paper to a multi-trillion-dollar asset class. Prediction markets are now in the early-S-curve phase those previous categories went through, and the structural case for them earning a permanent place in the capital stack is stronger than the case many of those earlier categories had at the same point in their evolution.
This post is the long-form thesis on why prediction markets become a core financial primitive over the next decade — not just a niche consumer product, but a category that earns a place alongside equities, futures, and credit markets in how serious institutions allocate capital and how serious operators monetise audiences.
If you're earlier in your evaluation, the prediction-market primer covers the basics, and the structural comparison covers how they differ from equities and futures mechanically. This post takes the next step: why the structural advantages compound into the dominant information-aggregation infrastructure of the 2030s.
The information-aggregation problem nobody else solves cleanly
Most consequential decisions in the world depend on someone, somewhere, forecasting a future event. A central bank deciding to cut rates needs a forecast of inflation. A retailer deciding inventory needs a forecast of demand. An insurer pricing a policy needs a forecast of claims. A government planning a deployment needs a forecast of a geopolitical scenario. A startup committing to a hire needs a forecast of fundraising conditions.
For each of those decisions, the historical default has been expert forecasting — analysts, polls, commentators, trade publications, internal modelling. The forecast quality of those sources is, on average, mediocre. Tetlock's two decades of forecasting research established this definitively: experts in political and policy domains forecast at roughly chance levels on multi-year horizons. The same pattern shows up in financial analyst forecasts (which are calibrated against price action, not the other way around), in macroeconomic forecasts (where the median Wall Street forecast for the following year's GDP growth is rarely within 1.5 percentage points of the actual figure), and in sports forecasts (where the median expert pick is barely better than the public consensus).
Prediction markets solve this problem in a structurally different way. They put real money on the line, aggregate the views of thousands of participants who individually have small information edges, and produce a single price that reflects the weighted collective view. The mechanism is "the wisdom of crowds, with skin in the game." It works for the same reason equity markets produce more accurate firm valuations than equity-research analyst notes: when participants are paid for being right, the aggregate signal is sharper than any individual signal would be.
The Iowa Electronic Markets demonstrated this for US elections in the 1990s, consistently out-predicting polling. Prediction markets have demonstrated it at scale for political, macro, sports, and event categories in the 2020s. The empirical record is now strong enough that "prediction-market implied probability" is a primary input in serious institutional research, even when the institution doesn't directly trade on the markets.
Why crowds beat experts (with conditions)
The "wisdom of crowds" argument is sometimes oversold, so it's worth being precise about when it works and when it doesn't.
Crowd aggregation works when:
- The crowd has diverse information. Different participants have different pieces of the puzzle, and the market aggregates those pieces into a sharper estimate than any individual could produce.
- Participants are willing to put money on disagreement. The pricing pressure that moves the market comes from traders who believe the consensus is wrong and are willing to back that belief with capital.
- The cost of trading is low enough to make small information edges profitable. A trader with a 3% edge can't afford to trade on a 5% spread. As market infrastructure improves and transaction costs decline, more information edges become tradable.
Crowd aggregation does NOT work when:
- The crowd has correlated information. If everyone is reading the same source, the "crowd" is just one expert with a thousand voices, and the aggregation provides no edge.
- Participants are not paid for being right. Polls, commentary, and unstructured prediction (no skin in the game) collapse to whatever the dominant narrative is.
- The market is too thin. A market with only ten participants doesn't aggregate information; it just averages ten guesses.
At their current scale, prediction markets meet these conditions consistently. The diversity of participants is wide, the financial incentives are real, and the infrastructure is modern enough that small edges are tradable. The empirical outperformance over experts is the result.
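To make the third condition concrete, here is a minimal sketch of the arithmetic, with every number invented for illustration: a trader whose probability estimate differs from the market's by less than the spread plus fees has negative expected value on the trade, so tighter markets make more information edges worth acting on.

```python
# Illustrative sketch: when is an information edge worth trading?
# All numbers are hypothetical, chosen to mirror the 3% edge / 5% spread example above.

def expected_value_per_contract(true_prob: float, ask: float, fee: float) -> float:
    """Expected profit of buying one YES contract at `ask` plus a per-contract fee,
    given the trader's own estimate of the probability of YES."""
    cost = ask + fee
    return true_prob * 1.0 - cost   # the contract pays $1.00 on YES, $0 on NO

mid = 0.50               # market mid-price, i.e. an implied probability of 50%
true_prob = mid + 0.03   # the trader believes the true probability is 53%

# Wide market: 5% spread (buy at 52.5c) plus a 1% fee per contract
print(expected_value_per_contract(true_prob, ask=mid + 0.025, fee=0.010))    # -0.005 -> not worth trading
# Tight market: 0.5% spread plus a 5bp fee
print(expected_value_per_contract(true_prob, ask=mid + 0.0025, fee=0.0005))  # +0.027 -> worth trading
```

The same arithmetic drives the fee-compression argument later in this post: every basis point of all-in cost that disappears makes a new tier of small edges tradable.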
Why AI doesn't replace this — it amplifies it
The most common counter-argument we hear is "AI forecasting is going to be so good that prediction markets become unnecessary." The argument is exactly backwards. AI makes prediction markets more important, not less.
There are three reasons.
AI participants deepen liquidity rather than replacing it. Modern prediction markets already have algorithmic participants — desks running pricing models, market-makers running spread engines, and research-driven systematic traders. AI sharpens those participants. The result is tighter spreads and deeper books, which makes prediction markets more useful for human traders, not less.
AI forecasting is best validated against market prices. The test of "is this AI forecast any good?" is whether it's better than the prediction-market price. If it is, the AI gets to trade on the gap and earn a return; if it isn't, the AI is just expensive expert commentary. Prediction markets are the calibration mechanism for AI forecasting, not its competition.
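As a concrete illustration of that test, here is a minimal sketch with invented forecasts and outcomes: score the model and the market on the same resolved contracts with a Brier score (mean squared error of the probabilities), and only trade the gap if the model's score is reliably lower.

```python
# Hypothetical calibration check: does an AI forecaster beat the market price?
# `outcomes` are resolved results (1 = YES, 0 = NO); the probability columns are
# invented for illustration.

def brier(probs, outcomes):
    """Mean squared error between forecast probabilities and realised outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

market_prices   = [0.62, 0.15, 0.80, 0.45, 0.90]   # prices at the time the forecast was made
model_forecasts = [0.70, 0.10, 0.75, 0.30, 0.95]   # the AI model's probabilities
outcomes        = [1,    0,    1,    0,    1]

print(f"market Brier score: {brier(market_prices, outcomes):.3f}")
print(f"model  Brier score: {brier(model_forecasts, outcomes):.3f}")
# Only if the model's score is reliably lower does trading the model-vs-market gap
# have positive expected value.
```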
AI generates more decisions that need forecasting. Every AI agent making decisions for an institution is generating more forecastable events: "Will my AI's deployment hit budget?" "Will this contract go into legal dispute?" "Will the supplier deliver on time?" These were once private internal forecasts; as AI agents proliferate, they become explicit, well-specified events worth pricing. The demand for forecastable contracts grows with AI adoption.
The pattern is the same one we saw with the relationship between search engines and Wikipedia, or between LLMs and Stack Overflow. The new technology raises the value of the human-curated calibration substrate. Prediction markets are that substrate for forecasting.
Capital efficiency at scale
We covered the capital-efficiency story in the structural comparison post, but it's worth restating because it's the property that makes prediction markets useful as a hedging instrument at institutional scale.
A binary prediction contract is the cheapest possible expression of a discrete-event view. The trader pays the implied probability, the maximum loss is bounded, the maximum gain is bounded, and the position settles at a known date. There are no margin calls, no funding rates, no rolls.
For an institutional desk hedging a $50M Treasury position against a Fed-cut scenario, expressing that hedge through binary CPI or rate-decision contracts costs a small fraction of what the same hedge through Treasury options or rates futures would cost. The trade-off is that the payoff is binary rather than linear — but for hedging a discrete scenario, binary is exactly the right shape.
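A stylised sizing sketch, with every figure invented for illustration: suppose the desk estimates the scenario would cost the position roughly $750k, and the relevant binary contract trades at 20 cents.

```python
# Hypothetical hedge-sizing sketch for a discrete-event binary hedge.
# All figures are illustrative, not a recommendation or a live market quote.

scenario_loss  = 750_000   # estimated P&L hit to the Treasury position if the event occurs
contract_price = 0.20      # market-implied probability of the event (20c per $1 payout)

contracts_needed = scenario_loss / 1.0        # each contract pays $1 if the event resolves YES
hedge_cost       = contracts_needed * contract_price

print(f"contracts: {contracts_needed:,.0f}")  # 750,000 contracts
print(f"upfront cost: ${hedge_cost:,.0f}")    # $150,000, which is also the maximum loss on the hedge
# If the event happens, the hedge pays $750,000; if it doesn't, the desk is out $150,000.
# No margin calls, no rolls: the cost and the payoff are both known at entry.
```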
The institutional uptake of prediction markets in 2025 was driven by exactly this realisation. Funds that had previously expressed event-driven views through complicated derivative structures discovered that binary contracts on the same events were cheaper, cleaner, and didn't require margin lines. As more funds make this transition, the institutional share of volume grows, which attracts more market-makers, which tightens spreads, which makes the contracts useful for an even wider range of strategies. This is a flywheel, and it's already turning.
Programmability: the property neither equities nor futures have
The property that distinguishes prediction markets most sharply from equities and futures is programmability. A prediction contract is a programmable unit. It can be embedded into other contracts, used as a settlement input, composed with other prediction contracts, and traded across operator surfaces with shared liquidity.
What this enables, concretely:
- Conditional contracts. "Pay $1 if event A happens AND event B happens." Composing two prediction contracts into a joint contract is a few lines of code (see the sketch after this list). The same operation in equities or futures requires a custom derivative structure that costs millions to design and audit.
- Event-linked smart-contract payouts. A DAO can route treasury funds based on the resolution of a prediction contract. An insurance protocol can settle parametric claims based on a prediction-market resolution. A grant programme can release funds conditional on a milestone contract resolving YES. These are real use cases now, and they're impossible in traditional market infrastructure.
- Cross-operator composability. A trader can hold a position on one operator's deployment and offset it on another operator's deployment through the shared liquidity layer, with settlement flowing through the protocol. The fragmentation that hurts traditional venues becomes invisible to the trader.
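To make the conditional-contract point concrete, here is a minimal sketch using hypothetical contract objects rather than any specific protocol's API: a joint contract that pays only if two underlying contracts both resolve YES, and a payout gate that releases funds on resolution.

```python
# Hypothetical sketch of composing prediction contracts. The classes and the
# independence assumption are illustrative; a real protocol would expose its own
# contract and resolution interfaces.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BinaryContract:
    question: str
    price: float                         # market-implied probability of YES
    resolved_yes: Optional[bool] = None  # None until the market resolves

def joint_and(a: BinaryContract, b: BinaryContract) -> BinaryContract:
    """A contract paying $1 only if both A and B resolve YES.
    Pricing here naively assumes independence; a real joint market prices the correlation."""
    return BinaryContract(f"({a.question}) AND ({b.question})", a.price * b.price)

def conditional_payout(contract: BinaryContract, amount: float) -> float:
    """Release `amount` only if the contract resolved YES (e.g. a milestone-gated grant)."""
    if contract.resolved_yes is None:
        raise ValueError("contract not yet resolved")
    return amount if contract.resolved_yes else 0.0

cpi = BinaryContract("CPI above 3% in June", 0.35)
cut = BinaryContract("Fed cuts rates in July", 0.60)
both = joint_and(cpi, cut)
print(both.question, both.price)   # implied 0.21 under the naive independence assumption
```

The payout gate is the same primitive that lets a DAO route treasury funds or an insurance protocol settle a parametric claim on a market resolution.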
Programmability is the same property that made stablecoins valuable in DeFi — they're not just digital cash, they're programmable cash. Prediction-market contracts are not just probabilistic claims, they're programmable probabilistic claims. The use cases that fall out of programmability are still being discovered.
Use cases beyond gambling
The category's biggest perception problem is the conflation with gambling. The mechanical comparison is strained — gambling is entertainment with a negative expected value against the house; prediction markets are positive-sum information aggregation with bounded downside on every contract — but the cultural overlap is real. The category will shed the gambling association as the use cases beyond consumer trading become visible.
Three non-gambling use cases are already operational at scale:
Macro risk hedging. Funds, banks, and corporate treasuries hedge specific event scenarios through prediction contracts. The volume is institutional, the contracts are macroeconomic, and the use case is risk management.
Insurance and parametric claims. Some parametric insurance products now settle based on prediction-market resolutions of weather, flight delays, or natural disasters. The market price is the parametric trigger; the settlement is automated.
Corporate decision markets. Inside large organisations, prediction markets are used to forecast project completion dates, hiring outcomes, M&A close probabilities, and product launch timelines. Internal markets beat internal expert estimates on signal quality by wide margins.
Three more are emerging:
Sovereign forecasting. Some governments and central banks have started experimenting with prediction-market signals for policy inputs — particularly inflation, growth, and election-related forecasts. The political sensitivity is high, but the signal quality argument is strong enough that the experimentation is happening.
Scientific forecasting. Replication-rate prediction markets, clinical-trial outcome contracts, and scientific-grant performance markets are all live in pilots. The information-aggregation problem is the same one that prediction markets solved in politics and macro; the application is just earlier.
Distributed governance. DAOs and decentralised organisations use prediction-market signals as inputs into governance decisions. "What is the community's probability that this proposal passes its 6-month KPIs?" priced in real markets is a sharper signal than committee discussion.
The aggregate of these use cases is much larger than the consumer gambling category. We expect volume from the non-gambling use cases to overtake consumer-trading volume within five years.
The fee compression argument
Every financial market category goes through fee compression as it matures. Equity commissions went from $50 per trade in 1980 to zero today. Futures clearing went from dollars per side to a few cents. Crypto-exchange fees went from 0.5% per side to 0.05% in five years.
Prediction markets are early in this curve. Headline trader fees on the most-trafficked platforms today are roughly 1% per side. We expect a similar compression to play out — the all-in cost of trading a prediction contract should compress to a few basis points within a decade as infrastructure matures, market-making gets more competitive, and protocol-level efficiencies compound.
Fee compression is good for the category overall but creates an interesting strategic situation for operators. The operators who do well in a fee-compressed world are those whose business is audience and distribution, not infrastructure. If the all-in trader cost is going to compress 10× over a decade, the infrastructure layer captures less margin per trade, and the operator capturing distribution captures more of the value chain.
That's the structural reason we built Kuest as an operator-aligned protocol layer rather than a venue. The future of the category is hundreds of operator brands serving distinct audiences, with shared infrastructure underneath. The pattern matches what payments looked like as that industry commoditised: many merchants, shared infrastructure, distributed margin.
Critiques and rebuttals
It's worth taking the strongest critiques seriously, because they frame the questions the category will have to answer over the next few years.
"It's just gambling with extra steps." The mechanical comparison fails — prediction-market contracts are not zero-sum against the house; the participants are net trading against each other, with the platform earning a small fee. But the cultural framing is real. The answer is the proliferation of non-gambling use cases (institutional hedging, parametric insurance, corporate decision markets) that make the gambling association look increasingly out of date. As those use cases scale, the framing shifts.
"Liquidity is too thin to be useful for institutional flow." This was true in 2022. It's not true in 2026. Combined volume on Polymarket and Kalshi is now sufficient to absorb meaningful institutional positions, and the protocol-layer shared-liquidity model on Kuest extends that depth across operator brands. The critique remains valid for very deep tail markets but is no longer valid for the core categories where institutional volume sits.
"Resolution risk is too high." This is partly true and is the right risk to flag. Resolution disputes are a real failure mode, and the design of optimistic-oracle systems is what protects against them. We covered the mechanics in the resolution post. The pattern that's emerging is that mature platforms have a resolution dispute rate well under 0.5%, which is acceptable for the category to be useful at scale. Newer or sloppily designed platforms will continue to fail on this dimension; that's a selection pressure that filters operators over time.
"Regulation will kill it." The regulatory trajectory has moved from existential threat (2018–2020) to constructive (2024–2026). The CFTC's no-action posture, the EU's discussion papers, and the Brazilian Ministry of Finance consultations all indicate a category-level path toward recognition rather than prohibition. Reverses are possible — we expect at least one tightening cycle in 2027 — but the long-term direction is clear.
The long-term trajectory
Predicting the trajectory of a financial-market category over a decade is exactly the kind of problem where prediction markets beat experts, so this section is more honest expressed as a probability distribution than as a confident forecast.
The base case (we'd put 65% probability on this): prediction markets become a recognised asset class with a few major regulated venues globally, hundreds of operator brands serving local and vertical-specific audiences on shared infrastructure, and combined annual notional volume in the $1–5 trillion range within a decade. Institutional share crosses 50%; non-gambling use cases overtake gambling; programmability becomes a primary property used in DeFi, parametric insurance, and corporate decision-making.
The bear case (15%): regulatory tightening in one or more major jurisdictions slows growth materially. Volume continues to grow but at single-digit annual rates, the category remains a specialty product, and the long-term equilibrium is a few trillion dollars in annual notional rather than tens of trillions. Prediction markets are real but small.
The bull case (20%): the convergence with AI agents, parametric insurance, and DeFi compounds faster than the base case estimates, and prediction-market contracts become a standard component of every fintech, brokerage, and corporate risk-management product. Combined annual notional crosses $20T within a decade, a scale at which prediction contracts sit alongside mainstream derivatives categories rather than niche products.
The asymmetry is what matters strategically. The downside is a real but bounded category. The base case is a $1–5T market that reshapes how institutions price events. The upside is a foundational financial primitive with the kind of ubiquity swaps have in today's rates markets. The probability-weighted outcome sits between the base and bull cases, and the operators who position for the base case will be well-positioned even if the trajectory ends up bullish.
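As a sanity check on that claim, here is the probability-weighted arithmetic, using point estimates we have picked for illustration (the bear and base figures are assumed midpoints, not numbers stated in the scenarios above):

```python
# Probability-weighted expected annual notional across the three scenarios.
# The per-scenario point estimates are illustrative assumptions, not forecasts.

scenarios = {
    # name: (probability, assumed annual notional in $ trillions)
    "bear": (0.15, 2.0),    # "a few trillion dollars"  -> assume ~$2T
    "base": (0.65, 3.0),    # "$1–5 trillion range"     -> assume the ~$3T midpoint
    "bull": (0.20, 20.0),   # "crosses $20T"            -> assume $20T
}

expected = sum(p * notional for p, notional in scenarios.values())
print(f"probability-weighted notional: ${expected:.1f}T per year")
# Roughly $6T: above the top of the base-case range but far below the bull case,
# which is why positioning for the base case still captures most of the upside.
```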
The investment thesis underneath Kuest is exactly this expected distribution. We're not betting that the bull case is certain. We're betting that the base case is by far the most likely, that the operators who own audience and run on shared infrastructure will capture most of the value in the base case, and that the bull case turns the same positioning into something significantly larger.
The future of finance is being structured right now around an expansion of what financial markets are allowed to price. For two hundred years, markets priced ownership of firms and forward delivery of commodities. For the next century, markets will price the truth of any sufficiently well-specified future event. That expansion is the future of finance, and prediction markets are how it's denominated.
The operators, allocators, and institutions positioning into the category now will be the ones whose names appear in the histories of the next decade's market structure. The cost of being early is low; the cost of being late will be the gap between watching the category form and shaping it.
