Every CFO in America can tell you, within basis points, their company's exposure to euro volatility, Brent crude, or three-month SOFR. Ask that same CFO what happens to their operating margin if AI inference costs rise 40% next quarter, and you will get silence. That silence is not ignorance — it is the absence of infrastructure. There are no forwards. No swaps. No options. No benchmark index with sufficient liquidity to anchor a hedge. AI compute has become one of the fastest-growing line items on the corporate income statement, and it is, at this moment, the single largest category of variable operating expense that cannot be financially hedged. This is the unhedgeable risk.
In Part One of this series, we mapped the terrain of token economics — how AI inference is priced, why costs have fallen 1,000x in three years, and why that deflationary trajectory is structurally fragile. This installment confronts the consequences. What happens when a volatile, subsidy-dependent, geopolitically exposed input cost becomes material to corporate earnings — and no financial instrument exists to manage it? What happens when that cost begins to displace the most predictable operating expense a company has ever known — human labor? And what happens when the very mechanism that made AI affordable (falling token prices) simultaneously guarantees that total AI spending will rise, not fall?
The answers point toward an uncomfortable truth that corporate finance is not yet equipped to process: AI is creating a new category of balance sheet risk, and the tools required to manage it do not exist.
The headcount substitution: from salary lines to token lines
Something extraordinary happened in early 2026. Block, the fintech company behind Square and Cash App, cut 4,000 employees — 40% of its entire workforce. CEO Jack Dorsey's explanation was unusual in its candor: the reduction was driven not by financial difficulty, but by AI capability. Dorsey wrote in a shareholder letter that intelligence tools had fundamentally changed what it means to build and run a company.¹ No euphemism about "restructuring." No vague invocations of "strategic realignment." AI, plainly stated, was replacing people.
Block was not alone. Challenger, Gray & Christmas estimates that 23% of Q1 2026 layoffs explicitly cited AI automation in SEC filings or press releases, up from 14% in Q4 2025.² In 2025, employers announced over 55,000 job cuts directly attributed to AI — twelve times the number just two years earlier.³ Salesforce CEO Marc Benioff stated publicly that he reduced his support headcount from 9,000 to approximately 5,000 because AI meant he needed fewer people.⁴ Amazon CEO Andy Jassy told employees that as the company rolls out more generative AI and autonomous agents, overall headcount is expected to shrink.⁵ Anthropic CEO Dario Amodei warned that AI could eliminate 50% of all entry-level white-collar jobs within five years. The World Economic Forum's 2025 Future of Jobs Report flagged bank tellers, administrative assistants, customer service representatives, and sales clerks among the most vulnerable roles.⁶
Now, there is important nuance here. The "AI washing" phenomenon — using AI as a convenient narrative cover for financially motivated headcount reductions — is real and well-documented. A Resume.org survey found that 59% of hiring managers admitted they emphasize AI's role in explaining layoffs because it resonates better with stakeholders than citing financial pressure. Only 9% said AI had fully replaced certain roles.⁷ Gartner predicted that half of companies citing AI in customer service headcount cuts would rehire for similar functions by 2027.⁸ Goldman Sachs analyst Eric Sheridan captured the shift more precisely: the labor substitution narrative has moved from hypothetical to operational, but implementation remains uneven and messy.⁹
The nuance matters, but so does the direction of travel. The question is not whether AI is replacing headcount today at the scale executives claim. The question is whether the trajectory — more capable models, lower costs, agentic architectures that execute multi-step workflows autonomously — points toward a future where AI inference becomes a structural substitute for labor. And on that question, the evidence is overwhelming.
Here is why this matters enormously to CFOs: human labor is the most predictable operating expense on a corporate income statement. Salaries are contracted. Benefits are actuarially estimated. Payroll taxes are formulaic. Headcount growth is planned quarters in advance, governed by hiring plans, approval chains, and onboarding timelines. When a CFO builds a three-year operating model, compensation and benefits — typically 40–70% of total operating expenses for knowledge-economy firms — is the line they can forecast with the highest confidence.
AI token costs have none of these properties. They are variable. They are usage-dependent. They are subject to vendor pricing changes with as little as 30 days' notice. They fluctuate with prompt complexity, context window length, model routing decisions, and user behavior patterns that are difficult to predict even with sophisticated monitoring. OpenMetal's infrastructure analysis documents monthly AI cost swings of 30–40% as routine.¹⁰ Gartner has warned that companies scaling AI face cost estimation errors of 500–1,000%.¹¹ A Drivetrain analysis of AI SaaS unit economics found that two accounts on the same plan can generate dramatically different costs to serve because each user engages with AI features at varying intensity.¹²
The corporate finance implication is profound: every dollar of salary replaced by AI inference is a dollar moved from a predictable, hedgeable expense category into an unpredictable, unhedgeable one. The P&L looks better on Day One — the token costs are lower than the fully loaded headcount they replace. But the risk profile of the income statement has fundamentally changed. The CFO has traded a known quantity for an unknown one.
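To make the trade-off concrete, here is a minimal simulation (all figures hypothetical) contrasting the month-to-month variability of a contracted salary line with an AI inference line subject to the 30–40% swings documented above:

```python
# Illustrative sketch (hypothetical figures): compare the forecastability of a
# fixed salary line with a usage-driven AI inference line over twelve months.
import random

random.seed(7)

months = 12
salary_monthly = 1_000_000   # contracted: identical every month
ai_budgeted = 400_000        # the token line "replacing" part of payroll

salary = [salary_monthly] * months
# OpenMetal-style swings: monthly AI costs routinely land 30-40% off plan
ai = [ai_budgeted * random.uniform(0.6, 1.4) for _ in range(months)]

def coeff_of_variation(xs):
    """Standard deviation divided by mean: a scale-free measure of volatility."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return (var ** 0.5) / mean

print(f"salary CV: {coeff_of_variation(salary):.3f}")  # 0.000: fully predictable
print(f"AI CV:     {coeff_of_variation(ai):.3f}")      # materially nonzero
```

The point is not the specific numbers but the shape: one line has a coefficient of variation of zero by construction, the other does not, and a forecast built on the second inherits its variance.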
This is not an abstract concern. Marell Evans, founder of Exceptional Capital, predicted that as companies increase AI budgets, they will pull money directly from their labor and hiring pools. Rajeev Dham of Sapphire agreed that 2026 budgets are shifting resources from labor to AI. Jason Mendel of Battery Ventures added that AI will move beyond augmenting existing workers to automating work itself.¹³ When investors see this shift on income statements — labor costs down, AI costs up — their first question will be about margin predictability. And CFOs will not have a satisfying answer.
The Jevons Paradox of intelligence: why cheaper tokens mean bigger bills
In 1865, the English economist William Stanley Jevons noticed something that defied the common sense of his era: as steam engines became more efficient, Britain did not burn less coal. It burned dramatically more. Efficiency did not suppress demand. It detonated it. Cheaper energy made new applications viable — factories, railways, steamships — and each new application created its own demand. The per-unit savings were overwhelmed by the explosion in total usage.¹⁴
Jevons could not have imagined large language models, but his paradox is now the defining economic dynamic of the AI inference market. When Microsoft CEO Satya Nadella learned that DeepSeek had achieved competitive performance at a fraction of the cost of Western models, his response was not concern — it was triumphalism. "Jevons paradox strikes again!" Nadella wrote. "As AI gets more efficient and accessible, we will see its use skyrocket."¹⁵
He was right, and the data is unambiguous. The cost to achieve GPT-3.5-level performance fell 280x between November 2022 and October 2024. Yet total AI inference spending did not decline. It surged. One analysis documented a 320% increase in enterprise generative AI spending in 2025 despite per-token costs dropping 1,000x.¹⁶ Data center electricity consumption — a proxy for total compute — grew from roughly 200 TWh in 2025 to projections of 325–580 TWh by 2028. The International Energy Agency estimates global data center consumption will double to 945 TWh by 2030, with AI's share growing from 5–15% today to potentially 35–50% by decade's end.¹⁷
The mechanism is straightforward. Every reduction in token cost makes previously uneconomical AI applications viable. When inference costs $60 per million tokens, you run AI on only the highest-value tasks. When it costs $0.06, you run it on everything — every customer interaction, every document review, every code commit, every email draft. The addressable market for AI expands geometrically with each price reduction. Economists call this the rebound effect. When the rebound exceeds 100% — when total consumption increases despite efficiency gains — that is Jevons Paradox in full effect.¹⁸
And the effect is amplified by a factor unique to AI: the tokens-per-task multiplier. Early chatbot interactions consumed perhaps 500–1,000 tokens per query. Today's reasoning models like OpenAI's o1 and DeepSeek-R1 produce extensive internal monologues, sometimes thousands of tokens to answer a simple question.¹⁹ Agentic AI workflows — where models plan, execute, evaluate, and iterate autonomously — consume 5–30x more tokens per task than standard chatbot interactions. IDC projects over one billion actively deployed AI agents worldwide by 2029, executing more than 217 billion actions per day and consuming 3.7 TeraTokens daily.²⁰ The cost per token is falling. The tokens per task are exploding. The net effect is that total spending goes up, not down.
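The compounding can be sketched in a few lines of arithmetic. The multipliers below are illustrative, chosen from the ranges cited above rather than from any specific deployment:

```python
# Back-of-the-envelope sketch (hypothetical numbers) of the Jevons dynamic:
# per-token price falls 10x, but agentic workflows consume ~20x more tokens
# per task, and the lower price makes 5x more tasks economical to automate.

price_before = 6.00e-5   # $ per token (i.e., $60 per million tokens)
price_after  = 6.00e-6   # 10x cheaper

tokens_per_task_before = 1_000    # simple chatbot query
tokens_per_task_after  = 20_000   # agentic plan-execute-evaluate loop

tasks_before = 1_000_000
tasks_after  = 5_000_000          # cheaper inference unlocks new use cases

spend_before = price_before * tokens_per_task_before * tasks_before
spend_after  = price_after  * tokens_per_task_after  * tasks_after

print(f"before: ${spend_before:,.0f}")   # $60,000
print(f"after:  ${spend_after:,.0f}")    # $600,000
print(f"unit price fell {price_before / price_after:.0f}x, "
      f"total spend rose {spend_after / spend_before:.0f}x")
```

Under these assumptions, a 10x price cut produces a 10x increase in total spend: the rebound effect exceeds 100%.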
This creates a paradox that traditional supply-demand analysis cannot easily resolve. In normal commodity markets, falling prices signal either oversupply or weakening demand — both of which eventually stabilize through market clearing. In AI inference, falling prices signal increasing demand. The supply curve and the demand curve are moving in the same direction. This is not a market approaching equilibrium. It is a market in perpetual expansion, where efficiency gains are continuously converted into usage gains, and every effort to reduce unit costs increases aggregate cost exposure.
For CFOs, this paradox demolishes the most common budgeting assumption about AI: that costs will decline over time as token prices fall. The per-unit cost will indeed decline. The total cost will not. Organizations that built three-year AI budgets on the assumption of falling unit costs are now discovering that their total AI spending is rising 30–50% year-over-year even as the per-token rate drops.²¹ One AI FinOps practitioner put it starkly: if your per-token cost drops from $0.06 to $0.00006, suddenly it makes sense to run AI inference on every customer service interaction, generate personalized content at the individual user level, and implement real-time AI analysis on streaming data. The budget that assumed a gentle downward slope now looks like a hockey stick pointing up.²²
The five reasons this risk cannot be hedged today
The corporate treasurer's toolkit — developed over decades to manage currency, interest rate, commodity, and credit risk — is impotent against AI token cost volatility. This is not because treasurers lack sophistication. It is because the fundamental prerequisites for financial hedging do not exist in the AI inference market. Understanding why is essential, because it reveals exactly what must be built.
There is no standardized underlying asset. A barrel of Brent crude is a barrel of Brent crude, regardless of who produced it or where. An AI token is not actually fungible. One million output tokens from Claude Opus 4.6 costs $25. The same million from GPT-5 nano costs $0.40. From DeepSeek V3.2, $0.42.²³ These are not interchangeable commodities. They differ in capability, latency, reliability, and quality of output. A forward contract requires such fungibility: the ability to deliver one unit that is economically equivalent to any other unit of the same grade. AI tokens fail this test completely. There is no "WTI equivalent" for intelligence.
There is no transparent price discovery mechanism. Commodity markets function because bid-ask spreads, trading volumes, and settlement prices are publicly visible in real time. AI model pricing is set unilaterally by providers, changed with minimal notice, and at best disclosed through blog posts and documentation pages rather than through market-based mechanisms. When OpenAI cuts GPT-4o prices by 50%, there is no futures curve that adjusts, no options market that reprices, no basis trade that captures the differential. The "market" learns of price changes through developer posts on X or Hacker News threads, not through a centralized order book.
There is no secondary market or transferability. When a corporate treasurer buys a currency forward, that position can be offset, novated, or traded. Reserved-capacity commitments with cloud providers, the closest thing to a forward contract in AI, are generally non-transferable, non-refundable, and lack any mechanism for secondary trading. In most cases, AI tokens themselves are likewise non-transferable and expire after a fixed period (typically one year). A company that committed to $10 million in annual Azure AI capacity cannot sell that commitment if its needs change. It is locked in, with no exit, no offset, and no ability to manage the position dynamically. Worse, under committed-capacity pricing, decreased usage paradoxically raises the effective cost per token consumed.
There is no reliable benchmark index. Effective hedging requires a reference price that both counterparties trust. Ornn AI's Compute Price Index (OCPI), now listed on Bloomberg terminals, is a promising start, but it tracks GPU spot prices.²⁴ The relationship between GPU compute cost and API token cost is mediated by model architecture, inference optimization, batch utilization, and provider margin, all of which vary across vendors and change over time. The basis risk is unacceptable: a GPU index is not a reasonable tool for managing exposure to token price volatility. A token price index with sufficient granularity, coverage, and independence to serve as a derivatives settlement benchmark does not yet exist.
There is no counterparty infrastructure. Derivatives markets require clearinghouses, margin systems, standard contract documentation (ISDA equivalents), and a critical mass of participants on both sides of the trade. AI labs want revenue predictability. Enterprise buyers want cost predictability. In theory, there is natural two-sided demand for risk transfer. In practice, the legal, regulatory, and operational infrastructure to match these counterparties does not exist. Architect Financial Technologies' partnership with Ornn to develop exchange-traded compute futures under CFTC-aligned standards is the first serious attempt, but it targets GPU price risk, again an adjacency to token costs, and it remains pre-market.²⁵
The cumulative effect of these five gaps is that a corporate CFO managing, say, $50 million in annual AI inference spend has no mechanism to lock in future costs, no way to protect against sudden price increases, no ability to manage vendor concentration risk through diversification instruments, and no financial product that converts a variable expense into a fixed one. They can negotiate enterprise agreements. They can prepay for credits. They can commit to reserved capacity. But none of these provide the dynamic risk management capabilities that treasurers expect for any other material cost input.
Without risk management infrastructure, price volatility will quickly erode the purchasing power of that $50 million budget.
This is not a niche problem. Andreessen Horowitz documented that inference costs can account for 60–80% of total operating expenses for AI-first companies.²⁶ The AI inference market was estimated at approximately $97 billion in 2024, with projections of $254 billion by 2030.²⁷ For context, the global weather derivatives market peaked at $45 billion in notional value — and it emerged because a much smaller economic exposure demanded financial instruments to manage it.²⁸
The AI inference market is already 2x larger than the weather risk that spawned an entire asset class, and it has zero hedging infrastructure.
The investor problem: when "growth" becomes "variance"
The headcount-to-token substitution creates a second-order problem that CFOs are only beginning to confront: investor expectations around margin predictability.
Wall Street rewards consistency. When a company reports that its operating margin will be 22% next quarter, plus or minus 50 basis points, that precision earns a premium multiple. It signals operational control, management competence, and earnings visibility. The entire framework of forward guidance, the ritual through which public companies communicate expectations to analysts, depends on the ability to forecast costs with reasonable accuracy.
AI inference costs undermine this framework. When your largest and fastest-growing operating expense can swing 30–40% month-over-month, as OpenMetal documents is common,²⁹ and when Gartner warns that AI cost estimation errors routinely reach 500–1,000%,³⁰ the confidence interval around any forward guidance widens dramatically. A company that previously guided operating margins within a 100-basis-point range may now face variance of 300–500 basis points driven entirely by AI cost fluctuations that management cannot predict or control.
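A rough translation (hypothetical company, illustrative cost shares) shows how routine AI cost swings map onto the basis-point variance analysts care about:

```python
# Illustrative sketch: convert a monthly AI cost swing into operating-margin
# variance, the quantity forward guidance is built on. The revenue and cost
# share are hypothetical; the swing is taken from the documented 30-40% range.

revenue = 1_000_000_000        # $1B annual revenue
ai_share_of_revenue = 0.12     # AI inference = 12% of revenue as opex
ai_swing = 0.35                # +/-35% cost swing around plan

# A swing in the AI line moves operating margin by (cost share x swing)
margin_impact = ai_share_of_revenue * ai_swing   # 0.042 = 4.2 margin points
basis_points = margin_impact * 10_000

print(f"margin variance from AI costs alone: +/-{basis_points:.0f} bp")
```

Under these assumptions, AI cost noise alone produces roughly ±420 basis points of margin variance, squarely inside the 300–500 basis-point range described above.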
In the future, AI inference will carry two sources of volatility. The first is usage, which can swing significantly month to month; the second is token prices. To date, the price trend has run in only one direction: downward. For the reasons stated above, however, this deflationary period is more likely a temporary blip than a long-term structural trend. Price volatility and extreme usage swings will compound the inability to forecast with accuracy.
This is particularly acute in the AI-washing-in-reverse scenario. Consider a mid-cap software company that announces it has replaced 200 customer support agents with AI, saving $30 million annually in fully loaded headcount costs, with AI inference projected at $8 million. The market applauds. The stock rises. Twelve months later, usage has expanded (Jevons Paradox), agentic workflows have been deployed (token multiplier), and the AI lab has raised prices to improve its path to profitability (subsidy cliff). The AI line comes in at $19 million. The "savings" are $11 million, not $22 million. The margin expansion story, which was priced into the stock, evaporates, and the stock price craters.
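The hypothetical above reduces to a few lines of arithmetic. The usage, agentic, and price multipliers below are illustrative factors, chosen to reproduce the $19 million figure in the scenario:

```python
# The mid-cap scenario as arithmetic. The $30M and $8M figures come from the
# hypothetical in the text; the three multipliers are invented for illustration.

headcount_savings = 30_000_000   # fully loaded cost of 200 support agents
ai_projected      = 8_000_000    # Day One inference projection

usage_growth   = 1.50   # Jevons Paradox: more interactions routed to AI
agent_factor   = 1.20   # agentic workflows burn more tokens per task
price_increase = 1.32   # lab raises prices toward profitability

ai_actual = ai_projected * usage_growth * agent_factor * price_increase

print(f"projected savings: ${headcount_savings - ai_projected:,.0f}")  # $22,000,000
print(f"realized savings:  ${headcount_savings - ai_actual:,.0f}")
```

No single multiplier is dramatic on its own; the damage comes from their product, which is exactly what point-estimate budgets fail to model.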
PwC's 2026 Global CEO Survey found that 56% of CEOs report neither increased revenue nor decreased costs from their AI investments. Only 12% reported both.³¹ Gartner found that just 36% of CFOs feel confident about driving AI impact at an enterprise level. Within finance functions specifically, only 44% expressed confidence in accelerating AI adoption, and just 42% were confident in their ability to hire and retain the digital talent needed.³² Dennis Gannon, VP Analyst at Gartner, called the low confidence in driving AI value "a wake-up call."³³
The confidence gap is rational. These are experienced financial executives confronting a cost category that violates every principle of their training. Traditional variable costs like sales commissions, raw materials, and logistics have decades of historical data, well-understood cyclical patterns, and established hedging mechanisms. AI inference has three years of price history severely skewed by subsidies, wildly non-stationary demand patterns, and zero hedging infrastructure. A CFO cannot credibly guide to a margin target when their second or third largest operating expense has the predictability profile of a startup's burn rate.
The asymmetric trap: why this gets worse before it gets better
The structural dynamics of the AI inference market create an asymmetric risk profile that amplifies the hedging problem. Prices can fall gradually over years as hardware improves and optimization techniques advance, but they can spike suddenly due to supply shocks, subsidy withdrawal, or demand surges.
Consider the supply concentration risk. TSMC controls more than 90% of advanced AI chip production. Its CoWoS advanced packaging is fully booked. Blackwell GPUs are backordered for over a year. HBM3 memory pricing has risen 20–30% year-over-year.³⁴ A single event, such as a severe earthquake in Taiwan's Hsinchu Science Park, an escalation in cross-strait tensions, or an expansion of U.S. export controls, could constrain global GPU supply overnight. The resulting price shock would propagate directly to compute costs within weeks, as cloud providers reprice to reflect scarcity.
Or consider the subsidy withdrawal scenario. OpenAI is projecting $14 billion in losses for 2026 alone. It has raised $57.9 billion across eleven funding rounds. Its transition to a for-profit structure signals mounting pressure from investors who expect returns.³⁵ When OpenAI, Anthropic, or Google decide that market share capture has run its course and margin improvement must begin, prices will rise. The timing is unknowable. The magnitude is unknowable. But the direction is certain. Subsidized prices are, by definition, temporary.
Energy cost transmission is equally unpredictable. When data centers consume 26% of Virginia's electricity supply, as they did in 2023, the marginal cost of additional power capacity is not a smooth curve. It is a step function that jumps when existing generation and transmission infrastructure is exhausted.³⁶ Carnegie Mellon's projection that data centers could drive an 8% increase in average U.S. electricity bills, exceeding 25% in high-demand markets, illustrates the non-linear nature of this risk.³⁷ A carbon pricing event, whether through U.S. legislation or expanded EU mechanisms, would add further unpredictable cost pressure.
And then there is the regulatory vector. The EU AI Act's high-risk system obligations take full effect August 2, 2026. European Commission studies estimate roughly 17% overhead on AI spending for high-risk systems.³⁸ If CFTC or SEC rulemaking begins treating AI inference as a material cost that requires specific disclosure and risk factor analysis, which is entirely plausible given the magnitude, the compliance burden adds another layer of cost that cannot currently be quantified or hedged.
The asymmetry is this: a 50% decline in token prices over two years would be absorbed gradually by corporate budgets and rewarded by markets as margin expansion. A 50% increase in token prices over two months, which is entirely plausible under any of the supply shock, subsidy withdrawal, or energy cost scenarios described above, would be a margin catastrophe for AI-dependent companies. To make matters worse, there are currently no financial instruments available to buffer the impact.
This is the profile of an unhedgeable risk. Not merely unhedged. Rather, actively unhedgeable given existing market infrastructure.
What must be true for this risk to become manageable
The gap between the current state (no hedging infrastructure) and the required state (a functioning derivatives market for AI compute) is large, but it can be bridged.
Historical precedent from weather, electricity, carbon, and bandwidth markets reveals a consistent set of prerequisites that must be satisfied before a new commodity class becomes hedgeable. The AI inference market has satisfied some. It has not yet satisfied others.
A standardized unit of measurement must exist. Weather markets use Heating Degree Days and Cooling Degree Days. Electricity markets use megawatt-hours. Carbon markets use metric tonnes of CO₂ equivalent. AI inference needs a "Standard Inference Unit", a normalized measure of compute that abstracts across models, providers, and hardware. This is the hardest problem. A token from Claude Opus is not equivalent to a token from GPT-5 nano, just as a megawatt-hour of baseload power is not equivalent to a megawatt-hour of peaking power. The solution will likely involve quality tiers or capability grades, similar to how crude oil (WTI, Brent, Dubai) and power (peak, off-peak) are traded in grades that are related but not identical, linked through basis differentials.
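As a purely speculative sketch of how such a unit might work, the snippet below normalizes token volumes across capability grades into a single exposure figure. The grade names and weights are invented for illustration; a real standard would derive them from benchmarked capability and settle differences through basis differentials, as commodity grades do:

```python
# Speculative sketch of a "Standard Inference Unit" (SIU): normalize tokens
# from different capability grades into one unit via grade multipliers, the
# way crude grades trade at differentials to a benchmark. All grades and
# weights here are hypothetical.

GRADE_WEIGHT = {
    "frontier":  1.00,   # benchmark grade (top reasoning models)
    "mid":       0.25,
    "efficient": 0.05,
}

def to_siu(tokens: int, grade: str) -> float:
    """Convert raw tokens of a given capability grade into SIUs."""
    return tokens * GRADE_WEIGHT[grade]

# A firm's monthly usage across grades collapses into one hedgeable quantity
usage = [("frontier", 2_000_000), ("mid", 40_000_000), ("efficient", 300_000_000)]
total_siu = sum(to_siu(tokens, grade) for grade, tokens in usage)
print(f"total exposure: {total_siu:,.0f} SIU")
```

Whatever the eventual weights, the design goal is the same: a derivatives contract needs one number per counterparty, not a vector of incommensurable token counts.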
A reliable, independent price index must be published. Ornn's Compute Price Index is the first serious attempt, and its Bloomberg Terminal listing provides institutional credibility.³⁹ But the index is for GPU spot prices. A similarly robust and credible price index for tokens across major providers, tiers, and geographies is required, with sufficient granularity to serve as a settlement reference for derivatives contracts.
Sufficient two-sided interest in risk transfer must be demonstrated. AI labs want revenue predictability. Enterprise buyers want cost predictability. Cloud providers want to manage capacity utilization. Hardware manufacturers want to smooth investment cycles. The natural counterparties exist. What is needed is a mechanism to match them, initially through bilateral OTC agreements (the weather market began this way), and eventually through centralized exchanges.
Legal and regulatory frameworks must accommodate the new asset class. CFTC jurisdiction over cash-settled futures on compute indices would provide the regulatory certainty that institutional participants require.⁴⁰ ISDA-style master agreements for bilateral compute hedging contracts would reduce transaction costs and legal uncertainty. Accounting standards must clarify how compute hedging relationships qualify under ASC 815 (or IFRS 9 for international firms), enabling hedge accounting treatment that smooths P&L volatility.
Market makers and liquidity providers must emerge. No derivatives market functions without intermediaries willing to hold inventory and provide continuous bid-ask quotes. Energy trading firms, quantitative hedge funds, and the trading desks of major banks are natural candidates, and several have already expressed interest in GPU compute as an asset class.⁴¹ The extension to AI token prices is natural, but the cold start is a challenge for any new market.
And critically, the cultural shift must occur within corporate treasury. CFOs must begin treating AI inference as what it is: a material, volatile, strategically significant cost input, and not a technology budget line item managed by the CIO. The moment AI spend moves from "IT procurement" to "treasury risk management" is the moment institutional demand for hedging instruments becomes real.
Conclusion: the $250 billion problem that treasury has not yet claimed
The corporate finance profession has spent more than fifty years building increasingly sophisticated tools to manage increasingly complex risks. Currency hedging evolved from simple forwards in the 1970s to exotic barrier options in the 2000s. Commodity risk management progressed from bilateral supply contracts to exchange-traded futures, swaps, collars, and structured products. Interest rate management spawned an $800 trillion notional derivatives market.
In every case, the catalyst was the same: a cost input became too large and too volatile to manage through procurement alone.
With the rapid adoption of AI, inference has already reached that threshold.
With global enterprise AI spending projected at $2.5 trillion in 2026,⁴² with inference comprising 60–80% of operating expenses for AI-intensive companies,⁴³ with monthly cost fluctuations of 30–40% documented as routine,⁴⁴ and with structural forces (subsidy withdrawal, energy costs, supply concentration, regulatory burden, agentic demand multiplication) compounding, a tail risk now exists that current budgeting frameworks cannot capture.
The case for financial hedging instruments is not theoretical. It is urgent.
The companies that will navigate this transition successfully are those that begin building the institutional muscle now, before the market and the hedging instruments even exist. That means establishing granular token cost attribution across business units and applications. It means modeling AI cost scenarios with the same rigor applied to FX and commodity exposure. It means building relationships with the emerging ecosystem of compute index providers, FinOps platforms, and derivatives market builders. And it means recognizing that the headcount-to-token substitution, while operationally compelling, represents a fundamental restructuring of the corporate risk profile that demands treasury-grade governance.
The tools do not exist today. But the blueprints are visible, the market participants are assembling, and the economic pressure is building.
In the final installment of this series, we will lay out what the hedging infrastructure must look like: the indices, the contract structures, the clearing mechanisms, the regulatory frameworks, and the market architecture required to transform AI compute from an unhedgeable risk into a manageable one.
As we said before, the AI token is the new oil barrel.
The CFO who treats it as a technology problem, rather than a financial one, will be the CFO explaining margin misses to the board.
Sources
- PrepoAI, "11 Companies Replacing Workers With AI (2026)," April 2026; Block shareholder letter, February 26, 2026.
- Tech Insider, "Tech Layoffs 2026: 150,000+ Jobs Cut," April 2026; Challenger, Gray & Christmas Q1 2026 analysis.
- CBS News, "More companies are pointing to AI as they lay off employees," February 27, 2026.
- Gulf News, "AI Job Cuts: Major Companies Replacing Humans with Bots," February 21, 2026.
- Gulf News, ibid.; Amazon CEO Andy Jassy internal memo, 2025.
- We Are Tenet, "60+ Shocking AI Job Replacing Statistics Relevant for 2026," February 18, 2026; World Economic Forum, Future of Jobs Report 2025.
- Built In, "Did AI Take Your Job? The Truth About AI Washing," March 2, 2026; Resume.org survey of 1,000 U.S. hiring managers, 2025.
- PrepoAI, ibid.; Gartner customer service practice survey, October 2025.
- Tech Insider, ibid.; Goldman Sachs research note, March 27, 2026.
- OpenMetal, "FinOps for AI Gets Easier with Fixed Monthly Infrastructure Costs," January 29, 2026.
- Gartner Infrastructure Research, cited in ThinkAI Corp, "AI Inference: A Hidden Cost Crisis," January 13, 2026.
- Drivetrain, "Unit economics for AI SaaS companies: A survival guide for CFOs," October 6, 2025.
- TechCrunch, "Investors predict AI is coming for labor in 2026," December 31, 2025.
- Jevons, W.S., The Coal Question, 1865.
- Northeastern University News, "What is Jevons Paradox? And why it may — or may not — predict AI's future," February 7, 2025.
- Artur Markus, "The Inference Cost Paradox: Why Generative AI Spending Surged 320% in 2025," January 2, 2026.
- IEA, "Energy demand from AI," 2025; The Substrat3, "The Jevons Paradox in AI: Why Efficiency Creates More Demand," January 23, 2026.
- Wikipedia, "Jevons paradox"; The Lancet Digital Health, "The Jevons Paradox in global health," November 13, 2025.
- The Substrat3, ibid.; CloudZero, "AI's False Efficiency Curve," September 16, 2025.
- IDC, "Agent Adoption: The IT Industry's Next Great Inflection Point," 2025.
- CloudZero, "The State of AI Costs in 2025," August 2025.
- Artur Markus, ibid.
- Awesome Agents, "LLM API Pricing Comparison — March 2026"; AI Magicx, "LLM API Pricing in 2026."
- PR Newswire, "Architect Financial Technologies Partners with Compute Index Provider Ornn," January 21, 2026; The Innermost Loop, "The First Tradable Compute Price Index," April 2026.
- The Block, "Former FTX US president Brett Harrison's Architect expands crypto-style perpetual futures into AI compute markets," January 21, 2026.
- Andreessen Horowitz, cited in Monetizely, "The AI Inference Cost Problem," June 18, 2025.
- Medium (Ahmed's Tech Brief), "AI Inflation: How Compute Costs Are Reshaping Tech's Margins," November 25, 2025.
- Carbon Credits, "Weathering the Storm: The Rise of $25B Weather Derivatives Market," 2024.
- OpenMetal, ibid.
- Gartner Infrastructure Research, ibid.
- The Register, "Majority of CEOs report zero payoff from AI splurge," January 20, 2026; PwC 2026 Global CEO Survey.
- CFO Tech News, "CFOs juggle cost cuts, growth bets & AI confidence gap," December 10, 2025; Gartner 2026 CFO Agenda report.
- CFO Tech News, ibid.
- Sourceability, "AI demand sparks memory supply chain strain," 2025; Capacity, "AI chip demand continues to strain big tech supply chains," 2026.
- TapTwice Digital, "8 OpenAI Statistics (2025)"; eMarketer, "OpenAI's forecast $143 billion cash outflow raises stakes," 2025.
- Pew Research Center, "US data centers' energy use amid the artificial intelligence boom," October 24, 2025.
- Pew Research Center, ibid.
- Boundless, "What is the EU AI Act? Employer compliance guide," 2025; Data Innovation, "Artificial Intelligence Act" cost analysis, 2021.
- The Innermost Loop, "The First Tradable Compute Price Index," April 2026; Ornn, ornnai.com.
- Commodity Exchange Act, 7 U.S.C. § 1 et seq.; CFTC Regulation 40.2(a).
- DRW CEO remarks cited in multiple publications, 2025; Pulse 2.0, "Ornn: $5.7 Million Seed Funding Raised," October 2025.
- Process Excellence Network, "Global AI spending will total $2.5 trillion in 2026, says Gartner," 2026.
- Gartner Infrastructure Research; ThinkAI Corp, ibid.
- OpenMetal, ibid.