What AI Broke
Lab — early draft from Era Haus

AI displacement got specific

May 11, 2026

AI displacement stopped being uniform this week. Microsoft's first-quarter diffusion report put U.S. software developer employment 4% higher than a year ago — a record high, not a collapse. Apple admitted iOS users will pick their own AI model. Anthropic shipped ten finance agents tied to Moody's data, aimed at the analyst-pyramid work the banking industry was assumed to keep. Nvidia crossed $40 billion in equity bets on its own customers and suppliers. The pattern is specific, not general.

AI did not eat developer jobs this quarter

The defensible read 12–18 months ago was that code-generation agents — Copilot, Cursor, Cognition's Devin — were collapsing the bottom of the software engineering hiring funnel. Graduate hiring was anecdotally falling, mid-career engineers were reportedly being asked to do the work of two with AI pairing, and the soft consensus was that total developer headcount had peaked in 2024–2025. A serious operator running an engineering organization was supposed to model flat-to-declining headcount for the next 24 months.

Microsoft's Q1 2026 Global AI Diffusion Report, published May 7, put U.S. software developer employment in March 2026 roughly 4% higher than March 2025, on a base of about 2.2 million developers. The full-year 2025 employment stock rose 8.5% year-over-year — a record high for the profession. Microsoft has an obvious conflict here — it sells GitHub Copilot — but the underlying Bureau of Labor Statistics occupation codes are not Microsoft's to manufacture. Note the disconnect from sentiment: the platform competing most directly with junior developer labor is also the platform that needs more developers to integrate its output.

Anyone running a software business who priced their hiring plans against the "AI replaces engineers" narrative should reset. Particularly: venture investors telling portfolio chief executives to flatline engineering hiring; bootcamps pivoting away from web-development curricula; computer science faculty hand-waving graduate job prospects. The augmentation read — fewer junior hires per unit of output but more total units — is winning in the data. Move: re-plan hiring against integration and deployment workload, not against the abstract "AI replaces developers" headline.

Apple admitted it cannot pick your model

Apple's distribution power in consumer AI was supposed to be irreversible. The iPhone reaches roughly 1.4 billion active devices; Apple Intelligence was assumed to channel AI features through Apple-selected partners — ChatGPT today, possibly Gemini or Claude tomorrow. The defensible read was that model labs would accept Apple-as-channel-gatekeeper terms because there was no alternative path to iOS surface area. Labs were structurally preparing for years of revenue splits and branded integrations under Apple's exclusive-pick framework.

On May 5, Bloomberg reported that iOS 27 — to be unveiled at Apple's WWDC developer conference on June 8 — will introduce an Extensions framework letting users route Apple Intelligence requests across Siri, Writing Tools, and Image Playground to a user-selected provider. Gemini, Claude, and ChatGPT are confirmed; the architecture lets any compatible app register as a model provider. 9to5Mac and MacRumors confirmed independently. Apple has separately signed a Google deal for a custom Gemini-based default tier. Users will swap the engine driving Siri with a single Settings toggle.

Any AI product team that was waiting for an Apple deal to reach iPhone users should reconsider its iOS go-to-market. The premium of being Apple's chosen partner just collapsed; differentiation moves back to the product layer, where it always was. Model labs are now distribution-equal on iOS in a way they were not on May 4. Move: rebuild iOS strategy against direct app distribution and a user-preference toggle, not against the assumption that Apple is going to pick one winner for its base.

The investment-banking analyst pyramid lost its data moat

The defensible read on investment-banking analyst work — pitchbooks, credit memos, know-your-customer packets, earnings analysis, month-end close — was that AI could not eat it because the work bundles judgment, proprietary data, regulated workflow, and Microsoft Office output. Goldman, JPMorgan, Morgan Stanley, BMO ran in-house AI projects but kept analyst headcount mostly intact. The moat was the data — Moody's, S&P, Bloomberg licensing — and the integration into Office, Slack, and bank-specific compliance stacks. The assumption was that horizontal model providers could not bundle all of that.

On May 5, Anthropic launched ten pre-built financial-services agents on Claude Opus 4.7, covering exactly the workflows the pyramid was supposed to protect: pitchbooks, earnings analysis, credit memos, underwriting, know-your-customer, month-end close, statement audits, insurance claims. The launch shipped with full Microsoft 365 integration. Moody's embedded its platform — credit ratings and risk data on more than 600 million companies — into Claude as a native app. FIS, the listed banking-technology operator, deployed a Claude-built Financial Crimes Agent at BMO and Amalgamated Bank to compress anti-money-laundering investigations from hours into minutes. The bundle the analyst pyramid relied on as protection just arrived intact.

Junior and mid-level analysts whose output is a pitchbook, credit memo, or compliance packet should be uncomfortable. The bigger group: any vertical AI startup selling "AI for finance" priced against the assumption that integrating Moody's, S&P, or FactSet was a five-year, hundred-million-dollar moat. Anthropic just signed one of those partners; the second, third, and fourth will arrive faster than horizontal-wrapper competitors can ship comparable bundles. Move: stop competing on Claude or GPT wrappers for finance — compete on the data partnerships and regulated workflow integrations the frontier labs do not yet have.

Nvidia stopped pretending to be a neutral supplier

The defensible read on Nvidia was that it was the picks-and-shovels supplier to the AI boom: dominant gross margins, arm's-length sales to a customer base of hyperscalers and labs, no equity entanglement that would attract antitrust attention. The 12-month-old playbook was that Nvidia's job was to ship GPUs and that the customers' job was to fund themselves through public markets, venture, and operating cashflow. Nvidia's position rested on the chips, not on financing the demand for them.

By May 9, Nvidia had committed more than $40 billion to AI equity investments in the first four months of 2026 alone. The single largest line is $30 billion into OpenAI in late February; the remainder spans at least seven multi-billion-dollar deals — including up to $2.1 billion in data-center operator IREN, tied to a 5-gigawatt deployment commitment of Nvidia's rack-scale infrastructure, and up to $3.2 billion in Corning to build three new U.S. optical-fiber factories dedicated to Nvidia gear — plus roughly two dozen private startup rounds. Nvidia is now financing both its largest customer's demand and the fiber that connects its racks.

Three groups should be uncomfortable. AMD, Intel, and any non-Nvidia accelerator competitor counting on a level playing field at the operator level — IREN's 5 gigawatts is Nvidia infrastructure by contract. Venture investors whose AI-infrastructure portfolio competes with Nvidia-funded peers — the cost of capital is not the same. Antitrust regulators in the U.S. and EU, whose competition policy assumed chip suppliers and data-center operators were separate parties. Move: re-underwrite any AI-infrastructure thesis to include "Nvidia is the most aggressive financier in your category."

Read the four together

AI is not eating jobs uniformly. It is eating analyst pyramids that produced Office deliverables from proprietary data, and the eating tool is a vertically bundled agent stack tied to named data partners. Software developers, integrators, and infrastructure operators are getting more work, not less. Apple's retreat from picking the iOS winner closes one distribution lever for the labs; Nvidia's $40 billion in equity bets opens another, in the opposite direction — capital, not procurement. The competitive question for the next twelve months is not which model wins. It is which functional bundle wins: which combination of model, proprietary data partner, workflow integration, and capital structure can be assembled fast enough to take a vertical before someone else does. The bundle, not the model, is the unit of competition.