Stratechery (Ben Thompson) · 13 min read

The Deployment Company, Back to the 70s, Apple and Intel

Mirrored from Stratechery (Ben Thompson) for archival readability. Support the source by reading on the original site.

Good morning,

President Trump is on the way to China, and Sharp China is your go-to podcast for understanding what happens next. Add it to your podcast player now in anticipation of the next few episodes breaking down the trip.

On to the Update:

The Deployment Company

From Reuters:

OpenAI said on Monday it is setting up a new company with more than $4 billion in initial investment to help organizations build and deploy artificial intelligence systems, and will acquire an AI consulting firm, Tomoro, to quickly scale up the unit.

After its early models saw strong resonance with consumers, OpenAI has been working aggressively to sign corporate contracts and establish a large presence in the business world where its AI will see large-scale deployment. The venture, which will be majority owned and controlled by OpenAI, also comes as rival Anthropic enjoys strong success in its enterprise AI push with its Claude family of models seeing rapid adoption among businesses. The new firm, called OpenAI Deployment Company, will help the ChatGPT maker embed engineers specializing in frontier AI deployment into organizations that will then work closely with various teams to identify where AI can make the biggest impact, OpenAI said.

Its acquisition of Tomoro, a consulting firm that helps enterprises deploy AI, will bring around 150 experienced AI engineers and “deployment specialists” to the new unit from day one. Tomoro was formed in 2023 in alliance with OpenAI, and counts companies such as Mattel, Red Bull, Tesco and Virgin Atlantic as its clients, according to its website.

That was on Monday; on Tuesday, from The Information:

Google plans to hire hundreds of engineers to help customers start using its business-focused AI products, according to a person familiar with the situation. Google’s new “forward deployed engineers” will form a new team within Google Cloud, the unit’s chief, Thomas Kurian, said on LinkedIn on Tuesday, without disclosing the size of the effort. Matt Renner, Google Cloud’s chief revenue officer, said in a separate post that the move would help Google “show up for our customers with more technical resources (vs just an ocean of salespeople).”

The announcement is one of several in the industry in recent weeks as tech companies are deploying armies of humans—often described as “forward deployed engineers”—and partnerships with consulting companies to get customers using AI-driven technology intended to automate work. On Monday, OpenAI launched the “OpenAI Deployment Company” in partnership with consulting and investment firms. Last week, Anthropic announced the creation of a joint venture with private equity firms to sell its AI to the PE firms’ customers.

It is, needless to say, tempting to drop some snark about AGI apparently not being good enough to deploy AI, but instead I’m going to go with “as predicted”.

Back to the 70s

In 2024’s Enterprise Philosophy and the First Wave of AI, I made the case that the proper analogy for AI in the enterprise was not SaaS, but rather the first wave of computing in the 1970s.

Agents aren’t copilots; they are replacements. They do work in place of humans — think call centers and the like, to start — and they have all of the advantages of software: always available, and scalable up-and-down with demand…Benioff isn’t talking about making employees more productive, but rather companies; the verb that applies to employees is “augmented”, which sounds much nicer than “replaced”; the ultimate goal is stated as well: business results. That right there is tech’s third philosophy: improving the bottom line for large enterprises.

Notice how well this framing applies to the mainframe wave of computing: accounting and ERP software made companies more productive and drove positive business results; the employees that were “augmented” were managers who got far more accurate reports much more quickly, while the employees who used to do that work were replaced. Critically, the decision about whether or not to make this change did not depend on rank-and-file employees changing how they worked, but on executives deciding to take the plunge.

Specifically, I don’t think that the Deployment Company is going in to help employees use chatbots; that’s even more clearly the case with the PE firms that both OpenAI and Anthropic are doing deals with. I expect there to be an ever-increasing number of deals where PE buys software firms with reliable cash flows and conducts significant layoffs, forcing AI to pick up the slack, solving stock-based compensation issues in the process.

I don’t know if the mandate for the Deployment Company is going to be quite so harsh, but I assume this is a company that is hired by the executive suite to fundamentally rethink business processes in a way that hasn’t been done since the mainframe:

Most historically-driven AI analogies usually come from the Internet, and understandably so: that was both an epochal change and also much fresher in our collective memories. My core contention here, however, is that AI truly is a new way of computing, and that means the better analogies are to computing itself. Transformers are the transistor, and mainframes are today’s models. The GUI is, arguably, still TBD.

To the extent that is right, then, the biggest opportunity is in top-down enterprise implementations. The enterprise philosophy is older than the two consumer philosophies I wrote about previously: its motivation is not the user, but the buyer, who wants to increase revenue and cut costs, and will be brutally rational about how to achieve that (including running expected value calculations on agents making mistakes). That will be the only way to justify the compute necessary to scale out agentic capabilities, and to do the years of work necessary to get data in a state where humans can be replaced. The bottom line benefits — the essence of enterprise philosophy — will compel just that.

What I wonder is how much of the work ends up reworking data; that, as I noted in that article, is why I was bullish on Palantir:

That leaves the data piece, and while Benioff bragged about all of the data that Salesforce had, it doesn’t have everything, and what it does have is scattered across the phalanx of applications and storage layers that make up the Salesforce Platform. Indeed, Microsoft faces the same problem: while their Copilot vision includes APIs for 3rd-party “agents” — in this case, data from other companies — the reality is that an effective Agent — i.e. a worker replacement — needs access to everything in a way that it can reason over. The ability of large language models to handle unstructured data is revolutionary, but the fact remains that better data still results in better output; explicit step-by-step reasoning data, for example, is a big part of how o1 works. To that end, the company I am most intrigued by, for what I think will be the first wave of AI, is Palantir…

That integration looks like this illustration from the company’s webpage for Foundry, what they call “The Ontology-Powered Operating System for the Modern Enterprise”:

[Illustration: Palantir’s Foundry integration diagram]

What is notable about this illustration is just how deeply Palantir needs to get into an enterprise’s operations to achieve its goals. This isn’t a consumery-SaaS application that your team leader puts on their credit card; it is SOFTWARE of the sort that Salesforce sought to move beyond.

Google’s Kurian, by the way, did dismiss any sort of Palantir comparison in a Stratechery Interview last month:

This all makes perfect sense; this bit about the Knowledge Catalog in particular fits how I’ve been thinking. I wrote a few years ago about the importance of this whole layer and understanding it; it’s a bit of a big lift to get in place. You have some sort of analog, say, with a Palantir that’s putting in their ontology: they have FDEs out on site doing multi-month projects. You have OpenAI talking about Frontier, their agent layer, and partnering with all the tech consultancies to build this out. Is this going to entail a lot of boots on the ground to get this graph working and functional in a way that your agents can operate effectively across it?

TK: We’re not competing with Palantir, we’re not building a semantic dictionary or an ontology. What we’re doing is, today I’ll give you the closest analogy.

Okay.

TK: Today when you use a model, let’s say you use Gemini, and you ask a question, Gemini goes through reasoning, and then it shows you a citation. A citation is, “How did I answer the question and what’s the source I derived from?”

Now imagine that citation was a query that needed to go to a folder in, for example, a storage system because there’s some documents there and a database because, for example, in a part number, just think about there’s a part number document that lists all the part numbers and sits in a drive and then that part number you need to fetch out to say it’s the modem that the guy is coming to repair, and that’s mapped to a table in a database.

So what the graph does, we use Gemini, so we don’t need humans, we use Gemini to say, “Hey, go and read all these documents in these drives and extract the information from it and then match that to the database table that has the reference to the part number”, and so then when Gemini turns around and says, “I got this query about how much inventory of modems they are”, the first thing it does is it says, “Okay, go to the Knowledge Catalog and it says modem is part number one, two, three, four, five”, and then it says, “By the way the table in the database that has the inventory information about this part number is this table, here’s a SQL”, it then makes the quality of what we generate higher and then when it answers the question it shows back — back to your, “Trust my data”, it shows a grounding citation saying, “That’s where we got it from.”
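The flow Kurian describes amounts to a small pipeline: a catalog (built by the model reading documents) maps a natural-language term to a part number and to the database table that holds the live data; answering a question then means resolving the term, generating a query against the mapped table, and returning the answer along with the query as its grounding citation. Here is a minimal sketch of that idea; every name and structure below is an illustrative assumption, not a Google Cloud API:

```python
import sqlite3

# Hypothetical "Knowledge Catalog": in Kurian's telling, Gemini builds this by
# reading documents in a drive (e.g. a part-number list) and matching terms to
# the database table that holds the live data. Hardcoded here for illustration.
knowledge_catalog = {
    "modem": {"part_number": "12345", "table": "inventory"},
}

def answer_inventory_question(term: str, db: sqlite3.Connection) -> dict:
    """Resolve a term via the catalog, query the mapped table, and return
    the answer together with a grounding citation (the query itself)."""
    entry = knowledge_catalog[term]  # term -> part number + table
    sql = f"SELECT quantity FROM {entry['table']} WHERE part_number = ?"
    (quantity,) = db.execute(sql, (entry["part_number"],)).fetchone()
    return {
        "answer": f"There are {quantity} {term}s in stock.",
        # The citation is the query that produced the answer, so the
        # user can see where the number came from ("Trust my data").
        "citation": {"sql": sql, "part_number": entry["part_number"]},
    }

# Toy database standing in for the enterprise inventory system.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inventory (part_number TEXT, quantity INTEGER)")
db.execute("INSERT INTO inventory VALUES ('12345', 40)")

result = answer_inventory_question("modem", db)
print(result["answer"])  # There are 40 modems in stock.
```

The point of the sketch is the separation of concerns: the model’s job is building and consulting the catalog, while the answer itself is grounded in a deterministic query whose text doubles as the citation.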

Well, so much for not needing humans! I joke, mostly — Kurian was referring to not needing a Palantir-like ontology, not necessarily dismissing the need for FDEs — but it sure is interesting how AI is creating the need for new kinds of jobs. It’s almost as if the world is more dynamic, and pure intelligence, unadulterated by what already exists and the burden of reflexivity, is more static, than the most pessimistic prognosticators may have anticipated. More prosaically, OpenAI and Anthropic need the revenue, enterprises need the imagination, and Google needs to stay in the game.

Apple and Intel

From the Wall Street Journal:

Apple and Intel have reached a preliminary agreement for Intel to manufacture some of the chips that power Apple devices, according to people familiar with the matter. Intensive talks between the two companies have been ongoing for more than a year, and they hammered out a formal deal in recent months, these people said. Bloomberg News previously reported the talks. It’s still unclear which Apple products Intel would make chips for, these people said. Apple ships more than 200 million iPhones a year as well as millions of iPads and Mac computers.

Ming-Chi Kuo reported on X late last year that Intel would make Apple’s most basic M processor on its 18A process; he didn’t specify which generation.

Regardless, while the Wall Street Journal cites Trump administration pressure, and an earlier Bloomberg article Apple’s concentration risk on TSMC and Taiwan, the most obvious reason for a deal — assuming it exists — is economic. Specifically, Apple has for two quarters running said it can’t satisfy demand because it can’t get enough capacity at TSMC. CEO Tim Cook referenced this point multiple times on the last earnings call, but I think this was the most important articulation:

The constraint in the March quarter and the June quarter, the primary constraint is the availability of the advanced nodes our SoCs are produced on, not memory. And so I don’t want to predict for supply and demand to match because if I look at it realistically, I think on the Mac mini and the Mac Studio, I believe it will take several months to reach supply-demand balance. And so we’re not at the point where we’re saying this is going to end anytime soon. And it’s not because of a problem per se other than we just undercalled the demand. And there are lead times to this, as you well understand, and it takes a while to correct that. And the primary constraint from a product point of view, or the majority of it for this quarter, for the June quarter will be on the Mac. And it’s Mac mini, Mac Studio and the MacBook Neo. It’s all of those.

Cook talked about lead times last quarter as well, and the important thing to note is that while it does take five months or so to make new chips, assuming Apple realized it needed more iPhone 17 Pro chips right away, those new A19 Pro lines only started producing chips partway through last quarter (which is why iPhone 17 Pro sales weren’t as high as they could be). Critically, however, what seems likely is that Apple took capacity away from the Mac to make more iPhone chips, and now doesn’t have enough chips for the Mini and Studio either.

The long and short of it is this: Apple doesn’t have flexible access to TSMC capacity anymore, because so much of that capacity is going to AI in particular, and it’s costing Apple meaningful money across multiple product lines. This was always the thing that would bring companies to Intel; I wrote in TSMC Risk:

Becoming a meaningful customer of Samsung or Intel is very risky: it takes years to get a chip working on a new process, which hardly seems worth it if that process might not be as good, and if the company offering the process definitely isn’t as customer service-centric as TSMC. I understand why everyone sticks with TSMC.

The reality that hyperscalers and fabless chip companies need to wake up to, however, is that avoiding the risk of working with someone other than TSMC incurs new risks that are both harder to see and also much more substantial. Except again, we can see the harms already: foregone revenue today as demand outstrips supply. Today’s shortages, however, may prove to be peanuts: if AI has the potential these companies claim it does, future foregone revenue at the end of the decade is going to cost exponentially more — surely a lot more than whatever expense is necessary to make Samsung and/or Intel into viable competitors for TSMC.

This, incidentally, is how the geographic risk issue will be fixed, if it ever is. It’s hard to get companies to pay for insurance for geopolitical risks that may never materialize. What is much more likely is that TSMC’s customers realize that their biggest risk isn’t that TSMC gets blown up by China, but that TSMC’s monopoly and reasonable reluctance to risk a rate of investment that matches the rest of the industry means that the rest of the industry fails to fully capture the value of AI.

We’re already here (reportedly). TSMC’s failure to invest aggressively enough over the last several years will, in the end, give Intel the single most important thing it needs to become a viable competitor: the customer who did more than any other to make TSMC into the leader in the first place.


This Update will be available as a podcast later today. To receive it in your podcast player, visit Stratechery.

The Stratechery Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a subscriber, and have a great day!
