OpenAI is basically just Netscape at this point. An innovative product with no means of significant revenue generation.
On one side it's up against large competitors with an already established user base and product line that can simply bundle their AI offerings into those products. Google will do just what Microsoft did with Internet Explorer and bundle Gemini in for 'Free' with their other already-profitable products and established ad-funded revenue streams.
At the same time, Deepseek/Qwen, etc. are open sourcing stuff to undercut them on the other side. It's a classic squeeze on their already fairly dubious business model.
It's a fun trope to repeat but that's not what OpenAI is doing. I get a ton of value from ChatGPT and Codex from my subscription. As long as the inference is not done at a loss this analogy doesn't hold. They're not paying me to use it. They are generating output that is very valuable to me. Much more than my subscription cost.
I've been able to help set up cross-app automation for my partner's business, remodel my house, plan a trip to Japan and navigate the cultural barrier, vibe-code apps, get technical support and so much more.
To be fair, I would get a ton of value out of someone selling dollars for 20 cents apiece.
But ya, OAI is clearly making a ton of revenue. That doesn't mean it's a good business, though. Given a 20-year horizon, shareholders will be very upset unless the firm can deliver about a trillion in profit, not revenue, to justify the 100B (so far) in investment, and even that would barely beat the long-term S&P 500 average return.
But Altman himself has said he'll need much more investment in the coming years. And even if OAI became profitable by jacking up prices and flooding gpt with ads, the underlying technology is so commodified, they'd never be able to achieve a high margin, assuming they can turn a profit at all.
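Back-of-the-envelope, here's the compounding math behind that "about a trillion" figure (the return rates below are illustrative assumptions, not forecasts):

    # What ~$100B invested today has to turn into over 20 years
    # to match various assumed market returns (illustrative only).
    investment = 100e9
    years = 20

    for annual_return in (0.08, 0.10, 0.12):
        required_value = investment * (1 + annual_return) ** years
        print(f"{annual_return:.0%} -> ~${required_value / 1e9:,.0f}B")

    # 8%  -> ~$466B
    # 10% -> ~$673B
    # 12% -> ~$965B  (roughly the "about a trillion" ballpark)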
I think there's something off with their plans right now: it's pretty clear at this point that they can't own the technological frontier, Google is just too close already and from a purely technological PoV they are much better suited to have the best tech in the medium term. (There's no moat and Google has way more data and compute available, and also tons of cash to burn without depending on external funding).
But ChatGPT is an insane brand and for most (free) customers I don't think model capabilities (aka “intelligence”) are that important. So if they stopped training frontier models right now and focused on driving their costs down by optimizing their inference compute budget while serving ads, they could make a lot of money from their user base.
But that would probably mean losing most of its paying customers over the long run (companies won't be buying mediocre tokens at a premium for long) and more importantly it would require abandoning the AGI bullshit narrative, which I'm not sure Altman is willing to do. (And even if he was, how to do that without collapsing from lack of liquidity due to investors feeling betrayed is an open question).
Being an insane brand means literally nothing if people can trivially switch to competitors, which they can.
There isn't even a tenth of enough money if you group together all of advertising. Like, the entire industry. Ads are a bad, bad plan that won't work. Advertising is also extremely overvalued. And even at its overvalued price tag, it's nowhere near enough.
It's Coca Cola vs Pepsi. Yes some might even say Pepsi has been shown to taste better, but people still buy loads of Coke.
Of course the tech savvy enterprises will use the best models. But the plumber down the road doesn't care whether she asks Gemini or ChatGPT about the sizing of some fittings.
Right, again, even if you take every advertiser in the world and shove them in a dungeon and then point a cannon at them and say "give me all the money you have", you won't even have 1/10th the money you need.
Everyone is vastly, vastly overestimating advertising. Advertising is a side hustle, because the product is the main hustle.
Yes, and if you take all that money, it's not even 1/10 enough.
Consumers can spend what they can spend. Not even 1 quadrillion dollars in advertising can change that. There is a hard, hard cap to the value of advertisement because of that. It's just how the thing works.
It's not enough to pay for the size of the rollout the AI companies are doing. The difference between Google and OpenAI is that Google's ad revenue comes at basically 0 cost. Google serves multiple ads for every search, and actually completing the search costs a tiny fraction of a cent (the majority of which is the cost of figuring out which ad to display). OpenAI is in a totally different boat. They get a similar number of ads per search, but each query will cost them far more than a simple web search. A paragraph or a single image costs OpenAI over 1c to generate, so you can't even cover the marginal costs on ads alone (Google ads cost ~.1c per view).
Furthermore, OpenAI has to make up for a ton of debt they are taking on. They've already lost $9B, and are planning on losing another $75B in the next 2 years. So they have a massive hole to dig themselves out of, and it's only getting deeper.
> OpenAI is in a totally different boat. They get a similar number of ads per search, but each query will cost them far more than a simple web search. A paragraph or a single image costs OpenAI over 1c to generate
First of all, your numbers are off by an order of magnitude at least: even GPT-5 can generate 1000 tokens for 1c, which is much more than a paragraph.
And then again, that's why my entire argument revolved around the fact that OpenAI would need to stop aiming for the technological edge. Deepseek generates 25k tokens for a cent and it's still a gigantic model. If you use a model comparable in size to gpt-oss-120b you can even increase that up to 100-200k tokens per cent (going from 32GB worth of active parameters, 32B at q8 for Deepseek, to 4GB, 8B using MXFP4 for gpt-oss-120b). That would mean being able to serve more than 100 answers per cent spent on inference.
If they can serve .1c worth of ads per request, that's 90% gross margin for you.
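Back-of-the-envelope version of that margin claim, treating my numbers above as assumptions:

    # All inputs are assumptions from the comment above, not measured figures.
    tokens_per_cent = 150_000      # assumed throughput for a small gpt-oss-120b-class model
    tokens_per_answer = 1_000      # assumed average answer length
    ad_revenue_per_answer = 0.1    # cents of ads served per request (assumed)

    inference_cost_per_answer = tokens_per_answer / tokens_per_cent   # ~0.007 cents
    gross_margin = 1 - inference_cost_per_answer / ad_revenue_per_answer
    print(f"{gross_margin:.0%}")   # ~93%, i.e. the "90% gross margin" ballpark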
People could trivially switch their search engine to Bing or Yahoo, but they don't.
If ads are so overpriced, how big is your short position on google? Also ads are extremely inefficient in terms of conversion. Ads rendered by an intelligent, personalized system will be OOM more efficient, negating most of the "overvalue".
I'm not saying they should serve ads. It's a terrible strategy for other reasons.
Funny that you mention Yahoo, as in my mind they're the perfect example of what the poster above you noted: people quickly switched to Google once a better alternative to Yahoo appeared.
> People could trivially switch their search engine to Bing or Yahoo, but they don't.
Well those are obviously worse products.
> If ads are so overpriced, how big is your short position on google?
I hate hearing this stupid, stupid line.
Most companies are run by neanderthals with more money than brains. Companies burn money on advertising because why not? Making your product better is hard and takes time; advertising is the easiest thing you can do. Does it work? Not really, no, but you get extra business for as close to zero effort as you can possibly get. Hit a wall? Just advertise more!
> Ads rendered by an intelligent, personalized system will be OOM more efficient, negating most of the "overvalue".
This is exactly what people said about personalized ads. "No you don't understand! It's not like a billboard!"
And that's true, but consumers are not fucking braindead, and there's also the laws of economics. If I have 50 bucks, I'm not spending 20 fucking dollars on your dumbass paint, no matter how much you advertise it. And that's not a me thing, that's a consumer thing. You can spend 1 quadrillion dollars advertising Ferraris and guess what - you will STILL quickly saturate that market and hit a hard ceiling. Because consumers can't afford it.
And that's not even touching on the fact that most of the metrics around advertisements are just obviously bullshit. How many human eyeballs are actually on ads? Much, much less than everyone thinks.
Yes, sure, we can build highly personalized ads. Whatever. But at the end of the day, consumers still have the exact same amount of disposable income as before. We have created Z E R O value, what we have done is consolidated it.
Hmm, what happens when markets consolidate too much? Well, I guess that would mean advertising becomes completely worthless, wouldn't it? What a conundrum! It's a good thing our markets haven't been consolidating for the past 70 years...
Do you think consumer brands lose money when they pay Google to do advertising? Do you think digital ads have a negative ROI for the brands that buy them? If so, why do they keep buying more? Wouldn’t they lose to more efficient companies?
I think you underestimate how valuable being the top slot on google is. Just the other day i googled “bluetooth speaker” and bought the first result (an ad). One hour of that can net you millions of dollars. That’s why consumer brands bid more and more every year on digital advertising.
> Do you think consumer brands lose money when they pay Google to do advertising?
For many brands, yes, and they don't know it.
> I think you underestimate how valuable being the top slot on google is.
The more you advertise, the less valuable each ad space becomes. Consumers have a lot of money they have to dole out. Giving them more ads won't increase that pot of money - it will make your cut smaller and smaller as it's split across more brands.
Which brands lose money on ads? Why are they still in business?
> consumers have a lot of money that they dole out. More ads won't increase the cut of money
Consumer spending is not a fixed pie chart or a zero sum game. US consumer spending has grown from $14 to $19 trillion since 2020. $5 trillion in new pie!!
Your model of ads is: “I, a consumer, have decided to buy a bluetooth speaker, and the ads push and pull me towards particular brands”. But that’s not how ads work! Ads don’t just compete for fixed spending, they induce NEW spending. An ad can give a customer the idea of buying, and grow the market.
> US consumer spending has grown from $14 to $19 trillion since 2020. $5 trillion in new pie!!
All that's telling you is the economy is not doing nearly as well as some of our metrics would have you believe.
Real wages are about the same as before, probably lower. Consumers are buying the same amount of stuff - no value has been created. Rather, the dollar has been devalued, much more than we're willing to let on.
There's real value, like actual physical goods, services and labor, and fake value. Fake value tries to proxy real value, but historically it's often way off.
Money is fake value. Stocks are even more fake value. It doesn't matter if your stock price is through the roof if you're not selling a product people want, for example. The product is the value, the stock price is people trying to approximate the value and future value.
Look it's fine to have contrarian opinions that left is right, everything is backwards, whatever. But when it comes to business and money, these things are quantitative and falsifiable. If you have a better understanding than the idiots in charge, then go be rich! If you have a better model for real value, you'll outcompete them. Until you do that, you are playing word games, ones which have somehow deluded you into believing that the most profitable company on earth is not valuable.
It's not even contrarian, it's just true. Money has always been a proxy for real value, which we created because real value can be hard to measure.
> If you have a better understanding than the idiots in charge, then go be rich!
Doesn't work this way because most markets are dumb as rocks.
> If you have a better model for real value, you'll outcompete them.
Doesn't work this way because most markets are dumb as rocks.
Look, after a certain point you have to detach from what you're being told and look at the world around you.
Prime example: tobacco. For humanity, tobacco has a negative value. You should be getting paid to smoke. Why? Because it kills you, and that's very expensive.
But that's hard to measure, right? So we just sell the cigarettes and say their value is what they're sold for. But that's not their actual value.
Their actual value, in the real world, in your hands and in your lungs, is negative. That's not an opinion. That's objective. That's just what it is.
When you look around our markets, almost all products are like this to some degree. The value we're creating is not necessarily real value.
Ads are another prime example. Do they enrich the world? Do they help consumers? No. They have zero real value. They just move money around via manipulation. That's not my opinion. That's just the objective reality.
Eventually, the real world catches up to la la land. You can't just say "well do ads and you make money". When there's no more money to move around, then even our fake value estimates of ads approach zero.
> Being an insane brand means literally nothing if people can trivially switch to competitors, which they can.
Logically speaking, yes it is easy to switch between OAI and Gemini, or Coke and Pepsi. But brand loyalty is more about emotions (comfort, familiarity, ...) than logical reasoning.
The best way to drive inference cost down right now is to use TPUs. Either that or invest tons of additional money and manpower into silicon design like Google did, but they already have a 10 year lead there.
Altman's main interest is Altman. ChatGPT will be acquihired, most people will be let go, the brand will become a shadow of its former self, and Altman will emerge with a major payday and no obvious dent in his self-made reputation as a leading AGI thinkfluencer, etc.
I don't think ads are that easy, because the hard part of ads isn't taking money and serving up ad slop, it's providing convincing tracking and analytics.
As soon as ad slop appears a lot of customers will run - not all, but enough to make monetisation problematic.
This! Most people who don't work in adtech have no idea how hard it is to:
1. Build a platform that offers new advertising inventory that advertisers can buy
2. Convince advertisers to advertise on your platform
3. Show advertisers that their advertising campaigns in your platform are more successful than in the several other places they can advertise
If:
- the best performance for inference is found by spending more and more tokens (deep thinking)
- pricing is based on cost per token
Then the inference providers/hyperscalers will take all of the margin available to app makers (and then give it to Nvidia apparently). It is a bad business to be in, and not viable for OpenAI at their valuation.
What I'm saying is that I'm not sure the first point is true.
I think they all have become sufficiently good for most people to stick to what they are used to (especially in terms of tone/“personality” + the memory shared between conversations).
This. Netscape was THE browser in the early phases of the Internet. Then Microsoft just packaged IE into Windows and it was game over. The brand means nothing long term. If Google broadly incorporates Gemini into all the Google-owned things everyone already has then it’s game over for OpenAI.
The mass commoditization of the tech is rapidly driving AI to be a feature, not a product. And Google is very strongly positioned to take advantage of that. Microsoft too, and of course they have a relationship with OpenAI but that’s fraying.
To be completely fair, the later versions of Netscape were increasingly giant bloated piles of crap while IE slowly caught up and surpassed them in terms of speed and features. The first versions of IE were only good for downloading Netscape.
Netscape, to a large degree, killed itself.
Not to say IE turned into anything good though. But it did have its heyday.
The whole US economy is so deep into La-La Land at this point that they don't really need to be a good business. There are already murmurings that they may pull off a trillion-dollar IPO, and I don't see why they wouldn't; Amazon made it cool to lose money hand over fist during your IPO as far back as 1997. They have the President willing to pump up their joint ventures with executive orders, and we may just see tech become more like the financial industry, where a handful of companies are dubbed "too big to fail" based on political connections, and get bailed out at the taxpayer's expense when things get too rough. None of these guys function according to the real rules of the economy or even the legal system at this point, they just make stuff up as they go along and if they're big enough or know someone big enough they often get away with it.
People did say the same thing about Youtube, which was unprofitable and extremely expensive to run in the early years. I remember thinking everyone would leave once ads were added.
At YouTube's ad income rate (~$13/year per user), the current (but growing) ~800 million ChatGPT users would add ~$10 billion. At Facebook's rate (~$40-50/year), $32-40 billion. Potentially, an assistant would be more integrated into your life than either of those two.
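Quick sketch of that per-user arithmetic (the per-user revenue figures are rough estimates, not reported numbers):

    # Assumed ad revenue per user per year, times the assumed user base.
    users = 800_000_000

    youtube_rate = 13              # assumed $/user/year
    facebook_rate = (40, 50)       # assumed $/user/year range

    print(f"YouTube-like:  ~${users * youtube_rate / 1e9:.0f}B / year")      # ~$10B
    print(f"Facebook-like: ~${users * facebook_rate[0] / 1e9:.0f}-"
          f"{users * facebook_rate[1] / 1e9:.0f}B / year")                   # ~$32-40B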
The "audience retention" is the key question, not the profitability if they maintain their current audience. I've been surprised how many non-technical people I know don't want to try other models. "ChatGPT knows me".
The network effects aren't the same. All the viewers watch youtube because it has all the content, and all the creators post on youtube because it has all the viewers.
How can a model achieve this kind of stickiness? By "knowing you"? I don't think that's the same at all. Personally, one of the reasons I prefer Claude is that it doesn't pretend to know me. I can control the context better.
the problem with the YouTube analogy is that media platforms have significant network effects that NN providers don't. OpenAI can't command a premium because every year that goes by, the cost to train an equivalent model to theirs decreases.
Youtube didn't either at the time. The front page was widely seen as garbage, and everyone I knew watched videos because they were embedded or linked from external sites. "If they introduce ads, people will just switch to other video hosts, won't they?" Many of the cooler creators used Vimeo. It was the good recommendation algorithm that came later that I think allowed an actual network effect, and I don't remember people predicting that.
The field is too young to know what will keep users, but there are definitely things that plausibly could create a lock-in effect. I mentioned one ("ChatGPT knows me") which could grow over time as people have shared more of themselves with ChatGPT. There's also pilots of multi-person chats, and the social elements in Sora. Some people already feel compelled to stick to the "person" they're comfortable talking to. The chance of OpenAI finding something isn't zero.
That's a bit revisionist. Network effects were obvious when Google acquired Youtube. Google Video had the edge technically, but it didn't matter because Youtube had the users/content and Google saw that very clearly in their user growth before they made their offer.
I'm not sure about it having the edge, I thought Google video had a worse interface between them at the time. But that point feels eerily relevant anyway: a lot of normal people I see don't care if Claude/Gemini/etc are better models technically, they're comfortable with ChatGPT already.
A lot of YT's growth at the time was word of mouth and brand among the population, which is currently ChatGPT's position.
ChatGPT is losing their brand positioning to Google, Anthropic, and Chinese Open Source
Altman knows this, and it's why he called a code red. If OpenAI hasn't produced a fully new model in 1.5 years, how much longer can they hang on before people turn to alternatives that are technically better? How long before they could feasibly put out a new model if they are having issues in pre-training?
They're losing their benchmark lead to those companies. But no chance that your average user is even aware of Anthropic, much less OSS models. The brand is mostly fine IMO, it's the product that needs to catch up.
Maybe ChatGPT is sticky enough that people won't switch. But since we're talking about something as old as Google Video, we could also talk about AltaVista, which was "good enough" until people discovered a better and more useful alternative.
A lot of "normal people" are learning fast about ChatGPT alternatives now. Gemini in particular is getting a lot of mainstream buzz. Things like this [1] with 14k likes are happening everyday on social. Marc Benioff's love for Gemini broke through into the mainstream also.
Youtube didn't have a significant competitor, once the quality started declining and the ads started creeping up, there were no alternatives to switch to (as a user) because the content creators were in on the profit.
The same isn't true about ChatGPT.
Anthropic and Google provides a similar product, and switching to a better/cheaper platform is fairly easy as it only depends on you and not on others (content creators or friends) doing the same.
It also didn't generate billions in income before even adding ads^1, nor grow anywhere near as quickly. ChatGPT is larger on most axes.
YouTube was ambitious for its time. "In 2007, YouTube consumed as much bandwidth as the entire Internet had in 2000", but it wasn't believed to have started breaking even until 2015.
Whether it "generated billions" is a wrong angle to look at this. What is relevant is the relation between the spend, or the committed spend and the income. I don´t believe that YT at any point committed to investing literally 1000x of its yearly revenue into a partner company, nor do I remember its CEO using made up words such as "annualised revenue" that keep being spat out by both OpenAI and Antropic CEOs, in the sense of "projecting the max monthly revenue as annual and fooling the investors".
I suspect some of the downvoters hate the idea of ads, which is understandable.
But a lot of HN users use gmail, which has the same model. And there are plenty of paid email providers which seem far less popular (I use one). Ads didn't end up being a problem for most people provided they were kept independent of the content itself.
1. Yes, we're also talking about ChatGPT's free plan here too.
Ads could fund more quota or bigger models for users who don't wish to pay (and/or just make it more sustainable)
Google will almost certainly be doing this with Gemini, and if ChatGPT can't offer as much it leaves an easy reason for people to switch.
2. It does have ads in the default interface, though they're quite unobtrusive. You might also have a blocker. But yes, I suspect their size allows them to provide it mildly "at a loss" to support their ads elsewhere.
Good question. Salesforce does well because they provide the application layer to the data.
The WWW in the 1990s was an explosion of data. To the casual observer, the web-browser appeared to be the internet. But it wasn't and in itself could never make money (See Netscape). The internet was the data.
The people who built the infrastructure for the WWW (Worldcom, Nortel, Cisco, etc.) found the whole enterprise to be an extremely loss-making activity. Many of them failed.
Google succeeded because it provided an application layer of search that helped people to navigate the WWW and ultimately helped people make sense of it. It helped people to connect with businesses. Selling subtle advertising along the way is what made them successful.
Facebook did the same with social media. It allowed people to connect with other people and monetized that.
Over time, as they became more dominant, the advertising got less subtle and then the income really started to flow.
Salesforce is similar in that it helps businesses connect with and do business with each other. They just use a subscription model, rather than advertising. This works because the businesses that use it can see a direct link to it and their profitability.
Because they lock you in. ChatGPT has no lock in, in fact none of the LLMs do just because of how they work.
Salesforce doesn't make a good product, and certainly not the best product. It doesn't matter, you don't need to if you can convince idiots with money to invest in you. And then the switching cost is too much, too late.
That business model is a dying one and all the software companies know it. That's why Microsoft has spent the last 15 years opening up their ecosystems. As automation increases, switching cost decreases. You can't rely on it.
There's no doubt you're getting a lot of value from OpenAI, I am too. And yes, the subscription is a lot more value than what you pay for. That's because they're burning investors' money and it's not something that is sustainable. Once the money runs out they'll have to jack up prices and that's the moment of truth, we'll see what users are willing to pay for what. Google or another company may be able to provide all that much cheaper.
Inference is profitable for OpenAI as far as I can tell. They don't need to jack up prices much, what they really need is users who are paying/consuming ads. They're burning money on free-tier users and data center expansion so they can serve more users.
This assumes your model is static and never needs to be improved or updated.
Inference is cheap because the final model, despite its size, is ridiculously less resource intensive to use than it is to produce.
ChatGPT in its latest form isn't bad by any means, but it is falling behind. And that requires significant overhead, both to train and to iterate on model architecture. It is often a variable cost as well.
As long as revenue rises faster than training costs and inference remains profitable, I don't think this is an issue. Eventually they'll be able to profitably amortize training across all the users.
> As long as revenue rises faster than training costs
And this is definitely not happening. They are covering training costs with investors money, and they can't really stop it without their competitors catching up
If making money on inference alone was possible, there would be a dozen different smaller providers who'd be taking the open weights models and offering that as service. But it seems that every provider is anchored at $20/month, so you can bet that none of them can go any lower.
> If making money on inference alone was possible, there would be a dozen different smaller providers who'd be taking the open weights models and offering that as service.
There are! Look through the provider list for some open model on https://openrouter.ai . For instance, DeepSeek 3.1 has a dozen providers. It would not make any sense to offer those below cost because you have neither moat nor branding.
Maybe, but arguably a major reason you can't make money on inference right now is that the useful life of models is too short, so you can't amortize the development costs across much time because there is so much investment in the field that everyone is developing new models (shortening useful life in a competitive market) and everyone is simultaneously driving up the costs of inputs needed for developing models (increasing the costs that have to be amortized over the short useful life). Perversely, the AI bubble popping and resolving those issues may make profitability much easier for the survivors that have strong revenue streams.
You need a certain level of batch parallelism to make inference efficient, but you also need enough capacity to handle request floods. Being a small provider is not easy.
Why would the open weights providers need their own tools for agentic workflows when you can just plug their OpenAI-compatible API URL into existing tools?
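For example, a minimal sketch with the official openai Python client pointed at a hypothetical third-party OpenAI-compatible endpoint (the base URL and model name are placeholders):

    # Any OpenAI-compatible provider works with existing tools just by
    # overriding the base URL; the URL and model below are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.example-provider.com/v1",  # hypothetical provider endpoint
        api_key="PROVIDER_API_KEY",
    )

    resp = client.chat.completions.create(
        model="deepseek-v3.1",  # whatever model name the provider exposes
        messages=[{"role": "user", "content": "Summarize this diff in one sentence."}],
    )
    print(resp.choices[0].message.content)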
> when you can just plug their OpenAI-compatible API URL into existing tools?
Only the self-hosting diehards will bother with that. Those that want to compete with Claude Code, Gemini CLI, Codex et caterva will have to provide the whole package and do it at a price point that is competitive even with low volumes - which is hard to do because the big LLM providers are all subsidizing their offerings.
As a developer - ChatGPT doesn't hold a candle to Claude for coding-related tasks and underperforms for arbitrary-format document parsing[1]. It still has value and can handle a lot of tasks that would amaze someone in 2020 - but it is simply falling behind and spending much more doing so.
1. It actually underperforms Claude, Gemini and even some of the Grok models for accuracy with our use case of parsing PDFs and other rather arbitrarily formatted files.
Predictions are supposedly profitable but not enough to amortize everything else. I don't see how they would justify their investments even if predictions cost them nothing.
That the product is useful does not mean the supplier of the product has a good business; and of course, vice versa. OpenAI has a terrible business at the moment, and the question is, do they have a plausible path to a good one?
> As long as the inference is not done at a loss this analogy doesn't hold.
I think there were some articles here that claimed that even inference is done at a loss - and that's per subscriber. I think it was for their $200 subscription.
In a way we will soon be in a "deal with it" situation where they just impose metered pricing instead of subscriptions.
> It's a fun trope to repeat but that's not what OpenAI is doing.
This is literally what OpenAI is doing. They are bleeding cash, i.e. spending more than they earn. How useful it is to you is not relevant in the context of sustainability. You know what is also super useful to some people? Private yachts and jets. It does not mean they are good for society as a whole. But even leaving out the holistic view for a moment - their business model is not sustainable unless they manage to convince politicians to declare them national infrastructure or something like that, and have taxpayers continue to finance them, which is what they already probed for in the last months. Out of interest, why would you want ChatGPT to plan your trip to Japan? Isn't planning it yourself part of the excitement?
I pay $100/month for Claude Max, and I've already said it, I would go up to $500 a month and wouldn't hesitate for a second. I'd probably start to hesitate for $1,000 maybe, only cuz I know I wouldn't be able to use it enough to maximize that value. But I might still suck it up and pay for it (I don't use it enough yet to need the $200/month but if I started hitting limits faster, I would upgrade), or at that point start looking for alternatives.
It's worth that much to me in the time saved. But I'm a business owner, so I think the calculus might be quite different (since I can find ways to recoup those costs) from an individual, who pays out of their main income.
I outlined examples of how I used CC/AI a couple months ago [1]. Since then I've used it even more, to help reduce our cloud bills.
Right I am sure some find it is worth 5-10x the cost.
The challenge is that if the numbers are accurate they need 5-10x to break even on inference compute costs, before getting into training costs and all the other actual overhead of running a company like compensation.
Will everyone be willing to pay 5-10x? Probably no.
Will half of users pay 10-20x? Or a quarter pay 20x++?
Or we end up with ads … which already seem to be in motion
95% of ChatGPT users aren't paying customers, if they won't pay $10 per month, there's zero chance of them paying $100 or $500.
That's not to say that there aren't many, like you, for whom $500 is a perfectly good deal, there's just not nearly enough for OpenAI to ever turn a profit.
I mean Claude is good for business use-cases, other than that it's completely censored cuck garbage and the CEO is worse than the pope. With Grok you can actually roleplay without it wagging its finger at you. OH MY GOSH YOU SAID BOOB!
Normies literally see no difference between GPT and Claude, just that Claude is much more expensive and the CEO is even more of a dummy than Altman.
The parent isn't arguing you're not getting a good value out of the product. It says that users' contributions don't cover production costs, which may or may not be true but doesn't have much to do with how much value they get from it.
> I've been able to help set up cross-app automation for my partner's business, remodel my house, plan a trip to Japan and navigate the cultural barrier, vibe-code apps, get technical support and so much more.
you could have done all of this without a chatbot.
You are mostly missing the point. You’re saying you get value out of what OpenAI is offering you. Thats not at issue here.
The question is, does OpenAI get value out of the exchange?
You touched on it ever so briefly: “as long as inference is not done at a loss”. That is it, isn’t it? Or more generally, As long as OpenAI is making money . But they are not.
There’s the rub.
It’s not only about whether you think giving them your money is a good exchange. It needs to be a good exchange for both sides, for the business to be viable.
That's not the parent's point though? Their point is that if the models are now widely available and there are better competitors, then what's the point of ChatGPT? Maybe you decide to stick with ChatGPT for whatever reason, but people will move to cheaper and better alternatives.
This analogy only really works for companies whose gross margin is negative, which as far as I know isn’t the case for OpenAI (though I could be wrong).
It’s an especially good analogy if there is no plausible path to positive gross margin (e.g. the old MoviePass) which I think is even less likely to be true for OpenAI.
I'm not confident at all. I didn't say "there is definitely a path". I said the existence of such a path is plausible. I'm sure Sam Altman believes that too, or he'd have changed jobs ages ago.
very clever! I hadn't seen anybody make this point before in any of these threads /s
obviously the nature of OpenAI's revenue is very different from selling $1 for $0.2, because their customers are buying an actual service, not anything with resale value or obviously fungible for $
FWIW the "selling $1 for $0.2" line is widely applied to any business that is selling goods below cost.
For example: free shipping at Amazon does not have resale value and is not obviously fungible, but everyone understands they are eating a cost that otherwise would be borne by their customers. The suggestion is that OpenAI is doing similar, though it is harder to tease out because their books are opaque.
They don't eat all of it anymore, even for Prime members. To the extent they do, it's largely to reduce friction for what is still a purchase that will generally take longer to arrive than going to the store. Plus, it's a nice perk that makes you feel like you're getting a deal.
As for profits, I haven't looked recently, but IIRC profits are mostly:
1. AWS
2. Prime membership fees
The latter drives loyalty and therefore volume and predictability, which allows Amazon to e.g. operate their own mini-UPS in the quest to make money on most parcels. They also rolled back free shipping on everything over the years and use it more surgically and with minimum order sizes.
They sell a product, not a model. ChatGPT is a product, GPT5 is a technology.
If you hope that ChatGPT will be worthless because the underlying technology will commodify, then you are naive and will be disappointed.
If that logic made sense, why has it never happened before? Servers and computers have been commodified for decades! Salesforce is just a database, social media is just a relational database, Uber is just a GPS wrapper, AWS is just a server.
People pay money, set up subscriptions, and download apps to solve a problem, and once they solve that problem they rarely switch. ChatGPT is the fifth most visited website in the world! Facebook and Deepseek making open-source models means you can make your own ChatGPT, just like you can make your own Google, and nobody will use it, just like nobody uses the dozens of “better” search engines out there.
The underlying technology already is commodified - there are many, many other models that are extremely competitive.
The problem is: suppose Google has an equivalent model (they do, but if you disagree, just pretend). Suppose they do. What then is OpenAI offering that makes its product more intriguing? Nothing. They have a chat interface. An intern can make a chat interface.
> ChatGPT is the fifth most visited website in the world!
To me, this is absolutely worthless information. That DOES NOT mean that ChatGPT is in the clear and nobody else will overtake them.
Your analogies really paint the picture here aptly. Salesforce is not just a database, it's a lot of stuff on top of it. AWS is not just a server, it's a lot of stuff on top of it. Uber is not just a GPS wrapper, it's a taxi service. That's a different thing.
ChatGPT... is just a model. What they add on top approaches zero. Because that's just how the technology works. It takes text and gives it to a model and then spits out the output. What more can you add onto that system, removing the model? Make it easier to input text? Make it easier to get output? Well that's truly trivial to do, and I would argue ChatGPT isn't even in the top 10 when it comes to that. Today.
ChatGPT is not a model, it is a product. My credit card charges me for ChatGPT, not GPT5. Products have moats because once you solve the problem and become a habit, consumers never switch. Nobody switches from Google, nobody switches from Instagram, nobody switches from AWS to whatever godforsaken Hetzner self-hosted solution hackernews says is cheaper than the cloud.
Nobody wants the same thing but cheaper, or the same thing but marginally better. You either solve the problem first, or you lose. The first site to ever threaten the dominance of google.com is chatgpt.com! Why? Because it’s NOT just “google but better”, it’s an entirely new thing.
> To me, this is absolutely worthless information. That DOES NOT mean that ChatGPT is in the clear and nobody else will overtake them
Do you think chatgpt.com will be worse than the 5th most visited website 5 years from now? I’ll gladly take that bet, let’s do $100, I’ll even give you 2:1 odds. Do you think OpenAI will be bankrupt in <10 years? Let’s bet $1000, hell I’ll give you 10:1 odds.
Chatgpt.com alone is clearly at least as valuable as instagram.com, soon to be as valuable as google.com, and long term more than either.
People need to understand that OpenAI is not a publicly traded company. Sam is allowed to be outrageously optimistic about his best case scenarios, as long as he is correct with OpenAI's investors. But those investors are not "the public", so he can publicly state pretty much anything he wants, as long as it is not contradicting facts.
So he cannot say "OpenAI made 20B profit last year." but can say "OpenAI will make 20B revenue next year." Optimism is not a crime.
Kind of, but there are limits. The investors still have LPs who aren’t going to be happy if things get messy. Things can still get really ugly even for a private company.
That ship has sailed. CNBC talks about the AI bubble and over-valuation every day. Retail investors won’t touch OpenAI. It’s increasingly looking like these LPs will be left holding the bag when the music stops.
In 2024, OpenAI claimed 70-80% of its revenue came through consumer ChatGPT subscriptions. That's wildly impressive.
But now they've had an order of magnitude revenue growth. That can't still be consumer subscriptions, right? They have to have saturated that?
I haven't seen reports of the revenue breakdown, but I imagine it must be enterprise sales.
If it's enterprise sales, I'd imagine that was sold to F500 companies in bulk during peak AI hype. Most of those integrations are probably of the "the CEO has tasked us with `implementing an AI strategy`" kind. If so, I can't imagine they will survive in the face of a recession or economic downturn. To be frank, most of those projects probably won't pan out even under the rosiest of economic pictures.
We just don't know how to apply AI to most enterprise automation tasks yet. We have a long way to go.
I'd be very curious to see what their revenue spread looks like today, because that will be indicative of future growth and the health of the company.
I'm reading 5% on a quick search. Isn't that an unsurprising conversion rate for a successful app with a free tier? Why would it increase further in ChatGPT's case, other than by losing non-paying customers?
consumer subs aren't even close to saturated and business subs are where the real money is anyway. Most white collar workers are still on free-tier Copilot, not paying OpenAI.
This is pretty much all that OpenAI is at the moment.
Mozilla is a non-profit that is only sustained by the generous wealthy benefactor (Google) to give the illusion that there is competition in the browser market.
OpenAI is a non-profit funded by a generous wealthy benefactor (Microsoft).
Ideas of an IPO and profitability are all just pipe dreams in Altman's imagination.
A few months ago, the founder was talking about "AGI" and ridiculous universal basic compute. At this point, I don't even know whom to believe. My first-hand experience tells me ChatGPT and even ClaudeCode are nowhere near the expertise they are touted to have. Yet the marketing by these companies is so immense that you get washed away; you don't know who are agents and who are putting forward their true opinions.
ClaudeCode:
- Making functions async without need; it doesn't know the difference between the two or in which scenarios to use them.
- Consistently fails to make changes to the frontend if a project grows above 5000 LOC or a file goes near 1000 LOC.
- The worst part is it lies after making changes.
ChatGPT:
- Fails to implement mid-complex functionality such as scrolling to the bottom when new logs are coming in and not scrolling when the user is checking historical logs.
These models are good at mainstream tasks, the snippets of which you find a lot in repositories. Try to do something off-beat such as algorithmic trading; they fail spectacularly.
I'm unsure how someone could use LLMs regularly and not encounter significant mistakes. I use them a lot less than some devs and still run into basic errors pretty often, to the point that I rarely bother using them for niche or complicated problems even though they are pretty helpful in other cases. Just in the past few days I've had Claude trip all over itself on multiple basic tasks.
One case was asking how to do a straightforward thing with a popular open source JavaScript library, right in the sweet spot of what models should excel at. Claude's whole approach was completely broken because it relied on a hallucinated library parameter that didn't exist and didn't have an equivalent. It invented a keyword that doesn't appear in the entire open source library repo, to control functionality the library doesn't have.
> Mozilla is a non-profit that is only sustained by the generous wealthy benefactor (Google) to give the illusion that there is competition in the browser market.
Good way of phrasing things. Kinda sad to read this; I tried to react with 'wait, there is competition in the browser market', but it is not a great argument to make - without the money from having Google as the default search engine, Mozilla would effectively collapse.
The main issue there is you need some way to pay the engineers in that transitional period the moment Mozilla collapses. Otherwise they leave, find new jobs, and you lose all the expertise and knowledge of the codebase.
Anecdotal, but my wife wasn't interested in switching to Claude from ChatGPT. As far as she's concerned ChatGPT knows her, and she's got her assistant perfectly tuned to her liking.
ChatGPT is to AI as Facebook is to social media. OpenAI captured a significant number of users due to first-mover advantage, but that advantage is long gone now.
And Facebook only makes money because it is essentially just an advertising platform. Same with Google. It's fundamentally just ads.
The only way OpenAI can survive is to replicate this model. But it probably doesn't have the traffic to pull it off unless it can differentiate itself from the already crowded competition.
Ads make sense in an AI search engine product like Perplexity. ChatGPT could try to make a UI like that.
But the thing is, the world already has an AI search engine. It's called Google, and it's already heavily integrated with Gemini. Why would people switch?
This is my horror as well. I wouldn't mind my YouTube account being blocked, but what about all the recommendations that I have curated to my liking? It would be a huge chunk of lost time to rebuild and insert my preferences into the algorithm. Increasingly, "our preferences shaped by time and influences and encounters both digital and offline" are as much about us as we are physically.
You could ask GPT for what it knows about you and use it to seed your personal preferences to a new model/app. Not perfect and probably quite lossy, but likely much better than starting from scratch.
I have no YouTube account, and it can usually figure out my viewing history just from my watching a few of my favorite channels... including specific videos I watched years ago.
> Google will do just what Microsoft did with Internet Explorer and bundle Gemini in for 'Free' with their other already-profitable products and established ad-funded revenue streams.
“will do”? Is there any Google product they haven't done that with already?
Oh God I love the analogy of OpenAI being Netscape. As someone who was an adult in the 1990s, this is so apt. Companies at that time were trying to build a moat around the World Wide Web. They obviously failed. I've thought that OpenAI too would fail but I've never thought about it like Netscape and WWW.
OpenAI should be looking at how Google built a moat around search. Anyone can write a Web crawler. Lots of people have. But no one else has turned search into the money printing machine that Google has. And they've used that to fund their search advantage.
I've long thought the moat-buster here will be China because they simply won't want the US to own this future. It's a national security issue. I see things like DeepSeek is moat-busting activity and I expect that to intensify.
Currently China can't buy the latest NVidia chips or ASML lithography equipment. Why? Because the US said so. I don't expect China to tolerate this long term, and of any country, China has demonstrated the long-term commitment to this kind of project.
Literally got an email this morning from Google, to say my Google One plan now 'includes AI benefits' - including
"More access to Gemini 3 Pro, our most capable model
More access to Deep Research in the Gemini app
Video generation with limited access to Veo 3.1 Fast in the Gemini app
More access to image generation with Nano Banana Pro
Additional AI credits for video generation in Flow and Whisk
Access Gemini directly in Google apps like Gmail and Docs" [Thanks but no thanks]
I know it's been said before but it's slightly insane they're trying to compete on a hot new tech with a company with 1) a top notch reputation for AI and 2) the largest money printer that has ever existed on the planet.
Feels like the end result would always be that while Google is slow to adjust, once they're in the race they're in it.
The problem for Google is that there is no sensible way to monetize this tech and it undercuts their main money source which is search.
On top of that the Chinese seem to be hellbent to destroy any possible moat the US companies might create by flooding the market with SOTA open-source models.
Although this tech might be good for software companies in general - it does reduce the main cost they have which is personnel. But in the long run Google will need to reinvent itself or die.
Gemini has been in Google search for a while now. I use it somewhat often when I search for something and want to ask follow-up questions. I don't see any ads in Gemini, but maybe I would if I searched for ad-relevant things, idk. But I definitely use Google search more often because Gemini is there, and that probably goes for a lot of people.
Well maybe not in 1999. Adwords didn't launch until 2000? Google's 1999 revenue was...... I forget, but it was incredibly small. Costs were also incredibly small too though, so this isn't a good analogy given the stated year of 1999.
Google in 1999 was already far superior to Yahoo and other competitors. I don't think OpenAI is in a similar position there. It seems debatable as to whether they're even the best, let alone a massive leap ahead of everyone else the way Google was.
Google was better but it wasn't far superior to AltaVista is what I remember.
Yahoo was always more a directory of websites.
AltaVista was better than Lycos or Yahoo but then Google was faster, gave better results than AltaVista and the very minimal UI was something interesting. I quite liked AltaVista but I never went back to it after using Google either.
I might even say Gemini 3 is better relative to GPT5 than Google was to AltaVista. GPT5 feels rather useless to me after my time now with Gemini.
As I remember it (I was just starting college at the time), Google search was an absolute revelation. You could type in a search term and the first hit would usually be what you wanted. AltaVista required a lot of looking through results to find the right thing and messing around with boolean operators. People switched over and never looked back. Google went from zero to a substantial majority market share in only about one year.
> Google was better but it wasn't far superior to AltaVista is what I remember.
Everyone's entitled to their opinion, but I remember it being significantly better. With AltaVista, you'd have to dig into page 8 before getting to the good stuff. History is written by the victors, as they say, but I remember Google search results being significantly better than AltaVista. It wouldn't be until two decades later that I got to work there though.
OpenAI has tons of funnels for their products. Azure's AI smoke-and-mirrors offerings use OpenAI behind the scenes, big with enterprise users (who have a lot of money).
It can be bundled for "free" if they raise the price of google workspace. LLMs are right now most valuable as an enterprise productivity software assistant. Very useful to have a full suite of enterprise productivity software in order to sell them.
> Google will do just what Microsoft did with Internet Explorer and bundle Gemini in for 'Free' with their other already-profitable products and established ad-funded revenue streams.
Just some numbers to show what OpenAI is against:
GMail users: nearing 2 billion
Youtube MAU: 2.5 billion
active Android devices: 4 billion (!)
Market cap: 3.8 trillion (at a P/E of 31)
So on one side you've got this behemoth with, compared to OpenAI's size, unlimited funding. The $25 bn per year OpenAI is after is basically a parking ticket for Google (only slightly exaggerating). A behemoth that came out with Gemini 3 Pro "thinking" and Nano Banana (that name though), both of which are SOTA.
And on the other side you've got the open-source weights you mentioned.
When OpenAI had its big moment, HN was full of comments about how it was game over for Google, search was done for. Three years later and the (arguably) best model gives the best answer when you search... using Google search.
Funny how these things turn out.
Google is atm the 3rd biggest cap in the world: only Apple and NVidia are slightly ahead. If Google is serious about its AI chips (and it looks like they are), and given the fuck-ups upon fuck-ups by Apple, I wouldn't be surprised at all if Alphabet was to regain the number one spot.
That's the company OpenAI is fighting: a company that's already been the biggest cap in the entire world and that's probably going to regain that spot rather sooner than later and that happens to have crushed every single AI benchmark when Gemini 3 Pro came out.
I had a ChatGPT subscription. Now I'm using Gemini 3 Pro.
The branding is all wrong. I could see Apple buying Anthropic, but OpenAI is exactly the wrong AI company for Apple. OpenAI is the tacky, slop-based AI company. Their main value is the brand and the users, but Apple already has a strong brand and billions of users. Apple needs an AI company with deployment experience and a good model, but paying for a brand and users doesn't make sense for them.
It is insignificant when they're spending more than $115bn to offer their service. And yes, I say "more than," not because I have any inside knowledge but because I'm pretty sure $115bn is a "kind" estimate and the expenditure is probably higher. But either way, they're running at a loss. And for a company like them, that loss is huge. Google could take the loss as could Microsoft or Amazon because they have lots of other revenue sources. OAI does not.
Google is embedding Gemini into Chrome Developer Tools. You can ask for an analysis of individual network calls in your browser by clicking a checkbox. That's just an example of the power of platform. They seem to be better at integration than Microsoft.
OpenAI has this amazing technology and a great app, but the company feels like some sort of financial engineering nightmare.
We live in crazy times, but given what they’ve spent and committed to that’s a drop in the bucket relative to what they need to be pulling in. They’re history if they can’t pump up the revenue much much faster.
Given that we’re likely at peak AI hype at the moment they’re not well positioned at all to survive the coming “trough of disillusionment” that happens like clockwork on every hype cycle. Google, by comparison, is very well positioned to weather a coming storm.
That's a good point. Google was sleeping on AI and wasn't able to come up with a product before OpenAI; they only scrambled to come out with something when OpenAI became all the rage. Big companies are hard to budge and move in a new direction.
Google and Microsoft have existing major money printing businesses to keep their AI business afloat and burn money for a while. That's how Microsoft broke into gaming (and then squandered it years later for unrelated incompetence)
Every F500 CEO told their team "have an AI strategy ASAP".
In a year, when the economy might be in worse shape, they'll ask their team if the AI thing is working out.
What do you think happens to all the enterprise OpenAI contracts at that point? (Especially if the same tech layperson CEOs keep reading Forbes and hearing Scott Galloway dump on OpenAI and call the AI thing a "bubble"?)
It’s really even easier than that. I already do all my work on AWS and use Bedrock that hosts every popular model and its own except for OpenAIs closed source models.
I have a reusable library that lets me choose between any of the models I choose to support or any new model in the same family that uses the same request format.
Every project I’ve done, it’s a simple matter of changing a config setting and choosing a different model.
If the model provider goes out of business, it’s not like the model is going to disappear from AWS the next day.
Given a choice between being “locked in” to a major cloud provider and trusting your business to a randomish little company, you are never going to get a compliance department to go for the latter. “no one ever got fired for choosing AWS”.
This is the API - it’s basically the same for all supported languages
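Roughly, in Python with boto3 it looks like this (a minimal sketch; the model IDs are just examples of what Bedrock hosts):

    # Provider-agnostic inference on Bedrock via the Converse API:
    # switching vendors is just a different model ID from config.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    def ask(model_id: str, prompt: str) -> str:
        response = bedrock.converse(
            modelId=model_id,
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        return response["output"]["message"]["content"][0]["text"]

    # Example model IDs; swap in whatever your account has enabled.
    print(ask("anthropic.claude-3-5-sonnet-20240620-v1:0", "Classify this support ticket."))
    print(ask("amazon.nova-lite-v1:0", "Classify this support ticket."))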
Real companies aren’t concerned about cost as much as working with other real companies, compliance, etc and are comparing cost or opportunities between doing a thing and not doing a thing.
One of my specialties is call centers. Every call deflected by using AI vs talking to a human agent can save from $5 - $15.
Even saving money by allowing your cheaper human agents to handle a problem where they are using AI in the background, can save money. $15 saved can buy a lot of inference.
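To put that in perspective (the per-token price here is an assumed blended rate, not anyone's quoted pricing):

    # How much inference one deflected call pays for, under an assumed price.
    saved_per_call = 15.0               # $ saved per deflected call (upper end above)
    price_per_million_tokens = 5.0      # assumed blended $/1M tokens

    tokens_per_deflected_call = saved_per_call / price_per_million_tokens * 1_000_000
    print(f"{tokens_per_deflected_call:,.0f} tokens")   # 3,000,000 tokens of headroom per call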
And the lock-in boogeyman is something only geeks care about. Migrations from one provider to another cost so much money at even a medium scale that they are hardly ever worth it, between the costs, distractions from doing value-added work, and risks of regressions and downtime.
99% of people who use it do so because of A. existing agreements wrt compliance and billing (including credits, spend agreements etc.) B. IAM/org permissioning structures that they already have set up.
> Isn't the API worse
No, for general inference the norm is to use provider-agnostic libraries that paper over individual differences. And if you're doing non-standard stuff? Throw the APIs at Opus or something.
> Aren't the p95 latencies worse?
> The costs higher?
The costs for Anthropic models are the same, and the p95 latencies are not higher, they're more stable if anything. The open weights models do look a bit more expensive but as said many businesses don't pay sticker price for AWS spend or they find it worth it anyway.
One less vendor to vet, one less contract to negotiate, one less 3rd-party system to administer. You're already locked into AWS anyway. It integrates with other AWS services. Access control is already figured out.
I forgot to mention that. But funny enough AWS and GCP made a joint announcement that they are introducing a service to let users easily connect the infrastructure of the two providers between their private networks without going over the public internet.
This isn’t some type of VPN solution, think more like DirectConnect but between AWS and GCP instead of AWS and your colo.
It’s posited that AWS agreed to this so sales could tell customers that they don’t have to move their workloads from AWS to take advantage of Google’s AI infrastructure without experiencing extreme latency.