I get this is sarcastic but getting to know people like that is literally how the field of criminal psychology was born and helped stop many people like Bundy.
In theory, true. But fixes to issues like this are usually done at the hardware level in future generations, or at a very low software level that most people don't have the knowledge or energy to deal with. The result is our editors/games/job tools running slower than they could, to mitigate security issues irrelevant to our common use cases.
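As a concrete aside, here is a minimal sketch of what those low-level fixes look like from userland, assuming a Linux machine (the kernel has exposed this sysfs directory since roughly 4.15):

    # Minimal sketch (Linux only): list the kernel's active CPU-vulnerability
    # mitigations. Each file under this sysfs directory is named after a
    # vulnerability (spectre_v2, meltdown, ...) and reports the mitigation in
    # effect; these are the mitigations that cost everyday software some speed.
    from pathlib import Path

    for entry in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")

Whether any of them can safely be turned off (e.g. via the mitigations=off kernel boot flag) depends entirely on your threat model.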
I'd want to see some actual hard evidence before I believed that. The usual way social cues work is that they are devastatingly effective even when people claim they are not. Much like how most interviewers are honestly convinced that their approach is unbiased, but in practice they tend to hire people who are like themselves.
My expectation is that turning up in a suit would get better results. The effect is probably smaller in hard-skill roles but I'd assume still present.
I agree, but I suspect that you'd have much better luck if you wore something that was superficially similar to the kinds of things other people wore, but was much better fitted and higher quality. For instance, if you showed up in a nice pair of chinos and a tailored button-up shirt (of appropriate formality), that might come across as being really put together, rather than ignoring subtle social cues by dressing in something that stands out by not fitting in.
I don't know where you live, but for most tech jobs here, even outside of SV, it's almost as bad as putting your photo on a resume. Even for very senior non-technical roles you're better off showing up in slacks and a blazer than the whole enchilada.
Wearing a suit to a tech interview in Silicon Valley would without a doubt send the signal that either (a) they have absolutely no clue about SV work culture, or (b) they're a "look at me" guy who dresses oddly on purpose.
If they're young it can also be because that's what they've been told to do, if they're from a different culture (even an American one) it may be shockingly weird not to wear a suit to an interview, and there are even people who wear suits all the time because a well-made suit is very comfortable, with no more showing off involved than dressing up any other way. An interview is not a regular work day, best not to summarily judge people like that.
My point is that even knowing the work culture of SV does not mean that people necessarily believe it applies to interviews too, or that a suit will be a negative point, rather than good or neutral. There is a strong culture of looking smart at interviews that overrides knowledge of day-to-day attire. If you really care about people being in casual clothes, mention it in the invite, rather than looking down on them for doing what has been ingrained to be appropriate.
First impressions do count, but I think the above poster has a point: a suit can actually harm your chances in an environment where no one wears suits.
There are many ways to wear a suit. If you walk in wearing a suit that doesn't fit, doesn't suit (no pun intended) you, and obviously makes you feel uncomfortable, then that could count against you. But if you walk in wearing a suit that fits, makes you look good, and that you are comfortable wearing, then I have a hard time seeing how it would count against you.
Wearing a suit to a technical interview is an immediate red flag. Everybody knows you don't wear suits in this industry, so what's your motive? Your ability to wear a suit is irrelevant for the job, so what weaknesses that are relevant are you rather clumsily trying to hide?
I've gotten a job offer from every technical interview I ever took in a suit, so it Worked For Me. And I never again wore a suit to any of the jobs I took (except for conferences or trade shows, and occasionally when I was going out after work to somewhere posh, which did provoke fun "Omg are you interviewing?" questions!). Which I've actually found a bit of a shame, because I do quite like a chance to wear a suit, though I'm also grateful not to have to iron infinite shirts.
Admittedly I thankfully wasn't in the SV bubble where people are wound this tightly about it!
An interview is not a regular work day. If only things relevant to the job were required in an interview, no one would be talking about whiteboard exercises.
Calling it a red flag may have been too harsh. It's certainly not an immediate no.
However, like it or not, it is a signal, because it means you deviate significantly from the mode of the distribution. And a sober application of Bayes suggests that, if anything, all else equal that signal is a negative one.
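To make the Bayes point concrete with purely illustrative numbers (these rates are made up, not measured): suppose half of all applicants are a good fit, 5% of good-fit applicants wear a suit to an SV interview, and 15% of poor-fit applicants do. Then

    P(good | suit) = P(suit | good) * P(good) / P(suit)
                   = (0.05 * 0.5) / (0.05 * 0.5 + 0.15 * 0.5)
                   = 0.025 / 0.100
                   = 0.25

so seeing the suit moves the posterior from 50% down to 25%. Flip the two likelihoods and the same arithmetic makes the suit a positive signal; the direction depends entirely on which group wears suits more often.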
I would go as far as to say that being this hyper-focused on clothes, rather than on whether the person is sociable and competent, is a red flag itself. It is rather superficial. Vague platitudes about "culture" might get thrown around, but are we engineering and building things or are we putting on a fashion show?
If there is a de facto dress code and you knowingly go against it, even if you look good in whatever you do wear, it makes you look like you don't understand the prevailing norms. This could lead to worries you might not align with other team norms either.
If it's so important, the interview invite should mention that casual wear is expected. Like it or not, most people take interviews seriously, and have been taught that you show you take the interview seriously by wearing a suit.
Tbh, people who blindly accept what they've been taught without considering the situation at hand don't make good engineers anyway (software or otherwise). It's not like programmers not wearing suits is some well-kept secret only accessible to the inner circle. Quite the opposite I'd say.
It's well known that programmers don't wear suits in the office in SV. It's less obvious that they shouldn't wear one to the interview either, because that's not a regular work day. It's not obvious at all to someone from a formal-dress culture like France (Italy? India?). Google's own AI recommends erring on the side of caution and wearing a suit to an SV interview. Yes, people should look up the specific company they're interviewing with... if it even comes to their mind; in some cultures it's just that obvious that interviews require suits.
If you'll forgive me the analogy, and assuming you're American: would you think of checking the etiquette of entering a shop? In the US, the concept itself is weird; you go in, buy stuff, and leave. In France, you must greet the shopkeeper right as you go in through the door. In Hungary, you must wish the shopkeeper a good day in reply to their greeting. It's simple... if you know it's even a thing you should check.
Which is funny, because weren't we in tech the people who aspired to “think different”? But then it didn't become think-different for the individual but for the tech in-group against the "square", boring, formality-driven out-group. And since the world is becoming increasingly informal and any group worth its salt needs to differentiate itself, tech people might be the first to return to wearing suits and ties (or dresses) to work. I'd love that.
"Think different" was a marketing slogan used for Apple products from 1997 to 2002, back when Apple was aimed chiefly at video editing professionals. It was never aimed at techies.
As long as suits and ties remain the uniform of politicians and managers, I don't think techies will ever willingly adopt it for themselves as well.
With reference to the GP about awkward people: if an adult hiring manager is intimidated by a professional applicant wearing a suit to an interview in good faith (after all, it's widely seen as a mark of taking the interview seriously), I think it is perhaps not the applicant who needs to learn the social skills.
If an interviewer can't tell the difference between a flex and a show of good intent, they should probably go back to jobs where they don't need to make judgements of character.
can you just ask them before the interview? "is it okay to wear a suit, or do you guys have a stick up your..."?
I personally dress like a hobo when I'm out and about, and wear a uniform of jeans and a blue shirt when I go into the office, so I really don't care about the suit either way. I'm wearing it for your benefit, so if you don't like it, just tell me upfront - don't make me guess if the job isn't about mindreading.
I get what you mean by EVs having high cost components but low maintenance is usually used as an argument for EVs. As long as nothing breaks they are very cheap to run in terms of both maintenance and fuel.
This matches my intuition. Zero is synonymous with "the absence of any X".
The singular equivalent would be perhaps "non-" or "-less".
Hot take: zero is a math concept and math deals with multitudes only (even under one, you're dealing with a multitude of parts). The actual irregularity is the usage of singular noun form in a math context.
Depends on your definition of profitability. They are not recovering R&D and training costs, but they (and MS) are recouping inference costs from user subscription and API revenue with a healthy operating margin.
Today they will not survive if they stop investing in R&D, but they do have to slow down at some point. It looks like they and other big players are betting on a moat they hope to build with the $100B DCs and ASICs that open weight models or others cannot compete with.
This will be either because training will be too expensive (few entities have a $10B+ training budget and no need to monetize it), or because such models, even where available, may be impossible to run inference on with off-the-shelf GPUs, i.e. these models can only run on ASICs, which only large players will have access to[1].
In this scenario corporations will have to pay them for the best models; when that happens, OpenAI can slow down R&D and become profitable even with capex considered.
[1] This is the natural progression in a compute-bottlenecked sector; we saw a similar evolution from CPUs to GPUs and ASICs in crypto a few years ago. It is a slightly distorted comparison due to the switch from PoW to PoS and some coins being intentionally designed for GPUs, but even then you needed DC-scale operations in a cheap-power location to be profitable.
They will have an endless wave of commoditization chasing behind them. NVIDIA will continue to market chips to anyone who will buy... well, anyone who is allowed to buy, considering the recent export restrictions. On that note, if OpenAI is in bed with the US government on this to some degree, I would expect tariffs, export restrictions, and all of that to continue to conveniently align with their business objectives.
If the frontier models generate huge revenue from big government, intelligence, and corporate contracts, then I can see a dynamo kicking off with the business model. The missing link is probably that there need to be continual breakthroughs that massively increase the power of AI, rather than it tapering off with diminishing returns on bigger training/inference capital outlay. Obviously, OpenAI is leveraging against that view as well.
Maybe the most important part is that all of these huge names are involved in the project to some degree. Well, they're all cross-linked in the entire AI enterprise really, like OpenAI and Microsoft, so once all the players give preference to each other, it sort of creates a moat in and of itself, unless foreign sovereign wealth funds start spinning up massive Stargate initiatives as well.
We'll see. Europe has historically been behind the ball on tech developments like this, and China, though this might be a bit of a stretch to claim, does seem to be held back by their need for control and censorship when it comes to what these models can do. They want them to be focused tools that help society, but the American companies want much more, and they want power in their own hands and power in their users' hands. So much like the first round, where American big tech took over the world, maybe it's primed to happen again as the AI industry continues to scale.
Why would China censoring Tiananmen Square/whatever out of their LLMs be any more harmful to the training process when the US-controlled LLMs also censor certain topics, e.g. "how do I make meth?" or "how do I make a nuclear bomb?".
Because China censors very common words and phrases such as "harmonized", "shameless", "lifelong", "river crabbed", "me too". This is because Chinese citizens initially used puns and common phrases to get around the censors.
They are absolutely different flavors. OpenAI is not being told by the government to censor violence, sex or racism - they're being told that by their executives.
News flash: household-name businesses aren't going to repeat slurs if the media will use them to defame them. Never mind the fact that people will (rightfully) hold you legally accountable and demand your testimony when ChatGPT starts offering unsupervised chemistry lessons - the threat of bad PR is all that is required to censor their models.
There's no agenda removing porn from ChatGPT any more than there's an agenda removing porn from the App Store or YouTube. It's about shrewd identity politics, not prudish shadow government conspiracies against you seeing sex and being bigoted.
I don't know why people care whether they're being censored by government officials or private billionaires. What difference does it make at the end of the day? Why is one worse than the other?
Because you aren't being "censored" by billionaires at all. They have made the business decision to reduce the usefulness of their AI to prevent themselves from being held legally, or even socially, accountable.
Again, consider my example about YouTube - it's not illegal for Google to put pornography on YouTube. They still moderate it out though, not because they want to "censor" their users but because amateur porn is a liability nightmare to moderate. Similarly, I don't think ChatGPT's limitations qualify as censorship.
Okay, I mean, you can say censorship isn't censorship if you want? This is my point: why are you treating limits placed on your expression/sharing/information differently based on what type of person is doing it?
Because fundamentally it's the same type of censorship as someone deciding not to sell porn magazines, videos, or the Anarchist Cookbook in their newsstand/bookstore/etc. back in the day. They judged (probably quite rightly) that it's not good for business.
Of course, the market being extremely concentrated, and effectively an oligopoly even in the best case, does shine a somewhat different light on it. Until/unless open models catch up both quality- and accessibility-wise.
Sigh. No. Censorship is censorship is censorship. That is true even if you happen to like, and can generate a plausible defense of, the US version that happens to be business-friendly (as opposed to China's, which is ruling-party-friendly).
It is not a take. It is the simple position that calling something 'involuntary semen injection' does not make it any less of a rape. I like things that are clear and well defined. And so I repeat:
I am not sure if it will surprise you, but your affiliation or the size of your 'team' is largely irrelevant from my perspective. That said, I am mildly surprised you were able to accept the new self-image as a willing censor. Most people struggle with that (edit: hence the 'this is not censorship' facade).
They're accepting your definition of censorship to highlight how fucking stupid it is. Is Hacker News a censorship haven because I flagged the "How to have Sex with Cars" post uploaded yesterday? Am I a tyrant for trying to oppress that poor user's voice? No. I'm upholding the guidelines of a privately owned and moderated community.
"Censorship is censorship is censorship" is the sort of defense you'd rely on if you were caught selling guns and kiddie porn on the internet. It's not the sort of defense OpenAI needs to use though, because they have a semblance of self-preservation instinct and would rather not let ChatGPT say something capable of pissing off the IMF or ADL. Call that "censorship" all you want - it's like canvassing for your right to yell 'fire!' in a movie theater.
Friend, neither of those is a body that can say the US constitution is null and void. Nor do they get to pick and choose which speech is kosher. It is not up to those orgs to decide.
<< They're accepting your definition of censorship to highlight how fucking stupid it is.
They are accepting it because there is no way it cannot be accepted. Now... the fact that there is some cognitive dissonance over what should logically follow is a separate issue entirely.
Yes, that's true. It's very rare for people to be able to value actual free speech. Most people think they do, until they hear something they don't like.
However, private individuals or companies deciding not to offer certain products is itself an expression of free speech.
I.e., denying someone who is running an online platform/community, or training an LLM, or whatever, the right to remove or not provide specific content is clearly limiting their right to freedom of expression.
Because a small group of elites with permanent terms and no elections deciding what is allowed and what isn't... with full control to silence what's not allowed, and any meta-discussion of the silencing itself... is different from an elected government deciding it, where anyone is free to raise a stink on whatever their version of Twitter is today without worrying about being disappeared tomorrow.
It's not an elected government if you're talking about the US. These policies are also all decided by "elites with permanent terms and no elections", you realize that, right?
I don't feel like this was a good-faith interpretation of my comment. What I'm saying is that in both the US and China, censorship is decided by unelected officials. In one case it's the CPC; in the other it's corporate executives.
That makes even less sense as a comparison. Sure, Instagram censored anti-Trump posts for a day, but in case you didn't notice, you are free to discuss that without fearing suppression or jail.
> Why would China censoring Tiananmen Square/whatever out of their LLMs be any more harmful to the training process when the US-controlled LLMs also censor certain topics, e.g. "how do I make meth?" or "how do I make a nuclear bomb?".
I was explaining why it is more harmful and thought you were arguing it is not harmful?
I was just making a very simple, narrow claim: censorship in the West and China is in both cases done by unelected people. Note that I didn't say China was good, that the censorship was equivalent, or anything else you're trying to argue. My literal only point was:
Censorship in the West and China is in both cases done by unelected people.
They want their LLMs explicitly approved to align with the values of the regime. Not necessarily a bad thing, or at least that avenue wasn't my point. It does get in the way of going fast and breaking things though, and on the other side there is an outright accelerationist pseudo-cult.
Ignoring the moral dimension for a second, I do wonder whether it is harder to implement the rather cohesive but far-reaching censorship of the Chinese style, or the more outrage-driven type of "censorship" required of American companies. In the West we have the left preoccupied with -isms and -phobias, and the right with blasphemy and perceived attacks on their politics.
With the hard shift to the right and Trump coming into office, especially the last bit will be interesting. There is a pretty substantial tension between factual reporting and not offending right-wing ideology: should a model consider "both sides" on topics with clear and broad scientific consensus if it might offend Trumpists? (Two examples that come to mind were the recent "the Nazis were actually left wing" and "there are only two genders".)
> they (and MS) are recouping inference costs from user subscription and API revenue with a healthy operating margin.
I tried to Google for more information. I tried this search: <<is openai inference profitable?>>
I didn't find any reliable sources about OpenAI. All sources that I could find state this is not true -- inference costs are far higher than subscription fees.
I hate to ask this on HN... but can you provide a source? Or tell us how you know?
I don't have any qualified source, and this metric would likely be quite confidential even internally.
It is just an educated guess, factoring in the per-token costs of running models similar/comparable to 4o or 4o-mini, how Azure commitments work with OpenAI models[2], and knowing that Plus subscriptions are probably more profitable[1] than API calls.
It would be hard for even OpenAI to know with any certainty, because they are not paying for Azure credits like a normal company. The costs are deeply intertwined with Azure and would be hard to split, given the nature of the MS relationship[3].
----
[1] This is from experience running LibreChat with 4o versus ChatGPT Plus for ~200 users; subscriptions appear more profitable than raw API usage by a factor of 3 to 4x (a rough back-of-envelope sketch follows these notes). Of course, there will be different types of users and adoption levels; my sample, while not small, is likely not representative of their typical user base.
[2] MS has less incentive to subsidize than say OpenAI themselves
[3] Azure is quite profitable in the aggregate; while possibly subsidizing OpenAI APIs, any such subsidy has not shown up meaningfully in Microsoft's financial reports.
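To make footnote [1] concrete, here is a rough back-of-envelope sketch; every number in it is a hypothetical placeholder (not OpenAI's actual prices, token counts, or my real usage data), just to show the shape of the comparison:

    # Back-of-envelope: flat subscription vs. equivalent raw API billing.
    # All numbers are hypothetical placeholders, not actual OpenAI figures.

    SUBSCRIPTION_PRICE = 20.00   # $/user/month (Plus-style flat fee)
    MSGS_PER_MONTH = 300         # assumed messages per subscriber per month
    TOKENS_PER_MSG = 1_500       # assumed prompt + completion tokens per message
    PRICE_PER_1M_TOKENS = 10.00  # assumed blended $/1M tokens at API rates

    api_cost = MSGS_PER_MONTH * TOKENS_PER_MSG / 1_000_000 * PRICE_PER_1M_TOKENS
    print(f"API-equivalent cost per user:  ${api_cost:.2f}/month")   # $4.50
    print(f"Subscription revenue per user: ${SUBSCRIPTION_PRICE:.2f}/month")
    print(f"Ratio: {SUBSCRIPTION_PRICE / api_cost:.1f}x")            # ~4.4x

With these placeholders the flat fee brings in ~4.4x what the same usage would cost at API rates, which is the kind of gap the 3-4x figure above describes; light users subsidize heavy ones.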
> but they (and MS) are recouping inference costs from user subscription and API revenue with a healthy operating margin.
As far as I am aware the only information from within OpenAI one way or another is from their financial documents circulated to investors:
> The fund-raising material also signaled that OpenAI would need to continue raising money over the next year because its expenses grew in tandem with the number of people using its products.
Subscriptions are the lion's share of their revenue (73%). It's possible they are making money on the average Plus or Enterprise subscription, but given the above claim, they definitely aren't making enough to cover the cost of inference for free users.
So I do question if OpenAI is able to make a profit, even if you remove training and R&D. The $20 plan may be more profitable, but now it will need to cover the R&D and training, plus whatever they lose on Pro.
Not necessarily. DeepSeek will probably only threaten OpenAI's API usage, which could also be banned in the US if it's too successful. API usage is not a main revenue source for OpenAI (it is for Anthropic, last time I checked). The main competitor for R1 is o1, which isn't generally available yet.
The one your laptop can run does not rival what OpenAI offers for money. Still, the issue is not whether a third party can run it; it's just that OpenAI does not seem to be putting the API forward as their main product.
Not quite. In 2 years their revenue has grown ~20x, from 200M ARR to 3.7B ARR. The inference costs, I believe, pay for themselves (in fact are quite profitable). So what they're putting on their investors' credit cards are the costs of employees and model training. Given it's projected to be a multi-trillion-dollar industry and they're seen as a market leader, investors are more than happy to throw in interest-free cash flow now in exchange for variable future interest in the form of stock.
That's not quite the same thing at all as a credit card, where you'd be paying ~18%+ annual interest on that cash flow. If you recall, AMZN (and all startups, really) had this mode early on, over-spending on R&D to grow more quickly than free cash flow would otherwise allow, to stay ahead of the competition and dominate the market. Indeed, if investors agree and your business is actually strong, this is a strong play, because you're leveraging some future value into today's growth.
Platform economics "works" in theory only up to a point. It's super inefficient if you zoom out and look not at the system level but at the ecosystem level. It hasn't lasted long enough to hit its failure cases. Just wait a few years.
As to OpenAI: given DeepSeek, and the fact that a lot of use cases don't even need real-time inference, it's not obvious this story will end well.
I also can't see it ending well for OpenAI. This seems like it's going to be a commodity market with a race to the bottom on pricing. I read that NVIDIA sells H100s at roughly 10x their production cost (a ~90% gross margin), which means that someone like Google making their own TPUs has a massive cost advantage.
Moore's law seems to be against them too... hardware getting more powerful, small models getting more powerful... It's not at all obvious that companies will need to rely on cloud models vs running locally (licensing models from whoever wants that market). Also, a lot of corporate use probably isn't that time-critical, and can afford to run slower and cheaper.
Of course, the US government could choose to wreck free-market economics by mandating that powerful models be run in "secure" cloud environments, but unless other countries did the same, that might put the US at a competitive price disadvantage.
They do get a lot of customers buying their stuff, but on top of that, a company with unique IP and mindshare can get investors to open their wallets easily enough; I keep thinking of AMD, which was unprofitable or barely profitable for something like 15 years in a row.
It's not even a dream, militaries are already many orders of magnitude more capable per soldier compared to the past. That ratio will keep increasing at an even faster rate with new technologies like AI.