
It's interesting that you get downvoted for what is, from a historical perspective, a very down-to-earth, reasonable take.

I don't have kids, but I am at the age where more and more of my friends are having them, and there definitely does seem to be something there. They are exhausted but most definitely have a renewed spark of sorts.

Unfortunately this is difficult to A/B test, so I'd avoid having kids just to fix burnout.


I mean marriage is a global concept but it feels like the US makes a huge deal about it.

Like two people can't be together without being married.

But mostly it's a low-effort, low-quality comment that adds zero value and implicitly passes judgment on those who are not married and don't have kids.

As if married people with kids are the happiest people in the world lol.


> I mean marriage is a global concept but it feels like the US makes a huge deal about it.

I should have made that part clearer, but my comment was solely about the kids part of their statement. I don't think marriage is inherently different from any other long-term partnership when it comes to "existentially starving".

> As if married people with kids are the happiest people in the world lol.

That's not what I meant at all. The article is about how burnout is a catchall that hides that at our core we actually struggle for meaning. "When facing the existential vacuum, there's only one way out - up, towards your highest purpose". Children do in a lot of ways give meaning to your life; suddenly you have a reason for suffering. It's a hell of a stretch to call that happiness, but it's definitely something.


Kids with two parents are far less likely to get into crime or have mental health problems, so there is that.

(Before anyone gets onto me I lived in a single parent household for years.)


I've worked on document extraction a lot, and while the tweet is too flippant for my taste, it's not wrong. Mistral is comparing itself to non-VLM computer vision services. While not necessarily what everyone needs, those are very different beasts compared to VLM-based extraction: they give you precise bounding boxes, usually at the cost of broader "document understanding".

Their failure modes are also vastly different. VLM-based extraction can misread entire sentences or miss entire paragraphs; Sonnet 3 had that issue. Computer vision models will instead make in-word typos.


Why not use both? I just built a pipeline for document data extraction that uses PaddleOCR, then Gemini 3 to check and fix errors. It gets close to 99.9% on extraction from financial statements, finally on par with humans.

I did the opposite: Tesseract to get bboxes, words, and chars, and then Mistral on the clips with some reasonable reflow to preserve geometry. Paddle wasn't working on my local machine (until I found RapidOCR). Surya was also very good, but because you can't really tweak any knobs, when it failed it just kinda failed. But Surya > Rapid w/ Paddle > docTR > Tesseract, though the latter gave me the most granularity when I needed it.

Edit: Gemini 2.0 was good enough for VLM cleanup, and now 2.5 and above with structured output make reconstruction even easier.
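
For anyone curious, here is a minimal sketch of that hybrid (Python; assumes pytesseract and Pillow, and `vlm_cleanup` is a hypothetical stand-in for whatever VLM API you actually call):

  # Classic OCR pass for geometry; VLM pass only on low-confidence words.
  import pytesseract
  from pytesseract import Output
  from PIL import Image

  def vlm_cleanup(clip, draft_text):
      # Hypothetical: send the image clip plus the OCR draft to a VLM
      # (Mistral, Gemini, ...) and return the corrected text.
      raise NotImplementedError

  def extract(path, conf_threshold=80.0):
      img = Image.open(path)
      data = pytesseract.image_to_data(img, output_type=Output.DICT)
      words = []
      for i, text in enumerate(data["text"]):
          if not text.strip():
              continue
          box = (data["left"][i], data["top"][i],
                 data["left"][i] + data["width"][i],
                 data["top"][i] + data["height"][i])
          conf = float(data["conf"][i])
          # Keep Tesseract's precise bbox; only re-read uncertain words.
          if conf < conf_threshold:
              text = vlm_cleanup(img.crop(box), text)
          words.append({"text": text, "bbox": box, "conf": conf})
      return words

You keep the computer-vision failure mode (in-word typos, now patched by the VLM) without ever losing the geometry.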


This is The Way. Remember, AI doesn't have to replace existing solutions; it can tactfully supplement them.

Is DeepSeek's not a VLM?

I have my 5800X in my AM4 motherboard from 2017. My current system has been, beyond any doubt, the best bang for my buck of any computer I have built.

Same, 5800X in my X470 AORUS mobo and it's been fantastic, no desire to upgrade (I already had the 64GB of RAM, so the CPU swap was simple; I think I got $50 for my old 2700 CPU).

I feel like Microsoft's inability to execute in a lot of verticals should really be studied. I'm not saying this as a sound bite; I'd genuinely like to know how that is possible.

Their investment in OpenAI gave them what was, at least 1-2 years ago if not now, the best possible LLM to integrate into the Office suite, yet they are unable to deliver value with it.

Their ownership of Xbox and Windows should have allowed them to get a much better foothold in gaming, yet their marketplace is still, to this very day, a broken experience with multiple account types. It's been 10 years.

The counterpoint is obviously Azure, which still has great growth numbers, but that's a different org.

From the outside, it just seems like they should be doing better than they are. They have much better business integration than Google and Amazon. The fit is obvious, and people are borderline hooked on Excel. Why aren't they dominating completely?


Netflix spending 240Wh for 1h of content just does not pass the smell test for me.

Today I can have ~8 people streaming from my Jellyfin instance, a server that consumes about 35W measured at the wall. That's ~5Wh per hour of content (35W / 8 streams ≈ 4.4Wh), from me not even trying.


They claim that streaming over WiFi to a single mobile device averages 37W:

  Because phones are extremely energy efficient, data transmission accounts for nearly all the electricity consumption when streaming through 4G, especially at higher resolutions (Scenario D). Streaming an hour-long SD video through a phone on WiFi (Scenario C) uses just 0.037 kWh – 170 times less than the estimate from the Shift Project.
They might be folding in wider internet energy usage?

https://www.weforum.org/stories/2020/03/carbon-footprint-net...


It's way more lopsided than your example would suggest.

My understanding is that Netflix can stream 100 Gbps from a 100W server footprint (slide 17 of [0]). Even if you assume every stream is 4K and uses 25 Mbps, that's still 4,000 concurrent streams. I would guess that the bulk of the power consumption from streaming video comes from the end-user devices -- a backbone router might consume a couple of kilowatts of power, but it's also moving terabits of traffic.
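
Spelling out the arithmetic with those numbers:

  100 Gbps / 25 Mbps per 4K stream = 4,000 concurrent streams
  100 W / 4,000 streams = 0.025 W per stream on the server side

At 0.025W per stream, the server side is a rounding error next to the end-user device.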

[0] https://people.freebsd.org/~gallatin/talks/OpenFest2023.pdf


The 240W number is end to end, including the power usage of a TV. It's also the high end of the 120-240W estimate.


> I worry about the damage caused by these things on distressed people. What can be done?

Why? We are gregarious animals; we need social connections. ChatGPT has guardrails that keep this mostly safe, and it helps with the loneliness epidemic.

It's not like the people doing this are likely thriving socially in the first place; better with ChatGPT than on some forum à la 4chan that will radicalize them.

I feel like this will be one of the "breaks" between generations, where Millennials and Gen Z will be purists who call human-to-human connections the only real ones and treat anything with "AI" as inherently fake and unhealthy, whereas Alpha and Beta will treat it as a normal part of their lives.


The tech industry's capacity to rationalize anything, including psychosis, as long as it can make money off it is truly incredible. Even the temporarily embarrassed founders that populate this message board do it openly.


> Even the temporarily embarrassed founders that populate this message board do it openly.

Not a wannabe founder; I don't even use LLMs aside from Cursor. It's a bit disheartening that instead of trying to engage at all with a thought-provoking idea, you went straight for the ad hominem.

There is plenty to disagree with, plenty of counter-arguments to what I wrote. You could have argued that human connection is special or exceptional even, anything really. Instead I get "temporarily embarrassed founders".

Whether you accept it or not, the phenomenon of using LLMs as a friend is getting common because they are good enough for humans to get attached to. Dismissing it as psychosis is reductive.


Thinking that a text completion algorithm is your friend, or can be your friend, indicates some detachment from reality (or some truly extraordinary capability of the algorithm?). People don't have that reaction with other algorithms.

Maybe what we're really debating here isn't whether it's psychosis on the part of the human, it's whether there is something "there" on the part of the computer.


We need a Truth and Reconciliation Commission for all of this someday, and a lot of people will need to be behind bars, if there is to be any healing.


> Truth and Reconciliation Commission for all of this someday, and a lot of people will need to be behind bars

You missed a cornerstone of Mandela's process.


Social media, aka digital smoking. Facebook lying about measurable effects. No generational divide, same game, different flavor. Greed is good, as they say. /s


https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots

If you read through that list and dismiss it as people who were already mentally ill or more susceptible to this... that's what Dr. K (psychiatrist) assumed too until he looked at some recent studies: https://youtu.be/MW6FMgOzklw?si=JgpqLzMeaBLGuAAE

Clickbait title, but well researched and explained.


FYI, the `si` query parameter is used by Google for tracking purposes and can be removed.
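
If you want to strip it programmatically, a minimal sketch (Python, standard library only):

  # Remove the `si` tracking parameter from a shared YouTube URL.
  from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

  def strip_si(url):
      parts = urlsplit(url)
      query = [(k, v) for k, v in parse_qsl(parts.query) if k != "si"]
      return urlunsplit(parts._replace(query=urlencode(query)))

  print(strip_si("https://youtu.be/MW6FMgOzklw?si=JgpqLzMeaBLGuAAE"))
  # -> https://youtu.be/MW6FMgOzklw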



Using ChatGPT to numb social isolation is akin to using alcohol to numb anxiety.

ChatGPT isn't a social connection: LLMs don't connect with you. There is no relationship growth, just an echo chamber with one occupant.

Maybe it's a little healthier for society overall if people become withdrawn to the point of suicide by spiralling deeper into loneliness with an AI chat instead of being radicalised to mass murder by forum bots and propagandists, but those are not the only two options out there.

Join a club. It doesn't really matter what it's for, so long as you like the general gist of it (and, you know, it's not "plot terrorism"). Sit in the corner and do the club thing, and social connections will form whether you want them to or not. Be a choir nerd, be a bonsai nut, do macrame, do crossfit, find a niche thing you like that you can do in a group setting, and loneliness will fade.

Numbing it will just make it hurt worse when the feeling returns, and it'll seem like the only answer is more numbing.


> social connections will form whether you want them to or not

Not true for all people or all circumstances. People are happy to leave you in the corner while they talk amongst themselves.

> it'll seem like the only answer is more numbing

For many people, the only answer is more numbing.


This is an interesting point. Personally, I am neutral on it. I'm not sure why it has received so many downvotes.

You raise a good point about forums with real people that can radicalise someone. I would offer a dark alternative: it is only a matter of time before forums are essentially replaced by an AI-generated product that is finely tuned to each participant. Something a bit like Ready Player One.

Your last paragraph: What is the meaning of "Alpha and Beta"? I only know it from the context of Red Pill dating advice.


Gen Alpha is people born roughly 2010-2020, younger than Gen Z, raised on social media and smartphones. Gen Beta is proposed for people being born now.

Radicalising forums are already filled with bots, but there's no need to finely tune them to each participant because group behaviours are already well understood and easily manipulated.


Your comment is/was getting downvoted, perhaps because of the last line, but this is very true:

> It's just that it doesn't seem to come from someone with authority to make decisions like that or even from someone well informed about the global strategy of the corporation.

Arduino is owned by Qualcomm, and Qualcomm is known for being litigious. Whoever wrote that note, unless it was the CEO of Qualcomm, doesn't actually call the shots, and if tomorrow the directive comes from above to sue makers, they will have to comply.


I mean even if it came from the CEO he could change his mind tomorrow.

It's maybe better to look at incentives, something that blog posts can help illustrate. Does Qualcomm want to mine the maker community for IP or get them to adopt its technology?


> I mean even if it came from the CEO he could change his mind tomorrow.

To a point. Public statements do carry some legal weight, due to the principle of "promissory estoppel"[1]. There are limits to that, but it's not nothing.

[1]: https://en.wikipedia.org/wiki/Estoppel#Promissory_estoppel


My parents bought a Lexus that happens to be tall-ish with very bright headlights. I don't think I have ever driven it at night without people flashing me.

It's really up to regulators to put something in place though; I don't understand why it is taking so long. It's not like owners want those super-bright headlights; they just come with the car...


Apparently Toyota and Lexus are ordering the most blinding headlights for no particular reason. Talk to the dealer; if no one complains, they don't feel the need to fix it.


Isn't that a lot, though? That means one memory safety vulnerability per 1,000 lines of code, which seems hard to believe.


Take a look at the examples in this post: https://www.microsoft.com/en-us/msrc/blog/2019/07/we-need-a-...

Large C++ codebases have the same problems that large codebases have in any language: too many abstractions, inconsistent ways of doing things, layers of legacy. It comes with the job. The difference is that in C/C++, hard-to-read code also means hard-to-guess pointer lifetimes.


If the C++ code I worked on looked like that[1] and was actually C with classes, then I’d be switching to Rust too. For Google and Microsoft it probably makes sense to rewrite Windows and Android in Rust. They have huge amounts of legacy code and everybody’s attacking them.

It doesn’t follow that anyone else, or the majority has to follow then. But that’s predictably exactly what veteran rustafarians are arguing in many comments in this thread.

[1] Pointers getting passed all over the place, direct indexing into arrays or pointers, C-style casts, static casts. That (PVOID)(UINT_PTR) with offsetting and then copying is ridiculous.


It's not _that_ hard to believe if you start spraying smart pointers everywhere.


Per million lines of code.


Not the original commenter, but I work in the space: we have large annotated datasets with "gold" evidence that we want to retrieve, so the evaluation of new models is actually very quantitative.
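
Concretely, a minimal sketch of that kind of evaluation (Python; the data layout here is made up, but recall-at-k against gold evidence is the standard idea):

  # For each query, check how much of the annotated "gold" evidence
  # shows up in the retriever's top-k results.
  def recall_at_k(retrieved_ids, gold_ids, k=10):
      hits = sum(1 for doc_id in retrieved_ids[:k] if doc_id in gold_ids)
      return hits / len(gold_ids) if gold_ids else 0.0

  # dataset: [{"retrieved": [doc ids in rank order], "gold": [doc ids]}, ...]
  def evaluate(dataset, k=10):
      scores = [recall_at_k(ex["retrieved"], set(ex["gold"]), k)
                for ex in dataset]
      return sum(scores) / len(scores)

Swap a new model into the retrieval step, rerun, and the number moves or it doesn't.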


> but I work in the space

Ya, the original commenter likely does not work in the space - hence the ask.

> the evaluation of new models is actually very quantitative.

While you may be able to derive a % correct (and hence something quantitative), these benchmarks are by their nature subjective: Q&A on written subjects involves judgment calls. Example benchmark: https://llm-stats.com/benchmarks/gpqa Even with techniques to reduce overfitting, it isn't eliminated, so the results remain very much subjective.

