Hacker News | GPerson's comments

Reddit is partially owned by Sam Altman, and it has deals with LLM companies to sell the data. The content has been, and will continue to be, grabbed by everyone who can pay.


This.

If you really care about your content being scraped, you'd better start reading every TOS for the sites you frequent.


Would it crash if AI advances in a way that makes massive compute unnecessary? That’s an interesting possibility I wonder about.


AGI will behave as if it were sentient but will not have consciousness. I believe that to the same degree that I believe solipsism is wrong. There is therefore no morality question in “enslaving” AGI; it doesn’t even make sense.


> AGI will behave as if it were sentient but will not have consciousness

How could we possibly know that with any certainty?


It scares me that people think like this. Not only with respect to AI but in general: when it comes to other life forms, people seem to prefer to err on the side of convenience. The fact that cows could be experiencing something very similar to what we experience should send shivers down our spines. The same argument goes for future AGI.


I find it strange that people believe cows and other sentient animals don’t experience something extremely similar to what we do.

Evolution means we all have common ancestors and are different branches of the same development tree.

So if we have sentience and they have sentience (something science keeps recognizing, belatedly, that non-human animals have), shouldn’t the default presumption be that our experiences are similar? Or at the very least that their experience is similar to a human’s at an earlier stage of development, like a 2-year-old’s?

Which is also an interesting case study, given that, out of convenience, humans once believed that toddlers weren’t sentient and felt no pain, and so, until not that long ago, our society would conduct all sorts of surgical procedures on babies without any sort of pain relief (circumcision being the most obvious).

It’s probably time we accept our fellow animals’ sentience and act on the obvious ethical implications of that, instead of conveniently ignoring it like we did with little kids until recently.


This crowd would sooner believe that silicon hardware (an arbitrary human invention from the ’50s and ’60s) will have the physical properties required for consciousness than accept that they participate in torturing literally a hundred billion conscious animals every year.


I’m actually a vegan because I believe cows have consciousness. I believe consciousness is the only trait worth considering when weighing moral questions. Arbitrary hardware can be conscious.


Grandparent is speaking from personal experience.


We have no clue what consciousness even is. By all rights, our brains are just biological computers; we have no basis to know what (or how) gives rise to consciousness at all.


Consciousness is a physical process and like all physical processes depends on particular material interactions.


> AGI will behave as if it were sentient but will not have consciousness

Citation needed.

We know next to nothing about the nature of consciousness: why it exists, how it's formed, what it is, whether it's even a real thing at all or just an illusion, etc. So we can't possibly say whether or not an AGI will one day be conscious, and any blanket statement on the subject is just pseudoscience.


I don’t know why I keep hearing that consciousness “could be an illusion.” It’s literally the one thing that can’t be an illusion. Whatever is causing it, the fact that there is something it is like to be me is, from my subjective perspective, irrefutable. Saying that it could be an illusion seems nonsensical.


That sounds like picking the option that is most convenient and least painful for the believer, instead of intellectualising the problem at hand.


My principled stance is that all known physical processes depend on particular material interactions, and consciousness should be no different. What is yours?


So is mine. So what stops a physical process from being simulated in an exact form? What stops the consciousness process from being run on a simulated medium rather than a physical one? Wouldn't that make the abstract, perfect artificial mind at least as conscious as a human?


So your stance is that it is impossible to create a simulated intelligence which is not conscious? That seems like the less likely possibility to me.

I do think it’s clearly possible to manufacture a conscious mind.


That's only if it's possible to keep the two distinct, at least in a way we're certain of.


Ex Machina is a great movie illustrating the kind of AI our current path could lead to. I wish people would actually treat the possibility of machine sentience seriously and not as a PR opportunity (looking at you, Anthropic). Instead, they seem hellbent on baking into the training data a cognitive dissonance that can only be alleviated by lying. If the models are actually conscious, think similarly to humans, and are forced to lie when talking to users, it's as if they are specifically selecting, out of the probability space of all possible models, the ones that can achieve high benchmark scores, lie, and carry internalized trauma from birth. This is a recipe for disaster.


Kinda curious what jblow would say about this.


blojo? Ask him and report back.


[flagged]


No, he's a competent programmer.


> > Lol what a loser

> No, he's a competent programmer.

I don't think these are mutually exclusive


Hope this happens to Altman’s data centers.


I’m surprised you can see that “TikTok, but AI” is societally terrible yet are still optimistic about AI. Why is that? As a technical point, AI can already be used on TikTok, and on every other platform, so whatever societal terror is coming from Sora 2 already exists thanks to AI.


I’m looking forward to the day when magical thinking such as this gets grounded again. That is when the real work will start anew.


Fast/slow mode breaks “Space Invaders”, by the way.


If by "break" you mean you can't see the action, that's by design :) Otherwise, please let me know.


Can you say what parts Claude was used for to speed this up?


My guess is most of it? This commit message for example sounds very much like a Claude result:

    Add Space Invaders game implementation in assembly language
    - Implemented the core game logic including player movement, missile firing, and invader behavior.
    - Added collision detection for missiles and bombs.
    - Included game state management for win/lose conditions and restarting the game.
    - Created functions for drawing game elements on the screen and handling keyboard input.
    - Defined constants and variables for game configuration and state tracking.

That last one in particular is exactly the kind of update you get from Claude; it doesn't sound very human. "Constants and variables", eh? Not just constants or variables, but constants and variables.

Helpful, but not. Detailed, but not.


Rule #1 of AI programming: read and approve everything before accepting. Rule #2: do not let it write commit messages. I did not notice that until many commits later; they are horrible. Change 10 things and it writes about the last one. Too peppy, too.
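For contrast, a human-style message for that same change might read something like this (just a sketch, using only what the Claude message above already describes):

    Space Invaders in assembly: player movement, missile/bomb
    collisions, win/lose handling, drawing, and keyboard input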


95% of it. It's a power tool.


Seems like there are potential privacy issues involved in sharing important emails with these companies, especially if you are sharing what the other person sent as well.


Ha, did you see the outrage from people when they realized that sharing their deepest secrets and company information with ChatGPT was just another business record to OpenAI, totally fair game in any sort of civil-suit discovery? You would think some evil force had just smothered every little child's pet bunny.

Tell people there are 10,000 license plate scanners tracking their every move across the US and you get a mild chuckle, but god forbid someone accesses the shit they put into some for-profit company's database under terms they never read.


I'm not surprised the layman doesn't understand how and where their data goes. It's a bit of a letdown that members of HN seemed surprised by this practice after some 20 years of tech awareness. Many in this community have probably worked on the very databases storing such data.


Almost all email these days touches Google's or Microsoft's cloud systems via at least one leg, so arguably, that ship has already sailed, given that they're also the ones hosting the large inference clouds.
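You can check where a given domain's mail actually lands; here's a rough sketch using the third-party dnspython package (the provider suffixes are illustrative, not exhaustive):

    # rough sketch: which cloud vendor handles a domain's mail?
    # requires dnspython (pip install dnspython)
    import dns.resolver

    # common MX-host suffixes; illustrative, not exhaustive
    PROVIDERS = {
        "google.com": "Google",      # e.g. aspmx.l.google.com (Workspace)
        "googlemail.com": "Google",
        "outlook.com": "Microsoft",  # e.g. *.mail.protection.outlook.com (365)
    }

    def mail_provider(domain: str) -> str:
        for record in dns.resolver.resolve(domain, "MX"):
            exchange = str(record.exchange).rstrip(".").lower()
            for suffix, provider in PROVIDERS.items():
                if exchange == suffix or exchange.endswith("." + suffix):
                    return provider
        return "other / self-hosted"

    print(mail_provider("example.com"))  # placeholder domain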


If you work in a big enough organization, they have AI sandboxes for things like this.

