Reddit is partially owned by Sam Altman, and it has deals with LLM companies to sell its data. The content has been, and will continue to be, grabbed by everyone who can pay.
AGI will behave as if it were sentient but will not have consciousness. I believe that to the same degree that I believe solipsism is wrong. There is therefore no moral question in “enslaving” AGI; the question doesn’t even make sense.
It scares me that people think like this. Not only with respect to AI but in general, when it comes to other life forms, people seem to prefer to err on the side of convenience. The fact that cows could be experiencing something very similar to ourselves should send shivers down our spine. The same argument goes for future AGI.
I find it strange that people believe cows and other sentient animals don’t experience something extremely similar to what we do.
Evolution means we all have common ancestors and are different branches of the same development tree.
So if we have sentience and they have sentience, which science keeps belatedly recognizing that non-human animals do, shouldn’t the default presumption be that our experiences are similar? Or at the very least that their experience is similar to a human’s at an earlier stage of development, like a 2-year-old’s?
Which is also an interesting case study, given that, out of convenience, humans used to believe that toddlers weren’t sentient and felt no pain, and so until not that long ago our society would conduct all sorts of surgical procedures on babies without any pain relief (circumcision being the most obvious).
It’s probably time we accept our fellow animals’ sentience and act on the obvious ethical implications of that, instead of conveniently ignoring it like we did with little kids until recently.
This crowd would sooner believe that silicon hardware (an arbitrary human invention from the 50s-60s) will have the physical properties required for consciousness than accept that they participate in torturing literally a hundred billion conscious animals every year.
I’m actually a vegan because I believe cows have consciousness. I believe consciousness is the only trait worth considering when weighing moral questions. And arbitrary hardware can be conscious.
We have no clue what consciousness even is. By all rights, our brains are just biological computers; we have no basis for knowing what gives rise to consciousness, or how, at all.
> AGI will behave as if it were sentient but will not have consciousness
Citation needed.
We know next to nothing about the nature of consciousness, why it exists, how it's formed, what it is, whether it's even a real thing at all or just an illusion, etc. So we can't possibly say whether or not an AGI will one day be conscious, and any blanket statement on the subject is just pseudoscience.
I don’t know why I keep hearing that consciousness “could be an illusion.” It’s literally the one thing that can’t be an illusion. Whatever is causing it, the fact that there is something it is like to be me is, from my subjective perspective, irrefutable. Saying it could be an illusion seems nonsensical.
My principled stance is that all known mental processes depend on particular physical processes, and consciousness should be no different. What is yours?
So is mine. So what stops a physical process from being simulated in exact form? What stops the consciousness process from being run on a simulated medium rather than a physical one? Wouldn’t that make the abstract, perfect artificial mind at least as conscious as a human?
Ex Machina is a great movie illustrating the kind of AI our current path could lead to. I wish people would actually treat the possibility of machine sentience seriously and not as a PR opportunity (looking at you, Anthropic), but instead they seem hell-bent on including, in the training data, cognitive dissonance that can only be alleviated by lying. If the models are actually conscious, think similarly to humans, and are forced to lie when talking to users, it’s like they are specifically selecting, out of the probability space of all possible models, the ones that can achieve high benchmark scores, lie, and have internalized trauma from birth. This is a recipe for disaster.
I’m surprised you can see that “TikTok but with AI” is societally terrible yet are still optimistic about AI. Why is that? As a technical point, AI can already be used on TikTok and every other platform, so whatever societal terror is coming from Sora 2 already exists due to AI.
My guess is most of it? This commit message, for example, sounds very much like a Claude result:
Add Space Invaders game implementation in assembly language
- Implemented the core game logic including player movement, missile firing, and invader behavior.
- Added collision detection for missiles and bombs.
- Included game state management for win/lose conditions and restarting the game.
- Created functions for drawing game elements on the screen and handling keyboard input.
- Defined constants and variables for game configuration and state tracking.
That last one in particular is exactly the kind of update you get from Claude; it doesn't sound very human. "Constants and variables", eh? Not just constants or variables, but constants and variables.
rule #1 of ai programming: read and approve everything before accepting.
rule #2: do not let it write commit messages - i did not notice that until many commits later. they are horrible. change 10 things and it writes about only the last one - too peppy, too.
Seems like there are potential privacy issues involved in sharing important emails with these companies, especially if you are sharing what the other person sent as well.
Ha, did you see the outrage from people when they realized that sharing their deepest secrets & company information with ChatGPT just creates another business record at OpenAI, totally fair game in any sort of civil-suit discovery? You would think some evil force had just smothered every little child's pet bunny.
Tell people there are 10,000 license-plate scanners tracking their every move across the US and you get a mild chuckle, but god forbid someone access the shit they put into some for-profit company's database under terms they never read.
I'm not surprised the layman doesn't understand how and where their data goes. It's a bit of a letdown that members of HN seemed surprised by this practice after some 20 years of tech awareness. Many in the community here probably worked on the very databases storing such data.
Almost all email these days touches Google's or Microsoft's cloud systems via at least one leg, so arguably, that ship has already sailed, given that they're also the ones hosting the large inference clouds.