
"The idea is not that LLMs themselves become a threat."

Not my idea either, but I have already heard and seen it expressed many times, usually more dramatically the less the person knows about the actual tech. Even here I've seen that point of view.



I don't think so either, but I think LLMs are closer to AGI than most people seem to think.

I think that if you let an LLM, even one of today's models, run in a loop, you get something that comes very close to consciousness. Consciousness, mind you, similar to that of a deaf, blind and otherwise sensory-deprived person, which in itself is a bit abstract already. Give it senses and ways to interact with the world, and I would argue that what you have is a type of AGI. By that I do not mean "equivalent to humans", which I don't think is a very good definition of intelligence, but a different branch of evolution. One that I can easily see surpassing human intelligence in the near future.


A feedback loop, sensors, and a goal; boom, AGI.

So many people are stuck on AI hallucinations. How many times have you seen something that, for the briefest of instants, you thought was something else?

"Orange circle on kitchen counter, brain says basketball, but, wait, more context, not a real circle, still orange, furry, oh, a cat; it is my orange cat" and all that takes place faster than you blink.

You didn't hallucinate: your brain ran a feedback loop. Based on past experience it filled in details; details that were validated stayed (still looking at the kitchen counter, orange thing unexpected), while details that deviated from expectations (the orange ball is not behaving like a ball) caused new context to be fed back in immediately, and understanding was restored.

For AGI you have to have a loop with feedback. An LLM is one leg of the stool; now add a way for it to realize when something it generates fails to stand up to prior experience or existing need, and a way to gather more context so it can test and validate its models (a rough sketch of such a loop is below).

Really, that's how anything learns; how could it be anything else?
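
To make that concrete, here is a minimal sketch of such a loop in Python. Nothing here is a real implementation: llm, validate and gather_context are hypothetical stand-ins for the model call, the check against prior experience, and whatever senses or tools supply new context; the point is only the shape of the loop.

  def agentic_loop(goal, llm, validate, gather_context, max_steps=10):
      # The LLM is one leg of the stool: it only generates.
      context = [goal]
      for _ in range(max_steps):
          draft = llm(context)
          # Does the draft stand up to prior experience / the current need?
          ok, feedback = validate(draft, context)
          if ok:
              return draft          # validated details "stay"
          # Deviation from expectation: feed it back in with fresh context.
          context.append(feedback)
          context.append(gather_context(draft))
      return None                   # no validated answer within the budget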


>Give it senses and ways to interact with the world

Much easier said than done.

You can take just one example, energy storage and usage, to illustrate how far off we are from giving AI real-world physical capabilities.

Human biology-based energy management is many orders of magnitude more advanced than current battery technology.

For any AGI to have seriously scary consequences, it needs to have a physical real-world presence that is at least comparable to humans. We're very far from that. Decades at least, if not centuries.

The more realistic near-term threat is an AGI with no physical presence being used by bad actors for malevolent ends.


>For any AGI to have seriously scary consequences, it needs to have a physical real-world presence that is at least comparable to humans

That doesn't seem true to me at all.

In fact, I don't see why it needs to have any physical presence to be scary. You can wreak a lot of havoc with just an internet connection.


Can you cite examples? I haven't seen any personally.


I don't usually bookmark or otherwise save weird opinions, but I will next time (as far as I remember, most of the threads here from when ChatGPT became big and hyped contained plenty of them; maybe I'll take a nostalgic look).


The paperclip maximizer, a.k.a. instrumental convergence theory, states not that an artificial intelligence would decide to kill humanity, but rather that, given sufficient power, it might inadvertently destroy humanity by e.g. using up all resources for computational power.

https://en.m.wikipedia.org/wiki/Instrumental_convergence


We already have these - they're called corporations.


Now imagine one working 1000x faster and not being as dumb as a mind running on bureaucracy is.


What does that have to do with current LLMs?


I also don't think instrumental convergence is a risk from LLMs.

But: using up all resources for computational power might well kill a lot of — not all — humans.

Why? Imagine that some future version can produce human-quality output at human speed for an electrical power draw of 1 kW. At current prices, running that continuously would cost about the same as the UN abject poverty threshold, but scaled to one such AI per person it's also four times the current global electricity supply, which means that electricity prices would have to go up until human and AI labour were equally priced. I think that happens at a level where most people can no longer afford electricity for things like "keeping food refrigerated", let alone "keeping the AC or heat pumps running so it's possible to survive summer heatwaves or winter freezes".
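
For scale, a quick back-of-the-envelope check in Python. Every constant is my own assumption (a rough retail electricity price, the World Bank's $2.15/day extreme-poverty line, ~8 billion people, ~30,000 TWh/year of generation), so the exact multiple will shift with the figures you plug in; the order of magnitude is the point:

  # All inputs are assumed round numbers, not figures from the comment above.
  PRICE_PER_KWH = 0.12       # USD per kWh, rough retail electricity price
  POVERTY_LINE  = 2.15       # USD per day, World Bank extreme-poverty line
  POPULATION    = 8e9        # people
  GLOBAL_TWH_YR = 30_000     # approximate global electricity generation, TWh/year

  kwh_per_day  = 1.0 * 24                       # a 1 kW agent running continuously
  cost_per_day = kwh_per_day * PRICE_PER_KWH    # ~2.9 USD/day, same ballpark as the poverty line

  fleet_tw  = 1.0 * POPULATION / 1e9            # one such agent per person, in TW
  supply_tw = GLOBAL_TWH_YR / 8760              # average global generation, in TW

  print(f"Running cost: ${cost_per_day:.2f}/day vs poverty line ${POVERTY_LINE}/day")
  print(f"Demand: {fleet_tw:.0f} TW vs ~{supply_tw:.1f} TW supply "
        f"({fleet_tw / supply_tw:.1f}x current generation)")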

Devil's in the details, though: if some future AI only gets that good around 2030 or so, renewables are likely to be at that level all by themselves and to exceed it shortly after, and then this particular conflict doesn't happen. Hopefully at that point AI-driven tractors, harvesters, etc. get us our food, UBI and so on, because that's only a good future if you don't need to work; if you do still need to work, you're uncompetitive and out of luck.


Yes. The whole of Reddit seemed to be signed up to that idea for about six months of 2023.


Yes, and can we please reinforce the key idea "the less the person knows about the actual tech".

I'm far from an expert, but having been in tech for three decades, I marvel at most of the comments I hear about AI in restaurants, on terraces, in bars, at family gatherings... the stuff I hear, oh boy.

It's easy to become a follower when you've never seen the origin...



