
Obviously this doesn't answer your question, but there are sci-fi stories about alien civilizations that arise on planets without heavy metals. Usually the plot revolves around their not getting past the Stone Age.

I have very limited experience with LLMs, and no recent experience teaching. But every time I hear about the problem of students using LLMs, I have two thoughts:

1) When they get out of school, no one can stop them from using LLMs. So preventing them from using them now is not a way to teach them how to cope in the future.

2) LLMs are (duh!) often wrong. So treat what the LLMs say as hypotheses, and teach the students how to test those hypotheses. Or if the LLMs are being used to write essays, have the students edit the output for clarity, form, etc. Exams might be given orally, or at least in a situation where the students don't have access to an LLM.


I've seen this in a few places, but it's rare because of all the energy (heat loss and energy required to drive the circulation) that gets used up when the water is recirculating but nobody is using it--which is most of the time.

I was recently in Iceland, and since a lot of heat is geothermal, recirc would probably make sense, but I can't remember having it. Maybe it's the pumping cost? Although natural convection driven by the difference in density between hot and cold water might make up for at least part of that.
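As a back-of-envelope check on the convection idea (a quick Python sketch; all numbers are assumed, purely for illustration), the driving pressure of a gravity loop is roughly the hot/cold density difference times g times the vertical rise:

    # Thermosiphon sketch with assumed, illustrative values.
    rho_cold = 998.0   # kg/m^3, water at ~20 C
    rho_hot  = 983.0   # kg/m^3, water at ~60 C
    g = 9.81           # m/s^2
    h = 3.0            # m, assumed vertical rise of the recirc loop

    delta_p = (rho_cold - rho_hot) * g * h
    print(f"buoyancy driving pressure ~ {delta_p:.0f} Pa")   # ~440 Pa

A few hundred pascals is tiny next to the tens of kilopascals a small circulator pump can deliver, so gravity circulation only really works with generously sized, low-friction piping.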


Since mainframes, you say. Well, sonny, when I first learned programming on a mainframe, we had punch cards and fan-fold printouts. Nothing beats that, eh?

"Make America Little Again" --Donald J. Trump

Nobody is forced to fund broadcasting, now that Trump has taken away NPR and PBS funding. That has nothing to do with spectrum: nada, zilch, nichts, rien, ma'yuk...

The free spectrum granted to NPR and PBS represents a huge amount of government funding. Spectrum is worth quite a lot; it should be auctioned off and the proceeds used to pay down the national debt.

It's not always the case, but often verifying an answer is far easier than coming up with the answer in the first place. That's precisely the principle behind the RSA algorithm for cryptography.
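A toy sketch of that asymmetry (not real RSA, just naive factoring in Python, with numbers picked for illustration): checking a claimed factorization is a single multiplication, while recovering the factors from the product means grinding through trial divisors.

    import math

    # Toy illustration, not real RSA: p and q are assumed primes near 10^6.
    p, q = 999_983, 1_000_003
    n = p * q

    # Verifying a claimed answer: one multiplication.
    assert p * q == n

    # Finding the answer from scratch: naive trial division up to sqrt(n).
    def factor(m):
        for d in range(2, math.isqrt(m) + 1):
            if m % d == 0:
                return d, m // d
        return None

    print(factor(n))   # roughly a million loop iterations vs. a single multiply

Same flavor as RSA: multiplying the private primes is trivial; undoing the multiplication is the hard part.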

Sure, it's easy to check that ((sqrt(x-3)+1)/(x/8)) is less than 4. Now do it without calculus.

Very much like this effect https://www.reddit.com/r/opticalillusions/comments/1cedtcp/s... . Shouldn't hide complexity under a truth value.


I don't see the relevance of that argument (which other responders to your post have pointed out as Searle's Chinese Room argument). The pen and paper are of course not doing any thinking, but then the pen isn't doing any writing on its own, either. It's the system of pen + paper + human that's doing the thinking.

The idea behind my argument is that people project some "ethereal" properties onto computations that happen in the... computer. Probably because electricity is involved, things show up as "magic" from our point of view, which makes it easier to project consciousness or thinking onto the device. The cloud makes that even more abstract. But if you are aware that the transistors are just a medium replicating what we have already done for ages with knots, fingers, and paint, it gets easier to see them as plain objects. Even the artifacts the machine produces are only meaningful from our point of view, because you need prior knowledge to read the output signals. So yeah, those devices end up being an extension of ourselves.

Your view is missing the forest for the trees. You see individual objects but miss the aggregate whole. You have a hard time conceiving of how exotic computers can be conscious because we are scale chauvinists by design. Our minds engage with the world on certain time and length scales, and so we naturally conceptualize our world based on entities that exist on those scales. But computing is necessarily scale independent. It doesn't matter to the computation whether it is running on some 100 GHz substrate or a 0.0001 Hz one. It doesn't matter if it's running on a CPU chip the size of a quarter or spread out over the entire planet. Computation is about how information is transformed in semantically meaningful ways. Scale just doesn't matter.

If you were a mind supervening on the behavior of some massive time/space scale computer, how would you know? How could you tell the difference between running on a human making marks with pen and paper and running on a modern CPU? Your experience updates based on information transformations, not based on how fast the fundamental substrate is changing. When your conscious experience changes, that means your current state is substantially different from your prior state and you can recognize this difference. Our human-scale chauvinism gets in the way of properly imagining this. A mind running on a CPU or a large collection of human computers is equally plausible.

A common question people like to ask is "where is the consciousness" in such a system. This is an important question if only because it highlights the futility of such questions. Where is Microsoft Word when it is running on my computer? How can you draw a boundary around a computation when there is a multitude of essential and non-essential parts of the system that work together to construct the relevant causal dynamic? It's just not a well-defined question. There is no one place where Microsoft Word occurs, nor is there any one place where consciousness occurs in a system. Is state being properly recorded and correctly leveraged to compute the next state? The consciousness is in this process.


"'where is the consciousness' in such a system": One could ask the same of humans: where is the consciousness? The modern answer is (somewhere) in the brain, and I admit that's likely true. But we have no proof--no evidence, really--that our consciousness is not in some other dimension, and our brains could be receiving different kinds of signals from our souls in that other dimension, like TV sets receive audio and video signals from an old fashioned broadcast TV station.

This brain-receiver idea just isn't a very good theory. For one it increases the complexity of the model without any corresponding increase in explanatory power. The mystery of consciousness remains, except now you have all this extra mechanism involved.

Another issue is that the brain is overly complex for consciousness to just be received from elsewhere. Typically a radio is much less complex than the signal being received, or at least less complex than the potential space of signals it is possible to receive. We don't see that with consciousness. In fact, consciousness seems to be far less complex than the brain that supports it. The issue of the specificity of brain damage and the corresponding specificity in conscious deficits also points away from the receiver idea.


Straw man. The person you're responding to talked about "equivalent statements" (emphasis added), whereas you appear to be talking about equivalent objects (AIs vs. brains) and pointing out the obvious flaw in that argument: that AIs aren't biology. The obvious flaw in the wrong argument, that is.

That's a really good question. I don't have an answer, or even the beginning of an answer, but I would hazard a guess that there is a feedback loop. So listening to yourself talk (or even better, putting your thoughts down in print) is sort of like listening to someone else talk, which puts new ideas into your mind, or causes you to better organize the ones you already have.

Doing mathematical proofs might be an extreme example of that: a mathematician has (I am told) an intuition--a thought--but has to work it out rigorously. Once they've done that, the intuition becomes much clearer. I guess.

