> I'm sure on this site nobody is naive to the potential harms of tech
I don't share your confidence. A lot of people seem to either be doing their best to ignore the risks or pretending that a nightmare scenario could never happen to them for some reason. They place huge amounts of trust in companies that have repeatedly demonstrated that they are untrustworthy. They ignore the risks or realities of data collection by the state as well.
> I don't think I'm being too cynical to wait for either local LLMs to get good or for me to be able to afford expensive GPUs for current local LLMs, but maybe I should be time-discounting a bit harder?
I'm with you. As fun as it would be to play around with AI, it isn't worth the risks until the AI is not only running locally but also safely contained, so that it can only access the data I provide it and can't phone home with insights into what it's learned about me. I'm perfectly fine with "missing out" if it makes it harder for me to be taken advantage of.
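For what it's worth, the "contained" setup isn't hypothetical even today — you can run a local model in a container with networking disabled entirely, so it physically can't phone home. A minimal sketch, assuming Docker and the `ollama/ollama` image (with the model weights already pulled into the named volume beforehand, since nothing can be downloaded once the network is cut):

```shell
# Run a local LLM with no network access at all.
# --network none: the container gets no network interface, so no telemetry.
# The docs directory is mounted read-only, so the model sees only what you choose.
docker run --rm -it \
  --network none \
  -v ollama-models:/root/.ollama \
  -v "$PWD/my-docs:/data:ro" \
  ollama/ollama
```

The volume names and paths here are illustrative, not a recipe — the point is just that "can only access the data I provide" maps cleanly onto a read-only mount plus `--network none`.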
As a side benefit, if/when AI becomes safe to use with my personal information, it'll probably suck a little less, and others will have already demonstrated which tasks it's successful or disastrous at, so I can put it to work more easily and effectively without being burned by it.