A lot of his ideas remind me of the BeReal app: it limits posts per day and is geared toward 'friends in real life', and with just a few friends on it I've stayed engaged. But it's sparse for me and can be a ghost town much of the time, though that may just be because my friend group isn't using it much. There needs to be a sufficient network effect to maintain and grow its reach as a network, which may be antithetical to its founding principles.
Lol I don't know how to say this but BeReal is flawed from the start. IG is not about being real at all for the most part - hence its popularity. People enjoy being able to cosplay and show their best self, not revealing their true self.
That's been my experience with BeReal as well. I seldom see posts from people I'm close with, and so I post less as well, and presumably that contributes to a similar feeling for other friends of mine.
In fact, these days, I only post on it so that I can record the moment and add it to the collection of photos that are convenient and fun to look back through.
I liked BeReal while it was still just 'post one picture a day'. Now you have 'your brands' there, and they try to increase 'engagement' by trying to get people to use the app more... I was there when it happened on Facebook, I was there when it started on Instagram – not again, thx.
They show the factory in their video on the page, it's a new modern car chassis with the vintage look (so it's not a Ford Bronco chassis but the external look is inspired by it).
Sadly the past link is a little more basic - just the title, or the URL (if it's constructed reasonably). It doesn't find all related posts (which is still NP)
This action comes a bit late, at the end of the "Search engine" era, at a time when AI responses from many sources are largely replacing the "Google Search".
Similar action happened against Microsoft Windows around 2000, just as the rise of web-based apps (online email, Google Docs, etc.) largely made the underlying operating system less relevant to how people use their computers and apps.
So I read this as the dominant player can monopolize a market while it's relevant without an issue, and once the market starts to move on, the antitrust lawsuits come in to "make the market more competitive" at a time when that era is mostly over.
As for trying to regulate early (as with the last administration's AI legislation that is now being repealed): only hindsight is 20/20, and regulating too early can kill a market. My conclusion is to just let the best product win, and even dominate for a while, as this is part of the market cycle; when a better product/platform comes along, the market will move to it.
Because we're still in the "get them hooked" stage where AI startups and Google/Meta are losing money on AI.
Once they start properly monetising (aka making users pay what it actually costs them to train and run the models), it will be a different story. The vast majority of people won't pay $20-30/month for an LLM to replace their search engine. (And an analysis I saw of OpenAI's business model and financials indicated they're losing money even on their paid tier, per query.)
They’ll instead accept being manipulated and fed content influenced by money, be it ads or individualized content that somehow serves whoever is paying for it.
IMO the AI players are only hooking the power users and technically inclined (in regard to AI replacing search), but there's a long, long tail of people who are going to be using Google as their search engine until the end of days. Think the type of person who types "facebook.com" into Google instead of their address bar — they're not switching over to ChatGPT or Kagi any time soon.
I’ve started using ChatGPT to look up most things I used to Google. It gives me immediate, concise results without having to parse bloated websites full of ads and other garbage.
Google themselves are trying to figure this out, with the first (top placement) of search results showing their Gemini AI response, at least for me. I read this as an attempt to keep users on Google instead of asking ChatGPT or some other AI. What's your take on that?
My take is that it's primarily a smart way to quickly gather lots of Elo/A-B feedback about LLM responses for training, whilst also reducing people switching to ChatGPT. OpenAI has a significant first-mover advantage here, and it's why they're so worried about distillation, because it threatens the moat.
Google, on the other hand, has a huge moat in access to probably the best search index available and existing infrastructure built around it. Not to mention the integration with Workspace emails, docs, photos - all sorts of things that can be used for training. But what they (presumably) lack is the feedback-derived data that OpenAI has had from the start.
ChatGPT does not use search grounding by default and the issues there are obvious. Both Gemini and ChatGPT make similar errors even with grounding but you would expect that to get better over time. It's an open research question as to what knowledge should be innate (in the weights) and what should be "queryable", but I do think the future will be an improved version of "intelligence" + "data lookup".
Every AI chatbot to date suffers from the "Film expert" effect. That is when a script writer presents data from an "expert" in a movie or show to the audience in response to some information need on the part of the other characters. Writers are really good at making it sound credible. When an audience member experiences this interchange, generally they have one of two experiences. Either they know nothing about the subject (or the subject is made up like warp drive nacelle engineering) and they nod along at the response and factor it into their understanding of the story being told. Or, they do know a lot about the subject and the glaring inaccuracies jolt them out of the story temporarily as the suspension of disbelief is damaged.
LLMs write in an authoritative way because that is how the material they have been trained on writes. Because there is no "there" there, an LLM has no way of evaluating the accuracy of the answer it just generated. People who "search" using an LLM in order to get information about a topic have a better than even chance of getting something that is completely false, and if they don't have the foundation to recognize it's false, they may incorporate that false information into their world view. That becomes a major embarrassment later when they regurgitate that information, or act on it in some way that comes back to bite them.
Gemini has many examples of things it has presented authoritatively that are stochastic noise. The current fun game is to ask "Why do people say ..." and create some stupid thing like "Too many cats spoil the soup." That generates an authoritative sounding answer from Gemini that is just stupid. Gemini can't say "I don't know, I've never seen anything that says people say that."
As companies push these things out into more and more places, more and more people will get the experience of believing something false because some LLM told them it was true. And like that friend of yours who always has an answer for every question you ask, but whose answers keep turning out to be bullshit, you will rely on that information less and less. And eventually, like your buddy with all the answers, you stop asking them questions you actually want the answer to.
I'm not down on "LLMs" per se, but I do not believe there is any evidence that they can be relied on for anything. The only thing I have seen, so far, that they can do well is help someone struggling with a blank page get started. That's because more people than not struggle with starting from a blank page but have no trouble correcting something that is wrong, or rewriting it into something better.
"Search" is multifaceted. Blekko found a great use case with reference librarians. They would have paid Blekko to provide them an index of primary sources that they could use. The other great use is shopping, if you can pair it with your social network. (Something Blekko suggested to Facebook, but Zuck was really blind to that one.) Blekko had a demo where you could say "Audi car dealer" and it would give you the results ranked by your friends' ratings of their service. I spent a lot of time at Blekko denying access to the index by criminals who were searching for vulnerable WordPress plugins or e-commerce shopping carts. ChatGPT is never going to give you a list of all sites on the Internet running a vulnerable version of WordPress :-).
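That friend-ranked demo could be sketched like this (purely hypothetical data and function names; Blekko's actual implementation was surely different):

```python
# Hypothetical sketch of social ranking: order search results by the
# average rating your friends gave each business; unrated ones go last.

from statistics import mean

def rank_by_friends(results, friend_ratings, friends):
    """Sort results by mean rating among your friends (unrated last)."""
    def score(result):
        ratings = [r for (friend, biz), r in friend_ratings.items()
                   if biz == result and friend in friends]
        return mean(ratings) if ratings else float("-inf")
    return sorted(results, key=score, reverse=True)

results = ["Audi of Downtown", "Westside Audi", "Audi Express"]
friend_ratings = {
    ("alice", "Westside Audi"): 5,
    ("bob", "Westside Audi"): 4,
    ("alice", "Audi of Downtown"): 2,
}
print(rank_by_friends(results, friend_ratings, {"alice", "bob"}))
# → ['Westside Audi', 'Audi of Downtown', 'Audi Express']
```

The interesting part is that the ranking signal comes from your own social graph rather than from a global popularity score or an advertiser's bid.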
So my take is the LLM isn't a replacement for search and efforts to make it so will stagnate and die leaving "Search Classic" to take up the slack.
If you trained a model on a well-vetted corpus and gave it the tools to say it didn't know, I could see it being a better "textbook" than a physical textbook. But it still needs to know what it doesn't know.
What happens when search engines start displaying AI generated/human seasoned content? Everything written online is being seasoned by ChatGPT et al. and human behavior is being conditioned to think by its highly convincing outputs.
Why should I trust an ad-supported search engine with a bastardized search experience that displays top results from the highest-paying advertiser, and that will now mix in AI slop pretending to be human-written for the foreseeable future? The future looks bleak.
I'd rather create an AI agent to do my Google search for me, cut out the ad-link result bias, and further synthesize the results for me in a human-readable and interrogable format.
I will take that bet, Chuck. Maybe not completely replace, but AI will be the de facto search platform. I find myself using Google less and less these days.
I only use Google when I am in the mood to search for something and find it below the fold after scrolling through mindless semi-related sponsored links.
> This action comes a bit late, at the end of the "Search engine" era, at a time when AI responses from many sources are largely replacing the "Google Search".
AIs are using search indices more and more. Google has the largest, and there is risk of Google using its monopoly in search (in particular their index) to give themselves an unfair advantage in the nascent AI market.
Do you think it's a coincidence that companies start getting regulated when they are at the end of their rope? I think it's the other way around. When Google started to lose power, when alternatives started to pour money into the political system, when jobs and money from alternatives presented themselves, then the wheels of justice began turning. The regulation is caused by Google's apparent decline.
"Nuclear made up close to 5% of China's power generation in 2024 and is expected to rise to 10% by 2040". That is the stat that surprised me. On balance, I understand that nuclear is environmentally much cleaner than coal, correct? If 95% of power in China is coal (with some hydro/solar/wind) then perhaps the more nuclear, the better.
I have not used this one yet, but as a rule of thumb I always test this type of software in a VM. For example, I have an Ubuntu Linux desktop running in VirtualBox on my Mac to install and test stuff like this, set up to be isolated and much less likely to have access to my primary macOS environment.
I'm just in the process of setting this up using logs from a tree that just went down in a storm, so thanks for pointing that out, I had not thought of it yet! Last year I did have to build a cage around our strawberry patch and it makes sense that we will need one over this project as well. Much appreciated.
Being optimistic and positive about tech in the first place is the root issue here. This reminds me of my mom in medical school, who became disillusioned when she experienced the corruption of the pharmaceutical industry and its influence over the entire industry for its own profit, not always in the interest of the patient. Being overly optimistic about an industry or field is, in my view, a worldview error; a better approach is to be optimistic about one's own potential to contribute to the betterment of humanity, no matter the field. Also the understanding that there are and always will be bad actors should not dissuade one from being part of creating solutions, as one sees it. Being jaded and cynical will not help in the long run.
> Being jaded and cynical will not help in the long run.
This sounds like it's better to work within the system rather than try to overthrow it. You need more than a little angst to completely reset cultural norms. Maybe you're optimizing for a local maximum instead of realizing the true potential of saying "fuck everything" and replacing it.
I'm mostly playing devil's advocate, not saying the correct response to all adversity is to plot a revolution. But my point is sincere - sometimes it is the best thing to burn it to the ground and start over. Private healthcare seems like a pretty good example of a system that should be abolished rather than massaged (assuming your goal is better healthcare at a more affordable price) and we have decades of data from our own country and others to corroborate that.
I think what you are saying is orthogonal to what they are saying.
You can be positive and optimistic about big scale societal changes that throw out all the established notions. Likewise, you can also be cynical and jaded about small scale changes that just aim to incrementally improve things.
Aiming for big changes doesn't necessarily imply you have to be cynical. In fact I think you're more likely to be able to achieve big changes if you're optimistic about them.
If you're willing to accept small changes as a win in a fundamentally broken system (in the sense that the incentives aren't aligned and there is no real accountability feedback mechanism), then the problem is you aren't cynical enough to attempt something drastic. I'd actually go even further and argue it's a form of being brainwashed, usually as a byproduct of effective propaganda. Going back to the example of private healthcare - I don't fucking care about small incremental changes when the system itself is still structurally broken. We need more cynicism about the status quo so people say "fuck this" and replace it with something better. And it's not even a complicated or abstract idea - literally every other first-world country has solved this problem and laughs about how broken healthcare is in the USA.
I think people tend to think too much in terms of black and white. Jaded cynicism is sometimes a good response, and sometimes less so, and other times won't make too much of a difference or can go either way. The trick is to know how to balance it all.
Same story with "tear it all down" vs. "work within the system".
The point is: what are you going to do if single-payer healthcare does not materialize in the US? You have many options: plotting a revolution, working for reform inside the system, or impotently complaining on social media. What is actually workable for you?
The same goes for the article's author. Sounds like they're shocked—SHOCKED—that private companies are just out to make money, and don't actually have our best interests at heart. The real issue is that they bought into the fantasy in the first place. But now that the veil is lifted, how will it change your actual behavior in the real world? If it will have no effect, why let it get you worked up at all? If it will have an effect, go out and do it.
> But now that the veil is lifted, how will it change your actual behavior in the real world?
As the author said:
> Stop giving them your money, time and data as much as possible for you. They won't bring us closer to these ideals they promise.
It's not changing the world, but I just do what I can to not contribute to it. And if any alternatives do pop up I do try to support them, sometimes financially.
The internet's outskirts are emptier than ever with this centralization, but I have made the active choice to de-activate pretty much all the mainstream stuff and use extensions to minimize their ability to track me. So knowing this did change my behavior on how I interact with the internet.
>a better approach is to be optimistic about one's own potential to contribute to the betterment of humanity, no matter the field. Also the understanding that there are and always will be bad actors should not dissuade one from being part of creating solutions, as one sees it. Being jaded and cynical will not help in the long run.
Easy to say this, but these two aspects contradict each other. You become jaded and cynical precisely because your potential to better humanity is locked down in bureaucracy that has the opposite interest. One can only fight back so much against a tidal wave that was set up decades before you were born.
I'd even go so far as to say that the ones who do rise to the challenge need to be jaded, and channel that into overcoming the wave. Being cynical means recognizing the need to deeply scrutinize every little action, no matter how simple and otherwise "objectively good" it is in the short run.
It's how you use that cynicism that matters, not the state of being cynical.
> This reminds me of my mom in medical school, who became disillusioned when she experienced the corruption of the pharmaceutical industry and its influence over the entire industry for its own profit, not always in the interest of the patient.
That sounds like a story of its own. Would you care to share the story about the corruption she saw? We so often hear stories about companies hiking prices for lifesaving medicine for no apparent reason other than profit, but it would be interesting to hear what she saw from the inside.
Someone who's in medical school (or finishes and goes into medicine) isn't really "inside" the pharmaceutical industry and typically has a very, very poor understanding of how pharmaceuticals are developed and brought to market.
The most substantial corruption in the health/life sciences/medicine world is simple profit motive at hospitals, pharmacy benefit managers (PBMs), and insurers, and especially when those three entities combine into mega "pay-vidors" like UHG.
Tiny anecdote: I worked on the campus of a children's hospital. The pharma reps had parking right by the main entrance. The parents of sick children? Expensive, paid parking a mile away.
Here's a fun one that just happened recently. A doctor I know works for a giant conglomerate as a general practitioner. Recently the business development team realized that insurance will pay them to take pee samples for diabetes tests for every patient. Now every single time someone comes in, the medical assistants are made to get a piss sample for a test that is totally worthless for most of these people, as they have little risk of diabetes (far higher chance of a false positive than a true positive). When the doctor told the medical assistants to stop, he received an angry email from an MBA, which became a huge pain in the ass. At the end of the day we have to remember that the only goal of a business is to make money, and even if everyone inside that business is trying to do good, the banality of evil tends to rear its ugly face. The MBA actually believed the policy was helping people here, believe it or not.
Personal financial payments to physicians are a common marketing strategy used by the pharmaceutical industry. These payments include both cash (typically for consulting services or invited lectures) and in-kind gifts such as meals.
You've got to separate the tech from human nature. Penicillin, modern medicine, travel, communication, etc. are good. Greed, corruption, and self-interest are human things, irrespective of whether you have high tech or not. We may make some progress there, but it's not really a tech issue.
> its influence over the entire industry for its own profit
I continue to be fascinated by how easily people prioritize profit over doing the right thing. Sometimes they don't even stand to personally gain all that much; they do it for the benefit of some soulless company.
If you aren't actively making things worse for the general public, I'll even let the sole focus on profit slide, but how can you justify to yourself going out and actively causing suffering?
Things like pension funds frequently refuse to invest in weapons manufacturers because of the harm their products do, but why? At least they are honest about what they do and can justify it.
It's easy for people who face no real threat themselves to pretend to take the moral high road by refusing to invest in weapons manufacturers. Not everyone has that luxury.
> Things like pension funds frequently refuse to invest in weapons manufacturers because of the harm their products do, but why? At least they are honest about what they do and can justify it.
Not to mention, their justifications are much more legitimate than anything the advertising industry could come up with, and yet marketing is a respectable occupation these days for some reason, and the ad industry funds everything.
I think the argument made in the CNN clip is that its users are now more equally distributed politically, which is reflective and a microcosm of the actual demographics of the country. I think it's a fair point. But I also agree with your take that many (like me as well) don't follow any political party (I kind of opted out around 2016). I do wish we could see a multidimensional politics rather than the polar system we have now, I know for example in some euro countries there are 5 or more parties debating policies in their houses of parliament.
> equally distributed politically, which is reflective and a microcosm of the actual demographics of the country
But it's not, for the reason you and I both gave: almost half the country identifies as independent, and party affiliation is at a record low[1]. So Twitter is not a mirror of the US, it is a representation of the extremes of the US (which is why taking Twitter seriously has been so dangerously wrong in the past).
> I do wish we could see a multidimensional politics rather than the polar system we have now, I know for example in some euro countries there are 5 or more parties debating policies in their houses of parliament.
This is and always will be impossible because many of our institutions (such as committee assignments in the Senate) are functionally locked into a bipolar structure that only a two-party system can conform to.
If you want more dimensions in your politics, you can look for more Bernie Sanders/Dan Osborn[2] candidates. They run as independents, but then they "round up" to caucus with whichever party they agree with. The key is that they eventually choose a side, though, because if either Democrats or Republicans split into separate parties, the other side (likely a minority party) will then dominate the country indefinitely.