
How do they expect to lead in identifying fake content when the problem is intractable if adversaries are even somewhat competent?

You can collect heuristics that work here and there to stay ahead in this cat-and-mouse game, but when adversaries use AI models properly, there is no way to differentiate.



Adversaries can even follow an Amazon Mechanical Turk [1] + ChatGPT + human-editor workflow, which produces content indistinguishable from real reviews. I am eager to see what arises from this acquisition. I remember installing Disqus on WordPress expecting more competitive spam detection; the result was that it didn't even catch obvious network bots. Fakespot raised $5.3M [2]. Has the price of this acquisition been disclosed? [3]

[1] https://www.mturk.com/worker

[2] https://www.crunchbase.com/organization/fakespot

[3] https://webcache.googleusercontent.com/search?q=cache:C2WWi4...


The problem is not intractable if the analysis of reviews is statistical and reputation-based instead of content-based. They can look at how many reviews were added over time, how the product page has changed over time, whether the reviewers are genuine users or accounts that only leave 5-star reviews on a handful of sketchy products, etc.
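
For example, here's a minimal Python sketch of those content-agnostic signals. The sample data, field names, and thresholds are all hypothetical illustrations, not Fakespot's actual method:

    # Hypothetical dataset: (reviewer_id, product_id, stars, timestamp) rows.
    from collections import defaultdict
    from datetime import datetime, timedelta

    REVIEWS = [
        ("u1", "p9", 5, datetime(2023, 5, 1, 10, 0)),
        ("u1", "p7", 5, datetime(2023, 5, 1, 10, 5)),
        ("u2", "p9", 3, datetime(2023, 4, 2, 9, 0)),
    ]

    def suspicious_reviewers(reviews, min_reviews=3, five_star_ratio=0.9):
        # Flag accounts that almost exclusively leave 5-star reviews.
        by_reviewer = defaultdict(list)
        for reviewer, _product, stars, _ts in reviews:
            by_reviewer[reviewer].append(stars)
        flagged = set()
        for reviewer, stars_list in by_reviewer.items():
            if len(stars_list) >= min_reviews:
                ratio = sum(s == 5 for s in stars_list) / len(stars_list)
                if ratio >= five_star_ratio:
                    flagged.add(reviewer)
        return flagged

    def review_bursts(reviews, product_id, window=timedelta(hours=24), threshold=20):
        # Flag windows where a product gains reviews far faster than its
        # baseline, a common signature of purchased review campaigns.
        times = sorted(ts for _r, pid, _s, ts in reviews if pid == product_id)
        bursts, lo = [], 0
        for hi, t in enumerate(times):
            while t - times[lo] > window:
                lo += 1
            if hi - lo + 1 >= threshold:
                bursts.append((times[lo], t))
        return bursts

    print("flagged reviewers:", suspicious_reviewers(REVIEWS))
    print("bursts on p9:", review_bursts(REVIEWS, "p9"))

None of this looks at the review text itself, which is why it keeps working even when the text is AI-generated and reads as perfectly human.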

Of course, it would be much easier for Amazon to do this, because they could look at IP addresses, purchase history, mailing address, etc. - but it's in their best interest to let the spam continue, apparently.


This is the oldest conundrum in Trust & Safety. The target party (Amazon) has pretty much all of the rich data (IPs, cards, addresses), while the interested party (usually a small company) is trying to scrape its way to a solution for a problem the first party is, at best, half-assed about solving, because it (1) already has an internal "abuse" team to deal with it and/or (2) is probably making money, directly or indirectly, by keeping the problem alive.

As if the above were not enough, there's one more complication: the interested small company has to pitch its solution as a service to the platform, because in general the platform is the only party for whom it has real business value; end users would not pay for such protection, or wouldn't pay enough.

The same applies to social media impersonation and scams, adtech fraud, etc.

Fun stuff.


I'm building a product discovery website that actually uses AI and community input to filter out fake reviews: https://styrate.co/landing/



