
I once used object storage as a queue; you can implement queue semantics at the application level, with one object per entry.

But the application was fairly low volume in data and usage, so eventual consistency and capacity were not an issue. And yes, timestamp monotonicity is not guaranteed when multiple clients upload at the same time, so a unique id was given to each client at startup and used in entry names to guarantee uniqueness. Metadata and prefixes were used to indicate the state of an object during processing.

Not ideal, but it was cheaper than a DB or a dedicated MQ. The application did not last, but I would try the approach again if it fit the situation.
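For illustration, a minimal sketch of that scheme with boto3 (the bucket name, prefixes, and state labels here are my assumptions, not the original setup): timestamp-first keys keep listings roughly ordered, and the per-client id breaks ties between concurrent producers.

    import time
    import uuid
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-queue-bucket"        # hypothetical bucket name
    CLIENT_ID = uuid.uuid4().hex      # unique id handed to each client at startup

    def enqueue(payload: bytes) -> str:
        # timestamp first so listings sort roughly by arrival time,
        # client id appended so concurrent uploads never collide
        key = f"pending/{time.time():.6f}-{CLIENT_ID}"
        s3.put_object(Bucket=BUCKET, Key=key, Body=payload,
                      Metadata={"state": "pending"})
        return key

    def claim_one():
        # "dequeue" = list the pending/ prefix and move the oldest entry;
        # note the copy+delete is not atomic, which is part of why it's not ideal
        resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="pending/", MaxKeys=1)
        for obj in resp.get("Contents", []):
            key = obj["Key"]
            new_key = "processing/" + key.split("/", 1)[1]
            s3.copy_object(Bucket=BUCKET, Key=new_key,
                           CopySource={"Bucket": BUCKET, "Key": key})
            s3.delete_object(Bucket=BUCKET, Key=key)
            return new_key
        return None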


The application I'm interested in is a log-based artifact registry. Volume would be very low. Much more important is the immutability and durability of the log.

I was thinking that writes could be indexed/prefixed into timestamp buckets according to the client's local time. This can't be trusted, of course. But the application's consumers could detect and reject any write whose upload timestamp exceeds a fixed delta from the timestamp bucket it was uploaded to. That allows arbitrary seeking to any point in the log.
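Something like this check on the consumer side would do it (a rough sketch; the five-minute tolerance and the names are made up):

    from datetime import datetime, timedelta, timezone

    MAX_SKEW = timedelta(minutes=5)   # assumed tolerance between bucket and upload time

    def accept_write(bucket_ts: datetime, upload_ts: datetime) -> bool:
        # bucket_ts: the bucket the client prefixed the entry into (its local clock)
        # upload_ts: the server-side / object-store timestamp of the upload itself
        return abs(upload_ts - bucket_ts) <= MAX_SKEW

    # example: a client claiming a bucket an hour in the past gets rejected
    now = datetime.now(timezone.utc)
    accept_write(now - timedelta(hours=1), now)   # -> False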


It comes with the general perception of OS vs. software failure responsibility:

- On Windows, this is Windows's fault
- On Apple OSes, this is the application's fault
- On Linux, this is the user's fault

Of course exceptions do apply, but as far as I know macOS, I have noticed some instances of the OS itself patching applications (although I haven't dug deeper, I can confirm that an application's behavior changed slightly even before vendor patches were applied, and I doubt it was anything done by the anti-malware protection).


As far as I understand LLMs, what is being asked is unfortunately close to impossible with them.

I also find it disingenuous that apologists are saying things close to "you are using it wrong", when LLM-based AI is advertised as something that should be trusted more and more (because it is more accurate, based on some arbitrary metrics) and might save some time (on some undescribed task).

Of course, in that use case most would say to use your own judgement to verify whatever is generated, but for the generation that uses LLM-based AI as a source of knowledge (like some people use Wikipedia or Stack Overflow as a source of truth), it will be difficult to verify, when all they have ever known is LLM-generated content as a source of knowledge.


sure that "being written in C" or in "PHP" would have gathered far less interest


A GameBoy emulator written in PHP would certainly have piqued my interest, but more as a morbid curiosity than anything else. :)


Considering that the code is mostly x86 assembly, the gains from such optimization are quite unlikely.


The software does not necessarily need to be written in C (or C++) for these elementary security holes to happen.


Isn't USB-C DisplayPort Alt Mode a lost cause anyway?


Why? This status update from the Asahi team describes a prototype and planned code fork for long-term maintenance of the feature.

Google Pixel supports USB-C DisplayPort Alt Mode.


Got a 404, can anyone else confirm?


I have had a similar experience where JPEG and WebP encoding use far fewer computing resources than JPEG XL or AV1, and I was curious what other people used (as I might be using the wrong library).
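A rough sketch of the kind of timing comparison I have in mind, using Pillow (the file names and quality settings are placeholders; JPEG XL and AVIF need extra plugins or external encoders, which is part of the problem):

    import time
    from PIL import Image   # Pillow; JPEG and WebP support is built in

    img = Image.open("sample.png").convert("RGB")   # any test image

    for fmt, opts in [("JPEG", {"quality": 85}), ("WEBP", {"quality": 85})]:
        start = time.perf_counter()
        img.save(f"out.{fmt.lower()}", fmt, **opts)
        print(fmt, f"{time.perf_counter() - start:.3f}s")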


MozJPEG encoding times are not that great but decoding is still fast.

I believe with Jpegli you can have faster encoding than with MozJPEG.


One issue is that format support detection was iffy, compared to JPEG XL, where people know to use the Accept header to declare supported formats.
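The negotiation itself is simple enough on the server side; a rough sketch (it ignores q-values, and the preference order is just an example):

    def pick_image_format(accept_header: str) -> str:
        # crude content negotiation: serve the best format the client advertises
        accept = accept_header.lower()
        if "image/jxl" in accept:
            return "image/jxl"
        if "image/avif" in accept:
            return "image/avif"
        if "image/webp" in accept:
            return "image/webp"
        return "image/jpeg"   # safe fallback

    # e.g. a browser sending "image/avif,image/webp,*/*" gets AVIF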

