China no longer has a one-child policy and is now actively focusing policies and incentives on increasing childbirth. Although it’s not going to yield immediate results, the PRC operates on long time horizons and will probably succeed long-term in raising birth rates.
> the PRC operates on long time horizons and will probably succeed long-term in raising birth rates.
That would make them the first country to do so, I think. Others have tried and nothing has worked. But China will likely become rich before it gets old, so it may not matter.
Did you mean to say "But China will likely become old before it gets rich"?
Their population is declining already and they have a very long way to go before being considered "rich", so I haven't seen many projections for what you said. If you meant it, I'd be curious to know why.
China's middle class is already larger than the entire US population, and growing fast. It won't be rich in the sense that say Switzerland or Norway are rich. But it seems safe to say they won't be barely scraping by.
IMO, India likely won't make this transition. Its population is still growing, but its birth rate is sinking fast (like almost everywhere else).
lol, no. it will not even maintain its current extinction-tier TFR of 1.02, let alone maintain its current population.
like every other civilized people, the Chinese have largely realized that the game is rigged and the only winning move is not to play. the only way to "fix" the birth rate is to reject humanity (education, urbanization, technology) and retvrn to monke (subsistence farming, arranged marriages, illiteracy, superstition), which no civilized country will ever do. even the current TFR of 1.0-1.5 in the civilized world is largely inertial, and it will continue to fall. South Korean 0.7 will seem mind-bogglingly high a hundred years from now,
and 1CP was such a predictably disastrous idea that I seriously doubt the forward-thinking you seem to believe the CCP to possess.
>the only way to "fix" the birth rate is to reject humanity (education, urbanization, technology) and retvrn to monke (subsistence farming, arranged marriages, illiteracy, superstition), which no civilized country will ever do.
They won't do it willingly. That just means it will happen without their input.
sure, they could, hypothetically, close the borders and begin a campaign of forced insemination, but those babies would have no fathers to provide for them, and the state - any state - really resents footing the bill for child rearing, going as far as forcing victims of infidelity, fraud, or rape to pay child support. the state - any state - wants to give you as little as possible and to take as much as possible from you, for the delta between giving and receiving is its lifeblood.
the ideal family has two full-time working parents, paying a mortgage and car loans, consuming as many high-margin domestic products as possible, rearing as many children (future laborers and consumers) as possible, with little to no assistance from the state. and you simply can't have that by force. if you could, you might as well drop the pretense and openly treat your population as slaves.
It sounds like “national security” is the legal justification they’re using to do an end-run around Congress, just like the justifications they’ve used to implement tariffs and which underpin a bunch of their EOs.
Looking at the actual code (https://github.com/fastserial/lite3/blob/main/src/lite3.c#L2...), it seems like it performs up to 128 probes to find a target before failing, rather than bailing immediately if a collision is detected. It seems like maybe the documentation needs to be updated?
It's a bit unfortunate that the wire format is tied to a specific hash function. It also means that the spec will ossify around that hash function, which may not end up being the optimal choice. Neither JSON nor Protobuf has this limitation. One way around this would be to ditch the hashing and use the keys for the b-tree directly. It might be worth benchmarking - I don't think it's necessarily any slower, and an inline cache of key prefixes (basically a cheapo hash using the first N chars) should help preserve performance for common cases.
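To sketch the prefix-cache idea (a hypothetical layout, not anything from the actual Lite³ code - `entry`, `PREFIX_LEN`, and the helpers are all made up for illustration): each b-tree entry caches the first few key bytes inline, zero-padded, so most comparisons are decided without chasing the full key:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical b-tree entry layout: the first PREFIX_LEN key bytes
 * act as the "cheapo hash" mentioned above. */
#define PREFIX_LEN 8

struct entry {
    uint8_t prefix[PREFIX_LEN]; /* first key bytes, zero-padded */
    const char *key;            /* full key, stored out of line */
};

static void fill_prefix(uint8_t *p, const char *key)
{
    size_t n = 0;
    memset(p, 0, PREFIX_LEN);
    while (n < PREFIX_LEN && key[n]) { p[n] = (uint8_t)key[n]; n++; }
}

static void entry_init(struct entry *e, const char *key)
{
    fill_prefix(e->prefix, key);
    e->key = key;
}

/* Ordering matches strcmp: zero padding sorts shorter keys before
 * longer ones sharing a prefix. The full key is only read when the
 * padded prefixes tie. */
static int entry_cmp(const struct entry *e, const char *key)
{
    uint8_t p[PREFIX_LEN];
    fill_prefix(p, key);
    int c = memcmp(p, e->prefix, PREFIX_LEN);
    if (c != 0)
        return c;
    return strcmp(key, e->key);
}
```

Since the padded prefixes order the same way as strcmp, the b-tree ordering is unchanged; the inline cache only short-circuits comparisons for keys that differ early, which most do.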
> It seems like maybe the documentation needs to be updated
Looks like it, yes:
/**
Enable hash probing to tolerate 32-bit hash collisions.
Hash probing configuration (quadratic open addressing for 32-bit hashes:
h_i = h_0 + i^2)
Limit attempts with `LITE3_HASH_PROBE_MAX` (defaults to 128).
Probing cannot be disabled.
*/
#ifndef LITE3_HASH_PROBE_MAX
#define LITE3_HASH_PROBE_MAX 128U
#endif
#if LITE3_HASH_PROBE_MAX < 2
#error "LITE3_HASH_PROBE_MAX must be >= 2"
#endif
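For anyone skimming, the probe sequence that comment describes works roughly like this (a toy sketch, not the actual lite3.c internals - the slot layout and the use of 0 as the empty marker are my assumptions):

```c
#include <stdint.h>
#include <stddef.h>

#ifndef LITE3_HASH_PROBE_MAX
#define LITE3_HASH_PROBE_MAX 128U
#endif

/* Quadratic open addressing as documented above: probe h_i = h_0 + i^2,
 * giving up after LITE3_HASH_PROBE_MAX attempts. Returns the slot index
 * holding `target`, or -1 if absent / probe limit exhausted. */
static int probe_find(const uint32_t *slots, size_t nslots,
                      uint32_t h0, uint32_t target)
{
    for (uint32_t i = 0; i < LITE3_HASH_PROBE_MAX; i++) {
        size_t idx = (size_t)(h0 + i * i) % nslots;
        if (slots[idx] == target)
            return (int)idx;    /* found at the i-th probe */
        if (slots[idx] == 0)
            return -1;          /* empty slot: key is absent */
    }
    return -1;                  /* 128 collisions in a row: fail */
}
```

This matches the observed behavior in the linked code: a collision doesn't bail immediately, it just pushes the key to the next probe position, and only a full chain of LITE3_HASH_PROBE_MAX occupied mismatches fails.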
> It also means that the spec will ossify around a specific hash function
It is a bit ugly, and will break backwards compatibility, but supporting a second hash function isn’t too hard.
You can, on load, hash a few keys, compare the results to the stored hashes, and from that infer - with high probability, given enough keys in the input - which hash function was used.
There might also be a spare bit somewhere to indicate ‘use the alternative hash function’.
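The load-time detection could look something like this (fnv1a and djb2 are stand-ins for whatever pair of hash functions the format would actually support; the whole shape is a sketch):

```c
#include <stdint.h>
#include <stddef.h>

/* Two candidate 32-bit hashes. FNV-1a stands in for the format's
 * current hash, djb2 for a hypothetical alternative. */
static uint32_t fnv1a(const char *s)
{
    uint32_t h = 2166136261u;
    while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
    return h;
}

static uint32_t djb2(const char *s)
{
    uint32_t h = 5381u;
    while (*s) h = h * 33u + (uint8_t)*s++;
    return h;
}

/* On load: rehash a few (key, stored-hash) samples and infer which
 * function produced them. Returns 0 for fnv1a, 1 for djb2, -1 if
 * neither matches every sample. */
static int infer_hash(const char *const *keys, const uint32_t *stored,
                      size_t n)
{
    int fnv_ok = 1, djb_ok = 1;
    for (size_t i = 0; i < n; i++) {
        if (fnv1a(keys[i]) != stored[i]) fnv_ok = 0;
        if (djb2(keys[i])  != stored[i]) djb_ok = 0;
    }
    if (fnv_ok) return 0;
    if (djb_ok) return 1;
    return -1;
}
```

With 32-bit hashes, a single accidental match is a one-in-four-billion event per key, so even two or three sampled keys make the inference essentially certain.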
> The JSON standard requires that the root-level type always be an ‘object'
> or 'array'. This also applies to Lite³.
I don’t think that is true, and https://www.json.org/json-en.html agrees with that. Single values (numbers, strings, booleans, null) are also valid JSON.
ACR needs to die. It’s an absurd abuse of the privileged position that a TV has - a gross violation of privacy just to make a few bucks. It should be absolutely nobody’s business to know what you watch except your own; the motivation behind the VPPA was to kill exactly this type of abuse.
The greatest irony is that HDCP goes to great lengths to try and prevent people from screenshotting copyrighted content, and here we have the smart TVs at the end just scraping the content willy-nilly. If someone manages to figure out how to use ACR to break DRM, maybe the MPAA will be motivated to kill ACR :)
ACR — Automatic Content Recognition: tech in some smart TVs/apps that identifies what’s on-screen (often via audio/video “fingerprints”) and can report viewing data back to vendors/partners.
VPPA — Video Privacy Protection Act: a U.S. law aimed at limiting disclosure of people’s video-viewing/rental history.
HDCP — High-bandwidth Digital Content Protection: an anti-copy protocol used on HDMI/DisplayPort links to prevent interception/recording of protected video.
DRM — Digital Rights Management: a broad term for technical restrictions controlling how digital media can be accessed, copied, or shared.
MPAA — Motion Picture Association of America: the former name of the main U.S. film-industry trade group (now typically called the MPA, Motion Picture Association).
Enormous effort goes into stopping users from capturing a single frame, while manufacturers quietly sample the screen multiple times a second by design.
The ship has sailed on that one. The telematics from the car can also be sent back to the mothership, i.e. if you’re driving like a lunatic, pulling donuts, harsh acceleration and so on.
There’s a difference between the owner having telemetry on their own car, and the manufacturer having telemetry on the cars they’ve sold. One is taking care of your assets, and the other is spying on customers.
Have they resolved the class-action lawsuit about workers sharing and making memes from pictures and videos of people inside their homes, garages, naked, with their pets, their kids, their laundry, their sex toys, etc? Mozilla said they're the least bad but they're definitely not good.
It's not just FSD footage. Footage was recorded while the cars were charging. From Reuters:
As an example, this person recalled seeing “embarrassing objects,” such as “certain pieces of laundry, certain sexual wellness items … and just private scenes of life that we really were privy to because the car was charging.”
To be clear, looking at video surreptitiously recorded inside people's homes is absolutely spying. And claiming you get actual consent from click-through "opt-in" forms - where opting out would kill huge swaths of the car's functionality - without deliberately and loudly informing people of how invasive the videos are is, frankly, ridiculous. Those forms are obviously pretext for tech companies to do things with people's data that they'd never consent to if they really understood the implications.
‘Telematics’ is not how the word ‘insurance’ is spelled. Anyone who owns an uninsured car or home and cannot afford to replace a total loss, or cover hundreds of thousands of dollars in medical bills after a major accident, is negligent. Anyone without wealth who lacks health insurance is negligent.
Having sensor logs of the space temp and CO2 ppm in your house when it’s burning down isn’t going to help you at all.
Car telemetry might help diagnose car issues, but I’m not aware of manufacturers using it that way, I’ve heard plenty about selling location data and driving habits.
Constantly monitoring your heart rate and blood pressure sounds like a good way to develop hypochondria.
The majority of the population is wearing some sort of smartwatch though.
The absence of PM2.5 is exactly how I debunked a false smoke alarm while I was overseas. I also flagged excessive power use after friends left an appliance on while I was away, and water leak sensors caught a toilet cistern dripping.
Are you saying that not monitoring e.g. heart rate constantly through some electronic device that sends the data somewhere (let’s assume somewhere under my control) is negligence?
From Wikipedia: “Telemetry is the in situ collection of measurements or other data at remote points and their automatic transmission to receiving equipment (telecommunication) for monitoring.”
There is nothing “tele” about going to the doctor, and nothing automatic about the information they gather. You’re conflating telemetry and simple examination or observation. Most types of examination are not telemetry, and many types of telemetry are not as benign as simple observation/examination. There is telemetry on my car but I can’t access the data. It’s not for my benefit— it’s for Jeep’s benefit. I don’t need it and I don’t want it.
Laws can change, but I’m not hopeful, tbh. Digital privacy problems are just too abstract to viscerally anger most people. That may change as people that grew up in surveillance capitalism mature, but being so used to invasive data grabs might replace ignorant complacency with aware complacency.
Mostly only because Tesla doesn't share this data outside of Tesla, unless they leak it to news outlets to make it look like the accident was all your fault and not Tesla's.
Tesla tends to only leak that stuff when they look bad. It's not like they are necessarily outright lying, they are just telling their version of the truth....
I point out tesla specifically because they had headlines about sharing camera feeds as memes. The Mozilla report clearly shows tesla is not an outlier, more like "middle of the pack".
"...pictures of dogs and funny road signs that employees made into memes by embellishing them with amusing captions or commentary, before posting them in private group chats. While some postings were only shared between two employees, others could be seen by scores of them, according to several ex-employees."
Two-second Google search. It's not very charitable to accuse someone of lying without even looking. It's even mentioned in that mozilla breakdown.
> I point out tesla specifically because they had headlines about sharing camera feeds as memes.
Your baseless assumptions are your own fault.
I gave citations for your specific questions. There’s multiple articles about the odious things that happened that you clearly have no interest in acknowledging. I’m not your research assistant. Go read them. You’re clearly more interested in defending Tesla than understanding people’s complaints. Cope? Shill? I’ll never know, and I’ll definitely never care.
lol, cringe. Great way to protect your ego when someone is serially pointing out how baseless your glib snapbacks are. I use em dashes because I learned to write at a competent university, instead of, say, Breaking Bad. In fact, it played a large role in early LLM development, so if they ever used student writing in their data sets, I might be a tiny part of the reason these models use so many em dashes to begin with. Hell, I don’t even use grammarly anymore. And I made this alt when I switched from dev to design, before the shitty job market switched me from design to manufacturing. And exactly who would I be “botting for?” The only person in this thread that started championing a particular company is you. The thread itself started with me bemoaning data collection among the auto industry, and mentioned that Tesla is the least bad even if they’re still bad. So… am I botting for “big bicycle?” Part of the large Schwinn-backed anti-car virtual astroturfing brigade? Maybe an extremely aggressive public transit advocacy group? Exactly who cares enough about people stanning Tesla to point some robot propaganda machination at it? And since you were enough of a creep to go through my comment history looking for excuses to ignore what I was saying rather than actually engage with it, did you see ANY other indication that was a theme? Lol. So keep trying to scrounge up some cope, “bitch.”
You've been following my comments for a while, especially trying to dunk on Tesla with falsehoods and lies. Argue with Mozilla about their results, not me.
I’ve provided citations. You were commenting in a thread that branched off of my comment. I got my criticism from the Mozilla article, and most of my citations were linked to in the Mozilla article. I’d say you should actually read the subsection on Tesla, but you clearly can’t accept reality and have some weird Tesla obsession. Get help.
When you tap one of those fields it bounces you to a contact card. If it is an existing contact (for example, yourself), you just get the full contact card. If that contact card has multiple addresses (my contact card lists ten), you get no indication of which one it was sent to.
At some point in time the actual email address used was flagged with a little “recent” badge - by itself a confusingly-worded tag - but even that doesn’t show up consistently.
It’s stupid because there’s really no reason to play hide and seek with the email address - that’s an identifier that people should generally be familiar with (since you have to use it reasonably often), and lots of people have multiple addresses that they can receive mail at.
3. I stopped caring and learned to love the algorithm in 95% of normal typing. The result is that my typing speed is up but my accuracy has plummeted, yet my typing output is generally correct because of autocorrect.
Unfortunately this falls apart when I try to type anything that isn’t common English words: names, code, rare words, etc.
I also think that the keyboard could learn the different “rhythms” of typing - my normal typing which is fast and practically blind, and the careful hunt and peck which is much slower and intended for those out-of-distribution inputs. I bet the profile of the touch contacts (e.g. contact area and shape of the touches) for those two modes looks different too.
My strategy for a time was to disable autocorrect and perfect my accuracy, but this was stymied because it is indeed harder to type these days than when the screens were smaller and less precise; it seems to pick adjacent keys on a whim.
So I realized I had exchanged correcting the same word four times in a row to correcting the same letter four times in a row.
Why is it hard? In principle you render an image instead of discrete buttons, and do your hit testing manually. Sure, it’s more annoying than just having your OS tell you what key got hit, but keyboard makers are doing way fancier stuff just fine (e.g. Swype).
Apple's keyboard receives more information, to put it simply. It doesn't get told that a touch was at a particular point, but rather gets the entire fuzzy contact area, allowing it to use circular occlusion and other cues to choose between side-by-side buttons and override the predictive behaviour when it's the wrong choice.
A third-party maker gets a single point - usually several in short succession, but it still takes more math to work out where the edges of the finger are pressing and which direction you're moving. So most just... don't.
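The single-point version isn't that bad, though. A minimal manual hit test (made-up key layout; `struct key` and `hit_test` are invented for illustration, and real keyboards would weight this with a language model) is just containment plus nearest-center fallback:

```c
#include <stddef.h>

/* Hypothetical third-party keyboard hit test: the OS reports only a
 * touch point, so pick the key whose rect contains it, falling back
 * to the nearest key center when the point lands in a gap. */
struct key { int x, y, w, h; char ch; };

static char hit_test(const struct key *keys, size_t n, int tx, int ty)
{
    char best = 0;
    long best_d = -1;
    for (size_t i = 0; i < n; i++) {
        const struct key *k = &keys[i];
        if (tx >= k->x && tx < k->x + k->w &&
            ty >= k->y && ty < k->y + k->h)
            return k->ch;                     /* direct hit */
        long dx = tx - (k->x + k->w / 2);     /* distance to key center */
        long dy = ty - (k->y + k->h / 2);
        long d = dx * dx + dy * dy;
        if (best_d < 0 || d < best_d) { best_d = d; best = k->ch; }
    }
    return best;
}
```

What Apple's extra contact-area data buys you is a better version of the fallback branch: instead of a point-to-center distance, you can score keys by how much of the finger's occlusion ellipse overlaps each rect.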
I mean, there is a reason why these sorts of constructs are UB, even if they work on popular architectures. The problems aren’t unique to IA64, either; the better solution is to be aware that UB means UB and to avoid it studiously. (Unfortunately, that’s also hard to do in C).
to discover at least two magical registers to hold up to 127 spilled registers worth of NaT bits. So they tried.
The NaT bits are truly bizarre and I’m really not convinced they worked well. I’m not sure what happens to bits that don’t fit in those magic registers. And it’s definitely a mistake to have registers where the register’s value cannot be reliably represented in the common in-memory form of the register. x87 FPU’s 80-bit registers that are usually stored in 64-bit words in memory are another example.
I have no real complaints about CHERI here. What's a pointer, anyway? Lots of old systems thought it was 8 or 16 bits giving a linear address. The 8086 thought it was 16 + 16 bits split across two registers, with some interesting arithmetic [0]. You can't add, say, 20000 to a pointer and get a pointer to a byte 20000 farther into memory. The 80286 changed it so those high bits index into a table, and the actual segment registers are much wider than 16 bits and can't be read or written directly [1]. Unprivileged code certainly cannot load arbitrary values into a segment register. The 80386 added bits. Even x86_64 still technically has those extra segment registers, but they mostly don't work any more.
So who am I to complain if CHERI pointers are even wider and have strange rules? At least you can write a pointer to memory and read it back again.
[0] I could be wrong. I’ve hacked on Linux’s v8086 support, but that’s virtual and I never really cared what its effect was in user mode so long as it worked.
[1] You can read and write them via SMM entry or using virtualization extensions.
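The "interesting arithmetic" from [0], as I understand it (real-mode 8086, same caveat as the footnote):

```c
#include <stdint.h>

/* Real-mode 8086 address formation: the 16-bit segment is shifted
 * left four bits and added to the 16-bit offset, so many seg:offset
 * pairs alias the same linear address. */
static uint32_t linear_8086(uint16_t seg, uint16_t off)
{
    return (((uint32_t)seg << 4) + off) & 0xFFFFFu; /* 20-bit bus wraps at 1 MiB */
}
```

This is also why adding 20000 to a pointer doesn't just work: the offset half wraps mod 64 KiB within its segment rather than advancing linearly, unless you also adjust the segment register.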
The bigger problem is that a user cannot avoid an application where someone wrote code with UB, unless they have both the source code and the expertise to understand it.
Siri does have documentation: https://support.apple.com/en-ca/guide/iphone/ipha48873ed6/io.... This list (recursively) contains more things than probably 95% of users ever do with Siri. The problem really boils down to the fact that a CLI is imposing enough that someone will need a manual (or a teacher), whereas a natural language interface looks like it should support "basically any query" but in practice does not (and cannot) due to fundamental limitations. Those limitations are not obvious, especially to lay users, making it impossible in practice to know what can and cannot be done.
Well, that's largely theoretical, and Siri needs far more input than is worth the trouble. It lacks context, and because of Apple's focus on privacy/security it is largely unable to learn who you are and do things based on what it knows about you.
If you ask Siri to play some music, it will go the dumb route of finding the tracks that seem to be the closest linguistic match to what you said (if it understood you correctly in the first place), when in fact you may have meant another track with the same name. Which means you always need to overspecify with lots of details (like the artist and album), and that defeats the purpose of having an "assistant".
Another example would be asking it to call your father, which it will fail to do unless you have correctly filled in the contact card with a relation field linked to you. So you need to fill in all the details about everyone (and remember what name/details you used); otherwise you are stuck relying on rigid naming like a phone book. That's moderately useful, and since it requires upfront work the payoff isn't very good. If Siri were able to figure out who's who just from the communications happening on your device, it could be better, but Apple has dug itself into a hole with its privacy marketing.
The whole point of a (human) assistant is that they know you, your behaviors, how you think, what you like. They can help you with less effort on your part because you don't have to overspecify every detail that would be obvious to anyone who knows you well enough.
Siri is hopeless because it doesn't really know you; it only uses some very simple heuristics to try to be useful. One example is how it always offers the route home when I turn on the car, even when I'm only running errands and the next stop is just another shop. That is not only unhelpful but annoying, because giving me the route home when I'm only a few kilometers away is not particularly useful in the first place.