This is important and should be a given. But the more interesting challenge is highlighting the object you're editing (where your cursor is). It's not even clear how to visualize it exactly (it could be inside a subtract of a union of a subtract, etc.).
From my testing, the CSG operations, even with post-processing, don't produce watertight meshes. And since the tool is focused on printing, it doesn't support different colors for the CSG operands. My use case is animation/games, so I'm reimplementing the CSG with a watertight b-rep.
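For anyone curious what "subtract of union of subtract" nesting looks like concretely, here's a toy sketch using signed distance functions (negative = inside the solid). This is an illustration only — the tool being discussed works on meshes, not SDFs, and all the names below are mine:

```python
# Toy CSG via signed distance functions (illustration, not the mesh-based
# implementation under discussion). Negative values mean "inside the solid".

def sphere(cx: float, cy: float, cz: float, r: float):
    """Signed distance to a sphere centered at (cx, cy, cz) with radius r."""
    return lambda x, y, z: ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 - r

def union(a, b):
    """A point is inside the union if it's inside either operand."""
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def subtract(a, b):
    """A point is inside a - b if it's inside a and outside b."""
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# "subtract of union of subtract": a hollow shell unioned with a bump,
# then a bite carved out of the result.
shell = subtract(sphere(0, 0, 0, 2), sphere(0, 0, 0, 1))
shape = subtract(union(shell, sphere(3, 0, 0, 1)), sphere(2, 0, 0, 0.5))
```

Highlighting "the object your cursor is in" then amounts to walking this tree and deciding which leaf or sub-expression to visualize — which is exactly where it gets ambiguous.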
In case you're not well versed in Python type checkers: in the mypy vs Pyright example, Pyright can be configured to complain about the unannotated collection (and so both type checkers will yell at the code as written).
TypeScript takes the same approach in this scenario, and I assume this helps both be fast.
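For concreteness, here's a minimal sketch of the pattern at issue — my reconstruction, not the article's exact snippet: an empty collection whose element type must either be inferred from later use or annotated up front.

```python
# Reconstruction (assumption: the article's example is along these lines).

def collect() -> list[int]:
    items = []  # unannotated empty list: the line the checkers treat differently
    items.append(1)
    return items

# The explicitly annotated version, which satisfies both checkers
# regardless of configuration:
def collect_annotated() -> list[int]:
    items: list[int] = []
    items.append(2)
    return items
```

In Pyright's strict mode the unannotated version can be flagged too (via the reportUnknown* family of diagnostics, if I recall the settings correctly), which is the configuration the parent comment is referring to.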
They were "on the Python Typing Council and helped put together the spec, the conformance test suite, etc", so I assume they're an expert on Python type checkers.
Would a human perform very differently? Say, a human who must obey orders (maybe because they're paid to follow the prompt), with some "magnitude of work" enforced at each step.
I'm not sure there's much to learn here, besides that it's kinda fun, since no real human was forced to suffer through this exercise on the implementor side.
How useful is the comparison with the worst human results, which are often due to process rather than the people involved?
You can improve processes and teach the humans. The junior will become a senior, in time. If the processes and the company are bad, what's the point of using such a context to compare human and AI outputs? The context is too random and unpredictable. Even if you find out AI or some humans are better in such a bad context, what of it? The priority would be to improve the process first for best gains.
> once set up, very easy to build, no “design” required
Which is why they then get thrown around thoughtlessly. It becomes easy to pretend to have solved a problem using a toast instead of actually solving it.
Generally I have treated toasts as reassurance rather than important information.
Like little 'saved' notifications when clicking through tabs, or an 'email sent' notice after clicking a send button that might leave you on the same page.
Web sites tend to over-inform you of what's happening. I like toasts (though I no longer use them since they're out of fashion) simply because you can disregard them.
This is a terrible overview. The actual primary benefit of toasts is that they provide feedback on low-importance events without requiring the user to interact with them and without permanently taking up UI space. The web application I use most frequently would be infuriating if I had to deal with a modal window every time a toast would have been used, and UI space is at a premium for useful functionality, so occupying a permanent spot to relay those messages isn't a good solution either.
I wish software developers could drop this dogmatism. Same as the old Goto considered harmful trope outliving its usefulness and all that. It's always black and white - "people can misuse this tool, so this tool is inherently bad and should be eliminated from usage completely" - rather than acknowledging that many tools have great use cases even if they can also be abused.
At least the description is not at all about building an adtech platform inside OpenAI; it's about optimizing their marketing spend (which makes sense for a brand that big).
There are a bunch of people from FB at OpenAI, so they could staff an adtech team internally I think, but I also think they might not be looking at ads yet, with having "higher" ambitions (at least not the typical ads machine ala FB/Google). Also if they really needed to monetize, I bet they could wire up Meta ads platform to buy on ChatGPT, saving themselves a decade of building a solid buying platform for marketers.
> There are a bunch of people from FB at OpenAI, so they could staff an adtech team internally I think
Well they have Fidji, so she could definitely recruit enough people to make it work.
> with having "higher" ambitions (at least not the typical ads machine ala FB/Google)
Everyone has higher ambitions till the bills come due. Instagram was once going to have only thoughtfully artisan brand content, and now it's just DR (direct response) ads, like every other place on the Internet.
> At least the description is not at all about building an adtech platform inside OpenAI; it's about optimizing their marketing spend (which makes sense for a brand that big).
The job description has both, suggesting that they're hedging their bets. They want someone to build attribution systems which is both wildly, wildly ambitious and not necessary unless they want to sell ads.
> I bet they could wire up Meta ads platform to buy on ChatGPT, saving themselves a decade of building a solid buying platform for marketers.
Wouldn't work. The Meta ads system is so tuned for feed-based ranking that I suspect they wouldn't gain much from this approach.
I hope it's not controversial if I say that in the Apple world, Liquid Glass is, if not the first, certainly the worst regression. And I think this could have been predicted if you agree with OP about Vista.
In context, (Snow) Leopard was almost definitely the peak. Windows users were struggling either with Vista (UAC, DWM, and Windows Update taking up an entire CPU core, of which most people had only one) or with the security issues of XP. Meanwhile, Mac users had already been through the growing pains and now had a stable, pretty, powerful OS.