
Connecting the dots from agents to workflow automation to infrastructure with Taskade Genesis.

LLMs made it easy to generate apps. The harder problem is running them as real businesses: where they live, remember state, coordinate agents, trigger workflows, and keep operating day to day. We treat the workspace itself as that layer.

One prompt becomes a living system. CRM, ops hub, internal tool, business in a box. Memory, agents, and automations working together. Feels closer to early web hosting than modern SaaS. Not demos. Real systems.

Still early, but builders are already shipping real internal apps and workflows. Excited for the future of AI, from productivity to agents and workflows to infra!

https://www.taskade.com


Cool to see open models catching up fast. For builders the real question is simple: which model gives you the tightest loop and the fewest surprises in production? Sometimes that is open. Sometimes closed. The rest is noise.


Remote is awesome until you hit the limits of it.

We tried building with 3 founders across 3 timezones. On a good day it felt magical. On a bad day it felt like the kind of lag you remember from SC BW, CS 1.6, or classic WoW raids, where one spike wipes the whole run and everyone has to start over.

Async is great for shipping, but not when you are moving fast on hard problems where alignment is the whole game. The drag shows up slowly, and you learn that zero to one needs tight loops, high trust, and shared tempo. You cannot patch that with calls or docs.

Some teams crush remote. We did sometimes, but not often enough, and we learned that the hard way. The work decides the model. For us it was about momentum and getting the fastest feedback loop possible. Ideas die in latency. Execution dies in drift.

At the end of the day it is not ideology. It is just whatever keeps the product moving as a startup, aiming high to become better, faster, cheaper than the status quo.

Just my 2c.~


You can't bring people from three different timezones into the same office. More likely your setup was the problem, not the productivity of the people working remotely. Remote workers in the same timezone are usually very effective at their jobs.


I kinda smile seeing this, having grown up in the real public_html days… Xanga, Geocities, Angelfire, copying HTML from those old Scholastic books to make my first little interactive Pokemon map, then hosting WoW guild sites, DKP boards, and CS 1.6 servers.

Feels like we’re back again with vibe-coding, app builders, v0, bolt, lovable, all of it. The AI infra even feels familiar. End users are getting back the kind of control we had in the public_html days of cPanel shared hosting and the VPS era. And for the backend, it’s Supabase or Neon / Postgres now instead of phpMyAdmin and MySQL.

Full circle~


The patterns are interesting, but they don’t imply “instructions.” Networks with thresholds self-organize long before they do anything meaningful. Good developmental signal, not evidence of built-in knowledge.


The word "instructions" doesn't literally imply a book of rules and practices is stuffed somewhere in their head. I'm not sure this is a meaningful distinction. It's still built-in by the time they're born.


Feels like pricing is becoming a moving target again. Cursor’s experiments showed how fast teams will change plans the moment usage patterns shift, and LLM speed only accelerates that loop. Every new model drop forces you to rethink what’s “metered,” what’s “included,” and what users actually feel in the product.

The part that buckles first is always the billing logic. Not the API calls, but the lifecycle math behind experiments… and the experiments never stop now.

So anything that lets teams iterate without rewiring state machines every week is going to find an audience. Most people just want to ship, test, adjust, repeat, without their billing layer collapsing under the pace of AI.
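
To make that concrete, here’s a minimal sketch of what “pricing as data” can look like, so an experiment is a config swap rather than a state-machine rewrite. The plan names, meters, and prices below are all made up for illustration:

    # Minimal sketch: pricing as data instead of hard-coded branches.
    # Plan names, meters, and prices here are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Plan:
        name: str
        included: dict = field(default_factory=dict)  # meter -> free units
        overage: dict = field(default_factory=dict)   # meter -> price per extra unit

    PLANS = {
        "pro_v1": Plan("pro_v1", {"llm_tokens": 1_000_000}, {"llm_tokens": 0.000002}),
        "pro_v2": Plan("pro_v2", {"llm_tokens": 250_000},   {"llm_tokens": 0.000001}),
    }

    def bill(plan_name: str, usage: dict) -> float:
        """Charge only for usage beyond the plan's included units."""
        plan = PLANS[plan_name]
        total = 0.0
        for meter, used in usage.items():
            extra = max(0, used - plan.included.get(meter, 0))
            total += extra * plan.overage.get(meter, 0.0)
        return total

    # A pricing experiment becomes a config change, not a code rewrite:
    print(bill("pro_v1", {"llm_tokens": 1_200_000}))  # 0.4
    print(bill("pro_v2", {"llm_tokens": 1_200_000}))  # 0.95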

Nice launch!


Thank you! That’s exactly what we saw after speaking with a bunch of devs. They wanted to iterate on pricing as they figure out how to price their AI products.

It turns out that the architectural changes that make that easier also make the tool generally easier to use for a bunch of different use cases.

In urban planning they have a name for this: the “curb cut effect.” Urban planners found that curb cuts, which had been made for a specific accessibility need such as wheelchair access, ended up being useful to all kinds of people: cyclists, people walking at night, pedestrians with weaker joints.

Devtools seem to benefit from a similar curb cut effect, where gains in a tool’s usability for one domain generalize across many others.


Kudos on the launch! Most of the real work isn’t the chat box. It’s keeping context stable, memory reliable, and tool calls from drifting when things get complex. That’s where projects usually break, and also where the interesting problems are now. :)


Yes indeed! Our belief is that tool design, compaction (e.g. tool result summarization), and reminders are the things that separate a product that works magically vs one that falls over on any slightly more complex task.

And this is made all the more important when supporting a wide range of models, even "weaker" open-source models.
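
For anyone curious, here’s a rough sketch of the general shape of compaction before tool results re-enter the context. The character budget and the summarize() hook are illustrative assumptions, not a specific product’s implementation:

    # Rough sketch: compact long tool results before appending to context.
    # The budget and summarizer below are illustrative assumptions.
    MAX_TOOL_CHARS = 4_000  # arbitrary per-result budget

    def compact_tool_result(raw: str, summarize) -> str:
        """Pass short results through; summarize long ones, keeping head/tail verbatim."""
        if len(raw) <= MAX_TOOL_CHARS:
            return raw
        head, tail = raw[:500], raw[-500:]
        summary = summarize(raw)  # e.g. a cheap model call
        return (head + "\n...[" + str(len(raw)) + " chars compacted]...\n"
                + tail + "\n\nSummary: " + summary)

    # Usage inside an agent loop (messages is the model context):
    # result = run_tool(call)
    # messages.append({"role": "tool", "content": compact_tool_result(result, summarize)})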


I don’t think he meant scaling is done. It still helps, just not in the clean way it used to. You make the model bigger and the odd failures don’t really disappear. They drift, forget, lose the shape of what they’re doing. So “age of research” feels more like an admission that the next jump won’t come from size alone.


It still does help in the clean way it used to. The problem is that the physical world is imposing more constraints: lack of power, chips, and data. Three years ago there was scaling headroom created by the gaming industry, the existing power grid, untapped data artefacts on the internet, and other precursor activities.


The scaling laws are also power laws, meaning that most of the big gains happen early in the curve, and improvements become more expensive the further you go along.
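
A toy example shows the shape; the constants here are made up, not any published fit:

    # Toy power law L(C) = a * C**(-alpha); both constants are made up.
    a, alpha = 10.0, 0.05

    def loss(compute):
        return a * compute ** -alpha

    for c in (1e21, 1e22, 1e23, 1e24):
        print(f"{c:.0e} FLOPs -> loss {loss(c):.3f}")
    # Prints 0.891, 0.794, 0.708, 0.631: each 10x of compute buys
    # the same ~11% relative drop, so equal absolute gains keep
    # getting an order of magnitude more expensive.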


LLMs aren’t perfect, but calling them a “cult” misses the point. They’re not just fancy heuristics, they’re general-purpose function approximators that can reason, plan, and adapt across a huge range of tasks with zero task-specific code.

Sure, it’s not AGI. But dismissing the progress as just marketing ignores the fact that we’re already seeing them handle complex workflows, multi-step reasoning, and real-time interaction better than any previous system.

This is more than just Lisp nostalgia. Something real is happening.


Sure, I have seen the detrimental impact on some teams, and it does not play out the way marketers suggest.

The trick is that people see meaning in well-structured nonsense, without understanding that high-dimensional vector spaces simply abstract associative false equivalences with an inescapable base error rate.

I wager neuromorphic computing is likely more viable than LLM cults. The LLM subject is incredibly boring once you tear it apart, and less interesting than watching an Opuntia cactus grow. Have a wonderful day =3


For O-1s filed by startups, what are the most common weak points USCIS flags now? Are you seeing more RFEs around equity or funding documentation?


For the most part, O-1 filings for the founders or employees of startups are no more difficult than the O-1 filings for employees of established/large companies; the main issue is "distinguished reputation" (a component of one of the O-1 criteria), which can be harder for startups to show.

