mccoyb 8 hours ago

Building agents has been fun for me, but it's clear that there are serious problems with "context engineering" that must be overcome with new ideas. In particular, no matter how large the context window grows, one must curate what the agent sees: agents don't have very effective filters on what is relevant to supercharge them on tasks, and so (a) you must leave *.md files strewn about to help guide them and (b) you must put them into roles. The *.md system is essentially a rudimentary memory system, but it could be made significantly more robust, and could involve e.g. constructing programs and models (in natural language) on the fly, guided by interactions with the user.

What Claude Code has taught me is that steering an agent via a test suite is an extremely powerful reinforcement mechanism (the feedback loop leads to success, most of the time) -- and I'm hopeful that new thinking will extend this into the other "soft skills" that an agent needs to become an increasingly effective collaborator.
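
Roughly, the loop looks like this (a minimal sketch; ask_agent is a stand-in for whatever agent CLI or API you drive, and the prompt wording is made up):

    import subprocess

    def run_tests() -> tuple[bool, str]:
        # Run the suite; its output is the feedback signal for the agent.
        proc = subprocess.run(["pytest", "-x", "-q"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def steer(task: str, max_rounds: int = 5) -> bool:
        prompt = task
        for _ in range(max_rounds):
            ask_agent(prompt)  # hypothetical: the agent edits files in the repo
            ok, report = run_tests()
            if ok:
                return True  # a green suite is the loop's success condition
            # Feed the failures back verbatim; the test output is the "reinforcement".
            prompt = f"{task}\n\nThe test suite failed:\n{report}\nFix the code until it passes."
        return False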

  • moritz64 42 minutes ago

    > steering an agent via a test suite is an extremely powerful reinforcement mechanism

    can you elaborate a bit? how do you proceed? what does your process look like?

  • franktankbank 7 hours ago

    Is there a recommended way to construct .md files for such a system? For instance, when I make them for human consumption they'd have lots of markup for readability, but that may or may not be consumable by an LLM. Can you create a .md the same as for human consumption without hindering an LLM?

    • artpar an hour ago

      I am using these files (most of them are LLM-generated from my prompts, to reduce lookups when working on a codebase)

      https://gist.github.com/artpar/60a3c1edfe752450e21547898e801...

      (especially the AGENT.knowledge is quite helpful)

      • HumanOstrich 35 minutes ago

        Can you provide any form of demonstration of an LLM reading these files and acting accordingly? Do you know how each item added affects its behavior?

        I'd also be interested in your process for creating these files, such as examples of prompts, tools, and references for your research.

    • sothatsit 6 hours ago

      Just writing a clear document, like you would for a person, gets you 95% of the way there. There are little tweaks you can do, but they don't matter as much as just being concise and factual, and structuring the document clearly. You just don't want the documentation to get too long.

    • golergka 5 hours ago

      I've had very good experience with building a very architecture-conscious folder structure and putting AGENTS.md in every folder (and, of course, instruction to read _and_ update those in the root prompt). But with agent-written docs I also have to run a doc-maintainer agent pretty often.

      • troupo 3 hours ago

        > and putting AGENTS.md in every folder (and, of course, instruction to read _and_ update those in the root prompt).

        For me, Claude Code completely ignores the instruction to read and follow AGENTS.md, and I have to remind it every time.

        The joys of non-deterministic blackboxes.

  • zmgsabst 7 hours ago

    I’ve found managing the context is most of the challenge:

    - creating the right context for parallel and recursive tasks;

    - removing some steps (eg, editing its previous response) to show only the corrected output;

    - showing it its own output as my comment, when I want a response;

    Etc.

    • mccoyb 7 hours ago

      I've also found that relying on agents to build their own context _poisons_ it ... that it's necessary to curate it constantly. There's kind of a <1 multiplicative thing going on, where I can ask the agent to e.g. update CLAUDE.mds or TODO.mds in a somewhat precise way, and the agent will multiply my request into a lot of changes which (on the surface) appear well and good ... but if I repeat this process a number of times _without manual curation of the text_, I end up with "lower quality" than I started with (assuming I wrote the initial CLAUDE.md).

      The obvious conclusion: while the agent can multiply the amount of work I can do, there's a multiplicative reduction in quality, which means I need to account for that (I have to budget time for curation).

      • prmph 5 hours ago

        In other words, the old adage still applies: there is no free lunch.

        More seriously, yes, it makes sense that LLMs are not going to be able to take humans entirely out of the loop. Think about what it would mean if that were the case: if people, on the basis of a few simple prompts, could let agents loose and create sophisticated systems without any further input, then there would be nothing to differentiate those systems, and thus they would lose their meaning and value.

        If prompting is indeed the new level of abstraction we are working at, then what value is added by asking Claude: make me a note-taking app? A million other people could also issue this same low-effort prompt; thus what is the value added here by the prompter?

    • ModernMech 5 hours ago

      It's funny because things are finally coming full circle in ML.

      10-15 years ago the challenge in ML/PR was "feature engineering", the careful crafting of rules that would define features in the data which would draw the attention of the ML algorithm.

      Then deep learning came along and it solved the issue of feature engineering; just throw massive amounts of data at the problem and the ML algorithms can discern the features automatically, without having to craft them by hand.

      Now we've gone as far as we can with massive data, and the problem seems to be that it's difficult to bring out the relevant details when there's so much data. Hence "context engineering", a manual, heuristic-heavy process guided by trial and error and intuition. More an art than a science. Pretty much the same thing that "feature engineering" was.

mindwok 8 hours ago

I'm not yet convinced (though I remain open to the idea) that AI agents are going to be a widely adopted pattern in the way people on LinkedIn suggest.

The way I use AI today is by keeping a pretty tight leash on it, a la Claude Code and Cursor. Not because the models aren't good enough, but because I like to weigh in frequently to provide taste and direction. Giving the AI more agency isn't necessarily desirable, because I want to provide that taste.

Maybe that'll change as I do more and new ergonomics reveal themselves, but right now I don't really want AI that's too agentic. Otherwise, I kind of lose connection to it.

  • prmph 5 hours ago

    Exactly. I made a similar comment as this elsewhere on this discussion:

    The old adage still applies: there is no free lunch. It makes sense that LLMs are not going to be able to take humans entirely out of the loop.

    Think about what it would mean if that were the case: if people, on the basis of a few simple prompts, could let agents loose and create sophisticated systems without any further input, then there would be nothing to differentiate those systems, and thus they would lose their meaning and value.

    If prompting is indeed the new level of abstraction we are working at, then what value is added by asking Claude: make me a note-taking app? A million other people could also issue this same low-effort prompt; thus what is the value added here by the prompter?

    • chamomeal 4 hours ago

      I’ve been thinking about that too! If you can only make an app by “vibe coding” it, then anybody else in the world with internet access can make it, too!

      Although sometimes the difficult part is knowing what to make, and LLMs are great for people who actually know what they want, but don’t know how to do it

  • thimabi 8 hours ago

    Do you think that, over time, knowing how the models behave, simply providing more/better context and instructions can fill this gap of wanting to provide taste and direction to the models’ outputs and actions?

    My experience is that, for many workflows, well-done "prompt engineering" is more than enough to make AI models behave more like we'd like without constantly needing us to weigh in.

    • mindwok 8 hours ago

      I suppose it's possible, although the models would have to have a really nuanced understanding about my tastes and even then it seems doubtful.

      If we use a real world analogy, think of someone like an architect designing your house. I'm still going to be heavily involved in the design of my house, regardless of how skilled and tasteful the architect is. It's fundamentally an expression of myself - delegating that basically destroys the point of the exercise. I feel the same for a lot of the stuff I'm building with AI now.

      • thimabi 7 hours ago

        Can you share some examples of things you’ve been building with AI?

        From your comments, I’d venture a guess that you see your AI-assisted work as a creative endeavor — an expression of your creativity.

        I certainly wouldn’t get my hopes up for AI to make innovative jokes, poems and the like. Yet for things that can converge on specific guidelines for matters of taste and preferences, like coding, I’ve been increasingly impressed by how well AI models adapt to our human wishes, even when expressed in ever longer prompts.

        • QuadmasterXLII 7 hours ago

          One example: as a trial, I wanted to work out how frequently a 1400-rated chess player can get a particular opening trap. I intended to check this for all the traps, so it needed to be fast. With a surprising amount of handholding, Claude Code downloaded the relevant file from Lichess. Its method of computing the probability was wrong, so I told it the formula to use and it got the right answer, but incredibly slowly. I asked it to precompute and cache a data structure for accelerating these queries and it splashed around ineffectually for a long time with sqlite while I made dinner. I came back and clarified that just sorting all the games in the rating range and pickling that list of strings was a fine data structure, then using binary search to compute the probability in log(n) time. It managed to use binary search in O(n) time, so I folded and wrote the hot loop myself. This got the query back to ~1 ms.

          In the end the agentic coding bit was garbage, but I appreciated Claude's help writing the boilerplate to interface with Stockfish.
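
          For the curious, the sorted-list-plus-binary-search idea is roughly this (a sketch, not the actual code; names are made up):

              from bisect import bisect_left, bisect_right

              # games: move strings ("e4 e5 Nf3 ..."), pre-filtered to the
              # rating range, sorted once, and pickled for reuse.
              def line_probability(games: list[str], line: str) -> float:
                  lo = bisect_left(games, line)
                  hi = bisect_right(games, line + "\uffff")  # end of the prefix range
                  return (hi - lo) / len(games)  # two O(log n) lookups per query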

        • mindwok 7 hours ago

          I use AI for coding - most of the projects I've built have been fun toys (chore tracking apps, Flutter apps to help my parents), but I've also built one commercial money making app.

          I do agree - the models have good taste and often do things that delight me, but there's always room for me to inject my taste. For example, I don't want the AI to choose what state management solution I use for my Flutter app because I have strong opinions about that.

          • aabaker99 4 hours ago

            What’s the best state management in Flutter?

            • mindwok 3 hours ago

              Oh no we've wandered into a flamewar...

              I like Bloc the most!

    • heavyset_go 3 hours ago

      Look at what happens whenever models are updated or new models come out: previous "good" prompts might not return the expected results.

      What's good prompting for one model can be bad for another.

    • troupo 3 hours ago

      > knowing how the models behave, simply providing more/better context and instructions can fill this gap

      No.

      --- start quote ---

      prompt engineering is nothing but an attempt to reverse-engineer a non-deterministic black box for which any of the parameters below are unknown:

      - training set

      - weights

      - constraints on the model

      - layers between you and the model that transform both your input and the model's output that can change at any time

      - availability of compute for your specific query

      - and definitely some more details I haven't thought of

      https://dmitriid.com/prompting-llms-is-not-engineering

      --- end quote ---

    • apwell23 8 hours ago

      taste cannot be reduced to a bunch of instructions.

  • afc 7 hours ago

    My thinking is that over time I can incrementally codify many of these individual "taste" components as prompts that each review a change and propose suggestions.

    For example, a single prompt could tell an llm to make sure a code change doesn't introduce mutability when the same functionality can be achieved with immutable expressions. Another one to avoid useless log statements (with my specific description of what that means).

    When I want to evaluate a code change, I run all these prompts separately against it, collecting their structured output (via MCP). Of course, I incorporate this in my code-agent to provide automated review iterations.

    If something escapes where I feel the need to "manually" provide context, I add a new prompt (or figure out how to extend whichever one failed).
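
    Concretely, the shape of it is something like this (a sketch only; run_prompt stands in for my actual LLM/MCP plumbing, and the prompt texts are illustrative):

        REVIEW_PROMPTS = {
            "immutability": "Flag changes that introduce mutability where an "
                            "immutable expression would achieve the same thing.",
            "logging": "Flag useless log statements (noise, duplicates, "
                       "logs with no actionable content).",
        }

        def review_change(diff: str) -> list[dict]:
            findings = []
            for name, instructions in REVIEW_PROMPTS.items():
                # run_prompt is a stand-in for the LLM call that returns
                # structured output (e.g. JSON collected via MCP).
                result = run_prompt(instructions, diff)
                findings.append({"check": name, "suggestions": result})
            return findings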

transcriptase 8 hours ago

I love that there are somehow authorities on tech that one could realistically have at most 1-2 years of experience with. It's the reverse of the "seeking coder with 10 years of experience in a 2 year old language" meme.

  • noosphr 8 hours ago

    I've been building what's now called AI agents since GPT-3 came out. There are plenty of other people who did the same thing. That's five years now. If you can't be an expert after 5 years, then there is no such thing as experts.

    Of course, "agents" is now a buzzword that means nothing, so there is that.

    • Voloskaya 2 hours ago

      “Agent” involves having agency. Calling the GPT-3 API and asking it to do some classification, or whatever else your use case was, would not be considered agentic. Not only were there no tools back then to let an LLM carry out a plan of its own; even if you had built your own, GPT-3 still sucked way too much to trust it with even basic tasks.

      I have been working on LLMs since 2017, both training some of the biggest and then creating products around them, and I consider that I have no experience with agents.

      • noosphr 2 hours ago

        All LLMs still suck too much to trust them with basic tasks without a human in the loop. The only people who don't realize this are the ones whose paycheck depends on them not understanding it.

        • Voloskaya an hour ago

          I don't necessarily disagree; my point is more that today you can realistically let an agent do several steps and use several tools, following a plan of its own, before doing a manual review (e.g. Claude Code followed by a PR review). After all, an intern has agency, even if I'm going to double check everything they do.

          GPT-3, while impressive at the time, was too bad to even let it do that; it would break after 1 or 2 steps, so letting it do anything by itself would have been a waste of time where the human in the loop would always have to re-do everything. Its planning ability was too poor and hallucinations way too frequent for it to be useful in those scenarios.

    • skeeter2020 7 hours ago

      I took a course* on agent-based systems in grad school in 2006, but nobody has been building what agents mean today for 5 or even 3 years.

      *https://www.slideserve.com/verdi/seng-697-agent-based-softwa...

      • golergka 5 hours ago

        The first GPT-based app I built was in summer 2022, right after I got API access to GPT-3, and I was writing my first autonomous GPT wrapper right after I got GPT-4 access in February. It didn't have "tools" it could use, but it had memory and it was (clumsily) moving along a pre-defined chat scenario. And I'm nowhere near top AI researchers, who must have had closer access much earlier — so I have absolutely no doubt there are people who have been writing exactly what we now call "agents" for 3 years straight.

    • GPerson 8 hours ago

      5 years is barely a beginner in lots of fields.

      • hinterlands 8 hours ago

        More to the point, it's a field where we're constantly told that our experiences from a month ago are in no way relevant and that the latest thing is fundamentally different to what we know. Should expertise degrade just as quickly?

        • lmm 3 hours ago

          Yes. The worst company I worked for was the one that allowed the guy who was a programming expert from like 30 years ago to make all important decisions.

      • tokioyoyo 7 hours ago

        Don't take it the wrong way, but it's software. It's not that deep for 99% of cases.

    • djabatt 5 hours ago

      I agree with your point. After working with LLMs and building apps with them for the past four years, I consider myself a veteran and perhaps an authority (to some) on the subject. I find developing programs that use LLMs both fascinating and frustrating. Nevertheless, I'm going to continue with my work and curiosities, and let the industry change the names of what I'm doing—whether it's called agent development, context engineering, or whatever comes next.

    • Mengkudulangsat 8 hours ago

      Jiro's son is only allowed to make sushi after 30 years.

      • ecb_penguin 7 hours ago

        Yeah, but that's ego. You wouldn't be able to pick out Jiro's sushi in a blind taste test of many Tokyo sushi restaurants. If other people can replicate what you do, then the 30 years doesn't serve any actual purpose.

      • noosphr 8 hours ago

        Jiro's son is only allowed to make sushi when Jiro is about to retire.

    • apwell23 8 hours ago

      Curious, what did you build? Experience only counts if you are shipping, right?

      • noosphr 7 hours ago

        The biggest thing was an internal system for medium frequency trading.

        It had a lot of moving parts, of which agents were the top 30% that other systems would interact with. Storing, retrieving and ranking the information was the more important 70% that isn't as glamorous and that no one makes courses about.

        I still have no idea why everyone is talking about whatever the hottest decoder-only model is; encoder-only models are a lot more useful for most tasks not directly interfacing with a human.

  • zzzeek 7 hours ago

    Totally my reaction - "I've worked with dozens of teams ...". Really?

    • zmmmmm 7 hours ago

      Which means they had at best shallow involvement and left the scene pretty quickly. Probably no realistic idea whether the systems they created survived long-term contact with reality or not. But hey, free advice!

rm999 7 hours ago

A really short version of it is that you don't need an agent if you have a well-defined solution that can be implemented in advance (e.g. the 'patterns' in this article). Programmers often work on problems that have programmatic solutions and then the advice is totally correct: reach for simpler more reliable solutions. In the future AIs will probably be smart enough to just brute force any problem, but for now this is adding unneeded complexity.

I suspect a reason so many people are excited about agents is they are used to "chat assistants" as the primary purpose of LLMs, which is also the ideal use case for agents. The solution space in chat assistants is not defined in advance, and more complex interactions do get value from agents. For example, "find my next free Friday night and send a text to Bob asking if he's free to hang out" could theoretically be programmatically solved, but then you'd need to solve for every possible interaction with the assistant; there are a nearly unlimited number of ways of interfacing with an assistant, so agents are a great solution.
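
The difference is roughly that instead of hard-coding every flow, you hand the model a small toolbox and let it pick the sequence. A sketch, with made-up tool names and llm_decide standing in for a tool-calling LLM API:

    def next_free_evening(weekday: str) -> str:
        return "2025-07-11 19:00"  # stub: would query the calendar

    def send_text(contact: str, message: str) -> str:
        return f"sent to {contact}"  # stub: would hit the SMS API

    TOOLS = {"next_free_evening": next_free_evening, "send_text": send_text}

    def assistant(request: str) -> str:
        transcript = [request]
        while True:
            # llm_decide returns either a tool call or a final answer (tool=None).
            step = llm_decide(transcript, TOOLS)
            if step["tool"] is None:
                return step["text"]
            result = TOOLS[step["tool"]](**step["args"])
            transcript.append(f'{step["tool"]} -> {result}')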

  • franktankbank 7 hours ago

    Works great when you can verify the response quicker than it would take to just do it yourself. Personally, I have a hard-ass time trusting it without verifying.

joeblubaugh 8 hours ago

Why do so many examples break down to “send better spam faster”?

  • rglover 5 hours ago

    What is a wheel without its grease?

  • malfist 7 hours ago

    lol, that was literally their example, wasn't it? Troll LinkedIn looking for people and spam them with "personalized" emails.

    • Animats 2 hours ago

      That's what's so funny about this.

      Spamming is not only obnoxious, but a very weak example. Spamming is so error tolerant that if 30% of the output is totally wrong, the sender won't notice. Response rates are usually very low. This is a singularly un-demanding problem.

      You don't even need "AI" for this. Just score LinkedIn profiles based on keywords, and if the score is high enough, send a spam. Draft a few form letters, and send the one most appropriate for the keywords. Probably would have about the same reply rate.
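
      The no-AI version really is about a dozen lines (a sketch; the keywords, weights, and threshold are invented):

          KEYWORDS = {"kubernetes": 3, "platform engineering": 2, "devops": 1}
          TEMPLATES = {"kubernetes": "Hi {name}, saw your k8s work...",
                       "default": "Hi {name}, quick question..."}

          def score(profile_text: str) -> int:
              text = profile_text.lower()
              return sum(w for kw, w in KEYWORDS.items() if kw in text)

          def pick_message(profile_text: str, name: str):
              if score(profile_text) < 3:  # below threshold: send nothing
                  return None
              key = "kubernetes" if "kubernetes" in profile_text.lower() else "default"
              return TEMPLATES[key].format(name=name)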

ilaksh 5 hours ago

I think this was true in late 2023 or early 2024, but not necessarily in mid-2025 for most tasks (as long as they require some AI and aren't pure automation) and as long as you use SOTA LLMs.

I used to build the way most of his examples are built: just functions calling LLMs. I found it almost necessary due to poor tool selection etc. But I think the leading-edge LLMs like Gemini 2.5 Pro and Claude 4 are smart enough and good enough at instruction following and tool selection that it's not necessarily better to create workflows.

I do have a checklist tool and a delegate command, and I may break tasks down into separate agents. But the advantage of creating instructions and assigning tool commands (especially if you have an environment with a UI where it is easy to assign tool commands to agents and otherwise define them) is that it is more flexible and a level of abstraction above something like a workflow. Even with visual workflows it's still programming, which is more brittle and more difficult to dial in.

This was not the case 6-12 months ago and doesn't apply if you insist on using inferior language models (which most of them are). It's really only a handful that are really good at instruction following and tool use. But I think it's worth it to use those and go with agents for most use cases.

The next thing that will happen over the following year or two is a massive trend of browser and computer-use agents being deployed. That is again another level of abstraction. They might even incorporate really good memory systems, and will surely have demonstration or observation modes that can extract procedures from humans using UIs. They will also learn (record) procedural details for optimization during exploration guided by verbal or written instructions.

  • bonzini 5 hours ago

    The techniques he has in the post are mostly "model your problem as a data flow graph and follow it".

    If you skip the modeling part and rely on something that you don't control being good enough, that's faith, not engineering.

    • ilaksh 5 hours ago

      I didn't say to skip any kind of problem modeling. I just didn't emphasize it.

      The goal _should_ be to avoid doing traditional software engineering or creating a system that requires typical engineering to maintain.

      Agents with leading edge LLMs allow smart users to have flexible systems that they can evolve by modifying instructions and tools. This requires less technical skill than visual programming.

      If you are only taking advantage of the LLM to handle a few wrinkles or a little bit of natural language mapping then you aren't really taking advantage of what they can do.

      Of course you can build systems with rigid workflows and a sprinkling of LLM integration, but for most use cases it's probably not the right default mindset for mid-2025.

      Like I said, I was originally following that approach a little ways back. But things change. Your viewpoint is about a year out of date.

      • bonzini 4 hours ago

        I understand that. You didn't answer the important point, which is that you can't be sure that what you have works if you don't encode the process. And encoding the processes isn't really software engineering; abstractions for business rules management have existed for decades and can be reused in this context.

        You're YOLOing it, and okay, that may be fine, but it may also be a colossal mistake, especially if you remove or never had a human in the loop.

        • ilaksh 4 hours ago

          What I suggested was to use an actual agent. I also did not say there was no human in the loop.

          The process is encoded in natural language and tool options.

          I'm not YOLOing anything.

leCaptain an hour ago

There is no mention of which model was used to build the agent. For all we know, the author could have used Qwen3 0.6 Q4.

It would be helpful to know which models were used in each scenario; otherwise this can largely be ignored.

JSR_FDED 7 hours ago

After spending the last 3 weeks trying to get agents to work reliably, I went for a much simpler pattern. Agents feel like they're in the "six fingers on a hand" stage.

btown 6 hours ago

When I see things like "The coordinator threw up its hands when tasks weren't clearly defined" but the conclusion is to not use a coordinator at all in favor of imperative logic... it's really hard to know how much of this could be solved by using much more specific prompts/tool descriptions, and using interim summarization/truncation LLM passes to ensure that the amount of context from prior tool outputs doesn't overwhelm the part of context that describes the tools themselves and their recommended use cases. And when the article doesn't even provide a single example of a long-form tool description or prompt that would actually be used in practice...

I think there's some truth to using the right orchestration for the job, but I think that there's a lot more jobs that could benefit from agentic orchestration than the article would have you believe.
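
For example, the interim summarization/truncation pass is just one more LLM call sitting between the tool and the context, something like this (a sketch; llm is a stand-in for whatever model call you use, and the budget is arbitrary):

    MAX_TOOL_CHARS = 2000  # assumed budget so tool output can't drown out the tool specs

    def add_tool_result(context: list[str], tool_name: str, output: str) -> None:
        if len(output) > MAX_TOOL_CHARS:
            # Interim pass: compress raw output before it enters the running context.
            output = llm(
                f"Summarize this {tool_name} output in under {MAX_TOOL_CHARS} characters, "
                f"keeping anything a later step might need:\n{output}"
            )
        context.append(f"[{tool_name}] {output}")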

jlaneve 8 hours ago

We’re seeing this all the time - taking traditional workflow orchestration tools and instrumenting LLMs as part of them. It becomes a lot easier to build these because the complexity comes from (a) the model, which frontier labs are making easy, and (b) productionizing a workflow, which workflow orchestration tools make easy. It's also easy to recognize value because these workflows are often grounded in existing work and thus easy to measure.

We see these patterns so much that we packaged them up for Airflow (one of the most popular workflow tools)!

https://github.com/astronomer/airflow-ai-sdk
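
Even without the SDK, the plain-Airflow shape is straightforward: the LLM call becomes just another task in the DAG, so retries, scheduling and observability come for free. A sketch (not the SDK's API; load_new_tickets, summarize_with_llm, and push_to_queue are stand-ins for your own code):

    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
    def ticket_triage():
        @task
        def fetch_tickets() -> list[str]:
            return load_new_tickets()  # stand-in: pull new tickets from your system

        @task
        def classify(tickets: list[str]) -> list[dict]:
            # The LLM step is an ordinary task, so it inherits Airflow's retries.
            return [{"ticket": t, "label": summarize_with_llm(t)} for t in tickets]

        @task
        def route(labelled: list[dict]) -> None:
            for item in labelled:
                push_to_queue(item["label"], item["ticket"])  # stand-in

        route(classify(fetch_tickets()))

    ticket_triage()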

bravesoul2 6 hours ago

Oh, I was hoping this would go back another step in the 5 whys and ask why use an LLM conversationally at all.

By the time you've got a nice, well-established context with the right info... just give it to the user.

I like the idea of hallucination-free systems where the LLM merely classifies things at most.

Question -> classifier -> check with user action to take -> act using no AI
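
In other words, the LLM only picks a label and the action itself stays deterministic. A sketch (classify_with_llm is a stand-in that returns one of the listed options and nothing else):

    def cancel_order(order_id: str) -> str:
        return f"order {order_id} cancelled"  # stub: deterministic, no AI involved

    def refund_order(order_id: str) -> str:
        return f"order {order_id} refunded"  # stub

    ACTIONS = {"cancel_order": cancel_order, "refund_order": refund_order}

    def handle(question: str, order_id: str) -> str:
        # classify_with_llm: stand-in that maps the question to one ACTIONS key.
        label = classify_with_llm(question, options=list(ACTIONS))
        if input(f"Run {label} for order {order_id}? [y/N] ").lower() != "y":
            return "skipped"
        return ACTIONS[label](order_id)  # the "act" step contains no AI at all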

aryehof 4 hours ago

I have long felt that deterministic business processes are not suited to LLM orchestration. Isn’t the author in this article expressing this discovery?

abelanger 7 hours ago

Agents depend heavily on the quality of their individual components, so it's pretty obvious that demo agents are going to be incredibly unstable. You need a success rate for each individual component to be near 100% or build in a mechanism for corrective action (one of the things that Claude Code does particularly well).

dmezzetti 8 hours ago

This article is missing an even more important point: you don't always need to start with an LLM, plain old coding still solves a lot of problems.

  • skeeter2020 7 hours ago

    It's funny how, when I talk to ML practitioners who have experience and work in the field, they're the most pragmatic voices. Like our staff developer on the ML team: "if you can solve the problem algorithmically you should definitely do that!"

  • riku_iki 8 hours ago

    But you can't build a $5B startup in 10 months with plain old coding...

    • nine_k 7 hours ago

      Is coding the bottleneck there?

      • thimabi 7 hours ago

        Without it being attached to the AI hype, it surely is. In the current tech landscape, there’s a tendency to stuff AI into everything, so anything that doesn’t include it ends up being left behind.

    • dmezzetti 7 hours ago

      There are plenty of AI companies solving interesting problems and possibly worth it. But most problems are simpler than that, and that hasn't changed.

    • imhoguy 3 hours ago

      You can even build an AI unicorn without AI: builder.ai /s

      • gsky 34 minutes ago

        Some media outlets pushed that for clicks. According to an employee who worked there, they built some AI models and did use AI.

lmeyerov 4 hours ago

I like the decision diagram :)

The callout on enterprise automation is interesting b/c it's one of the $T-sized opportunities that matter most here, and while I think the article is right in the small, I now think quite differently in the large. Basically, we're crossing the point where one agent written in natural language can easily be worth ~100 Python scripts and be much shorter at the same time.

For context, I work with operational enterprise/gov/tech-co teams, like tier 1+2 security incident response, where most 'alerts' don't get seriously investigated because underresourced & underautomated teams have to just define them away. Basically ever since GPT-4, it's been pretty insane figuring this stuff out with our partners here. As soon as you get good at prompt templates / plans with Claude Code and the like to make them spin for 10min+ productively, this gets very obvious.

Before agents:

Python workflows and their equivalent. They do not handle variety & evolution because they're hard-coded. Likewise, they only go so far on a task because they're brain dead. Teams can only crank out + maintain so many.

After agents:

You can easily sketch out 1 investigation template in natural language that literally goes 10X wider + 10X deeper than the equiv of Python code, including Python AI workflows. You are now handling much more of the problem.

deadbabe 8 hours ago

A key thing we may be forced to admit someday is that AI agents are really just expensive temporary glue that we use to build services quickly, until we have cheaper hard-coded functions developed once the AI agent gives us sufficient experience with the scope of the problem domain.

  • Onewildgamer 8 hours ago

    An interesting take, but only if the stakes are low when the decisions are wrong. I'm not confident having an LLM make decisions for a customer or for me. I'd rather have it suggest things to customers: suggested actions and some useful insights that the user may have overlooked.

    • malfist 7 hours ago

      Can you imagine a bank taking this approach? Sorry, we didn't have enough time to build a true ledger, and now the AI says you have no money.

      • collingreen 7 hours ago

        This is happening across lots of industries right now. Some are okay, like the car company that had to sell the car at the insanely low price their agent promised, but some are terrible, like the UnitedHealthcare "90% wrong when denying coverage" one or the "who should be fired" prompt from DOGE.

evertedsphere 2 hours ago

Another day, another blog post chock full of LLM tells on the front page.

ramoz 5 hours ago

Sorry, but you had a misleading experience with trash software (crew). A research agent, of all cases - the productionized, high-value agent shipped to the masses by all providers.

Hard disagree with most of the narrative. Don't start with models; start with Claude Code. For any use case. Go from there depending on costs.

> When NOT to use agents

> Enterprise Automation

Archive this blog.

The real lesson is: don't let any company other than the providers dictate what an agent is vs. isn't.

Computer-use agents are here. They are coming for the desktop of non-technical users, they will provide legitimate RPA capability and beyond, and anyone productizing agents will build on top of provider SDKs.

rvz 8 hours ago

Imagine playing the AI agent and scraper cat-and-mouse game, only for Cloudflare to block them all by default, all to navigate web pages and waste millions of dollars' worth of tokens just to operate a computer.

The move by Cloudflare will totally ruin the AI scraper and the AI agent hype.

  • OccamsMirror 3 hours ago

    The funny thing is that a lot of people want the AI to scrape their public website. The same people that likely wrote all of their marketing content with ChatGPT.

  • oceanplexian 8 hours ago

    I don’t see how CloudFlare is going to realistically block someone who is trying hard enough.

    They’ll just get the agent to operate a browser with vision and it’s over. CAPTCHAs were already obsolete like 2-3 years ago.

    • rsanek 6 hours ago

      They don't have to make it impossible, just expensive enough for it to not be worth it. Using vision is a perfect example.

  • iphone_elegance 7 hours ago

    Eh, if you control the browser or provide a good plugin you can just have users scrape for you

itomato 7 hours ago

MCP Server for Disco/Prom or Celonis?

ldjkfkdsjnv 8 hours ago

This is all going to be solved by better models. Building agents is building for a world that doesn't quite exist yet, but probably will in a year or two. Building some big heuristic engine that strings together LLM calls (which is what this blog advocates for) is essentially a bet against progress in AI. I'm not taking that bet, and neither are any of the major players.

  • tptacek 8 hours ago

    Maybe! But its points seem well taken today. The important thing I think to keep in mind is that LLM calls themselves, anything that's happening inside the LLM, are stochastic. Even with drastically better models, I still can't tell myself a story that I can rely on specific outputs from an LLM call. Their outputs today are strong enough for a variety of tasks, but when LLMs are part of the fabric of a program's logic --- in agent systems --- you need an expert human involved to notice when things go off the rails.

  • candiddevmike 8 hours ago

    There are perverse incentives against admitting that the AI boom music is probably stopping and grabbing a chair; better to keep stringing along investors with more AGI thought leadership.

    • tptacek 8 hours ago

      This comment has nothing to do with either the comment it replies to or the original post, neither of which have anything whatsoever to do with "AGI thought leadership".

      • bGl2YW5j 6 hours ago

        There's an implication in the messaging of most of these blogs that LLMs, and the approach the blog describes, are verging on AGI.

        • tptacek 6 hours ago

          No, there isn't. People talk about AGI, including the CEOs of frontier model companies, but this isn't a post about that; it's very specifically a post about the workaday applications of LLMs as they exist today. (I don't think AGI will ever exist and don't care about it either way.)

  • snek_case 8 hours ago

    I've been reviewing papers for NeurIPS and I can tell you that many of the submissions are using various strategies to string together LLM calls for various purposes.

    It tends to work better when you give the LLMs some specific narrow subtask to do rather than expecting them to be in the driver's seat.

  • m82labs 8 hours ago

    So, for people building real things today, are you saying that instead of stringing prompts together with logic we should just sit on our hands for a year and wait for the models to catch up to the agent paradigm?

    • ldjkfkdsjnv 8 hours ago

      If you are in a competitive market, you will lose with this approach.

  • mccoyb 8 hours ago

    What do "better models" and "progress in ai" mean to you? Without more information, it's impossible to respond sincerely or precisely.

  • malfist 7 hours ago

    People have been saying "solved by better models" for 5 years now. Still waiting on it.

  • imhoguy 3 hours ago

    I think that world will be abandoned by most of the sane people. Who personally loves AI output? It is a next-level enshittification engine; see the example in the article - spam...cough...salesbot.