• Hacker News
  • raised_hand 3 hours

    Why K6? Is there a way I could run it without

  • denysvitali 8 hours

    FWIW, a "cheaper" version of this is triggering Claude via GitHub Actions and `@claude`-ing your agents that way. If you run your CI on Kubernetes (ARC), it sounds pretty much the same.
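    For reference, the GitHub Actions route looks roughly like this. A sketch using `anthropics/claude-code-action`; the trigger filter, permissions, and secret name here are assumptions, not a definitive setup:

    ```yaml
    # Sketch: respond when someone @claude's an issue or PR comment.
    name: claude
    on:
      issue_comment:
        types: [created]
    jobs:
      claude:
        # Only react to comments mentioning @claude (filter is an assumption)
        if: contains(github.event.comment.body, '@claude')
        runs-on: ubuntu-latest   # or your ARC runner scale set on Kubernetes
        permissions:
          contents: write
          pull-requests: write
          issues: write
        steps:
          - uses: actions/checkout@v4
          - uses: anthropics/claude-code-action@v1
            with:
              anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    ```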

  • verdverm 3 hours

    I love k8s, but having it as a requirement for my agent setup is a non-starter. Kubernetes is one way to run this, not the centerpiece.

  • abybaddi009 6 hours

    Does this support skills and MCP?

    jawiggins 4 hours

    Yup. MCP can be configured at the repo level. At task execution time, enabled MCP servers are written as a .mcp.json file into the agent's worktree. Enabled skills are written as .claude/commands/{name}.md files in the worktree, making them available as slash commands to the agent.
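    For anyone curious what that looks like on disk: the worktree file follows the standard Claude Code `.mcp.json` shape. The server entry below is purely illustrative:

    ```json
    {
      "mcpServers": {
        "github": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-github"],
          "env": { "GITHUB_TOKEN": "..." }
        }
      }
    }
    ```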

  • MrDarcy 10 hours

    Looks cool, congrats on the launch. Is there any sandbox isolation from the k8s platform layer? Wondering if this is suitable for multiple tenants or customers.

    jawiggins 10 hours

    Oh good question, I haven't thought deeply about this.

    Right now nothing special happens, so claude/codex can access their normal tools and make web calls. I suppose that also means they could figure out they're running in a k8s pod and do service discovery and start calling things.

    What kind of features would you be interested in seeing around this? Maybe a toggle to disable internet connections or other connections outside of the container?

    nevon 2 hours

    Network policies controlling egress would be one thing. I haven't seen how you make secrets available to the agent, but I would imagine you would need to proxy calls through a mitm proxy to replace tokens with real secrets, or some other way to make sure the agent cannot access the secrets themselves. Specifically for an agent that works with code, I could imagine being able to run docker-in-docker will probably be requested at some point, which means you'll need gvisor or something.
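    On the egress point: a default-deny NetworkPolicy scoped to the agent pods is the usual starting point. A minimal sketch (the pod label is hypothetical, and this only works with a CNI that enforces NetworkPolicy):

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: agent-deny-egress
    spec:
      podSelector:
        matchLabels:
          app: optio-agent   # hypothetical label for the agent pods
      policyTypes: [Egress]
      egress:
        # Allow DNS only; add explicit allowances for the git remote,
        # model API endpoints, etc. Everything else is dropped.
        - to: []
          ports:
            - protocol: UDP
              port: 53
    ```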

  • pianopatrick 3 hours

    I wonder, based on your experience, how hard would it be to improve your system to have an AI agent review the software and suggest tickets?

    Like, can an AI agent use a browser, attempt to use the software, find bugs and create a ticket? Can an AI agent use a browser, try to use the software and suggest new features?

    mlsu 2 hours

    perhaps we can give the AI a bit of money, make it the customer, then we can all safely get off the computer and go outside :)

    ramon156 2 hours

    I think it's more important to pin down where a human must be in order for this not to become a mess. Or have we skipped that step entirely?

    smokeyfish 3 hours

    Datadog have a feature like that.

  • conception 9 hours

    What’s the most complicated, finished project you’ve done with this?

    jawiggins 8 hours

    Recently I used it to finish up my re-implementation of curl/libcurl in Rust (https://news.ycombinator.com/item?id=47490735). At first I tried having a single Claude Code session run in an iterative loop, but eventually I found it was way too slow.

    I started tasking subagents with each remaining chunk of work, and then found I was really just recreating a normal sprint tasking cycle, except subagents completed the tasks with the unit tests as exit criteria. So optio came to mind: I asked an agent to run the test suite, see what was failing, and make tickets for each group of remaining failures. Then I used optio to manage instances of agents working on and closing out each ticket.

  • hmokiguess 9 hours

    the misaligned columns in the Claude-made ASCII diagrams in the README really throw me off, why not fix them?

    | | | |

    jawiggins 8 hours

    Should be fixed now :)

  • naultic 7 hours

    I'm working on something a little similar, though mine's more of a dev tool vs process automation, but I love where yours is headed. The biggest issue I've run into is handling retries with agents. My current solution is to have them set checkpoints so they can revert easily: when they can't make an edit or can't get a test passing, they just restart from the earlier state. Problem is this burns lots of tokens on retries. How did you handle this in your app?

    jawiggins 7 hours

    Generally I've found agents are capable of self correcting as long as they can bash up against a guardrail and see the errors. So in optio the agent is resumed and told to fix any CI failures or fix review feedback.

  • QubridAI 9 hours

    [flagged]

    knollimar 9 hours

    I don't want to accuse you of being an LLM but geez this sounds like satire

    weird-eye-issue 8 hours

    It's AI.

  • antihero 10 hours

    And what stops it making total garbage that wrecks your codebase?

    upupupandaway 10 hours

    Ticket -> PR -> Deployment -> Incident

    jawiggins 10 hours

    There are a few things:

    a) you can create CI/build checks that run in GitHub, and the agent will make sure they pass before merging anything

    b) you can configure a review agent with any prompt you'd like to make sure any specific rules you have are followed

    c) you can disable all the auto-merge settings and review all the agent code yourself if you'd like.
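    Concretely, (a) is just an ordinary required status check. A minimal illustrative workflow (job name and test command are placeholders), which you'd then mark as required in branch protection so nothing merges red:

    ```yaml
    name: ci
    on:
      pull_request:
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Placeholder: whatever your real test entrypoint is
          - run: make test
    ```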

    kristjansson 8 hours

    > to make sure

    you've really got to be careful with absolute language like this in reference to LLMs. A review agent provides no guarantees whatsoever, just shifts the distribution of acceptable responses, hopefully in a direction the user prefers.

    jawiggins 8 hours

    Fair, it's semantic enforcement rather than a hard guarantee. I think current AI agents are good enough that if you tell one, "Review this PR and request changes anytime a variable name is a color", it will do a pretty good job. But for complex rules I can still see them falling short.

    SR2Z 6 hours

    I mean, having unit tests and not allowing PRs in unless they all pass is pretty easy (or requiring human review to remove a test!).

    A software engineer takes a spec which "shifts the distribution of acceptable responses" for their output. If they're 100% accurate (snort), how good does an LLM have to be for you to accept its review as reasonable?

    59nadir 3 hours

    We've seen public examples of LLMs literally disabling or removing tests in order to pass. I'm not sure that having tests and asking LLMs not to merge before they pass being "easy" matters much when the failure modes here are so plentiful and broad.

    ElFitz 2 hours

    My favourite so far was Claude "fixing" deployment checks with `continue-on-error: true`
