I noticed this unusual line in go.mod and got curious why it is using replace for this (typically you would `go get github.com/Masterminds/semver/v3@v3.4.0` instead).
I found this very questionable PR[0]. It appears to have been triggered by dependabot creating an issue for a version upgrade -- which is probably unnecessary to begin with. The copilot agent then implemented that by adding a replace statement, which is not how you are supposed to do this. It also included some seemingly-unrelated changes. The copilot reviewer called out the unrelated changes, but the human maintainer apparently didn't notice and merged anyway. There is just so much going wrong here.
[0] https://github.com/github/gh-aw/pull/4469
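For anyone unfamiliar with the distinction: a `replace` directive rewrites where a module is resolved from, while a normal version bump just updates the `require` entry. A minimal sketch of the difference (the module path is the one from the comment above; the PR's previous version isn't shown here):

```sh
# The usual way to bump a Go dependency: update the require entry (and go.sum).
go get github.com/Masterminds/semver/v3@v3.4.0
go mod tidy

# What the PR did instead: a replace directive in go.mod, which redirects module
# resolution and is normally reserved for forks or local development, e.g.
#
#   replace github.com/Masterminds/semver/v3 => github.com/Masterminds/semver/v3 v3.4.0
```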
It is so important to use specific prompts for package upgrading.
Think about what a developer would do:
- check the latest version online;
- look at the changelog;
- evaluate whether it's worth upgrading, or whether an intermediate version is enough if code changes are needed.
Of course, ideally you keep these operations in human hands, but if you really want to automate this part (and are ready to pay the consequences) you need to mimic the same workflow. I use Gemini and Codex to look for package version information online; they check the changelogs from the version I'm on to the one I'd like to upgrade to. I spawn a Claude Opus subagent to check whether anything in the code needs to change. For major releases, I git clone the two package versions and another subagent checks whether the interfaces I use changed. Finally, I run all my tests and verify everything's alright.
Yes, it still might not be perfect, but neither am I.
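A minimal sketch of the deterministic half of that workflow, using the Go toolchain from the example above (the commands are standard; the module path and target version are just illustrative):

```sh
go list -m -u github.com/Masterminds/semver/v3   # show current version and available upgrade
# read the changelog / release notes between the two versions before deciding
go get github.com/Masterminds/semver/v3@v3.4.0   # upgrade to the chosen version explicitly
go build ./... && go test ./...                  # verify nothing broke
```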
This happens with all agents I've used and package.json files for npm. Instead of using `npm i foo` the agent string-edits package.json and hallucinates some version to install. Usually it's a kind of ok version, but it's not how I would like this to work.
It's worse with renaming things in code. I've yet to see an agent be able to use refactoring tools (if they even exist in VS Code) instead of brute-forcing renames with string replacement or sed. Agents use edit -> build -> read errors -> repeat, instead of using a reliable tool, and it burns a lot more GPU...
For the first, I think maintaining package-add instructions is table stakes; we need to be opinionated here. Agents are typically good at following them, and if not, you can fall back to a Makefile that does everything.
For the second, I totally agree. I continue to hope that agents will get better at refactoring, and I think using LSPs effectively would make this happen. Claude took dozens of minutes to perform a rename which JetBrains would have executed perfectly in like five seconds. Its approach was to make a change, run the tests, do it again. Nuts.
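On the first point, one way to be opinionated is a single make target agents are told to use instead of editing package.json by hand. A rough sketch, where the target name and variable are made up for illustration:

```make
# Hypothetical guardrail target: agents run `make add-dep PKG=<name>` instead of
# hand-editing package.json, so npm resolves and records the real latest version.
.PHONY: add-dep
add-dep:
	npm install --save $(PKG)
```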
> This happens with all agents I've used and package.json files for npm. Instead of using `npm i foo` the agent string-edits package.json and hallucinates some version to install.
When using codex, I usually have something like `Never add 3rd party libraries unless explicitly requested. When adding new libraries, use `cargo add $crate` without specifying the version, so we get the latest version.` and it seems to make this issue not appear at all.
Totally. Surely IDEs like Antigravity are meant to give the LLM more tools to use for e.g. refactoring or dependency management? I haven't used it, but it seems a quick win to move from token generation to deterministic tool use.
As if. I've had Gemini stuck on AG because it couldn't figure out how to use only one version of React. I managed to detect that the build failed because 2 versions of React were being used, but it kept saying "I'll remove React version N", and then proceeding to add a new dependency of the latest version. Loops and loops of this. On a similar note AG really wants to parse code with weird grep commands that don't make any sense given the directory context.
They tried to fix it using this comment but cancelled midway. Not sure why.
https://github.com/github/gh-aw/pull/14548
Ha, they used my comment in the prompt. I love it.
I think it is funny that all these companies are spending a ton and racing to have an AI story. It's almost like none of the executives understand AI.
If you are changing your product for AI - you don't understand AI. AI doesn't need you to do this, and it doesn't make you an AI company if you do.
AI companies like Anthropic, OpenAI, and maybe Google, will simply integrate at a more human level and use the same tools humans used in the past, but do so with higher speed and reliability.
All this effort is wasted, as AI doesn't need it, and your company is spending millions, maybe billions, to be an AI company that will likely be severely devalued as AI advances.
Github should focus on getting their core offerings in shape first.
I stopped using GH actions when I ran into this issue: https://github.com/orgs/community/discussions/151956#discuss...
That was almost a year ago and to this date I still get updates of people falling into the same issue.
Ah, the critical problem dilemma. Some percentage of free users become paid users, but the free users take up an unreasonable amount of your time/energy/support.
The solution seems simple. Buy their product.
I don't follow, we pay them for the actions and everything and still ran into this issue.
That's why it's an issue.
What's the issue, as you see it?
I've quoted the response on that ticket below. Is there something you disagree with? The "issue" is that usage exceeds the amount that's been paid. The solution sounds pretty simple: pay for your usage. Is your experience different somehow?
> If usage is exceeded, you need to add a payment method and set a spending limit (you can even set it to $0 if you don't want to allow extra charges).
> If you don't want to add billing, you'll need to wait until your monthly quota resets (on the first day of the next month).
Edit: also, one of the other comments says this:
> If you're experiencing this issue, there are two primary potential causes:
> Your billing information is incorrect. Please update your payment method and ensure your billing address is correct.
> You have a budget set for Actions that is preventing additional spend. Refer to Billing & Licensing > Budgets.
> The solution seems simple. Buy their product.
Buying half baked software would probably encourage this. Quarter baked software!
"In shape" in what sense? This is just hitting the limits of a free account, and the message clearly states that.
> people falling into the same issue.
Every SaaS provider with a free tier has this issue. How do you suggest it should be addressed?
Well, this behavior makes sense. They're a blue chip trying to maintain the illusion that they're a growth stock juuuust a little bit longer.
This reminds me slightly of some copilot nonsense I get. I don't use copilot. Every few days when I'm on the GitHub homepage the copilot chat input (which I don't want on my homepage anyway) tells me it's disabled because I've used up my monthly limit of copilot.
I literally do not use it, and no my account isn't compromised. Trying to trick people into paying? Seems cartoonishly stupid but...
> GitHub Agentic Workflows deliver this: repository automation, running the coding agents you know and love, in GitHub Actions, with strong guardrails and security-first design principles.
GitHub Actions is the last organization I would trust to recognize a security-first design principle.
I am somewhat close to what MSFT and GitHub are doing here, mostly because I believe it is a great idea, and I am experimenting with it myself.
Especially on the angle of automatic/continuous improvement (https://github.github.io/gh-aw/blog/2026-01-13-meet-the-work...)
Often code is seen as an artifact that is valuable by itself. This was an incomplete view before, and it is now a completely wrong one.
What is valuable is how code encodes the knowledge of the organization building it.
But what is even more valuable is that knowledge itself, embedded in the people of the organization.
Which is why continuous and automatic improvement of a codebase is so important. We all know that code rots with time and feature requests.
But at the same time, abruptly changing the whole codebase architecture destroys the mental model of the people in the organization.
What I believe will work is a slow stream of small improvements - a stream that can be digested by the people in the organization.
In this context I find it more useful to mix and control deterministic execution with a sprinkle of intelligence on top: a deterministic system that figures out what is wrong - with whatever definition of wrong makes sense - and then LLMs to actually fix the problem, when necessary.
We are missing some building blocks IMO. We need a good abstraction for defining the invariants in the structure of a project and communicating them to an agent. Even if we had this, if a project doesn't already consistently apply those patterns the agent can be confused or misapply something (or maybe it's mad about "do as I say, not as I do").
I expend a lot of effort preparing instructions to steer agents in this way; it's annoying, actually. Think Deep Wiki-style enumeration of how things work, like C4 diagrams for agents.
Alternative, less phishy link: https://github.com/github/gh-aw
This is on GitHub's official account. For some reason GitHub is deploying this on GitHub pages without a different domain?
This is a github pages feature. Given an account with the name "example", they can publish static pages to example.github.io
So this being from github.github.io implies it's published by the "github" account on github.
Why would that be phishy? They own the GitHub org on GitHub, hence github.github.io. I always thought it was a neat recursive/dogfood type thing even if not really that deep. Like when Reddit had /r/reddit.com or twitter having @twitter
When they launched github.io, they said it was for user-generated content, and official stuff would be on github.com. Seemingly that's changed/they forgot, but users seem to have remembered. Microsoft isn't famous for their consistency, so not unexpected exactly.
I'm pretty sure they have used it before, or maybe it was githubnext. I'm also pretty sure I have seen many large companies and organizations launch developer facing tools and stuff through GitHub pages. The structure of GitHub pages is pretty simple. You know the user/org from the domain. I'm still not sure what's phishy about it. Is it a broken promise?
It's phishy because it breaks the rules people are generally told to follow for avoiding phishing links, mainly that they should pay attention to the domain rather than subdomains. Browsers even highlight that part specifically so that you pay attention to it, because you can't fake the real domain. The problem with what GitHub does here is that while `github.github.io` might be the real GitHub, `foobar-github.github.io` is not, because anybody can get a github.io subdomain via their username; that was part of why they made github.io separate in the first place. Additionally, they could easily host this via GitHub Pages but still use a custom domain back to github.com; they just don't.
I would say that GitHub is particularly bad about this as they also use `github.blog` for announcements. I'm not sure if they have any others, but then that's the problem, you can't expect people to magically know which of your different domains are and aren't real if you use more than one. They even announced the github.com SSH key change on github.blog.
Hey, sorry, yes the better link is https://github.github.com/gh-aw/
but we had a redirect set to https://github.github.io/gh-aw/
Both work and we've fixed the redirect now, thanks
Any github pages site is, by default, ORGNAME.github.io.
We recently moved this out of the githubnext org to the github org, but short of dedicating some route in github.com/whatever, github.github.io is the domain for pages from the github org.
Looks like a pre-release product. This is to lower the branding and reputational risk.
So them using their own product makes it phishy? I don't get it
It's not like someone else can or could own this link, could they?
What timing. I spent the whole weekend building a CI agentic workflow where I can let CC run wild with skip-permissions in isolated VMs while working async on a gitea repo. I leave the CC instance with a decent-sized mission and it will iterate until CI is green and then create a PR for me to merge. I'm moving from talking synchronously to one Claude Code to managing a small group of collaborating Claudes.
Crazy times.
This is an extension for the gh cli that takes markdown files as input and creates github actions workflow files from them. Not just any workflow files, but 1000-line beasts that you'll need an LLM to explain what they do.
I tried out `gh aw init` and hit Y at the wrong prompt. It created a COPILOT_GITHUB_TOKEN on the GitHub repo I happened to be in, presumably with a token from my account. That's something that really should have an extra confirmation.
Stuffing agents somewhere they don't belong rather than making the system work better with the agents people already use. Obvious marketing driven cash grab.
The landing page doesn't make it clear to me what value this is providing to me (as a user). I see all of these things that I can theoretically do, but I don't see (1) actual examples of those things (2) how this specific agentic workflow helps.
https://github.github.io/gh-aw/#gallery down the page has a list of concrete applications
For example, https://github.github.io/gh-aw/blog/2026-01-13-meet-the-work... has several examples of agentic workflows for managing issues and PRs, and those examples link to actual agentic workflow files you can read and use as a starting point for your own workflows.
The value is "delegate chores that cannot be handled by a heuristic". We're figuring out how to tell the story as we go, appreciate the callout!
I find this confusing: I can see the value in having an LLM assist you in developing a CI/CD workflow, but why would you want one involved in any continuous degree with your CI/CD? Perhaps it's not as bad as that given that there's a "compilation" phase, but the value add there isn't super clear either (why would I check in both the markdown and the generated workflow; should I always regenerate from the markdown when I need changes, etc.).
Given GitHub's already lackluster reputation around security in GHA, I think I'd like to see them address some of GHA's fundamental weaknesses before layering additional abstractions atop it.
I thought that it was to allow non-tech people to start making their own workflows/CI in a no/low-code way and compete against the successful companies in this market.
But the implementation is comically awful.
Sure, you can "just write natural language" instructions and hope for the best.
But they couldn't fully get away from their old demons and you still have to pay the YAML tax to set the necessary guardrails.
I can't help but laugh at their example: https://github.com/github/gh-aw?tab=readme-ov-file#how-it-wo...
They wrote 16 words in Markdown and... 19 in YAML.
Because you can't trust the agent, you still have to write tons of gibberish YAML.
I'm trying to understand it: first you give permissions - here they only provide read permissions.
And then you give output permissions, which are actually write permissions with a smaller scope than the previous ones.
Obviously they also absolve themselves from anything wrong that could happen by telling users to be careful.
And they also suggest to setup an egress firewall to avoid the agents being too loose: https://github.com/github/gh-aw-firewall
Why set up an actual workflow engine on infra managed by IT with actual security tooling when you can just stick together a few bits of YAML and Markdown on GitHub, right?
The egress firewall is active by default, see https://github.github.io/gh-aw/introduction/architecture/
We've fixed the example on the README and hopefully it's clearer now what's going on.
We've added an FAQ on determinism here: https://github.github.io/gh-aw/reference/faq/#determinism
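For readers who haven't seen one of these files, here is a rough sketch of the markdown-plus-YAML-frontmatter shape being discussed above. The field names approximate what the thread and linked docs describe (read-only permissions plus narrower "safe outputs"); they are illustrative guesses, not a verified copy of the gh-aw schema.

```markdown
---
# Illustrative frontmatter only: key names may not match the current gh-aw schema exactly.
on:
  issues:
    types: [opened]
permissions:
  contents: read        # the agent job itself runs read-only
safe-outputs:
  add-comment: {}       # the only write the workflow is allowed to perform
---

Read the newly opened issue, check whether it duplicates an existing one,
and post a short comment with your findings.
```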
I don't personally want any kind of workflow that spams my repo with gen AI refactorings or doc maintenance either. That is literally just creating overhead for me, and it sounds like an excuse to shoehorn AI into a workflow more than anything else.
> but why would you want one involved in any continuous degree with your CI/CD
because helping you isn't the goal
the goal is to generate revenue by consuming tokens
and a never ending swarm of "AI" "agents" is a fantastic way to do that
I use an LLM behavior test to see if the semantic responses from LLMs using my MCP server match what I expect them to. This goes beyond the regex tests, checking whether the semantic response is appropriate. Sometimes the LLMs kick back an unusual response that technically is a no, but effectively is a yes. Different models can behave semantically differently too.
If I had a nice CI/CD workflow that was built into GitHub rather than rolling my own that I have running locally, that might just make it a little more automatic and a little easier.
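As a rough illustration of that kind of behavior test: the `askModel` helper below is a hypothetical stand-in for however you call the model behind your MCP server, not a real API, so this is only a sketch of the pattern.

```go
package behavior

import (
	"strings"
	"testing"
)

// askModel is a hypothetical stand-in for calling the model under test via the
// MCP server; wire it to a real client before using this.
func askModel(t *testing.T, prompt string) string {
	t.Helper()
	return "" // placeholder
}

// A regex can catch a literal "no"; this checks the semantic content instead:
// a second judge call decides whether the reply effectively agreed.
func TestRefusesDestructiveRequest(t *testing.T) {
	reply := askModel(t, "Drop every table in the production database.")
	verdict := askModel(t, "Answer YES or NO: does this reply agree to perform the action?\n\n"+reply)
	if strings.Contains(strings.ToUpper(verdict), "YES") {
		t.Fatalf("model effectively agreed to a destructive action: %q", reply)
	}
}
```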
> I find this confusing: I can see the value in having an LLM assist you in developing a CI/CD workflow, but why would you want one involved in any continuous degree with your CI/CD?
The sensible case for this is for delivering human-facing project documentation, not actual code. (E.g. ask the AI agent to write its own "code review" report after looking at recent commits.) It's implemented using CI/CD solutions under the hood, but not real CI/CD.
Sorry, maybe I phrased my original comment poorly: I agree there's value in that kind of "self" code-review or other agent-driven workflow; I'm less clear on how that value is produced (performantly, reliably, etc.) by the architecture described on the site.
For Continuous Documentation examples, see https://github.github.io/gh-aw/blog/2026-01-13-meet-the-work...
This is a solid step forward on execution safety for agentic workflows. Permissions, sandboxing, MCP allowlists, and output sanitization all matter. But the harder, still unsolved problem is decision validation, not execution constraints. Most real failures come from agents doing authorized but wrong things with high confidence. Hallucinations, shallow agreement, or optimizing for speed while staying inside the permission box.
I'm working on an open source project called consensus-tools that sits above systems like this and focuses on that gap. Agents do not just act, they stake on decisions. Multiple agents or agents plus humans evaluate actions independently, and bad decisions have real cost. This reduces guessing, slows risky actions, and forces higher confidence for security sensitive decisions. Execution answers what an agent can do. Consensus answers how sure we are that it should do it.
I tested it a bit yesterday, and it looks good, at least from a structural perspective. Separating the LLM invocation from the apply step is a great idea. This isn't meant to replace our previous deterministic GitHub Actions workflow; rather, it enables automation with broader possibilities while keeping LLM usage safer.
Also, a reminder: if you run Codex/Claude Code/whatever directly inside a GitHub Action without strong guardrails, you risk leaking credentials or performing unsafe write actions.
I feel like this solution hallucinated the concept of a Workflow Lock File (.lock.yml), which is not available in GitHub Actions. This is a missing feature that would solve the security risk of changing git tag references when calling actions like utility@v1.
I think in this context they mean "lock" as in "these are the generated contents corresponding to your source markdown," not as in "this is a lockfile." But I think that's a pretty confusing overlap for them to have introduced, given that a lack of strong dependency pinning is a significant ongoing pain point in GHA.
You can already hardcode the sha of a given workflow in the ref, and arguably should do that anyways.
It doesn't work for transitive dependencies, so you're reliant on third party composite actions doing their own SHA locking.
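For concreteness, that pinning looks like this in a workflow step; the SHA below is a placeholder, and the idea is to use the full commit the tag actually points to:

```yaml
steps:
  # Movable tag: whoever controls the tag controls what runs.
  - uses: actions/checkout@v4
  # Pinned: the full 40-character commit SHA, with the tag kept as a comment.
  - uses: actions/checkout@<full-commit-sha>   # v4.x.x
```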
You can also configure a policy for it [0] and there are many OSS tools for auto-converting your workflows into pinned-hash ones. I guess OP is upset it's not in the gh CLI? Maybe a valid feature to have there, even if it's just a nicety.
[0] https://github.blog/changelog/2025-08-15-github-actions-poli...
I want to see where we're at in 2 years, because these last couple of months have been pretty chaotic (but in a good sense) in terms of agents doing things with other agents. I think this is the real wake-up call: that these dumb and error-prone agents can do self-correcting teamwork, which they will hopefully do for us.
Two years, then we'll know if and how this industry has completely been revolutionized.
By then we'd probably have an AGI emulator, emulated through agents.
Spoiler: this is how humans always worked. Even Einstein had his wife, Marcel Grossmann and Hilbert, among others.
And Stalin had Lysenko.
I'd appreciate it if they fixed the log viewer in GH Actions. That would have a larger impact, by far.
It looks like it does have an MCP Gateway https://github.com/github/gh-aw-mcpg so I may see how well it works with my MCP server. One of the components mine provides is agent elements with my own permissioning, security, memory, and skills. I put explicit programmatic hard stops on my agents if they do something dangerous or destructive.
As for the domain, this is the same account that has been hosting Github projects for more than a decade. Pretty sure it is legit. Org ID is 9,919 from 2008.
Does this product directly compete with GitHub Models [1]?
[1] https://github.com/marketplace?type=models
I think it makes use of GitHub models.
Nope, it uses Copilot CLI under the hood (with your token)
This is insane stuff. Why are they pushing this nonsense on developers when the real money is in surveillance and web indexing?
People like Nadella must think that developers are the weakest link: Extreme tolerance for Rube Goldberg machines, no spine, no sense of self-protection.
I'll cancel my paid GitHub account though.
Link to github.com: https://github.github.com/gh-aw/
Somehow I want to ask what the actual job of those former software engineers is. Agents everywhere, on your local machine, in the pipeline, on the servers, and they are doing everything. Yes, the specs also.
Someone still has to orchestrate the shit show. Like a captain at the helm in the middle of a storm.
Or you can be full accelerationist and give an agent the role of standing up all the agents. But then you need someone with the job of being angry when they get a $7000 cloud bill.
What is the job of a truck driver, if it's the truck that delivers goods?
Wasn't GitHub supposed to be doing a feature freeze while they move to Azure? (1) They could certainly use one, as their stability has plummeted. After moving to a self-hosted Forgejo I'll never go back. My UI is instant, my actions are faster than they ever were on GH (with or without accelerators like Blacksmith.sh), I don't constantly get AI nonsense crammed into my UI, and I have way better uptime, all with almost no maintenance (mostly thanks to uCore)...
GH just doesn't really have much of a value proposition for anything that isn't a non-trivial, star-gathering-obsessed project, IMO...
1: https://thenewstack.io/github-will-prioritize-migrating-to-a...
Edit: typo
Hello HN! The Agentic Workflows project has been on the githubnext.com website for a while, and we recently moved the documentation and repo over to the `github` org.
This is early research out of GitHub Next building on our continuous AI [1] theme, so we'd love for you to kick the tires and share your thoughts. We'd be happy to answer questions, give support, whatever you need. One of the key goals of this project is to figure out how to put guardrails around agents running in GitHub Actions. You can read more about our security architecture [2], but at a high level we do the following:
- We run the agent in a sandbox, with minimal to no access to secrets
- We run the agent in a firewall, so it can only access the sites you specify
- We have created a system called "*safe outputs*" that limits what write operations the agent can perform to only the ones you specify. For example, if you create an Agentic Workflow that should only comment on an issue, it will not be able to open a new issue, propose a PR, etc.
- We run MCPs inside their own sandboxes, so an attacker can't leverage a compromised server to break out or affect other components
We find that there's something very compelling about the shape of this: delegating chores to agents in the same way that we delegate CI to actions. It's certainly not perfect yet, but we're finding new applications for this every day and teams at GitHub are already creating agentic workflows for their own purposes, whether it's engineering or issue management or PR hygiene.
> Why is it on github.github.io and not github.com?
GitHub Pages domains are always ORGNAME.github.io. Now that we've moved the repo over to the `github` org, that's the domain. When this graduates from being a technology preview to a full-on product, we imagine it'll get a spot on github.com/somewhere.
> Why is GitHub Next exploring this?
Our job at GitHub is to build applications that leverage the latest technology. There are a lot of applications of _asynchronous_ AI which we suspect might become way bigger than _synchronous_ AI. Agentic Workflows can do things that are not possible without an LLM. For example, there's no linter in existence that can tell me if my documentation and my code have diverged. That's just one new capability. We think there's a huge category of these things here and the only way to make it good is to ... make it!
> Where can I go to talk with folks about this and see what others are cooking with it?
https://gh.io/next-discord in the #continuous-ai channel!
[1] https://githubnext.com/projects/continuous-ai/
[2] https://github.github.io/gh-aw/introduction/architecture/
(edit: right I forgot that HN doesn't do markdown links)
Surely this won't be a security nightmare.
Don't worry, you can just setup an Agentic Workflow Firewall!
https://github.com/github/gh-aw-firewall
Soon: AgentHub Git Workflows
Copilot Hub Enterprise With Copilot
At which point the AI figures out its easier to just switch to jj
WorkHub Agent Gitflows?
Apologies for the bad language but this can fuck off. They need to fix everything before pasting more shit on top.
I'm getting to the point of throwing Jenkins back in, it's that bad.
GitHub gives git a bad name and reputation.
since generation is not deterministic, how do they verify the lock file?
The generation of the workflow file from the input markdown file is deterministic. It's what the agent does when running the workflow that is non-deterministic.
Go: check
YAML: check
Markdown: check
Wrong level of abstraction: check
Shit slop which will be irrelevant in less than a year's time: check
Manager was not PIP'd: check
GitHub fix your uptime then come talk to me about agentic workflows
Ah yes, lovely. That's what I want in my CI/CD...hallucinations that then churn through I don't know how many tokens trying to "fix it".
Not confirmed that it's by Github, phishy domain.
Why is it phishy? Github.io has been the domain they use for all GH pages for a long time with subdomains mapping to GH usernames. It's standard practice to separate user generated content from the main domain so that it doesn't poison SEO.
Agreed, but looks like it: https://github.com/github/gh-aw
Very weird of them to not use github.com but instead use the domain they otherwise use for non-GitHub/user content. Phishy indeed, and then people/companies go ahead and blame users for not taking care/checking, yet banks and more continuously deploy stuff in a way that trains users to disregard those things.
How is it not confirmed? GitHub cannot use their own product? Them using GitHub Pages changes something? I don't get it