OpenAI Grove

(openai.com)

142 points | by manveerc 16 hours ago

121 comments

  • Zagreus2142 14 hours ago

    Can someone give the counter argument to my initial cynical read of this? That read being: OpenAI has more money than it can invest productively within its own company and is trying to cast a net to find new product ideas via an incubator? I can't imagine Softbank or Microsoft is happy about their money being funneled into something like this, and it implies they have run out of ideas internally. But I think I'm probably being too reflexively cynical

    • AnEro 14 hours ago

      I think that MIT study finding 95% of internal AI projects fail has scared a lot of corporations off risking time on it. I think they also see they are hitting a limit of profitable intelligence from their services (with the growth in intelligence the past 6–8 months being more realistic, not unbelievable like in the past few years).

      I think everyone is starting to see this as a middleman problem to solve. Look at ERP systems, for instance: when they popped up, the industry had some growing pains (or even early Windows/Microsoft and its 'developers, developers, developers' target audience).

      I think OpenAI sees it will take a lot of third-party devs to take what OpenAI has and run with it. So they want to build a good developer and startup network to make sure there is a good, solid ecosystem of AI options that corporations and people can use.

      • Workaccount2 14 hours ago

        The MIT study found 90% of workers were regularly using LLMs.

        The gap was that workers were using their own implementation instead of the company's implementation.

        • keeda 10 hours ago

          The MIT study as released also does not really provide any support for the 95% failure rate claim. Until we have more details, we really don't know where that number came from:

          https://www.linkedin.com/feed/update/urn:li:activity:7365026...

        • AnEro 13 hours ago

          Yea, from what I understand 'chats' and AI coding are something they already dominate/lead the market on and are a good/okay product. It's the other use cases they haven't delivered on, in terms of other companies using them as a platform to deliver AI apps, which I would imagine was a huge vertical in their pitches to investors and internal plans.

          These third-party apps drive huge token usage with agentic patterns. So losing out on them, and being forced to make more internal products tuned to specific use cases, is not something they want to build out or explore

    • r0m4n0 9 hours ago

      I think it's more like OpenAI has the name to throw around and a lot of credibility, but no products that are profitable. They are burning cash and need to show a curve that reaches profitability. Getting 15 people with 15 ideas they can throw their weight behind is worth a lot

      • chrishare 6 hours ago

        Yeah, more or less. Being in the application space as well as the inference space hedges a variety of risks: that inference margins will get squeezed, that competition will continue to increase, etc.

    • rich_sasha 13 hours ago

      Without putting my weight behind them, here are some counterarguments:

      - OpenAI needs talent, and it's generally hard to find. Money will buy you smart PhDs who want to be on the conveyor belt, but not people who want to be at the centre of a project of their own. This at least puts them in the orbit of OpenAI - some will fly away, some will set up something to be acquihired, some will just give up and try to join OpenAI anyway

      - the amount of cash they will put into this is likely minuscule compared to their mammoth raises. It doesn't fundamentally change their funding needs

      - OpenAI's biggest danger is that someone out there finds a better way to do AI. Right now they have a moat made of cash - to replicate them, you generally need a lot of hardware and cash for the electricity bill. Remember the blind panic when DeepSeek came out? So, anything they can do to stop that sprouting elsewhere is worth the money. Sprouting within OpenAI would be a nice-to-have.

    • xpe 5 hours ago

      > I can't imagine Softbank or Microsoft is happy about their money being funneled into something like this

      Imagining one negative spin doesn’t an imagination make. Imagine harder.

    • ozgung 14 hours ago

      I don't think it's about money; they don't invest anything. They gather data about "technical talent" working on AI-related ideas. They will connect with 15 of these people to see if they can build it together.

      • LordDragonfang 14 hours ago

        It seems almost like... an internship program for would-be AI founders?

        My guess is this is as much about talent acquisition as it is about talent retention. Give the bored, overpaid top talent outside problems to mentor for/collaborate on that will still have strong ties to OpenAI, so they don't have the urge to just quit and start such companies on their own.

    • tern 9 hours ago

      It's possible that a single senior employee just wanted to do this and it doesn't cost that much and their manager was like "sure"

      • albingroen 8 hours ago

        I really do want this to be the case

    • ks2048 14 hours ago

      > OpenAI has more money than it can invest productively

      I don't think there is any money given, except travel costs for the first and last week.

    • spott 7 hours ago

      I mean, how much money are they throwing at this? I doubt it approaches anything close to a percent of the cash they have on hand.

  • albert_e 4 hours ago

    > pre-idea individuals

    First time I am hearing this term. It is a euphemism like pre-owned cars (instead of used cars).

    What does this mean? People who do not yet have any idea? Weird.

    • arthurofbabylon 4 hours ago

      Sadly, yes, a lot of people want to be entrepreneurs for prestige/wealth. In their imagination they skip ahead to a fantastical ending: being rich and respected.

      I find this disturbing. How can someone be useful to others without an idea of what that even means? How can one provide a novel offering without even caring about it? It's an expression of missing craft and bad taste. These aspirations are reactive, not generated by something beautiful (like kindness, or optimism).

      Fortunately it is not hopeless; aspiring entrepreneurs can find deeper motivation if they look for it.

      (I like to give the following advice: it is easier to first be useful to others and become rich than it is to be rich and then become useful to others. This almost certainly requires sufficient empathy and care to have a hypothesis and be "post-idea".)

      • mikert89 3 hours ago

        Entrepreneurship is the act of creation, a noble activity

        • MarcelOlsz 3 hours ago

          The irony of the term entrepreneur is anyone who calls themselves an entrepreneur isn't, and the ones that are, don't.

        • seydor 3 hours ago

          more an act of risk taking

    • MarcelOlsz 4 hours ago

      Drop the "Ideas." Just "Guy." It's cleaner.

    • Lerc 38 minutes ago

      I cannot imagine not having far more ideas than I could possibly ever do. Today I was describing one to my partner and she told me the only reason I shouldn't do it is that I have too many other things to do.

      The thing that makes me continually have ideas is the same thing that makes me not want to dedicate my life to implementing just one of them. It would be like picking a favourite child if I were producing offspring like a queen bee.

      I think there is value in the effort to develop something, and frequently implementing something well is worth as much as, and sometimes much more than, a simple proof of concept. Someone has to build the things; it should be the people who are good at that and who feel rewarded by a job done well more than a job done differently.

      I do think a lack of perspective on the lives other people lead can cause odd side effects. Some people keep their ideas secret, or overvalue an idea because it was the one they had. That is a perspective I find hard to relate to. Most of the creative people I know are much happier when someone knows about their creations. Ideas are like grains of sand, each one with its own details, and each can be evaluated in many different ways. A lot of intellectual property feels like watching a man jealously protect his grain of sand while standing on a beach.

      I believe that is why the intent of things like copyright is not to protect ideas themselves. You cannot copyright an idea, and as an ideas person (a rather horrid term) that feels appropriate. The thing you have built around the idea is the valuable thing you have contributed to the world. I think that is why items that are copyrightable are referred to as works. The value you bring comes from the work you did, not the idea you had; ideas just come to you (often at inconvenient times).

      Mass media causes a bit of an aberration because of this. What makes someone wealthy from a popular work is not proportional to the work done to produce it, or even to its quality. Works that can be easily reproduced and distributed receive a reward disproportionate to their quality. A median-quality work in many fields can receive next to no reward, while the most popular works receive a massive one. The mechanism of controlling supply to provide a reward for work ends up shaping a supply-demand curve that gives massive rewards to a very few and very little to the majority. There is still an element of merit to the successes: the popular things are popular for a reason, and some of them really are the best. The question is whether they would still have been the best if everyone who worked to create things were rewarded more linearly with quality; would that support enough development of ability and opportunity that the pool from which the best are selected becomes much larger?

      [this might have gone off topic, but obviously my brain has things that have to come out]

    • seydor 3 hours ago

      You can buy ideas the same way you can buy expensive cars and bags. Centuries ago some rich Europeans used to do that. Then we discovered 'merit'.

  • atleastoptimal 8 hours ago

    Almost every parent comment on this is negative. Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

    It seems that there is a constant motive on this forum to view any decision made by any big AI company at best with extreme cynicism and at worst with virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain of the moment.

    • PostOnce 7 hours ago

      People remember things and consistently behaving like an asshole gets you treated like an asshole.

      OpenAI had a lot of goodwill and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.

      • atleastoptimal 7 hours ago

        What are the worst things OpenAI has done

        • echelon 7 hours ago

          The number one worst thing they've done was when Sam tried to get the US government to regulate AI so only a handful of companies could pursue research. They wanted to protect their moat.

          What's even scarier is that if they actually had the direct line of sight to AGI that they had claimed, it would have resulted in many businesses and lines of work immediately being replaced by OpenAI. They knew this and they wanted it anyway.

          Thank god they failed. Our legislators had enough of a moment of clarity to take the wait and see approach.

          • g42gregory 6 hours ago

            It's actually worse than that.

            First, when they thought they had a big lead, OpenAI argued for AI regulations (targeting regulatory capture).

            Then, when that lead was erased by Anthropic and others, OpenAI argued against AI regulations (so that they could catch up, and presumably argue for regulations again).

          • atleastoptimal 6 hours ago

            Do you believe AI should not be regulated?

            Most regulations that have been suggested would put restrictions mostly on the largest, most powerful models, so they would likely affect OpenAI/Anthropic/Google before smaller upstarts.

            • moregrist 5 hours ago

              I think you can both think there's a need for some regulation and also want to avoid regulation that effectively locks out competition. When only one company is pushing for regulation, it's a good bet that they see this as a competitive advantage.

        • Lionga 7 hours ago

          Dude, they completely betrayed everything in their "mission". The irony of the name OpenAI for a closed, scammy, for-profit company cannot be lost on you.

          • atleastoptimal 7 hours ago

            They released a near-SOTA open-source model recently.

            Their approach is to make money via closed-source offerings so they can afford safety work and their open-source offerings. Ilya noted this near the beginning of the company. A company can't muster the capital needed to make SOTA models while giving everything away for free when its competitor is Google, a huge for-profit company.

            As per your claim that they are scammy, what about them is scammy?

            • chrishare 6 hours ago

              Their contribution to open source and open research is far behind other organisations like Meta and Mistral, as welcome as their recent model release is. Former safety researchers like Jan Leike commonly cite a lack of organisational focus on safety as a reason for leaving.

              Not sure specifically what the commenter is referring to re: scammy, but things like the Scarlett Johansson / Her voice imitation and copyright infringement come to mind for me.

            • saithound 5 hours ago

              Oh yeah, that reminds me: the company did research on how to train a model that games the metrics, allowing them to tick the open-source box with a seemingly good score while releasing something that serves no real purpose. [1] [2]

              GPT-OSS is not a near-state-of-the-art model: it is a model deliberately trained so that it appears great in evaluations but is unusable in practice, and it far underperforms the actual open models you can run with Ollama. That's scammy.

              [1] https://www.lesswrong.com/posts/pLC3bx77AckafHdkq/gpt-oss-is...

              [2] https://huggingface.co/openai/gpt-oss-20b/discussions/14

              • anthonyiscoding 5 hours ago

                That explains why gpt-oss wasn't working anywhere near as well for me as other similarly sized and smaller models. Gemma 3 27B and 12B, and Phi-4 (14B?), all significantly outperformed it when transforming unstructured data into structured data.

    • bitpush 8 hours ago

      > Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

      Isn't that a good thing? The comments here are neither sponsored nor endorsed by YC.

      • atleastoptimal 8 hours ago

        I'd expect to see more balance though, at least on the notion that people would be attracted to posting on a YC forum over other forums because they support or have an interest in YC.

        • bhhaskin 7 hours ago

          I think the majority of people don't care about YC. It just happens to be the most popular tech forum.

        • ncallaway 2 hours ago

          > posting on a YC forum over other forums due to them supporting or having an interest in YC.

          I've been posting here for over a decade, and I have absolutely no interest in YC in any way, other than a general strong negative sentiment towards the entire VC industry YC included.

          Lots of people come here for the forum, and leave the relationship with YC there.

        • MegaButts 7 hours ago

          Why do you assume there would be a balance? Maybe YC's reputation has just been going downhill for years. Also, OpenAI isn't part of YC. Sam Altman was fired from YC and it's pretty obvious what he learned from that was to cheat harder, not change his behavior.

          • tptacek 6 hours ago

            Sam Altman wasn't fired from YC.

            • MegaButts 6 hours ago

              The story I heard before the PR spin that came from Paul Graham later (where he tweeted that he never fired him and asked him to choose between YC and OpenAI) was that he was asked to resign. I don't have an official source, I heard this from multiple YC alumni. I don't know exactly what happened but based on what I've heard and actually having interacted with Sam Altman, it seems most likely to me he was asked to resign (which isn't technically being fired) because he does weird stuff. He claimed to be a chairman of YC which wasn't true, he barred other YC partners from running personal funds while he did it himself, and then all the further similar behaviors we've seen play out at OpenAI. Maybe you're right, but it seems to me he was "fired" and later there was some PR to smooth it over.

              https://archive.is/Vl3VR

              https://archive.is/2mzD7

              • tptacek 6 hours ago

                You don't know what exactly happened, but you stated confidently what happened.

                • MegaButts 6 hours ago

                  You're right about that and that's why I'm providing additional context.

    • AlecSchueler an hour ago

      > Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

      Because our views are our own and not reflective of the feelings of the company that hosts the forum?

    • Hadriel 8 hours ago

      Why do you assume that a forum run by X needs to or should support X? And why is it unwise - from what metrics do you measure wisdom?

    • dcreater 6 hours ago

      My takeaway is actually the opposite: major props to YC for allowing this free speech unfettered. I can't think of any other organization or country on the planet where such a free setup exists

      • Mistletoe an hour ago

        Unfettered? Have you ever seen how many posts disappear from being flagged for the most dubious reasons imaginable? Have you been on other sites on the internet? Hell, Reddit is more unfettered and that’s terrible.

    • cootsnuck 4 hours ago

      I would call it skepticism, not cynicism. And there is a long list of reasons that big tech and big AI companies are met with skepticism when they trot out nice sounding ideas that require everyone to just trust in their sincerity despite prior evidence.

    • makk 6 hours ago

      These guys are pursuing what they believe to be the biggest prize ever in the history of capitalism. Given that, viewing their decisions as a cynic, by default, seems like a rational place to start.

      • atleastoptimal 6 hours ago

        True, though it seems most people on HN think AGI is impossible and thus would consider OpenAI's quest a lost cause.

        • xpe 5 hours ago

          I don’t think one can validly draw any such conclusion.

    • QuadmasterXLII 8 hours ago

      because of the repeated rugpulling?

    • peishang 6 hours ago

      I don't want to be glib - but perhaps it is because our "context window lengths" extend back a bit further than yours?

      Big tech (not just AI companies) have been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.

      Regardless of where you stand on the concept of copyright law, it is an indisputable fact that in order for these companies to get to where they are today, they deliberately HOOVERED up terabytes of copyrighted materials without the consent or even knowledge of the original authors.

    • typon 7 hours ago

      When you call yourself "Open"AI and then turn around and backstab the entire open community, it's pretty hard to recover from that.

      • xpe 5 hours ago

        They undermined their not-for-profit mission by changing their governance structure. This changed their very DNA.

      • atleastoptimal 7 hours ago

        They released a near-SOTA open source model not too long ago

        • xpe 5 hours ago

          open weights != open source

    • theideaofcoffee 7 hours ago

      I’ll bite, but not in the way you’re expecting. I’ll turn the question back on you and ask why you think they need defending?

      Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.

      I’d do some self reflection and ask yourself why you need to carry water for them.

      • atleastoptimal 6 hours ago

        I support them because I like their products and find the work they've done interesting, and whether good or bad, extremely impactful and worth at least a neutral consideration.

        I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.

    • dyauspitr 7 hours ago

      People here are directly in the line of fire for their jobs. It’s not surprising.

      • chrishare 6 hours ago

        True, but there are many reasons besides. Meta and Anthropic attract less criticism for a reason.

    • mrcwinn 6 hours ago

      This. I’ve been on HN for a while. I am barely hanging on to this community. It is near constant negativity and the questioning of every potential motive.

      Skepticism is healthy. Cynicism is exhausting.

      Thank you for posting this.

      • dcreater 6 hours ago

        In the current echo chamber and unprecedented hype, I'll take cynicism over hollow positivity and sycophancy

  • andsoitis 9 hours ago

    OpenAI appears to lack clear product vision.

    This feels like a program to see what sticks.

    • arcticfox 9 hours ago

      "Pre-idea stage" support is wild to me

      • stavros 7 hours ago

        We don't invest in ideas, we invest in founders. That's why OpenAI partnered with Y Combinator to bring you investments at the pre-founder stage.

        We'll invest in your baby even before it's born! Simply accept our $10,000 now, and we'll own 30% of what your child makes in its lifetime. The womb is a hostile environment where the fetus needs to fight for survival, and a baby that actually manages to be born has the kind of can-do attitude and fierce determination and grit we're looking for in a founder.

    • 0xCMP 8 hours ago

      Feels like the next logical move to me: they need to build and grow the demand for their product and API.

      What better than companies whose central purpose is putting their API to use creatively? Rather than just waiting and hoping every F500 can implement AI improvements that aren't cut during budget crunches.

      • cootsnuck 4 hours ago

        ...no one thinks it's weird for the supposedly most transformational digital technology ever invented to need manufactured demand?? None of us think it's strange that a startup currently vying for a half a trillion dollar valuation is looking to "pre-idea founders" to help them find PMF??

        Would this have been viewed with skepticism if any other startup from like 5+ years ago selling an API did this? If so, then how is it not even worse when a startup that is supposed to be providing access to what is pushed as a technical marvel of a panacea or something does it?

        Sometimes I feel like I'm taking crazy pills...

        I literally help companies implement AI systems. So I'm not denying there being any value...just...I don't understand how we can say with a straight face that they need to "build and grow demand for their product and API" while the same company was just reported on inking a $300B deal with Oracle for infra...like come on...the demand isn't there yet?!

    • reaperducer 9 hours ago

      This feels like a program to see what sticks.

      Isn't that how we got (and eventually lost) most Google products?

      • andsoitis 9 hours ago

        There’s a difference between product ideas rooted in compelling hypotheses on the one hand, and random ideas you throw against a wall to see what sticks on the other.

        I suspect, but could be wrong, that in OpenAI’s case it is because they believed they would reach AGI imminently and then "all problems are solved", in other words the ultimate product. However, since that isn’t going to happen, they now have to think of more concrete products that are hard to copy and that people are willing to pay for.

  • minimaxir 15 hours ago

    Sam clearly misses Y Combinator.

    • Insanity 14 hours ago

      Yeah, my thoughts were along the same lines. Seems like they want to be another Y Combinator, but more focused on AI. (Although TBF, I guess AI would also get the most traction at Y Combinator these days, given the hype wave.)

    • bananapub 14 hours ago

      Did we ever find out why it is he doesn’t work there anymore?

      • reducesuffering 13 hours ago

        Was forced to choose between OpenAI and YC by Paul Graham and Jessica. Sama chose OpenAI.

        https://x.com/paulg/status/1796107666265108940

        • moc_was_wronged 7 hours ago

          The really odd thing was when he got fired for like 3 days in 2023 because he refused to let Y Combinator have preferential representation of its startups in OpenAI models.

      • system2 13 hours ago

        Clearly, dealing with OpenAI doesn't leave any room for fun stuff like YC. Just a hunch.

    • moralestapia 14 hours ago

      Indeed.

      Exactly what I read between the lines on this.

    • dyauspitr 7 hours ago

      I randomly saw him being announced as a big deal here on this forum years ago, and I remember thinking: what has this guy done to deserve this?

  • NickNaraghi 2 hours ago

    Everyone at YC should be upset that sama continues to cannibalize the YC value proposition. First funding, then mindshare, and now this.

  • fi-le 14 hours ago

    Did anyone get confirmation that the form got sent? There is no feedback from pressing "submit" for me.

  • jgalt212 10 hours ago

    "pre-idea individuals"

    Next up, we're funding prenatal individuals.

    • moc_was_wronged 7 hours ago

      In 10 years, people will apply for jobs for their children before conception, and wisely not have kids if they can’t line one up (at least as a backup.)

    • hansmayer 9 hours ago

      Right, this corporate LinkedIn lingo is getting worse by the day.

    • Lionga 7 hours ago

      Sell your first born to Scam Altman now!

  • Imnimo 8 hours ago

    If you are pre-idea today, does OpenAI believe your startup will still be relevant in the face of the AGI progress they forecast to make in the time it takes you to ship?

    • IncreasePosts 8 hours ago

      I ask questions like that in my head all the time. My metric is once their AI is smart enough to make their website not throw up an error half the time, I'll have to more deeply consider any AGI claims

  • Hadriel 8 hours ago

    15 ppl in the first cohort? Aka don't bother applying.

  • 999900000999 4 hours ago

    What exactly do I need to do to qualify?

    I'm working on a prototype right now, guess I'll toss my hat in the ring.

    Fortune favors the bold.

  • vasilzhigilei 13 hours ago

    It looks like application submission isn't functioning.

    • jtfrench 13 hours ago

      Yeah, clicking "Submit" doesn't do anything obvious, aside from post some arcane errors to the JavaScript Console.

      • TZubiri 9 hours ago

        lmao, was this vibe coded?

    • mandeepj 7 hours ago

      Do a hard refresh while console is open; that'd fix it!

  • cadamsdotcom 2 hours ago

    Whatever.

    AWS gives startups money.

  • keeda 3 hours ago

    > pre-idea individuals

    Holy crap, I thought that term existed purely in the realm of satire skits:

    https://www.tiktok.com/@techroastshow/video/7341240131015445...

  • TZubiri 9 hours ago

    "it offers pre-idea individuals" wtf

    If ideas are a dime a dozen, what even is a pre-idea startup

    • ALittleLight an hour ago

      Talented individual(s) who want to do a startup.

  • dvfjsdhgfv 12 hours ago

    I first misread it as "OpenAI Grave" where someone would put the list of all discontinued models.

  • Cheer2171 14 hours ago

    > "pre-idea individuals"

    • jsheard 14 hours ago

      Move over "idea guys", it's the era of the "guy who hypothetically might have an idea at some point".

      • arcticfox 9 hours ago

        I've got concepts of an idea

      • bilbo0s 13 hours ago

        I don't know man?

        To me, it sounded like, "let's find all the idea guys who can't afford a tech founder. Then we'll see which ones have the best ideas, and move forward with those. As a bonus, we'll know exactly where we'd be able to acquihire a product manager for it!"

        • babyshake 10 hours ago

          If OpenAI needs a bunch of PMs, they will increasingly be able to spin some up, not hire humans.

    • MPSimmons 14 hours ago

      I caught that too. What's a "pre-idea" individual? Someone who... wants the vague _idea_ of a company?

      • spiderice 13 hours ago

        No, before that

        • fkyoureadthedoc 13 hours ago

          It's the AI guy version of the blockchain guy who had no idea what it was for or what to do with it, but was very hyped on it

    • babelfish 13 hours ago

      South Park Commons -1 to 0 program seems conceptually similar

    • tibbon 13 hours ago

      I mean, I get it.

      I'm highly capable of building some great things, but at my day job I'm filled to the brim with things to do and a never-ending list of tasks in front of me.

      I've built cool stuff before, and if given a little push and some support could probably come up with something useful - and I can implement much of it myself.

      Put me in the room with cool people, throw out some conversation starters, shake it up and I'll come up with something.

  • nickphx 13 hours ago

    Why not ask the big bag of words to generate "ideas"?

    • bilbo0s 13 hours ago

      Just, Devil's Advocate..

      but what, exactly, makes you believe this internship program is not an idea generated by the big bag of words?

  • yde_java 13 hours ago

    The FAQ items don't expand for me, on Android Vivaldi.

  • koakuma-chan 14 hours ago

    Do you have to be in the US, or can they help you get in?

    • jtfrench 13 hours ago

      The country selection menu seems to include countries from around the world. It sounds like only the first and last weeks are actually on-site; the rest is async/remote.

  • AnEro 14 hours ago

    Looks like they want to build up and support middlemen to make the apps rather than making them themselves, and move toward more of a platform or operating-system position. Which makes sense: giant corporations are reporting 95% of AI projects failing, and the core success cases are specialist companies tuning the platform to a specific problem. Then there are a ton of snake-oil AI apps that over-promise and under-deliver, hurting the image of AI's usefulness.

    This is probably purely a pivot in market strategy toward profitability: increasing token usage and increasing consumer/public trust, more than farming ideas for internal projects.

  • hollerith 6 hours ago

    What would be nice is a "grove" I can flee to where I'd be immune to the effects of OpenAI and the other AI labs.

    Alas, such a grove is impossible.

  • linhns 14 hours ago

    Is it just me seeing this as a talent discovery program?

    • bilbo0s 13 hours ago

      It's clearly a talent grab. Where talent = creativity.

      Most will submit the application with dime-a-dozen ideas. (Or, at internet scale, a dime a few hundred thousand, I guess?) No need to even consider those guys.

      But it will be a pyramid. There will likely be 20-30 submissions that are at once, truly novel, and "why didn't I think of that!"-type ideas.

      Finally, a handful of the submissions will be groundbreaking.

      Et voilà. Right there you've identified the guys and gals thinking outside the LLM box about LLMs. Or even AI in general.

  • lif 15 hours ago

    hmm.. wonder what the most accurate Venn diagram for this is?

  • woah 14 hours ago

    Incredible opportunity for SF Muni to get subsidized with even more full bus wrap ads for AI coding apps that nobody uses