• Hacker News
  • elffjs 16 hours

    https://archive.ph/u274V

  • nobita223 11 minutes

    So does that mean Anthropic is going to use Google's TPU chips?

  • freakynit 10 hours

    Funny how the strongest challenge to Nvidia's near-monopoly (full monopoly?) is coming from Google, and not AMD.

    Still rooting for AMD to catch up too, especially if they can continue improving their software stack. They seem to be moving in the right direction, though they could benefit from speeding up a bit more.

    Google now has its fingers in all the pies: it's successfully fully vertically integrated and now expanding horizontally.

  • thisisauserid 15 hours

    >> $10 billion now ... another $30 billion to follow if Anthropic hits certain performance targets...

  • imrozim 4 hours

    Google putting $40 billion into Anthropic, but Anthropic spends it on Google's own servers. The money will come back to Google. Lol

  • keasHg 15 hours

    They need it to fend off Crabby Rathbun from watching YouTube videos and commenting. The paperclip race is on, and we must win it!

  • cmiles8 13 hours

    Regardless of whether this is “vendor financing” or “circular financing”, the history books are riddled with this sort of thing ending very badly.

    It’s concerning that the only thing that seems to be keeping the AI bubble inflated at this point is money from the folks selling things to AI companies. That’s very much not a good sign no matter how you spin it.

    I’m a fan of AI and there’s clearly value to it… however that value seems completely out of whack with the money pumping into the ecosystem and at some point such irrational behaviors break.

  • omrajguru 5 hours

    Google investing in its own customer, Anthropic buying Google's compute. The money doesn't really leave the room.

    simianwords 4 hours

    This sounds like a conspiracy theory if you don’t know how basic finance and economics work

    isodev 5 hours

    It’s ridiculous. Are those investments at least taxed in the US?

  • bandrami 52 minutes

    If they're investing at the same valuation Anthropic had in their last round, what happened to all that previous money?

    thawab 48 minutes

    You might be misremembering; in the last round they raised $30B.

  • threepts 7 hours

    I work at Google on Chrome, and I can assure you nobody on our team is using Gemini over Claude. Haha, this is hilarious

    nothrowaways 7 hours

    What do you even mean?

  • dsecurity49 45 minutes

    "Google investing $40B in Anthropic while also competing against them is the most Silicon Valley thing I've ever seen. These companies will fund their own competition just to make sure they have a seat at the table when it wins. Also $800B valuation for a company that hasn't IPO'd yet?? We are so cooked."

  • sega_sai 13 hours

    In the last couple of weeks, seeing all the announcements of new models from OAI, Anthropic, and Chinese companies, I was wondering whether Google had something up its sleeve, but this news suggests otherwise.

  • ulfw 9 hours

    What a wonderful world our tech overlords are building for us leftover humans

  • 6thbit 14 hours

    A $10B insurance policy on Google's business sounds like a bargain?

    And with cashback through GCP usage!

  • nghnam 10 hours

    I think Google is using this to put pressure on OpenAI, while also getting some extra upside—like a possible path to acquire Anthropic later. And honestly, this could turn out to be bad news for OpenAI.

  • ppqqrr 11 hours

    i intend to invest $40B in my wife's pottery business; she will invest the same amount in my uber-for-dogs AI SaaS startup. our GDP is gonna be wild.

  • souravroy78 2 hours

    Are they done with the so-called state-of-the-art Gemini models?

  • atleastoptimal 8 hours

    I've wanted to invest in Anthropic for years. The cost of not having the ability to invest is hundreds of thousands for a retail investor. Maybe I should just invest in Google for exposure.

  • neltnerb 10 hours

    At this point I'll believe it when the money actually moves.

    There have been far too many "plans" and "commitments" and an awful lot of nothing actually happening.

  • munk-a 15 hours

    Anthropic, meanwhile, is spending hundreds of millions buying customer commitments from PE firms to inflate that DAU number. They now have a larger war chest to spend on artificial user acquisition to further inflate that value for future funding rounds.

  • alfiedotwtf 11 hours

    From a comment below:

    > My main job isn't writing code but I try to keep Claude Code and OpenCode busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities

    I’ve seen many people say this over the past few weeks, i.e. that their daily job is no longer coding and has flipped to being a full-time Claude feeder, making sure it's always churning.

    As someone who uses Claude Code daily, I still find myself reading code and thinking more vs just shoveling coal as fast as I can into the Claude steam train. Am I doing things wrong?

  • dev1ycan 11 hours

    How many gigantic companies can join this before it crashes?

  • ecommerceguy 11 hours

    My AI use is significantly down. I'm sick of following ChatGPT "advice" only to learn how egregiously incorrect it is. Don't ask for DMV advice on registering an out-of-state car!

  • forrestthewoods 15 hours

    10B at their valuation from last November is an absolutely killer deal. If Anthropic had sufficient compute supply they could raise at 2x easily if not 3x.

  • DeathArrow 4 hours

    I wonder what this will mean for Gemini. Will it survive?

  • shevy-java 4 hours

    There is a lot of money in the Network of Evil.

    I am still upset at these companies for driving up RAM prices. The "free market" has evident problems: companies are way too dominant here. The average Joe suffers from this price mafia, assuming he or she needs to purchase RAM now.

  • cadamsdotcom 11 hours

    When they said we’d soon have a circular economy I didn’t know it’d be made up of investments in AI companies that will get fed right back into inference.

  • october8140 4 hours

    The bubble is so big.

  • dyingg 9 hours

    It's like when BitTorrent and uTorrent were the same thing. Right now the most popular frontier model makers are Anthropic, OpenAI, and Google.

  • Intent_net 5 hours

    IIRC Google already outright owns 15% of Anthropic.

  • zackho 11 hours

    great move by google

  • xyst 5 hours

    Cash injection disguised as an "investment"

  • whatever1 15 hours

    Cool. Will they use their balance sheets to pour in all of this cash, or are they going to bring the banking system to its knees so we bail out everyone again?

    shimman 11 hours

    I don't see why not. The US is bailing out foreign countries, might as well bail out unsustainable businesses too.

  • namegulf 17 hours

    So $40B in google cloud credits in return for % in equity.

    Didn't Amazon AWS do the same recently?

    ChrisArchitect 15 hours

    Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return

    https://news.ycombinator.com/item?id=47848276

  • skizm 13 hours

    Weren't there reports of Anthropic's stock trading on secondary markets at a $1T valuation recently? Now Google invests at a $350B valuation. I get that valuations are oftentimes just smoke and mirrors, but this seems like a pretty big disconnect. What's going on there?

    mjuarez 13 hours

    There's always backroom negotiations going on with investments like these. Private valuations are normally hyped-up, and with the current batch of AI companies, 100x so.

    I assume Anthropic said something like "We'll give you 3% of our company for $30B, since we're valued at $1T now! So cheap!", and Google immediately came back with "Hell no. We'll give you even more, $40B... but it's for 11% of the company. Take it or leave it." With all the issues they're having, what leverage does Anthropic have at that point?

    Basically, Google made them an offer they couldn't refuse.

    nikcub 12 hours

    Amazon and Google get discounts because they bring more than just cash and help solve a very immediate problem for Anthropic

    Great position to be in if you're Amazon and Google

  • Cyclone_ 12 hours

    This feels weird to me. Why wouldn't Google want to go all in on Gemini? Unless they feel Anthropic is pretty far ahead with Claude?

    wirgil1 11 hours

    If you can get influence at your competitor you take it. It's valuable for both of them regardless

  • VirusNewbie 15 hours

    It's a little weird. I work for Google, but I spend way more time helping get Anthropic serving and running than anything to do with Gemini.

    thatguysaguy 15 hours

    That's b/c the people working on Gemini serving are in GDM.

    brcmthrowaway 15 hours

    This is a good strategy. Internal competition between Gemini and GCP.

  • agnosticmantis 9 hours

    My $0.02: Competing against exp(t/2) + exp(t/2) is much much easier than exp(t/2+t/2)=exp(t).

    (If anthropic didn't exist, ØpenAI would suck up all the capital and talent in the room. Anthropic's existence has helped divide capital+talent that'd otherwise be gobbled up by the single fastest growing player.)
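
    The arithmetic behind the comparison is easy to check directly; a minimal sketch (not from the thread, function names are illustrative):

```python
import math

# Two rivals each growing like exp(t/2) merely add, while a single
# player growing like exp(t) multiplies: exp(t) = exp(t/2) * exp(t/2).
def split_growth(t: float) -> float:
    return math.exp(t / 2) + math.exp(t / 2)

def combined_growth(t: float) -> float:
    return math.exp(t)

# Past t = 2*ln(2) the single player dominates, and the gap explodes:
for t in (2, 5, 10):
    print(t, combined_growth(t) / split_growth(t))  # ratio = exp(t/2) / 2
```

    The ratio itself grows exponentially, which is the comment's point: splitting the capital and talent between two labs slows the combined frontier down by far more than a factor of two.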

    derwiki 9 hours

    Is that like Lyft and Uber?

  • bobkb 15 hours

    I wonder what happens to “Gemini enterprise”. Will it go the way of Google Plus or Google Wave?

    wasting_time 12 hours

    Gemini seems more tailored towards information retrieval and product integration (including Android and even iOS via Apple's deal).

    Google may reckon they can't (yet) reconcile their vision of Gemini with the raw coding performance of Claude and Codex.

  • htrp 19 hours

    > Google is committing $10 billion now in cash at a $350 billion valuation and will invest a further $30 billion if Anthropic meets performance targets, the report said.

    How much of this goes back to Google as cloud spend?

    dmk 19 hours

    Google investing $40bn in a company that competes directly with Gemini is one of those moves that only makes sense if you think of it as buying compute customers, not backing a competitor. Anthropic pays Google for TPUs and Cloud services, a big chunk of this investment surely has to flow right back to Google.

  • dubeye 15 hours

    Google seems to own a bit of everyone.

    airstrike 15 hours

    you might even say they own the whole alphabet at this point

  • aucisson_masque 14 hours

    Is Anthropic really that good when you've got DeepSeek V4, which has a fraction of the cost and works just as well?

    dzhiurgis 13 hours

    I think their CLI is still leading for some reason.

    Not sure if it’s going to be good enough to replace IDEs with neatly integrated superior models.

    Aldipower 1 hours

    At least from a European perspective it is impossible to use DeepSeek V4 right now, as there are no privacy-safe offerings.

  • xt00 16 hours

    At this point, if you have cash or compute credits lying around in the tens of billions, better to hedge your bets than to find out the winner that took all was not you.

    addaon 16 hours

    Unless none of the current crop of AI companies is “the winner,” either because a newcomer appears or the craze fizzles… in which case having $40B in the bank seems superior.

  • fnoef 2 hours

    $40B. Insane. Imagine what could be done with this money to improve humanity. Instead, it’s spent on a fancy text generator that promises to eliminate most of the non-mundane and physical jobs, as well as create autonomous killing machines, on top of burning the entire world’s electricity. Crazy.

    kakacik 2 hours

    You can say the same about every recent US war, and add one 0 to the sum.

    Or, more controversially, take the EU Green Deal, which decimated the EU car industry and has lost/will lose us a few million jobs. Losses up to a trillion and nothing to show for it.

    simianwords 2 hours

    Technology and productivity have reduced more death than any redistribution

    fnoef 1 hours

    Who said redistribution?

    This money could be invested in universal healthcare, or into AI research for medicine. But hey, I guess replacing developers and generating slop is more beneficial to our society.

    simianwords 51 minutes

    yes, replacing developers is better for sustained growth and reduced poverty and not your pet projects.

  • zmmmmm 14 hours

    It feels like Anthropic is everybody's insurance policy against someone else winning the AI race. So you have Amazon, Google, Microsoft (basically every major tech company) pushing their own tech hard but simultaneously ensuring they have a survival-level stake in Anthropic in case they can't build or acquire their way to staying at frontier-level performance themselves.

    dmix 9 hours

    Maybe it was never really about maximizing the model technology as the ultimate end goal and far more about the business side and infrastructure.

    The software will only improve for so long before it hits a wall. The best models were just a proxy for early mainstream market adoption, keeping your head above the water … plus some useful marketing hype about longshots for developing something bigger than LLMs (“AGI”).

    People who work in tech are biased to obsess over the technical side and short-term uptime/performance outrage, despite those being mostly just standard immature-market issues.

    stingraycharles 9 hours

    I find it interesting that Anthropic is in this position and not OpenAI. Where did OpenAI go wrong? Lack of focus and overambitious in some of their spending commitments?

    cavisne 7 hours

    Does not seem that complicated. OpenAI basically had to do a lock-in deal with Microsoft/Azure at the time, and they pioneered this circular-funding hyperscaler deal structure, so there were some rough edges.

    Anthropic (all ex-OpenAI) knew the negatives of that deal, so they made a slightly better deal with AWS, not a full lock-in. They also grounded it in hardware from the start, i.e. being the flagship customer for Trainium and the flagship customer for external usage of TPUs.

    tm-guimaraes 2 hours

    Also, the fact that other tech leaders know very well how Sam Altman operates doesn’t help OpenAI secure deals with big tech.

    Domain knowledge and expertise are a big thing in tech, because code can be written fast if you know what to do, and with that expertise, building a frontier model is a matter of time and capex. Anthropic was founded by top ex-OpenAI people, so they are not lacking in expertise, and they are not attached to Sam Altman. It’s an easy choice of who to finance.

    Anthropic will win long term because big tech knows how much of a loudmouth Sam is and how much of the pie he wants; he is more of a rival than some company they could use to grow. Anthropic (even though they aren’t really the good guys) seems more like a shared common good for big tech than OpenAI does, like a corporate version of Linux business deals.

    petcat 1 hours

    This seems like wild speculation that isn't even really true at all. OpenAI locked up all the compute capacity which is why Anthropic is struggling so badly with capacity to scale for demand. It's why Claude quality is plummeting and people are leaving in droves because the usage limits are pathetic and the API pricing structure is outrageous. All because they can't scale. So that's what this deal is about.

    argee 9 hours

    Wasn’t the only thing OpenAI did throwing a half-baked model out for the public to go ham on? I was at Google when they did this and we already had working LLMs internally; they just weren’t good enough to release without PR backlash. I don’t see why such a pithy “advantage” should have led to anything other than a moment in the spotlight? The “we have no moat and neither does OpenAI” essay was published very shortly afterwards.

    If anything you ought to expect them to be behind, since they took the position of making all the mistakes first so others (who already had the same or better tech) didn’t have to.

    stingraycharles 8 hours

    > throwing a half-baked model out for the public to go ham on?

    I think that’s underselling their contribution, which was mainly showing that it’s possible, and this is what it looks like as a product. Until then, nobody had figured out how to shape it as a product, and ChatGPT showed how to do that. Don’t forget that for a year or two they kept making headlines all the time with DALL-E and whatnot.

    For me it seems like what happened after that is where the lack of focus started to hurt them: they realized that models themselves will be a commodity and have no moat, and that they needed to somehow build a network or something to keep pulling people back in. Sora was one such attempt, and it failed hard.

    To me, enterprise / B2B seems like a much easier, obvious market to approach, but I don’t know a lot about B2C. But it seems like B2C was what OpenAI was going after.

  • stephc_int13 15 hours

    My opinion is that Google sees this as a way to weaken OpenAI, with a few other side benefits, including the option to acquire Anthropic.

    And it may very well be bad news for OpenAI.

    twobitshifter 12 hours

    OpenAI was created to counter the threat of Google controlling a possible AGI. What if we still end up in the same state in the end? Both Anthropic and OpenAI have abandoned any pretense of altruism at this point and find themselves overwhelmed by the forces of capitalism.

    aurareturn 5 hours

    > including the option to acquire Anthropic.

    Not possible anymore unless Anthropic collapses and goes on a multi-year decline.

    They're worth $1 trillion in private market. If they IPO today, I'm willing to bet my house that the hype will drive them to $2 trillion market cap or 50% of Google's marketcap.

    OpenAI and Anthropic will be the biggest IPOs ever - bigger than SpaceX. That's my prediction.

    sumedh 13 hours

    > including the option to acquire Anthropic.

    I have a feeling that Dario is not the type of man who would want to be acquired and then have Google's CEO telling him what to do.

    com2kid 13 hours

    It'd be funny if Google offered 750m in stock + cash just to see what happened... :D

    The drama on HN alone would last for days. Twitter would implode in on itself.

    aurareturn 40 minutes

    That’d be a bargain for Anthropic. They’re growing 10x each year and could reach $100b+ in revenue by end of 2026.

    siva7 15 hours

    That ship has sailed. Not even Google has the cash to buy a company valued at almost a trillion dollars.

    stephc_int13 14 hours

    Maybe, I think there is a lot of uncertainty about valuations of AI labs in the near to medium future.

    OpenAI crashing would be good news and bad news for Anthropic investors.

    charcircuit 13 hours

    You don't have to buy companies with cash.

    SJC_Hacker 14 hours

    Valued at a trillion by, basically, no one who would actually invest anywhere close to that.

    aurareturn 43 minutes

    Bring them to public market and I’d bet my house it will shoot up to $2t on first day.

  • bluecalm 15 hours

    I find it crazy that Google considers Anthropic to be worth almost 10% of Google itself ($350B valuation mentioned in the article). Anthropic gets traction but has no moat, no infrastructure, and a relatively small team. I feel that for $40B you can get a lot of very smart people and a lot of very good hardware to outcompete it.

    hellohello2 6 hours

    IMO: you are correct, which is why the valuation is only 350B and not significantly more.

    conradkay 14 hours

    > I feel for 40B you can get a lot of very smart people and a lot of very good hardware to outcompete it

    Nah, see Meta

    chpatrick 10 hours

    I think the very smart people already work at Google and the 40B buys some of the rest.

    GoToRO 14 hours

    the moat is the tool itself. You understand this after you start using it.

    sumedh 13 hours

    > You understand this after you start using it.

    It's just amazing that people talk about Anthropic and have never used it.

    siva7 15 hours

    25% ;)

    mkl 13 hours

    No, Google's market cap is $4.1T, over 10 times $350B.

    conradkay 9 hours

    They mean Anthropic secondaries are trading at more than a trillion (allegedly)

  • gigatexal 16 hours

    "The Alphabet subsidiary is committing to invest $10 billion now, at a $350 billion valuation for Anthropic, with another $30 billion to follow if Anthropic hits certain performance targets, according to Anthropic."

    this is insane. on the secondary market the valuation is 2-3x that. what gives?

    panarky 15 hours

    Anthropic raised $30 billion at a $350 billion valuation (pre-money) in February.

    Google's deal from prior rounds likely lets them buy in at the same valuation other investors get every round, so they're just getting the February valuation.

    Amazon did almost the same thing last week, at the same valuation.

    lanthissa 15 hours

    Google's giving them something that's a lot more scarce to them than dollars: large volumes of chips, quickly.

    If you gave Anthropic $10B in cash they couldn't get chips at scale in the 0-6 month timeframe. Anthropic is suffering reputational damage due to choices they have to make around capacity constraints.

    Google, AWS, and Azure are the only people who can help them so they hold the cards, thus the good terms.

    Handy-Man 15 hours

    That's the last round they raised at. They had other offers from VCs at ~$850B that they rejected. Seems like this may have been in the works since that last round was being raised and they just finished the paperwork?

    manquer 15 hours

    The GOOG and AMZN deals announced earlier this week would be considered part of the same Feb'26 round. I.e. it would have the same seniority rights as that round.

    It is not uncommon to keep a round open for a bit after the formal announcement so that a few investors who could not close for whatever reason are part of it. It can be hard to line up everyone at the same time, especially when they are public companies.

    ---

    Specific to your point on why the valuation can be lower than the market price at the same time: goods (and stocks), while they feel homogeneous, divisible, and fungible, are not. Size can have value of its own.

    A block of 10% of the shares may be worth more (or less) than the unit share price implies, because their being available together is a property of its own, making the block either more desirable when someone wants to acquire, or harder to sell because there is not enough demand if all of it gets dumped at the same time [1]

    In this deal's terms, just because a few tens of millions are trading at an $850B valuation, or some investors can put in, say, $1-2B, doesn't mean you can raise $40B at the same valuation.

    There isn't depth in the market to raise $65B (including the AMZN deal) at an $850B valuation. There is always some demand at any price point on the demand-supply curve; you will probably find a few people who will buy a few shares at $10T, or $100T, or some ridiculous number, but that doesn't mean you can raise a large round at that price.

    Strictly speaking it is not even $350B per se: Google and AWS benefit from this as vendors. It is very much like vendor financing with convertible debt. Meaning it is worth that much to them, but not to you and me, because we are not getting some of the money back as sales that boost our own stock.

    ---

    [1] In the same vein, price can also depend on what you are getting in return, hard immediate dollars is the highest value. However if you are getting shares in return, you can usually negotiate a premium depending on risk of the shares you are getting.

    The recent SpaceX-Cursor deal is a good example: any founder would likely take, say, a $10B all-cash offer over the $60B from SpaceX, and the price would be closer to cash if it were GOOG, AMZN, or AAPL shares instead (proven, deeply liquid markets, etc.).

    nly 15 hours

    Top of the book? Nobody on the secondary market is investing $30bn

    JumpCrisscross 15 hours

    > Nobody on the secondary market is investing $30bn

    Correct. But I think $5 to 10bn are sitting ready at a $700 to 800B valuation, which strongly implies Google is getting a solid deal on this.

  • urba_ 16 hours

    I consider them competitors… This reminds me of Microsoft in 1997 investing $150 million in Apple, saving it from near bankruptcy

    hu3 14 hours

    > Microsoft in 1997 investing $150 million in Apple, saving it from near bankruptcy.

    If only Apple could pass the favor forward. But no, they can't be bothered to invest even a single million in Asahi Linux to benefit their own hardware.

    twoodfin 15 hours

    Google is right (I think) to invest in winning compute share from Nvidia over winning token share from other frontier model builders.

    infecto 15 hours

    They already had a non trivial stake in Anthropic though?

    SecretDreams 15 hours

    It just keeps the lights on for the whole industry.

    The tech is great but valuations are out of control. It's cheaper to keep valuations high through these circular financing deals, rather than to allow for any deflation.

    raincole 15 hours

    They are, but Google Vertex has been one of the official ways to use Claude since forever.

    lanthissa 15 hours

    Google's got multiple businesses, and Gemini isn't the largest one.

    Anthropic is the anchor external customer for TPUs, and Nvidia is worth more than all of Google. If TPUs actually break out as a viable alternative over the next few years for multiple clients, the business could easily be worth as much as Search, maybe more.

    nikcub 12 hours

    Google Cloud also needs to be able to offer Anthropic models on Vertex, otherwise it just won't be competitive.

    Microsoft is in the same boat with Azure.

    shimman 11 hours

    Google Cloud also needs to show constant quarterly growth so what better way than simply buying it and fudging the numbers?

    billisonline 15 hours

    > If TPUs actually break out as a viable alternative over the next few years

    Why haven't they broken out yet, I wonder, if they're more efficient for inference and LLM costs are now weighted towards inference over training?

    AnggaSP 4 hours

    TPUs are not that portable or easy to use for both inference and training. That has since improved a lot with the effort on the Torch backend (XLA/TorchTPU) and JAX, though.

    But as far as I know they currently support just that plus TensorFlow (which nobody uses anymore, at least here). And the last time we tried, so many of our kernels needed rework that it wasn't worth the effort.

    This may change since Ironwood, but we haven't tried that generation.

    lanthissa 14 hours

    There are literally not enough TPUs on Earth for them to break out. Every TPU that's been made is in use, the spike in demand is recent, and Google has heavy competition for foundry space.

    chris_st 15 hours

    Possibly because they just haven't been able to manufacture enough of them yet to be a viable business to others? They're fighting everyone else for foundry space and time.

    zaphar 15 hours

    You essentially have to run in google to use them and that probably limits their ability to breakout. Anthropic might be doing this deal as a way to shore up their supply chain and cost of both inference and training by leveraging Google's hardware and chip manufacturing expertise.

    lanthissa 14 hours

    Every TPU that's been made is in use and sold at a high margin; demand is not the issue.

    ai-x 15 hours

    Several customers, like Citadel, run TPUs in their own datacenters (closer to exchanges).

    casey2 15 hours

    Anthropic's erratic behavior is going to get Google regulated. This is "don't rock the boat" money. Google existentially needs AI for advertising.

    warkdarrior 15 hours

    > Google existentially needs AI for advertising.

    What's the explanation behind this? I am sure they use AI in their ad network (matching web sites with ad offerings, maybe generating ads automatically), but is there more to it?

    crumby 15 hours

    I know AI companies are selling ad training into the models so the models know about your product. I'm not sure if that is what they were referring to, but it could be related.

    kshacker 15 hours

    That was precisely my thought on seeing the news. I did not know about Google's existing entanglements with anthropic, but it seemed like a clear message - Do not panic on the money, do the work.

    nubg 12 hours

    "Do not panic on the money, do the work." - sorry what do you mean by that?

    kshacker 12 hours

    If you look at their recent actions, they all seem financial, as if they have become the monopoly already and can do anything. Or maybe it is driven by fear of going bankrupt.

    Example: them doing an A/B test where they removed the Claude CLI from the $20 Pro plan... they rolled it back now. Or other rate-limit moves where they publicly double your quota at non-peak times but quietly lower it during peak. These are tacky, and signs of panic.

    Any one such issue could be experimentation. But when you see back-to-back issues, it looks odd.

    altern8 15 hours

    If I remember correctly, Microsoft allegedly did that for the very selfish reason of looking better in terms of being a monopoly.

    stavros 15 hours

    Rather than for the altruistic reason of saving a struggling fellow company?

    politelemon 15 hours

    Of course this is well known. Everything Microsoft does is for selfish capitalist reasons and everything Apple does is for altruistic philanthropic reasons.

    kqp 14 hours

    They’re publicly traded for-profit companies, selfishness is literally the definition of both of them and it’s the farthest thing from a secret.

  • dzonga 15 hours

    My take is that Anthropic needs a large cash infusion since it's one of the popular model providers.

    If it runs out of cash, that's bad for the whole industry.

    Same as OpenAI. So all players will provide cash and compute to keep them going.

    slashdave 15 hours

    They need compute

    sdevonoes 15 hours

    > if it runs of out of cash - then it's bad for the whole industry.

    Why? I don’t think we would suffer if anthropic disappeared tomorrow

    ares623 15 hours

    Google, Microsoft, Oracle, Meta, Nvidia: all their stock gains in the last two or so years were because of the AI hype. And who knows how much money they borrowed and what promises they made on the assumption that their stock will continue to rise at the same pace for years to come. When one domino falls, they will follow. So they have every incentive to keep the music going for one of their "friends".

    goolz 14 hours

    If Anthropic disappeared tomorrow due to running out of cash it would cause a great panic, no?

    shimman 11 hours

    For who? A bunch of financiers that gamble with pension funds? The real panic is when they IPO and force 401Ks to buy into it.

    andxor 11 hours

    You're so naive. It's all a big game of domino.

  • Oras 16 hours

    They just announced their new chip, and they are the ones who created transformers, yet they're investing this amount in a competitor?

    I don’t know what to make of it

    northern-lights 14 hours

    Why do you think Google considers Anthropic a competitor?

    wirgil1 11 hours

    hedge your bets, I know I would

    spwa4 16 hours

    Given that anthropic is probably paying it all back to them in compute bills, they may not be giving them anything.

    jeffbee 15 hours

    It makes every bit as much sense as investing in Snap while still operating their own social network product. Seems to have worked out fine (for Google, not Snap).

    dzhiurgis 12 hours

    FWIW I’d buy SNAP now that they are at rock bottom

    pupppet 15 hours

    I wonder if Google regrets publishing that article on transformers.

    jeffbee 15 hours

    Urs used to talk (internally) about not publishing "industry-enabling papers" which is why most Google infrastructure papers were describing something that had already been turned off, or was already in the process of being replaced by the next system (GFS, Vitess, etc). The things that did get published were either things not considered key advantages, things that other companies simply cannot do, things that other companies wouldn't bother doing, or experiments that never worked at all. There were exceptions of course. But it led to a public perception of the Google stack involving mostly technologies that were long dead or were never adopted.

    "Attention Is All You Need" was a very very different thing and I also wonder if they are glad they published it. But I imagine if they hadn't, the motivation for researchers to leave Google would have been even larger.

    sumedh 13 hours

    So Google allowed publishing the Attention paper because they didn't understand its value.

    CamperBob2 12 hours

    They patented it. When the dumb money stops sloshing around, we'll start to see the fallout from that.

    cameronbrown 9 hours

    > I also wonder if they are glad they published it

    https://youtu.be/ue9MWfvMylE

    Jeff Dean is asked this question by Geoffrey Hinton at 37:35 - might be worth watching. Overall an interesting video.

    johanyc 3 hours

    Link with time code: https://youtu.be/ue9MWfvMylE?t=2261

  • cromka 14 hours

    Anyone else have an increasing feeling that all the AI hype is turning into a "Dot-Com Bubble x 2008 Credit Default Swaps" collab?

    djeastm 14 hours

    I think a lot of people suspect that, but no one is able to help themselves. Manias are a feature/bug of humanity.

    zrn900 7 minutes

    It's a mathematical reality rather than a 'feeling'. It became one a few years ago. It becomes even more serious when you consider that the Chinese models are just as good, and they are simply being given away to run locally, like Deepseek.

    Why should anyone feed the SV AI bubble if they can just use cheap Chinese models, even locally if they want to...

    kilroy123 3 hours

    Actually no. I think we're just getting started.

    littlestymaar 14 hours

    x oil shock (due to Hormuz).

    0xbadcafebee 14 hours

    It's an actual bubble specific to AI. This investment is just another example of the bubble. Pre-2008, all the investment would be coming from banks. Post-2008, all the investment came from VCs... but VCs got tapped out, so AI companies went to bigger private capital. They tapped out all the private capital. So now they're making the rounds, making deals with any corporations left with tens/hundreds of billions in cash, because they're the only possible investors left. When all of them are tapped out, and without a release of pressure from the hardware market, the only investor left will be the government. After that it's kaplooie.

    You'll notice that all the really big deals have fallen through, because they're based on promises and meeting objectives that can't be met. So it's likely that there will be really big writeoffs but not a huge implosion like 2001/2008. The real losers will be the retail investors who put all their money in a handful of stocks at ridiculous valuations.

    uncivilized 14 hours

    Which big deals have fallen through?

    mhitza 13 hours

    "Nvidia’s $100 billion OpenAI deal has seemingly vanished" https://arstechnica.com/information-technology/2026/02/five-...

    "Disney cancels $1B deal with OpenAI after video platform Sora is shut down: 'The future is human'" https://finance.yahoo.com/sectors/technology/articles/disney...

    And if I recall correctly, the AI datacenter deal isn't doing Oracle stock any favours.

    singingtoday 8 hours

    I've got two max 20 plans and totally get value from it.

    brap 2 hours

    How does that work? Are you using separate accounts? Do you just max out one of them and then switch to the other one?

    We need to run a SotA coding agent basically 24/7 uninterrupted, and so far we haven’t found an easy solution for this (you can get provisioned TPUs for Gemini on GCP but it costs a fortune).

    Surely that’s possible for under $5k a month? $10k?

    blueblisters 14 hours

    I feel the same until I’m reminded I’m paying Anthropic $100 every month for something that’s indispensable to me now and would probably pay a lot more. Very inelastic demand as long as competition is low at the frontier.

    zahlman 15 minutes

    Would it still be indispensable to you if you weren't in this industry?

    uncivilized 13 hours

    Are you paying that, or is your work paying for it?

    If you’re using it for personal work, why is $100 worth it?

    linsomniac 10 hours

    >If you’re using it for personal work, why is $100 worth it?

    I'm not who you were replying to, but:

    My work pays for $100/mo Claude, I pay another $100 to bring it up to $200/mo level because:

        - Partly: I got in the habit back when work was only paying $20 and I was paying the $180.
        - It is not worth it to me to spend braincells trying to optimize my use to slip into the $100 plan, I give everything "Opus, effort max" and with the $200/mo plan I never run out (on the $100 plan I'll run out mid-morning).
        - I run a *lot* of experiments, including work-related and personal, to try to understand and improve my AI use skills.
        - I also use it for a lot of personal things, right now I'm using it to help me plan a backyard studio and ADU.
    
    "ccusage" the past month says $1017.

    edit: Formatting, ccusage

    auto 11 hours

    I’ve been a copilot and ChatGPT subscriber for probably close to two years now, give or take a couple of months, and I had a trusted friend telling me for months to give Claude a try.

    It took about two weeks of really running it through its paces, and constantly slamming against the limit on it to convince me I had to upgrade to at least the 100/month sub, and at this point I wouldn’t blink to bump that to the 200/month if necessary.

    I 100% believe we’re in a bubble, and that this level of compute isn’t sustainable at this price point, but for as long as I have it, I’m going to run it at the redline.

    I’m a solo dev working on a project that I’ve just gone full-time on, after about 1.5 years of part time work. It’s a codebase that I laid the groundwork in, and has very well established systems, standards, and constraints.

    The work I’m using Claude to do is the exact work I would be doing myself, but it does it at somewhere in the neighborhood of 5-10x the pace I could have. I don’t know that I could get the same rate of production if I managed a team of 2-3 programmers. Right now, it’s literally almost perfect at taking my iterative suggestions, and implementing them at that accelerated pace.

    Honestly the hardest part is dealing with the fact that at the end of the day, I have to understand this codebase perfectly (solo dev and all that), so I have to take in changes to it that are coming at 5-10x the rate my normal intuition would allow. But, again, the plus side is that it’s implementing them essentially exactly as I would have, as it has ~20k lines of code that I wrote to use as an example.

    If I were to hire even one other programmer, I’d be paying well north of 5k/month, and I’d not only be managing a super computer programmer tool, but an actual human being as well. $100/month might as well be free comparatively.

    aurareturn 46 minutes

    If it gets you so much value for $100/month and Anthropic still claims they have 50%+ gross margins, why do you think we are 100% in a bubble?

    Doesn’t make any sense.

    Aurornis 13 hours

    $100/month isn't much for developer tooling. If you add up how much I spend on hardware upgrades, other SaaS products like backup services, software licenses, and other things it's easy to justify $100/month for a powerful tool.

    I pay for my own AI provider subscriptions because keeping work and personal strictly separated is important for me. I do know some people who secretly pay $200/month for Claude and use it at their job even though it's not approved. I do not recommend doing that, but it shows that some people value this for their work.

    For developers earning more than $10K per month, spending less than 1% of salary on tooling to make the job easier is easy to justify.

    shimman 11 hours

    I too spend over $100 on drugs that make me feel productive but actually am not.

    sethops1 12 hours

    I pay T-Mobile $100 a month but they aren't worth a trillion dollars.

    aurareturn 5 hours

    Yes but T-Mobile enterprise customers don't pay much more for plans. In fact, they may pay less because of volume.

    However, Anthropic can and will charge much more for enterprise customers.

    dagss 5 hours

    Anthropic's market is global and the US is 4.5% of the world's population. Telcos are regional.

    vovavili 12 hours

    T-Mobile is effectively a monopolist in many US regions.

    lotsofpulp 1 hour

    There are no places in the US that Tmobile is the only wireless mobile network provider. While all 3 mobile network providers have weak coverage areas, Verizon is considered to have the most reach.

    shimman 11 hours

    Still not worth a trillion dollars.

  • dwayne_dibley 15 hours

    $40B. Numbers mean nothing anymore

    0xBA5ED 14 hours

    Yes, and it's incredibly wasteful.

    mirekrusin 14 hours

    yep, you know what's better than billions? trillions.

    wirgil1 11 hours

    tech is the biggest sector in the world. We're seeing what happens when those war chests for rainy days get brought out

    pcurve 14 hours

    Yup. You can actually buy several European airlines with that kind of money.

    For example, you can buy Air France-KLM for less than $3B.

    It is a profitable business that does $30B in sales and $1B in profit (and has been profitable for the past 4-5 years).

    Jabbles 13 hours

    It has $40B in liabilities.

    [PDF] https://www.airfranceklm.com/sites/default/files/2026-02/202...

    hayd 14 hours

    "$30B in sales and $1B in profit."

    This margin seems terrible.

    oscarcp 13 hours

    4% seems reasonable, it's pretty much standard across the board in Europe (median sits around 6% if I recall correctly), not many companies can pull 10% profit. For example in Spain, major conglomerates like Inditex have an 11% margin and Iberdrola has 10%. We also don't use the same metrics and parameters as the US for profit, so the values are skewed.

    That said, certain sectors like software (as in custom enterprise-grade software dev) pull margins that are much higher, sitting around 35%, but it's not that common.

    nikcub 12 hours

    Airlines are down there amongst cinema chains and video game retail stores in terms of being terrible businesses

    polski-g 10 hours

    Want to know the easiest way to become a millionaire?

    First, become a billionaire. Then, start an airline.

  • spindump8930 17 hours

    Hopefully this money means more compute infrastructure to help Anthropic counter the efficiency changes that have created this perceived downtrend in claude quality.

    littlestymaar 14 hours

    > the efficiency changes that have created this perceived downtrend in claude quality

    Why the euphemism? What Anthropic did was an aggressive degradation of their model to save compute, and it's not just “perceived downtrend”, Anthropic themselves have acknowledged the quality of service degradation.

    palmotea 16 hours

    The puzzling thing is why Google would try to help with that. Aren't they competitors? Wouldn't they want their competitor to have problems?

    It's more understandable for Amazon or Microsoft to make such an investment, because they're not as competitive in the model space.

    infecto 15 hours

    Google was already an investor in Anthropic but I don’t think they are truly competitors in this space.

    morelikeborelax 16 hours

    What if Google can't compete? They don't want to be left behind, and all this money being thrown around is just nonsense anyway.

    mchusma 16 hours

    Google owned 14ish percent of Anthropic before this investment, so presumably this could bring it up to as much as 25%?

    bmurphy1976 16 hours

    There's always three:

       Google buys Anthropic.
       Microsoft buys Open AI (or vice versa depending on how things go).
       SpaceGrok buys Cursor, limps along in 3rd place.
       Meta is the last man standing, gets stuck with Oracle, dies.
    
    And then hopefully some open source models save us from this nightmare before China commoditises everything.

    Edit: I forgot Amazon. Who knows what they will do. They're the wildcard anyway.

    _puk 14 hours

    OpenAI buying Microsoft.. I honestly think I'd like to see that.

    Anything to invigorate the desktop.

    Microsoft buying OpenAI.. 10 minutes later it's rebranded Copilot.. and.. nothing much changes in the world. Oh, except all the AI improvements are around Enterprise governance.

    michelb 16 hours

    Deepmind is heavily using Claude. This could help secure computing power.

    tomrod 15 hours

    I'm not up to date, I think. How so?

    michelb 6 hours

    There's been a spat between some people on X, about how few engineers inside Google want to work with Gemini, given that it apparently is not great with code, and they would rather use Claude.

    This same sentiment is there within Deepmind, except they have more power it seems. Perhaps Google is hedging their bet?

    Best non-X link I could find: https://benzatine.com/news-room/internal-strife-at-google-th...

  • consumer451 15 hours

    It is very difficult for me to see any amount of money being thrown at Anthropic as a bad idea.

    The amount of new revenue that I am personally able to create for my clients, using Claude models for dev, and Claude models inside the insanely agile products delivered, is astounding.

    If I was not currently experiencing this myself, and someone told me that this was possible, I would be calling them names.

    gip 9 hours

    Same with Codex and very soon with open source & local models. Training great models (for coding and similar tasks) seems to be a question of scale and not much more.

    It is likely that 99% of the value created by Anthropic / OpenAI / friends will go to the end user. Which is great news.

    buntp 5 hours

    Curious, what do you do btw?

    zrn900 10 minutes

    [dead]

    zakisaad 14 hours

    You could say the same about Codex (and other tooling). Opus as a model is market leading (trading blows with the greatest that OpenAI is peddling), but there will be a reckoning when open weight models are good enough - and I'd argue we are almost there with some of the latest releases. If you hook up the latest OpenAI models to something like OpenCode, it's a taste of what an open harness with a powerful model (outside of a provider's ecosystem) will be able to offer developers in the future.

    consumer451 14 hours

    I know there are multiple paths at this, thank the computing gods.

    If we get to an end-state of monopoly/duopoly at this game, then we are truly screwed.

    I was just stating my current use and revenue path. Anthropic has insane velocity, in April of 2026.

    neya 12 hours

    > when open weight models are good enough

    I think Deepseek is already there.

    ManuelKiessling 13 hours

    Would you mind sharing what you can and want to about how the sausage is made? I would love to hear concrete cases where actual leverage is measurable. I'm asking in good faith, not to attack your standpoint.

    consumer451 12 hours

    I would happily do so on a 1:1, private level. See bio for contact.

    MaxHoppersGhost 9 hours

    Sounds like you're about to give the OP a hard sell on a course or some other BS.

    james2doyle 12 hours

    You’re paying the subsidized cost. Those margins will shrink once the real bill comes due. I really think everyone will look back at this time as the golden era of cheap AI. We are already seeing the costs (and restrictions/limits) creep up with the Western models.

    zrn900 8 minutes

    > I really think everyone will look back at this time as the golden era of cheap AI.

    Chinese models like Deepseek v4 are as good and 10 times cheaper. You can even run Deepseek locally. So no, cheap AI won't be over. Just the US investors won't be able to profit off of the artificial bubble that is there now but won't be in the future.

    consumer451 12 hours

    > You’re paying the subsidized cost.

    100% agree. I have been trying to tell everyone to build their ideas, and exploit this environment where 100B of VC money into OpenAI/Anthropic = some percentage of money invested into your idea. This is the golden era of building! The music is gonna stop soon. Build now ffs!

    HDThoreaun 9 hours

    Compute has been getting exponentially cheaper nonstop for decades. Much more likely that current capabilities are effectively free within 5-10 years

    Spacemolte 7 hours

    It amazes me how productive it's possible to be using AI, but I also have this nagging feeling that we are being reeled into being so reliant on this that when the price starts going up, we will simply eat the cost.

    The math is pretty simple, and it's easy to justify still paying the price even if it goes up 10-fold; compared to hiring more people, it's still cheap.

    So I guess having multiple players and competition in the market is the key?

    Petersipoi 12 hours

    I think the opposite. AI will get cheaper as models become more efficient and we solve the datacenter/energy problem. I bet 10 years from now AI, that is way better than what we have today, will be close to free.

    jayd16 10 hours

    Just like how cloud costs got cheaper and we solved the datacenter/energy problem over the past 10 years.

    Petersipoi 7 hours

    For the most part, we did, actually. We had plenty of energy and compute until AI came along.

    Energy will get fully solved eventually. To think otherwise is to bet against humanity's ability to innovate, which I don't think is ever a wise bet.

    simianwords 4 hours

    Cloud did get cheaper. What are you saying?

    I just ran a quick gpt check - EC2 prices have gone down by more than 80% after accounting for performance and inflation over the last 20 years.
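
    For what it's worth, that claim can be turned into an annualized rate with a quick back-of-the-envelope sketch (the 80% / 20-year figures are the commenter's, not verified data):

```python
# An 80% effective price drop over 20 years implies roughly a
# 7.7% compound decline per year: remaining ** (1 / years) - 1.
years = 20
remaining = 0.20  # 20% of the original effective price remains
annual_rate = remaining ** (1 / years) - 1
print(f"{annual_rate:.1%}")  # -7.7%
```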

    aurareturn 5 hours

    You can practically host a website that serves millions of users a day for nearly free using Cloudflare. Imagine doing that in the year 2000.

    slopinthebag 14 hours

    Why do AI boosters like yourself all have the same writing style? Was the comment AI generated?

    It's like insane hype marketing speak. "insanely agile products delivered" like huh?

    xeromal 13 hours

    I'll trust someone who has an account since 2018 vs 71 days ago. Especially when your name already indicates you're biased.

    slopinthebag 13 hours

    Wym "trust"? What is there to "trust" with my comment? Huh?

    quadrifoliate 13 hours

    I've had an account for a while too, and I do think that that GP comment has a style typical of "AI boosters" -- breathless, big on hyperbole, and low on detail.

    To the GP: I'd like some details of these "insanely agile products". Is this insane agility reflected by your customers saying that they have a better, faster, more reliable product? How are you measuring this?

    anon84873628 13 hours

    To me it is more like software consultant speak than AI booster speak. And it is not exactly surprising that the people in a particular subculture all talk similarly.

    slopinthebag 12 hours

    Well, I hear it from people who are regular devs and not consultants, although it's more common with people who aren't really working in the trenches anymore.

    Like ex-developer turned PM who is now vibe coding everything they can and thinks it's the greatest thing ever.

    consumer451 14 hours

    > Why do AI boosters like yourself...

    I believe that I am more of an AI realist. The agentic dev tools are really helping me out, but if I could wave a magic wand to make AI go away for a hundred years, I would do it.

    I really hope that we can all laugh at how wrong I was.

    However, I believe that the horrors will likely outweigh the benefits. Our global society/political systems are not ready for Stasi as a Service, mass unemployment, or any of this impending crap storm.

    keybored 13 hours

    Getting in on the astounding action before the world turns to shit.

    Who could call me a starry-eyed idealist? I have invested in bunkers.

    consumer451 11 hours

    lol. I have been a starry-eyed idealist all of my life. I would like to think that I still am.

    I hate money.

    You know what I hate even more? Being the supposed "smart one," and having to borrow money from my entire family to get through my health issues.

    I will never do that again, hopefully.

    SpicyLemonZest 13 hours

    It's like insane hype marketing speak because that is genuinely the difference from what it was like to develop software 6 months ago. You see many people using the same language, often in comments that are otherwise stylistically quite different, because many people are experiencing the same thing.

    I get that it's tedious to sit on tech forums listening to an endless stream of people insisting that suchandsuch technology is world-changing. Many people and probably most people who say that are wrong. But sometimes the world really does change.

    slopinthebag 12 hours

    It's "world changing" yet the world seems mostly the same other than the increasing enshittification of everything...

    Peritract 12 hours

    > I get that it's tedious to sit on tech forums listening to an endless stream of people insisting that suchandsuch technology is world-changing.

    It's tedious because the insistence doesn't seem to be matched by much observable change.

    SpicyLemonZest 10 hours

    There's substantial observable change pointing towards a universal software development speedup in the neighborhood of 2x. Much of it is internal company metrics, simply because it's meaningless in most enterprise contexts to count how much software is released. Things you can count, like the number of phone apps published, show the same pattern: https://techcrunch.com/2026/04/18/the-app-store-is-booming-a...

    Peritract 10 hours

    I'll grant that there's evidence of more low-level activity, but I'm not sure that equates to meaningful change particularly. "Released an app" is a neutral signal on its own, in much the same way that the Unity asset store led to an increase in game releases, but 'more asset flips' isn't really a major change to the gaming industry.

    If software development speed has doubled, then we should be seeing not just an increase in apps being released, but an increase in product output from the big players too.

  • throwawaytea 15 hours

    If you added up all the major AI valuations, it's apparently worth more than the products Americans constantly buy and rely on in their daily lives. So either AI is going to be involved in every American's life to a large degree, with people paying real money for it, or these valuations are insanely wrong.

    IncreasePosts 15 hours

    I'm not sure exactly what kind of point you are making, but the valuations are at least nominally based on the expected value of the business far into the future and aren't comparable to, say, purchases made over a year, despite both being denominated in dollars.

    Ericson2314 14 hours

    Stocks vs Flows! You can't compare (as in subtract and check sign) $ and $/s!
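
    To make the point concrete, here's a toy sketch (all numbers illustrative, not taken from the thread): a flow in $/yr only becomes comparable to a valuation in $ after discounting it into a present value.

```python
# Toy stocks-vs-flows conversion (illustrative numbers only).
# A valuation is a stock ($); annual profit is a flow ($/yr).
annual_profit = 1.0e9  # $/yr, e.g. an airline's yearly profit
discount_rate = 0.08   # assumed required rate of return
growth = 0.0           # assume flat profits forever

# Perpetuity formula (Gordon growth with g = 0): PV = flow / (r - g)
present_value = annual_profit / (discount_rate - growth)
print(f"${present_value / 1e9:.1f}B")  # $12.5B
```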

    nimchimpsky 14 hours

    [dead]

    notTheLastMan 2 hours

    [dead]

    vovavili 12 hours

    At some point in American history you probably could have said the same about railroads.

    0xDEAFBEAD 1 hour

    "The American Civil War (1861–1865) was followed by a boom in railroad construction. 33,000 miles (53,000 km) of track were laid across the country between 1868 and 1873,[28] with much of the craze in railroad investment being driven by government land grants and subsidies to the railroads.[29] The railroad industry was the largest employer outside of agriculture and involved large amounts of money and risk. A large infusion of cash from speculators caused spectacular growth in the industry and in the construction of docks, factories, and ancillary facilities. Most capital was involved in projects offering no immediate or early returns.[30]"

    https://en.wikipedia.org/wiki/Panic_of_1873#Factors

    "In the United States, the panic was known as the "Great Depression" until the events of 1929 and the early 1930s set a new standard.[2]"

    UltraSane 14 hours

    The valuations on AI companies are a bet on them capturing enough of the $60 trillion annual wages paid to people to have a good ROI.

    svieira 48 minutes

    And when these AI companies are slurping up $30 trillion of the annual wages ... where does it re-enter the human economy? Or does it just disappear into NVIDIA and never come out again?

    zmmmmm 14 hours

    there are plenty of people who basically believe this is the end of the human economy - there will be nothing left that isn't done by AI in the future. Even the bits left that humans do will be human facades on AI driven activity (like your hairdresser will be viewing you through AI powered glasses using AI powered scissors etc).

    So from that point of view you can indeed look at it as the entire value of the economy should be invested into AI companies.

    lionkor 4 hours

    But let's be real, the AI output today is largely complete and utter shit. Like, really, without careful prompting and oversight, it's not fun to read, it's not smart, it's not accurate, it's not thorough, and it's certainly not yet over the massive show-stopping hurdle that is lying and hallucinating all the time.

    operatingthetan 4 hours

    The more I use them the more I draw this conclusion. Like we have given certain models an unbelievable amount of slack for bad output.

    com2kid 13 hours

    That is ultimately where it is headed and has been headed for over 100 years now.

    The question is when will we get there.

    If the answer is tomorrow, money means nothing and none of these investments matter. If the answer is 30 years, well lots of money to be made up until the inflection point of machines being able to design, build, and repair themselves.

    Capricorn2481 3 hours

    You heard AI powered scissors and thought that's where we're heading? I think you'd have to be totally divorced from the average person to believe this.

    Meanwhile people are still begging car manufacturers to stop locking their glove box behind a touch screen. Or how about a TV that isn't loaded with crappy software that makes it unusable after 2 years. There's a reason we don't put tech in everything.

    duskdozer 1 hour

    Well, fortunately (from a certain point of view), it doesn't really matter what people beg for as long as they need the thing anyway and you and your competitors all agree not to give them what they want.

    JumpCrisscross 15 hours

    > it's apparently worth more than products Americans constantly buy and rely on for their main life

    What are you counting in this category?

    throwawaytea 15 hours

    There are countless examples, but let's say Ford. Worth $150 billion, $50 billion not counting debt.

    My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.

    JumpCrisscross 15 hours

    I guess I’m not surprised that if one “added up all the major AI valuations,” it’s more than any single consumer purchase or even most single companies.

    VirusNewbie 12 hours

    Ford probably made 3k profit on that car. Given the falling costs of inference, what are the chances your neighbor gives anthropic 3k in profit over the next few years? Not terribly bad.

    KingMachiavelli 15 hours

    > My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.

    How much of that $60k does Ford actually keep? And how much will it be once BYD is allowed in the US? The forecast for Ford is pretty much only downwards, while the possible upside on AI is huge.

    If every company in the F500 starts spending $2000+ on AI credits per employee, then every consumer product will indirectly be funding AI companies. I think it's already the case that companies small enough to avoid/skip getting O365 or Google Suite subscriptions will pay for AI first.

    AussieWog93 11 hours

    On the flip side, enterprise.

    How many businesses are paying Ford $10 million per annum?

    Aurornis 13 hours

    > My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.

    AI company revenues aren't driven by consumer subscriptions.

    The people doing $20 or even $200 per month plans for their side projects aren't driving the demand. It's going to be business customers spending $1000/month or more per developer and all of the companies feeding their business processes through the API like call centers, document processing, and everything else.

    If you're thinking of AI companies as consumer plays you're only seeing the tip of the iceberg. We get cheap access to Claude because they want us playing with it so when it comes time for our employers to choose something we can all lobby for Anthropic.

    operatingthetan 13 hours

    >when it comes time for our employers to choose something we can all lobby for Anthropic

    They should stop messing with us then. Stealth model changes, threatening to take code away on the $20 plan, the list goes on.

    nmilo 14 hours

    Valuations are based on future expected earnings, not revenue. It cost Ford a lot of money to make that $60k car. The margins for AI companies are unknown, but the market is pricing in that they’ll be higher at some point. Not that they’ll attract more revenue from the average person.

    dragandj 2 hours

    Now let's think about how much money it costs Anthropic to make that $60k. :)

    ai-x 15 hours

    Did you add Google, Meta, Apple, Amazon in that? Because more people consume from these firms than from Ford.

    ipaddr 15 hours

    His neighbour isn't spending $60,000 on all of those together

    _puk 14 hours

    Count the Fords on the street.

    Now count the Amazon deliveries in a year on said same street. And next year, and the year after, and.. however long one keeps a Ford these days..

    It's quite a scary thought exercise.

    ipaddr 5 hours

    The average person spends $2,800 a year with Prime or $1,100 without. 75% of Amazon shoppers have Prime, so about $2,400 a year blended. Amazon collects 35% on each sale where they ship and package for you.

    Amazon makes about $800 off of each person in revenue.

    Ford makes $303 per person in revenue.

    AWS makes the same.

    AI spend across all platforms is about $450 per person.

    Their costs to produce aren't equal.
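
    A quick check of that blended arithmetic (inputs are the rough estimates above, not audited figures) - the blend comes out near $2,400/year, and the 35% cut lands close to the $800 figure:

```python
# Blended annual Amazon spend per shopper, from the rough
# estimates above: 75% of shoppers on Prime.
prime_spend, non_prime_spend = 2800, 1100  # $/yr per shopper
prime_share = 0.75

blended = prime_share * prime_spend + (1 - prime_share) * non_prime_spend
take = blended * 0.35  # Amazon's ~35% cut on fulfilled sales
print(round(blended))  # 2375
print(round(take))     # 831
```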

    dzhiurgis 14 hours

    At 20-year depreciation it’s $250 a month. Close to Anthropic’s $200 plan. IMHO at this point a lot of developers would rather walk than code manually.
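
    The depreciation figure checks out under straight-line depreciation with no residual value (an assumption the comment doesn't state):

```python
# $60k car, straight-line depreciation over 20 years, in $/month.
car_price = 60_000
months = 20 * 12
per_month = car_price / months
print(per_month)  # 250.0
```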

    greenchair 1 hours

    nope

    root_axis 14 hours

    Yeah, but $200 a month is not a sustainable price.

    dzhiurgis 14 hours

    Seems they are growing and model is overloaded. I suspect they’ll raise the prices.

    $1k for a lot of developers here is totally worth it.

    com2kid 13 hours

    Cable TV begs to differ. I grew up working poor and plenty of people around me dumped a lot of money into cable TV subscriptions, and $120 back in the late 90s is $240 now.

    Computer costs keep collapsing. Image and audio generation turned out to be less compute-intensive than text (lol).

    First company to launch 24/7 customized streaming AI slop wins!

    throwawaytea 11 hours

    I think the poster was saying giving away the models for $200 isn't sustainable for the provider, not that a user won't pay $200 for the latest and greatest models.

  • ordinaryradical 15 hours

    It feels like the market has gone full Wile E. Coyote on frontier model makers, and I like Anthropic's B2B business model.

    But all progress points to a commodification of foundation models--Google first named it as "we have no moat, neither does anyone else." So there must be some secondary play driving this, right? Hardware sales? Hedging for search ad revenue?

    Still feels mispriced. I think asset inflation leaves too much money desperate for the Next Big Thing.

    vickychijwani 6 hours

    The “no moat” comment is from May 2023, very early in the LLM era. Agents were not a thing yet, it was all just text generation.

    The integration of LLMs with tools and data via agent harnesses has created the opportunity for a real moat. As these products start differentiating, the moats will develop to be significant.

    iamdelirium 15 hours

    "we have no moat, neither does anyone else" was just one employee's internal memo, not an official Google position

    onlyrealcuzzo 12 hours

    You don't need a moat if you're selling shovels and everyone's digging holes.

    kranke155 14 hours

    "We have no moat" could be a bad assessment. First, the models have personalities, and that matters. I like talking to Claude better. OpenAI is really different from Grok. The AI models are an extension of the main concerns of the company they belong to.

    Also those personalities, quirks, and choices accumulate. A lot of people talk about using Claude Code and Codex for different things. This is 100% my experience. Some labs make better models, but among the top 3 there are often differences that are fixed only by switching between them. If I feel the need to switch between them, then there are significant enough differences, and those differences will accumulate.

    fireflash38 1 hours

    I don't get why people are so gung-ho about these companies having a moat.

    As a user and a consumer, I don't want them to have a moat. Moat means pseudo-monopoly. That is the exact opposite of what we want.

    Only the investors and owners want a moat, to keep others out.

    So what are they doing? They're competing. Good.

    zahlman 20 minutes

    > I don't get why people are so gung-ho about these companies having a moat.

    Because they are investors, VCs, or startup founders who hope to establish their own moats.

    Users and consumers can get a lot of useful information from HN, but it's important to keep the local demographics in mind.

    mhitza 15 hours

    I haven't thought about any secondary play, but if these companies converge on Google's TPUs, they would probably carve an eager slice out of NVIDIA's current market.

    > In September 2025, Google is in talks with several "neoclouds," including Crusoe and CoreWeave, about deploying TPU in their datacenter. In November 2025, Meta is in talks with Google to deploy TPUs in its AI datacenters.

    https://en.wikipedia.org/wiki/Tensor_Processing_Unit

    dzhiurgis 14 hours

    I keep getting notifications from my tooling that Gemini models are overloaded, "so we switched you to OpenAI." So I feel Google is not ready to sell TPUs just yet.

    UltraSane 14 hours

    YouTube is a kind of moat for Google.

    gverrilla 14 hours

    Interesting. Wanna expand?

    UltraSane 14 hours

    It is the biggest collection of video to train LLMs on.

    zaphar 15 hours

    Google does have a sort of temporary moat. They have a much better hardware supply line story than anyone else and the revenue to maintain that edge indefinitely.

    htx80nerd 13 hours

    This is the thing - Google is a real company with a well-established business, money of their own, hardware, server farms, etc. ChatGPT and Anthropic have none of that in the way Google does. They have an incentive to lie and 'fake it till you make it' so they can get out of the 'risk zone' of collapsing back in on themselves. Google can throw money at Gemini all day.

    flockonus 13 hours

    That may be true for OpenAI, less so for Anthropic, which has much better margins. Both companies' CEOs have said as much in public.

    No doubt Google currently has a better business. But the same argument could have been made about Instagram or WhatsApp before Facebook (now Meta) acquired them.

    nostromo 15 hours

    Running AI at a loss long enough to kill the competition would run afoul of antitrust laws. Even more so since they’re bundling their AI products with their search monopoly.

    Although I doubt this will stop them if they think it’s advantageous…

    randito 14 hours

    > antitrust laws. Even more so since they’re bundling their AI products with their search monopoly.

    Couldn't this just be framed/spun as using search data for training? I don't see it as bundled enough to run afoul of antitrust.

    Sohcahtoa82 14 hours

    > Running AI at a loss long enough to kill the competition would run afoul of antitrust laws.

    Running at a loss long enough to kill the competition is basically the name of the game these days.

    When Uber started, they were basically setting VC money on fire by selling rides at a loss to destroy the taxi market.

    akozak 15 hours

    Lower real operating costs isn't the same thing as below cost pricing.

    US law here is nuanced. Good quick primer https://www.ftc.gov/advice-guidance/competition-guidance/gui...

    nyc_data_geek1 14 hours

    Who's going to enforce antitrust laws in this environment, pray tell?

    klabb3 15 hours

    > run afoul of antitrust laws

    Now, that’s a name I haven’t heard in a long time.

    pixl97 15 hours

    >would run afoul of antitrust laws

    Buwahahahahahahahhahah

    They drop a little cash on some shitcoin the president controls and those problems go away.

    Bewelge 15 hours

    I thought that these types of antitrust laws are in no way enforced anymore in the tech industry, and that it's been that way for decades. I mean, the sheer existence of Google shows that, right? What about Maps, Mail, Books... basically everything apart from Search? Why would an AI Mode as one category of Search results be any different? They're not actively promoting Gemini in those search results. They're simply augmenting it with this new tool that exists now.

    er2d 14 hours

    Yes, antitrust is very much theatre nowadays.

    As long as it furthers American interests globally, monopoly is fine. Other countries need to take notice and start picking national winners in order to compete with the large American tech firms.

    SJC_Hacker 14 hours

    TSMC ?

    Airbus ?

    er2d 14 hours

    Are you claiming they are tech firms in the manner of an Apple, Google, etc.?

    lol

    Bewelge 14 hours

    Eh, I think this is actually not a specifically American thing. More of a neo-liberal mindset. Competition may be good in the long term. But a monopoly now may mean more money in your pocket now. The tech giants definitely give the US some geo-political power in some cases but in general the US would be better off with more competition.

    ed: @er2d, can't reply to your comment for some reason, so doing it here: I don't agree. In theory a monopoly decreases the necessity for R&D. Of course this becomes more complex if the R&D is funded or steered by the state. But look at the current state of LLMs. There is fierce competition between 3 US companies, but geopolitically it's the same as if there were one monopoly. The US being the clear technological leader in an industry is not dependent on that industry being a domestic monopoly.

    And for the Europe comment: Also don't agree. Look at Boeing & Airbus. Both are companies where the US & EU have decided that they need to ensure the existence of a domestic airplane manufacturer. So in these cases they support these companies (often in violation of international trade laws). But it has nothing to do with monopolies. If a state decides to support a company to ensure its existence, a monopoly is the logical consequence and not the aim. Because if that industry were profitable, it wouldn't need to be supported in the first place.

    But all these tech companies are not in industries that would move off-shore or stop existing because they're not profitable enough, so it's an entirely different setting.

    er2d 14 hours

    Nope, the reason for a monopoly is the incentive for R&D and innovation.

    The US understands that and allows it to happen as the former yields a compounding effect of power.

    European states certainly don't get this.

    JumpCrisscross 15 hours

    If AI is commoditising, who is Bahrain and who are the Saudis?

    Urahandystar 15 hours

    The app layer is Bahrain.

    catlover76 15 hours

    [dead]

    mikelitoris 15 hours

    What does that mean?

    mh- 15 hours

    I believe they were drawing a parallel to oil commoditization, but that's as far as I got.

    JumpCrisscross 15 hours

    > What does that mean?

    I really couldn't have been more obscure, could I? :P

    In 1932, "the first oil field in the Persian Gulf outside of Iran" was discovered in Bahrain [1]. (The same year Saudi Arabia announced unification [2].)

    In the end, Saudi Arabia had larger reserves and wound up geopolitically dominating its first-moving rival. In commodities, the game tends to be about scale, in part through land grabbing, less about who got there first.

    To close the analogy, if AI does wind up commoditised, the layers at which that commodity is held are probably between power and compute [3]. So if AI commoditises (commodifies?), Google selling compute (and indirectly power) to Anthropic and OpenAI is the smarter play than trying to advantage Gemini. (If AI doesn't commoditise, the opposite may be true–Google is supercharging a competitor.)

    [1] https://en.wikipedia.org/wiki/Bahrain_Petroleum_Company

    [2] https://en.wikipedia.org/wiki/Proclamation_of_the_Kingdom_of...

    [3] The alternate hypothesis is it's at distribution.

    dmix 9 hours

    Plus the whole thing of first mover advantage being a myth, especially in the tech industry

    JumpCrisscross 8 hours

    > Plus the whole thing of first mover advantage being a myth, especially in the tech industry

    Source? That would be surprising!

    nostromo 15 hours

    The company with access to cheap and plentiful energy and the real estate to build data centers will be Saudi Arabia in your analogy.

    This is why SpaceX could be a dark horse in this race. Putting compute in space is expensive but so is building a data center in the US.

    redanddead 13 hours

    Putting it somewhere globally central makes a lot of sense, just as with hub airports

    Saudi will host the biggest data centers in the world

    bpye 14 hours

    > Putting compute in space is expensive but so is building a data center in the US.

    You know what's also really hard in a vacuum? Dissipating heat.

    JumpCrisscross 14 hours

    > You know what's also really hard in a vacuum? Dissipating heat

    Correct. The economics of space-based DCs come down to permitting delays versus radiator mass.

    At ISS-weight radiators (12 to 15 W/kg (EDIT: kg/kW)), you need almost decade-long delays on the ground (or 10+ percent interest rates) to make lifting worthwhile. Get down to the current state of the art in the 5 to 10 W/kg (EDIT: kg/kW) range, however, and you only need permitting delays of 2 to 3 years.

    If there is a game-changing start-up waiting to be built, it's in someone commercialising a better vacuum-rated radiator.
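    To put those specific-mass figures in scale, here is a back-of-envelope sketch. It assumes only the kg/kW numbers quoted above; real designs depend on operating temperature, orbit, and much else:

```python
# Radiator mass needed to reject a datacenter's waste heat in vacuum.

def radiator_mass_tonnes(power_kw, kg_per_kw):
    """Radiator mass (tonnes) to reject `power_kw` of heat at a given specific mass."""
    return power_kw * kg_per_kw / 1000.0

# A 1 MW compute load:
iss_class = [radiator_mass_tonnes(1000, m) for m in (12, 15)]  # ISS-class: 12-15 t
modern = [radiator_mass_tonnes(1000, m) for m in (5, 10)]      # state of the art: 5-10 t
print(iss_class, modern)
```

    Halving the kilograms per kilowatt halves the launch mass for the same heat load, which is why a better vacuum-rated radiator shifts the break-even so sharply.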

    ambicapter 14 hours

    Would you want more wattage per kg for a better radiator?

    JumpCrisscross 13 hours

    Yes! Thank you–fixed.

  • JumpCrisscross 16 hours

    It’s pretty wild how badly Altman siding with Hegseth has backfired. (And how competently Dario has played his hand.)

    I don’t think that’s the ultimate cause of the turnaround in fortunes. But it strikes me, at least from the investor and potentially urban-consumer perspectives, as a pivotal moment in both companies’ fortunes.

    keeda 9 hours

    Actually I have the opposite take. This is largely a play to procure compute capacity (and I suspect, distribution via Google Cloud), and I think Dario wildly underestimated the amount of demand they would see.

    I always wondered why Anthropic was not out there feverishly scrambling to procure compute like the other big players. While Altman was being laughed at as a "podcasting bro asking for trillions in investment" Dario was on Dwarkesh expounding on how tricky it is to predict the demand for capacity. Now Dario has to give equity to a competitor to get compute. (OpenAI does this too, of course, but I suspect the terms are much better.)

    At this point, it's pretty clear that compute is the only moat in this business. Even as an outsider, the extreme demand curves and compute crunch were painfully obvious, so this seems like a serious strategic error on Dario's part.

    er2d 14 hours

    "(And how competently Dario has played his hand.)"

    lol, he's barely done anything, but sometimes that is all that's necessary when a bozo opponent is hell-bent on screwing things up. He didn't get fired the first time for no reason.

    JumpCrisscross 12 hours

    > hes barely done anything, but sometimes that is all that's necessary when a bozo opponent is hell-bent on screwing things up

    A former chess instructor told me most games are won not by brilliant maneuvers, but by not screwing up. Repeatedly making the boring play is a winning strategy far more often than any mastermind play.

    tomrod 16 hours

    It was enough for me to dig much deeper into OpenAI, whereas before we had used them almost exclusively for services with any form of SLA.

    ordinaryradical 15 hours

    You're saying it was a turning point for you to get more embedded with them? Way to be killer robot positive, I guess...

    tomrod 14 hours

    Good call out because I was a little unclear.

    Opposite of what you said. The "dig" was not retrenching to more use, but rather I evaluated what I saw them doing and have migrated our company to much better options.

    danielbln 16 hours

    Alphabet makes $30 billion profit per quarter.

    JumpCrisscross 16 hours

    > Alphabet makes $30 billion profit per quarter

    Sure. Neither OpenAI nor Anthropic does. Amazon and Google have followed institutional investors bidding up Anthropic over OpenAI in private markets, all of which—I suspect—followed user-pattern shifts after the fiasco. (Well, fiascos. Altman is a host unto himself.)

    sevenzero 16 hours

    Which means they can allow themselves to blast money left and right? It's still a big investment.

    kubb 16 hours

    they can't allow themselves NOT to blast money left and right

    RobRivera 16 hours

    Yes

    luke5441 15 hours

    No, they have a fiduciary duty to shareholders to not make obviously bad investments.

    karmasimida 15 hours

    What backfired?

    Anthropic's recent rise has little to nothing to do with retail subscribers; it is Claude Code with Opus 4.5+, followed by their Mythos stunt.

    I would say the flood of $20 Claude subscribers from the news cycle backfired on them: now everyone is getting worse outputs, and it exposed their compute shortage, which they can't fix anytime soon.

    Pretty much everyone I know has both CC and Codex now, just because of how unreliable CC has become.

    minimaxir 15 hours

    I use both CC and Codex because one is not enough and 5x for $100 is too much.

    enraged_camel 14 hours

    >> followed by their Mythos stunt

    "Stunt", eh?

    JumpCrisscross 15 hours

    > would say the flood of 20+ Claude Subscribers due to news cycle backfired

    This is a good hypothesis. I suspect we are both correct.

    The PR boost from Anthropic standing its ground drove signups. That, in turn, drove investors. But the users also drove utilization, which degraded quality across the board.

    My hypothesis rests on Anthropic’s user mix having significantly shifted to consumers (versus enterprise) after the mix-up. Whenever we get public numbers it would be interesting to test that.

    afavour 15 hours

    > What backfired?

    I think it was psychological to a degree. For many consumers OpenAI, or at least ChatGPT, was AI. The controversy was enough for folks to be introduced to competitors in the AI space, and suddenly OpenAI's success felt a lot less inevitable.

    I agree with OP though that this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point.

    karmasimida 15 hours

    > introduced to competitors in the AI space and suddenly OpenAI's success felt a lot less inevitable.

    This is true. OpenAI WAS the story of AI; now it is just 50% of it, at most. Losing the monopoly on imagination around AGI is bad for them.

    One thing I don't agree with, though: consumers aren't the important part of AI, they are a liability.

    AI is too expensive, consumers can't pay for it. Instead they will compete with enterprise for the same tokens, with less money.

    JumpCrisscross 15 hours

    > controversy was enough for folks to be introduced to competitors

    This is my suspicion. Consumers hadn’t previously heard of Anthropic and Claude. Now they had, particularly in cities.

    > this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point

    Also agree. Hence why I said “I don’t think” the fight is “the ultimate cause.”

    pixl97 15 hours

    Anecdotally a whole lot more people around me started using Anthropic models in the last few weeks and seem to like them more than OpenAI. For many of these people it was the second provider they ever used.

    Of course this is part of what has led to the insane demand and outages they've experienced since then.

    infecto 15 hours

    Is the simpler explanation that Alpha was already an investor and Anthropic has been making strides in their business model?

    JumpCrisscross 15 hours

    > Is the simpler explanation that Alpha was already an investor

    Individually, yes. Anthropic surging in private markets the weekend after the supply-chain risk designation, and raising from not only Google but also Amazon in such short order (following credible reports of it turning down $800+ billion valuation cheques from financial investors), all while OpenAI gets pilloried in the press and struggles to hold its $800bn valuation in private markets, collectively—to me—paints a bigger picture.

    infecto 15 hours

    Please share how OpenAI is struggling in the private markets.

    JumpCrisscross 15 hours

    There is more supply than demand at prices flat to OpenAI's recent raise. That's simply not the case for Anthropic, at its last raise or at comparable valuations.

    infecto 15 hours

    Citation? Were you working on the deal?

    JumpCrisscross 15 hours

    Can’t speak to citations, unfortunately, but if you have a banker or broker with secondary flow right now, ask them which they can get you more of and at what valuation: OpenAI or Anthropic.

    sourcegrift 16 hours

    [flagged]

    JumpCrisscross 16 hours

    > your TDS

    Wat?

    themafia 16 hours

    Hegseth represents existing military priorities. The original comment presents the issue as if it's isolated to a single administrator.

    I wouldn't call it TDS, but it does reflect a severe political blind spot.

    pjl0 16 hours

    DESPERATELY trying to insinuate that the only possible reason to acknowledge that many people find distasteful the association between OpenAI and the desire for autonomous killbots MUST be that people are being unfair to Trump because of mental illness.

    JumpCrisscross 15 hours

    Oh, Trump Derangement Syndrome. I found a utility in Wisconsin and was really trying to find the connection…

    I guess to address the point, having a problem with Hegseth isn’t the same as having a problem with Trump. And given some of Trump’s administration is embracing e.g. Mythos, it seems unfair to characterize Dario v. Hegseth as anything broader.

    There was a recent moment when OpenAI went from the uncontested darling of consumer and investing America, to being second place to Anthropic. It happened rapidly, and I saw it at least on the investor side in the weekend after the supply-chain risk designation. (Disclosure: that’s also the week I signed up for Claude, in part out of protest, but mostly to see what the fuss was about.) I think there is a lesson for anyone working with startups or in tech from this example—it may be one of the most violent strategic sea changes I’ve seen in a while.

    lovich 15 hours

    He thinks you insulted his daddy somehow and is having a temper tantrum about it.

    Forgeties79 15 hours

    I feel like there should be a rule on any forum that if somebody non-ironically uses “TDS” they should just get permabanned by a bot with no explanation.

    JumpCrisscross 14 hours

    > they should just get permabanned by a bot with no explanation

    I really like HN's system of flagging versus banning. Like, I genuinely mapped TDS to Trump Derangement Syndrome, something I wasn't doing before because I thought it was a joke versus something his supporters thought of seriously.

    Forgeties79 14 hours

    I think we’d all be better off ignorant of it tbh

    lovich 14 hours

    Accusing someone of TDS for anything short of complete subservience to Trump is a thought-terminating phrase they use to protect their fragile egos and to try to DARVO their way out of any legitimate position.

    Forgeties79 13 hours

    I just can’t imagine reacting that way in the name of someone I don’t even know, let alone a politician.

    lovich 11 hours

    Its a cult. They are beyond reason at this point and working entirely off of emotions and now view the entire movement as part of their personality. That leads to feeling like they, personally, are being attacked whenever their leader is criticized.

  • skybrian 13 hours

    Context: a few weeks ago, Anthropic signed a deal to buy "multiple gigawatts of next-generation TPU capacity" from Google and Broadcom [1]. There have been several previous deals, too.

    Some people call this sort of thing a "circular deal", but perhaps a better way to think of it is as a very large-scale version of vendor financing? The simple version of vendor financing is when a vendor gives a retailer time to pay for goods they purchased for resale. This is effectively a loan that's backed by the retailer's ability to resell the goods. There's a possibility that the retailer goes broke and doesn't pay, but the vendor has insight into how well the retailer is doing, so they know if they're a good risk.

    Similarly, Google likely knows quite a lot about Anthropic because Anthropic buys computing services from Google for resale. They're making an equity investment rather than a loan, but the money will be coming back to Google, assuming Anthropic's sales continue to rise as fast as they have been.

    Also, if you own Google stock, some small part of that is an investment in Anthropic?

    [1] https://www.anthropic.com/news/google-broadcom-partnership-c...
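    The vendor-financing trade-off described above can be written as a one-line expected-value sketch. The numbers are made up for illustration:

```python
# Vendor financing as an expected-value bet: earn the margin if the retailer
# repays, lose the extended credit if the retailer goes broke.

def expected_vendor_return(exposure, margin, default_prob):
    """Expected profit for the vendor on one credit cycle."""
    return (1 - default_prob) * margin - default_prob * exposure

# A vendor extends $100 of goods on credit for a $30 margin:
print(expected_vendor_return(100.0, 30.0, 0.05))  # low perceived risk: positive EV
print(expected_vendor_return(100.0, 30.0, 0.50))  # high risk: negative EV
```

    The vendor's visibility into the retailer's business is what lets it estimate `default_prob` well enough to take the bet, which is the point the comment makes about Google and Anthropic.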

    renticulous 13 minutes

    Google already knows Anthropic is a good investment. Google owns the Chrome browser, and it already knows from traffic data how well Anthropic is doing. This is similar to how Mark Zuckerberg came to know Instagram was a good deal.

    petra 3 hours

    Good perspective.

    Let's say Anthropic fails to pay its debt: can Google take those TPUs back and make money from them?

    WarmWash 12 hours

    IIRC Google already outright owns 15% of Anthropic.

    the_killer 9 hours

    it's your time..

    ~ TK

    zrn900 14 minutes

    [dead]

    colechristensen 12 hours

    It could be legit, or it could be thinly veiled accounting fraud continuing the valuation inflation with fake deals that count the same money multiple times.

    Maybe a little bit of both.

    rnxrx 12 hours

    Lots and lots of vendor financing happened during the dotcom era, and it ended up being a material part of those vendors' own difficulties, especially where service providers were concerned (e.g. the huge crash in optical networking).

    Obviously it's not a perfect comparison, but you have to wonder how much of NVIDIA's income (for instance) is ultimately funded by its own money.

    pseudohadamard 10 hours

    It's pretty much vendor financing (although we could argue whether it should be classed as circular investment), with the extra trick being that both sides get to make number go up with it, through stock market valuations and the ability to borrow more money to set fire to so you can show how successful you are.

    etempleton 7 hours

    I think everyone is incentivized to keep the music playing and the party going with AI. Because the alternative is a massive correction like we have rarely seen.

    What if AI is never good or cheap enough to reach significant profitability?

    matt_s 12 hours

    Reciprocal agreements aren't new, sometimes they're used to gain access to a market the other party already has established a foothold in for other industry segments. These companies operate in the same general industry: tech/internet so it could be complementary services they are each after.

    So far both of these companies have shown they suck at support so we know that's not it. It could be that it might help Anthropic to leverage Gemini in their competition with OpenAI and Google will take compute commitments.

    Anecdata: I'm finding a lot of my "type a random question in the URL/search bar" queries get decent top Gemini answers, where I don't scroll to the results unless I need to dive deeper.

    bojan 1 hours

    I agree those results are handy, but I've had several occasions where they turned out to be completely wrong. A 95% correctness rate is not good enough.

    dgb23 18 minutes

    LLMs have a lot of issues with facts, because they are probabilistic and you typically only get one answer per query instead of multiple covering a larger space.

    However, they are still useful in these cases if you know the above and use their output as a starting point to think and ask questions.

    rockskon 9 hours

    Funny how Gemini generally takes into account all the words you type whereas Google search tends to ignore most words you type or otherwise direct you to results for thematically (or grammatically or semantically) similar words to what you searched but otherwise wholly irrelevant.

    Google crippling search to bolster AI is a dangerous game. But without people going to competitors, what's the recourse?

    bostik 5 hours

    They're already crippling their AI to perform what look like sponsored searches.

    The plural of anecdote is not data, but this does not feel like a one-off thing: I was trying to find where I could go for a reasonable holiday, and asked Gemini to list all the international airports in two named countries that had direct flights from my preferred departure airport. The response came back with a single proposed flight destination with "book here" prominently available.

    Only once I told it that the search was NOT impulse-purchase intent and that I really wanted to know the possible destinations did it actually come back with the list of airports that satisfied my search criteria.

    Although if we are looking for the bright side, it did provide a valid and informative answer on the second try. I haven't had that kind of experience on SEO-infested Google search for quite a long time now.

    netcan 5 hours

    So yes, but that doesn't negate the circular investment aspect, for most intents and purposes.

    The risk from this structure mostly has to do with how it affects market cap: companies using the value of their shares to fund demand for their services.

    That's a risk.

    grafmax 1 hours

    The tech industry goes through investment phases to produce oligopolies it turns around and enshittifies, parasitizing income off what it has built. Venture capital, acquisitions, acquihires, circular investments - It’s been incestuous for years. The question is whether competition from China’s sophisticated tech sector, which already surpasses the US in many areas, will put a pin in these plans this time round.

    robjeiter 4 hours

    I feel like the whole market at this point is just AI, since big tech other than Apple is massively invested in it. Everyone owns either the S&P or a total-world ETF, both heavily skewed towards big tech and this trade - so literally everybody is in it. It might go well for a few more quarters/years, but once something breaks or gets exponentially cheaper, this will take down the whole market with it.

    dvfjsdhgfv 1 hours

    > literally everybody

    I personally make sure I really diversify, so that when I buy funds, I buy those with stocks of EU companies which pay dividends. AFAICT there are 0 European AI companies that pay dividends.

    netcan 2 hours

    It's just hard to tell the difference between "real" demand and "circular." That's the concern.

    PG had an essay about this from the dotcom era, when he worked at Yahoo. IIRC... Yahoo's share price and other big successes in the space attracted investment into startups. Startups used that money to advertise on Yahoo. Yahoo bought some of these startups.

    So... a lot of the revenue used to analyze companies for investment was actually a 2nd order side effect of these investments.

    Here the risk is that we have AI investments servicing AI investments for other AI investments.

    Google buys Nvidia chips to sell Anthropic compute. Anthropic sells coding assistance to AI companies (including Google and Nvidia). They buy Anthropic services with investor money that is flowing because of all this hype.

    IMO the general risk factor is trying to get ahead of actual real-world use.

    The AI optimists have a sense that AI produces things that are valuable (like software) at massive scale... that is, output.

    But... even if true, it will take a lot of time, and a lot of software, for the economy to discover this, work through the path dependencies, and actually produce value.

    The most valuable known software has already been written. The stuff that you could do but haven't yet is stuff that hasn't made the cut. Value isn't linear.

    datavirtue 1 hours

    I'm starting to transition how we build software at our company due to the power of AI. No more: five code monkey contractors under a lead. Two top-notch devs are all that is needed now, unrestrained by sprints and mindless ceremonies. There is going to be a giant sucking sound in India.

    I can't continue the current model. The dev that gets AI is done in five hours, the ones that don't are thrashing for the next two weeks. I have to unleash the good AI dev. I have the Product team handing us markdown files now with an overview of the project and all the details and stories built into them. I'm literally transforming how a billion dollar company works right now because of this. I have Codex, Claude and GitHub Copilot enterprise accounts on top of Office 365. Everyone is being trained right now as most devs are behind, even.

    fauigerzigerk 2 hours

    >Companies using the value of their shares to fund demand for their services.

    That's not what's happening here though. Google isn't using the value of its shares to fund demand. Google is using its own cash flow to fund this demand from Anthropic.

    The question is whether Anthropic has demand from end users for the capacity they are buying from Google (that's a yes I guess) and whether that demand is profitable for Anthropic (that's a question mark).

    netcan 2 hours

    True.

    Regardless, (a) its ability/desire to make such investments is still driven by stock-driven optimism and (b) these transactions' "signal" can have a similar, warping effect.

    In this case the transaction creates demand for Google's services and also funds Anthropic's growth... which represents demand for Google's services.

    "Loop" is an approximation of an analogy. The risk is that enough of such transactions create a dynamic that distorts feedbacks.

    fauigerzigerk 1 hours

    >(a) its ability/desire to make such investments is still driven by stock-driven optimism

    I don't think it has much to do with the stock price at all. Current platform oligopolists fear the rise of new platforms. They want a foot in the door for strategic reasons.

    What could happen is that frontier labs like Anthropic and OpenAI never become platforms and turn out to be providers of a largely commoditised, low margin service.

    In that event, current valuations are too high. But Anthropic's valuation doesn't seem extreme to me. Their $30bn annual run rate is valued at $380bn.

    Given this price and Anthropic's strategic value, Google's investment seems reasonable.
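    As a back-of-envelope check on the multiple implied by those figures (both numbers taken as quoted above, not independently verified):

```python
# Implied revenue multiple from the figures quoted in the comment above.
run_rate_bn = 30    # Anthropic annual revenue run rate, $bn (as quoted)
valuation_bn = 380  # reported valuation, $bn (as quoted)

multiple = valuation_bn / run_rate_bn
print(f"Implied revenue multiple: {multiple:.1f}x")  # -> Implied revenue multiple: 12.7x
```

    Roughly 13x run-rate revenue, which is rich for a commodity service but not outlandish for a fast-growing platform, which is exactly the question at issue.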

    mattmanser 42 minutes

    But OpenAI/Anthropic are not selling the compute as they're buying that from Google/Amazon/etc.

    So they're selling the transformation, or the model. Or the ability to make a model. And their brand and their harness.

    And it seems like the model is definitely not worth 380 billion. Models depreciate incredibly fast. There are lots of models and the other models aren't that far behind.

    And it seems like the harness is not worth much as there's already open source alternatives that people claim are better.

    And all these companies are paying lots of money for these AI training experts.

    But I suspect that any regular Hacker News reader with 10 years' dev experience could become a training expert in months if allowed to play with a load of compute and a lot of data for a bit.

    Just like any of us could have become a data scientist, this stuff is not particularly hard. Random horny dudes on the internet are putting out LoRAs and quantized models within days against the open source image models.

    So what's worth 380 billion exactly? The brand?

    These valuations just look really off. Not by one order of magnitude, but more like by 4 orders of magnitude. Like 380 million might be a reasonable valuation, but not billion.

    What I also don't get is that it's pretty obvious to me that the Europeans should all be spinning up their own, not necessarily massive, data centers and throwing a few billion at some guys in Cambridge or Stockholm or London or Berlin to make their own AI models.

    Only the French have done it.

    But instead the rest seem to be trying to court Anthropic or OpenAI to build data centers. Which is just stupid politics given what's happening in the world right now.

    fauigerzigerk 24 minutes

    >So what's worth 380 billion exactly? The brand?

    Whatever it is that leads to a $30bn run rate, growing >200%. Right now it's having the better model and being able to show how to use it in specific verticals.

    But I suspect in the long run only platforms have high margins (and they will need margins not just revenues to justify their valuation). Are they becoming platforms? Google seems to think (or fear) that they might.

    zymhan 12 hours

    To be honest, I think "vendor financing" is still a very risky premise.

    Vendors may be positioned to know how a customer is doing, but they're also incentivized to overestimate how well a customer is going to perform.

    GE Capital (edit: and GMAC) is a great example of how seemingly reasonable vendor financing can cause the lender serious problems.

    skybrian 12 hours

    The risks are different, but there's no getting around that the value of any investment is based on future cash flows and that's speculating about the future.

    To the extent that Google and Anthropic are competing for AI business, Google is somewhat hedged against Anthropic winning market share. They still get data center revenue and they own equity, so that’s a consolation prize.

    On the other hand, it’s increasing Google’s investment in AI, in general.

    cowsandmilk 11 hours

    GE Capital was not just vendor financing and its serious problems were not due to vendor financing. I don’t think it is a great example in any way.

    everly 6 hours

    $40 billion is about a quarter's worth of profits for Google. They make that much every 3 months, so what's the risk?

    throwaway2037 5 hours

    Hat tip. Great point. To quote J Paul Getty: "If you owe the bank $100, that's your problem. If you owe the bank $100 million, that's the bank's problem." In this case, yes, the investment is large, but not bankrupting for Google if it goes wrong.

    Spooky23 8 hours

    GE Capital was a different creature, riding the line of fraud in some ways. They misapplied accounting rules and had to write down or capitalize over $20B for long term care insurance.

    zymhan 6 hours

    That's what brought them down, but that could bring down anyone. My point is that vendor financing turns non-finance companies into finance companies, and brings along a huge can of worms.

    throwaway2037 5 hours

    I don't know the full history of this story, but I honestly wonder if this type of scandal is still possible in the United States. After Enron and WorldCom, the US introduced Sarbanes-Oxley reporting regulations. Additionally, after the Global Financial Crisis of 2008/2009, there was a dramatic increase in regulations for banks (of all kinds) and insurance companies.

    HappMacDonald 4 hours

    .. yet today we have Kalshi, Polymarket, et al.

    lotsofpulp 1 hours

    Those are private gambling businesses akin to a casino, not publicly listed businesses subject to the aforementioned regulations.

    throwaway2037 5 hours

        > To be honest, I think "vendor financing" is still a very risky premise.
    
    Are you aware that all heavy industry in all highly developed nations make extensive use of vendor financing to sell their products? Siemens is a perfect example of a well-run, stable, industrial giant. They offer vendor financing for large purchases. Same for the "heavies" (Mitsubishi, Kawasaki, IHI, Hyundai, Doosan, Hanjin) in Japan and Korea.

    If anyone is interested to learn about the damage that the financialisation of General Electric (USA) brought upon itself, you can ask ChatGPT to tell you the story. It is too long to repeat here.

    Here is a sample prompt that I used to remind myself:

        > I am interested in the history of General Electric and the trouble that their financing units brought in the early to mid 2000s. Can you tell me more?

    paganel 3 hours

    > Are you aware that all heavy industry in all highly developed nations make extensive use of vendor financing to sell their products?

    The OP did mention GE Capital, the motherlode of all heavy industry vendor financing. And of massaging the accounting books in order to increase shareholder value in the short term, also.

    throwaway2037 1 hours

        > motherlode of all heavy industry vendor financing
    
    I doubt they are bigger than other national "heavy industry" champions from East Asia and Western/Central Europe. Without checking, I would guess that the global leaders are Boeing and Airbus.

    HappMacDonald 4 hours

    Are we replacing "Let me google that for you" with "Here is a prompt to feed ChatGPT" now?

    Edit: I am not asking whether ChatGPT is better than Google Search, I am asking after the standard dodge of citing one's sources.

    OJFord 3 hours

    It's a good use case really – it'll tell it differently according to what it knows about your background, if you 'just Google it' you'll get the same maybe-appropriate results as anyone else.

    chiffaa 1 hours

    Very tangentially related comment, but I remember seeing a post on a local Facebook clone with a prompt to throw at Claude to "make a custom YouTube downloader for MacOS", so the general "Here is a prompt to feed an LLM" is somewhat real for some, apparently

    throwaway2037 1 hours

    Fair point/question. For many of my HN responses, I first ask ChatGPT for a bit of information about the topic. For the case of GE Cap's wrecking of parent GE with excessive financialisation, I could only loosely remember the details from the 2000s. It was a long time ago! The prompt that I shared gave a reply that was hundreds of words long. Too much for copy/pasta, and too hard for me to summarise briefly. Instead, I decided to share the prompt. It is not my intention to dodge sources. Plus, the newest versions of ChatGPT are pretty good about sharing sources. (Of course, the quality of sources can be debatable.) In short, it was not my intention to be snarky by sharing my ChatGPT prompt.

    EDIT ---- Also, the OP was so brief about GE Cap, I realised that most readers under 30 (maybe 35) will have almost no knowledge or memory of that economic history. I wanted to offer an "intellectual carrot" (ChatGPT prompt) for anyone wishing to learn more. ----

    What bothered me most about the original post was the person was putting all vendor financing in the same "bad" bucket. I disagree. I would characterise GE Cap as an infamous example! They were the worst of the worst in a generation (25 years). Most vendor financing is very boring and is used to buy big heavy things with very long operational lives. If the buyer goes bankrupt, it is (relatively) easy to repossess the big heavy thing and sell it again (probably with vendor financing again!).

    jona-f 3 hours

    Yes, because Google has been giving crap results since long before ChatGPT was a thing, and it has only gotten worse. Before AI it was "let me google that on reddit for you".

    exoverito 4 hours

    Yes.

    2ndorderthought 2 hours

    Google search has gone way downhill since they nerfed it and then did nothing to prevent the flood of AI-slop SEO websites. So unfortunately, instead of sharing links, everyone now gets sent to the inefficient text generator that hallucinates nonsense and colors its average summary of a topic by whoever trained it and by your most recent chat history.

    datavirtue 1 hours

    I haven't run a Google search in two years. Your comment just made me realize that. Doing a Google search is like trying to watch cable after being on YouTube for years.

    2ndorderthought 23 minutes

    I use different search engines than Google. They have similar issues, but some are better at ignoring the slop.

    I just cannot justify the environmental impact and surveillance of using LLMs for everything. I prefer to summarize recent information myself. LLMs are not particularly good at it.

    Funny thing about the cable analogy. Ever since the streaming providers started cranking up prices while still forcing users to see hundreds of ads, my family has been buying second-hand DVDs. So we have regressed from streaming to right after cable. I know one family that went back to cable; they still watch YouTube here and there, but they got sick of it.

    fc417fc802 11 hours

    In another context I might see it as vendor financing. However given that Google and Anthropic are competitors in this segment and given that Google has previously invested in them I'd rather see this as a sort of bartered stock purchase presumably for the purpose of hedging against failure. If Anthropic wins the race and it turns out to be winner takes all and you happen to own half of Anthropic then you still win half of the immediate spoils even though your internal team lost. If you view losing the race as an existential threat then having all your eggs in the one basket is a terrible proposition.

    svnt 8 hours

    $40B is not anywhere near half of Anthropic at this point. You do get the same access as nvidia, aws, and other investors, which has value.

    windexh8er 10 hours

    I look at this as Google needs a competitor. While Anthropic seems to be the flavor of the quarter OAI looks like such a dumpster fire right now that it's in Google's best interest to help keep Anthropic moving towards winning the #2 spot. I say the #2 spot because it doesn't matter how good this week's LLM is. Until someone else owns the infra and has an actually profitable business model they're all just lighting money and the world around us on fire.

    I actually mentioned to a Google friend the other week that I wouldn't be surprised to see Google tipping the hat towards Anthropic soon so as to put a little more heat on OAI.

    BobbyTables2 9 hours

    I wonder if Google is that much a competitor. Sure, they tried to make an AI of their own.

    But they also have access to an unimaginably large data set plus reach into people’s daily lives.

    Seems more like partners for world domination.

    skybrian 11 hours

    Sure, since Google is both a supplier and a competitor, it’s both vendor finance and hedging. Also, it increases their investment in AI, in general.

    Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?

    thayne 6 hours

    > Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?

    By the time it is a problem, it will be too late.

    bonesss 7 hours

    Are we stopping too early in this analysis though?

    Google versus OpenAI and Anthropic, sure, but Microsoft is deep into OpenAI. Google helping Anthropic is also putting MS into a corner (one that may even be shrinking? Copilot and OpenAI financing hurting their brand, rumours of deep displeasure at OpenAI's promises vs. returns).

    Seen from afar, I see Google happy to provide TPUs for money (improving Google's strategic positioning), torpedoing confidence in LLMs with their search AI summaries, and using their bankroll to force larger competitors (MS in particular) to keep investments high regardless of performance, user revolts, and internal tensions with Sam Altman's sales approach. Plus, Anthropic is in 'the lead' right now product-wise, so grooming them as a potential purchase would also seem to be a strategic hedge in the long term.

    ojosilva 5 hours

    MS is not so deep with OpenAI; it's not all black and white. They have signed several distribution deals where Claude drives Copilot [1], and since Anthropic and MS are better aligned in the enterprise market, it makes sense. It also makes sense for MS not to lose ground anywhere at this point and to play with the best. Actually, any cash-rich company that is not OpenAI or Anthropic wants to be close by when either of the two needs money. That's the ultimate win they can aspire to right now: get a financial slice of frontier models on one hand while not losing revenue on the other, given the existential ordeal AI represents for them.

    1. https://www.microsoft.com/en-us/microsoft-365/blog/2026/03/0...

    throwaway2037 5 hours

    You make some good points, but this part feels like a wild overreach:

        > torpedoing confidence in LLMs with their search AI summaries
    
    That is some real tin foil hat thinking.

    bonesss 1 hours

    Straightforward observations of market impact aren’t tin foil :)

    Google didn’t launch LLM products despite being a tech leader, and have gotten piles of bad press for their misleading AI search summaries. They know how and why they suck. Google search is a highly popular and market facing service packaging bad summaries as “AI”. Meanwhile LLM searches threaten to disrupt Googles primary cash cow (advertising around search).

    Here on HN, on Reddit, and media writ large, a lot of the “AI” failure stories are not about ChatGPT hallucinations, it’s the shockingly wrong search summaries from Google, undermining consumer confidence and breaching trust.

    ChatGPT and other LLM providers rarely show conflicting source material side by side with misleading text gen. The number one search provider who leads in some LLM tech does though, routinely, looking incompetent and generating negative “AI” sentiment through repeated failures at mass scale…

    So the theory here is either that the best search org in the world filled with geniuses can’t tell they’re pooping on their own product and profitability and aren’t fixing it because they can’t/won’t… … or <tinfoil mode engaged>… Google already makes money and is happy with substandard product and market performance in the cases where it hurts the necessary hype critical to other businesses but not themselves (while also pre-positioning in case LLM search becomes essential).

    Win/win/win strategy with a substandard product, versus Google not being aware of what their biggest product is doing.

    Google's AI summaries are doing lotsa work to make AI summaries seem terrible. I ascribe profit motives to their actions. Ascribing incompetence seems naive and irreconcilable with their strategic corporate history.

    lukan 9 hours

    How can there be a "winner takes it all" situation with AI?

    OpenAI led the game while they were best. Anthropic followed and got better. Now OpenAI is catching up again, and also Google with Gemini(?)... and the open weight models are 2 years behind.

    Any win here seems only temporary, even if a new breakthrough to a strong AI happens somehow.

    calebkaiser 8 hours

    2 years? 2 years ago, gpt-4o was OpenAI's flagship model. The gap is real, but much smaller than 2 years.

    nine_k 9 hours

    Look at the "winner takes all" situation in web search. Of course other search engines exist, but the scale of the Google search operation allows it to do things that are uneconomical for smaller players.

    teaearlgraycold 9 hours

    Not even 2 years behind.

    jedberg 8 hours

    The first to AGI, or a close approximation, is the winner. That’s what the investors in Anthropic and OpenAI are betting on.

    I’d be willing to bet that the Venn diagram of investors in those two companies is nearly a circle.

    svnt 8 hours

    This depends on a fantasy cascade of functional consequences of AGI, whatever that acronym even means anymore.

    It is just cargo cult financing at this point.

    lukan 8 hours

    "The first to AGI, or a close approximation, is the winner. "

    But why? Assuming there is a secret undiscovered algorithm to make AGI from a neural network... then what happens if someone leaks it, or China steals it and releases it openly tomorrow?

    modriano 5 hours

    So, what will AGI be able to do that will make that bet pay off? Human-like intelligence is already very common. Vastly better than human intelligence seems like it would be worth the expense, but I don't know where we'd get suitable training data.

    HappMacDonald 3 hours

    The bet is that they perfect a new kind of neural network which is roughly as good at "training" as the human mind is as far as "amount learned/experience gained per bit of information input".

    Current LLMs are absolutely stupidly inefficient on this front, requiring virtually all human knowledge to train on as a prerequisite to early-college-level understanding of any one subject (granted, almost all subjects at that point, but what it has in breadth it lacks in depth).

    That way instead of training millions of TPUs on petabytes of data just to get a model that maintains an encyclopedia of knowledge with a twelve-year-old's capacity for judgment, that same training set and compute could (they hope) instead far exceed the depth of judgement, planning, and vision of any human who has ever lived (ideally while keeping the same depth, speed of inference, etc).

    It's one of those situations where we have reason to believe that "exactly matching" human intelligence is basically impossible: the target range is too exponentially large. You either fall short (and it's honestly odd that LLMs were able to exceed animal intelligence/judgment while still falling short of average adult humans.. even that should have been too small of a target) or you blow past it completely into something that both humans and teams of humans could never compete directly against.

    Chess and Go are fine examples here: algorithms spent very short periods of time "at a level where they could compete reasonably well against" human grand masters. It was decades falling short, followed by quite suddenly leaving humans completely in the dust with no delusions of ever catching up.

    That is what the large players hope to get with AGI as well (and/or failing that, using AI as a smoke screen to bilk investors and the public, cover up their misdeeds, play cup and ball games with accountability, etc)

    devmor 8 hours

    Are these investors high? Or just insane?

    blippage 2 hours

    Finance professor Aswath Damodaran, and others, have made many useful insights as to how AI as an investment is likely to pay out.

    One technique is, instead of trying to pick individual winners, to look at the total addressable market, then compare the market size with the capital being pumped in. On this basis, Damodaran concluded that collectively AI investment is likely to provide unsatisfactory returns.
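    A minimal sketch of that top-down comparison, using hypothetical placeholder numbers (not Damodaran's actual figures):

```python
# Top-down check: does aggregate AI profit justify aggregate AI capital?
# All figures below are hypothetical placeholders for illustration.
capital_invested_bn = 1000   # total capital pumped into AI, $bn (hypothetical)
tam_revenue_bn = 300         # eventual annual AI revenue, $bn (hypothetical)
operating_margin = 0.20      # assumed industry-wide margin (hypothetical)
hurdle_rate = 0.10           # return investors might demand (hypothetical)

annual_profit_bn = tam_revenue_bn * operating_margin
implied_return = annual_profit_bn / capital_invested_bn
print(f"Implied return on capital: {implied_return:.1%}")  # -> 6.0%
print("unsatisfactory" if implied_return < hurdle_rate else "satisfactory")
```

    With these placeholder inputs the sector collectively earns 6% on a trillion dollars of capital, below the hurdle rate. That is the shape of the argument, even though the real inputs are contested.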

    Here's a recent headline: "Nvidia’s Jensen Huang thinks $1 trillion won’t be enough to meet AI demand—and he’s paying engineers in AI tokens worth half their salary to prove it"

    There are two parts to this. First, a staggering $1t is expected to be invested in AI. Someone worked out that this was more than the entire capital expenditure of companies like Apple, over their entire existence. IOW, $1t is a lot of dough. A LOT.

    Secondly, this whole notion that AI is such a sure thing that half the salary will be in tokens should ring alarm bells. '“I could totally imagine in the future every single engineer in our company will need an annual token budget,” he said. “They’re going to make a few 100,000 a year as their base pay. I’m going to give them probably half of that on top of it as tokens so that they could be amplified 10 times.”'

    I recall from the dotcom fiasco that service companies like accountants and lawyers were providing services to the dotcom companies and being compensated in stock options rather than cold hard cash like you'd normally expect.

    Very dangerous.

    As another poster pointed out, this really boils down to FOMO by big tech. I'm expecting big trouble down the line. We'll wait and see whether I'm early or just plain wrong.

    saintfire 8 hours

    It's just market euphoria.

    fc417fc802 8 hours

    Neither. It's the most severe FOMO in history. The best case scenario is equivalent to attempting to pick future winners just prior to the industrial revolution really kicking off. Except this time around the technological timelines appear to be severely compressed and everyone is fully aware of what's at stake. And again, that's the best case scenario.

    ngruhn 9 hours

    I guess if you build the first AI that can autonomously self improve, then nobody can catch up anymore.

    lukan 8 hours

    But what if a second AI that can self-improve comes up?

    Then it all remains a question of who has the most compute power, as self-improvement seems compute-heavy with the current approach.

    techpression 7 hours

    If that happens catching up will be meaningless, everything we know and care about will change. You don’t have to be doomsday about it even, a self improving AI will quickly be more efficient than a human brain, all the data centers will be useless, tech companies will collapse (so will most others), everyone will have an incredible AI resource for the price of a hotdog. There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.

    fc417fc802 6 hours

    > There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.

    It seems pretty wild to bet the future on such an assumption. What are you even basing it on?

    ngruhn 4 hours

    Because any goal can be better achieved if you're under fewer constraints. We're building super powerful agentic problem solving machines. Give them literally any complex goal. Breaking out of the sandbox is a useful subtask to increase their options.

    hattmall 8 hours

    That seems really paradoxical, and I think it would just burn up compute. The AI really doesn't have any way to know it's getting better without humans telling it. As soon as the AI begins to recursively improve based on its own definition of improvement, model collapse seems unavoidable.

    fc417fc802 8 hours

    If humans are able to judge, and if the AI is more capable than a human in every respect, then why can't the AI be the judge of its own performance? Humans judge their own output all the time.

    darkwater 5 hours

    The difference IMO is that every single human is a slightly different model, not the same one with a different prompt, or weights.

    fc417fc802 4 hours

    I'm not sure I buy that competition between individuals is a hard requirement, but let's assume that to be the case for now. Then how many variants of itself do you suppose an AI could instantiate in parallel, given full control of a gigawatt-class datacenter?

    TeMPOraL 5 hours

    Humans ultimately judge their output by comparison and competition. When we get to the point an AI is capable of participating on the market directly, it'll no longer make sense to proxy judgement through humans anymore.

    fc417fc802 4 hours

    Agreed. But also, comparison and competition between individuals is only one of the ways in which improvement happens. Consider for example that it's also possible to build something for personal consumption and iteratively improve on the design without regard for what anyone else thinks of it. Cooking comes to mind.

    TeMPOraL 4 hours

    Right. But even that is shaped directly or indirectly by the environment you live in. The way you scratch your own itch looks different depending on what itch you have. Plus, humans are social animals; we live in groups and constantly judge each other and try to have others judge us favorably.

    AI has none of that now - it only gets direct human feedback from those controlling the training (or at a second level, the harness), and that feedback is really in service of the humans at the steering wheels. Sum total of humanity, mixed in the blender, and flavored to make the trainers look good in front of their peers.

    Now, if AI could interact directly and propagate that feedback to their training, or otherwise learn on-line, that changes. It's a qualitative jump. The second one is, once there's enough AIs interacting with human economy and society directly, that their influence starts to outweigh ours. At that point, they'll end up evolving their own standards and benchmarks, and then it's us who will be judged by their measure.

    (I.e. if you think we have it bad now, with how we're starting to adapt our writing and coding style to make it easier for LLMs, just wait when next-gen models start participating in the economy, and we'll all be forced by the market forces to learn some weird, emergent token-efficient English/Chinese pidgin that AI-run companies prefer their suppliers to use.)

    conradkay 9 hours

    Recursive self-improvement is one argument. Otherwise winner-takes-all seems much less likely than an OpenAI/Anthropic duopoly. Other providers will obviously have plenty of uses for their models, but even looking at the revenue right now, it's pretty concentrated at the top.

    So if I'm Google I'd want a decent chunk of at least one of them.

    svnt 8 hours

    What is the argument for a duopoly when Kimi and Deepseek models are only months behind?

    It’s a commodity in the making.

    conradkay 8 hours

    They're months behind now and have very low market share, so as long as they stay months behind the duopoly/triopoly can hold.

    lebuin 5 hours

    The argument is based on one of these companies hitting the singularity, making it impossible for any other company to catch up ever. I still think it's way more likely we'll see a typical S-curve where innovation starts to plateau. But even a small chance of it happening in the future is worth a lot of money today.

    jona-f 3 hours

    There's a massive gap in this singularity thinking. We ARE the singularity. It has been exponential all the way back to the Big Bang: first the stars, the solar system, life, consciousness, language, computers, the internet. Yes, it is speeding up, and that is exciting, because we are going to experience a lot in our lifetimes. But we have a lot of exponential growth to go before progress becomes instant. There are physical limits, too (power generation, for example). I can't believe what dumb shit people bet the world economy on.

    fc417fc802 8 hours

    That's certainly how it looks right now but where's the guarantee? What happens if it turns out that deep learning on its own can't achieve AGI but someone figures out a proprietary algorithm that can? That sort of thing. Metaphorically we're a bunch of tribesmen speculating about the future potential outcomes of the space race (ie the impacts, limits, and timeline of ASI).

    zarzavat 6 hours

    Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

    If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models.

    I guess you can sell it to the Department of War.

    TeMPOraL 5 hours

    > Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

    At this point, if you can no longer safely drip-feed industry the access to "thinking as a service" and rake in rent, you start using it, displacing existing players in segment after segment until you kill the entire software industry.

    That's pre-ASI and entirely distinct from the AI itself becoming so good it takes over.

    fc417fc802 6 hours

    If you assume the status quo - a powerful not quite human level AI - then you are most likely correct. However one of the primary winner takes all hypotheticals (and to be sure it remains nothing more than a wild hypothetical at this point) is achieving and managing to control proprietary ASI. Approximately, constructing something that vaguely resembles a god.

    Being unfathomably smarter than the people making use of it you could simply instruct it not to reveal information that would enable a potential competitor to construct an equivalent. No need to worry about competition; you can quite literally take over the world at that point.

    Not that I think it's likely such a system will so easily come to pass, nor that I think humanity could manage to maintain control over such a system for long. But we're talking about investments to hedge against existential tail risks here so "within the realm of plausibility" is sufficient.

    dragonwriter 5 hours

    > What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

    It's awesome and world-dominating; you just don't sell access to that AI. Instead you directly, by yourself, dominate any field where better AI provides a competitive advantage, as soon as you can afford the capital to otherwise operate in that field. You start with the fields where the lowest investment outside of your unmatchable AI provides the highest returns, and plow the growing proceeds into investing in successive fields.

    Obviously, it is even more awesome if you are a gigantic company with enormous cash to throw around to start with when you develop the AI in question, since that lets you get the expanding domination operation going much quicker.

    renticulous 4 minutes

    To dominate the real world, you need a correcting feedback loop from reality. These feedback loops and regulations (in medical and other industries) take a long time to come back with good signals, so you are still time-bound by how fast your experiments can run.

    zarzavat 4 hours

    It's not clear to me that one horse-sized AI allows you to outcompete 100 duck-sized AIs in use by everyone else once you factor in the non-intelligence contributions that the others with weaker AIs bring to the table.

    There's a lot more to building a successful product than how smart your engineers/agents are, how many engineers/agents you have, and capital.

    Google, for example, can be extremely dysfunctional at launching new products despite unimaginably vast resources. They often lack intangible elements to success, such as empathizing with your customers' needs.

    If we were in a world where AI was not already widespread, then I would agree that having strong AI would be an immense competitive advantage. However, in a world where "good enough" AI is increasingly widespread, the competitive advantage of strong AI diminishes as time goes on.

    TeMPOraL 4 hours

    Yup. That doesn't really take a full-blown AGI on the path to ASI on the path to godhood - it'll take a bit better and more reliable LLM with a decent harness.

    That's why I've been saying that the entire software industry is now living on borrowed time. It'll continue at the mercy of SOTA LLM operators, for as long as they prefer to extract rent from everyone for access to "cognition as a service". In the meantime, as the models (and harnesses) get better, the number of fields SOTA model owners could dominate overnight, continues to grow.

    (One possible trigger would be the open models. As long as the gap between SOTA and open stays constant or decreases, there will come a point where SOTA operators are forced to cannibalize the software industry themselves, lest a third party with an open model and access to infra pull the trigger first.)

    fc417fc802 4 hours

    Don't open models and competition between frontier providers both serve as barriers here? If a frontier provider pivoted as you describe it would certainly change the landscape but they wouldn't be unassailable without developing some sort of secret sauce that gave them an extremely large advantage over everyone else. They'd need a sufficient advantage to pull out far ahead of everyone else before others had a chance to react in a meaningful way. Otherwise the competitors that absorbed all your subscriptions would stack that much more hardware and continue to challenge you.

    I think meaningful change to the current equilibrium would require at absolute minimum the proprietary equivalent of the development of the transformer architecture.

    TeMPOraL 4 hours

    > If a frontier provider pivoted as you describe it would certainly change the landscape but they wouldn't be unassailable without developing some sort of secret sauce that gave them an extremely large advantage over everyone else.

    Integration, and mindset. AI, by its general-purpose nature, subsumes software products. Most products today try to integrate AI inside, putting it in a box and using it to supercharge the product - whereas it's becoming obvious even to non-technical users that AI is better on the outside, using the product for you. This gives the SOTA AI companies an advantage over everyone else - they're on the outside, and can assimilate products into their AI ecosystem - like the Borg collective, adding their distinctiveness to their own - and reaping outsized and compounding benefits from deep interoperability between the new capability and everything else the AI could already do.

    Once one SOTA AI company starts this process, the way I see it, it's the end-game for the industry. The only players that can compete with it are the other SOTA AI companies - but this will just be another race, with nearly-equivalent offerings trading spots in benchmarks/userbase every other month - and that race starts with rapidly cannibalizing the entire software industry, as each provider wants to add new capabilities first, for a momentary advantage.

    Once this process starts, I see no way for it to be stopped. Software products will stop being a thing.

    Open models can't compete, because they're always lagging proprietary. What they do, however, is ensure the above happens - because if, for some reason, SOTA AI companies stick to only supplying "digital smarts as a service" for everyone, someone with access to sufficient compute infra is bound to eventually try the end-game strategy with an open model, hoping to get a big payday before SOTA companies respond in kind.

    Either way, the way I see it, software industry as we know it is already living on borrowed time.

    fc417fc802 1 hours

    I don't understand where the unbeatable edge is supposed to come from here. Don't we already have this in the form of agents using tools? Right now it's CLI but it's not difficult to imagine extending that to a GUI coupled with OCR and image recognition in a way that generalizes.

    So suppose ACo attempts to subsume Spotify or Photoshop or whatever. So they ... build their own competing platform internally? That's a lot of work. And now they what, attempt to drive users to it by virtue of it being a first party offering? Okay sure that's just your basic anticompetitive abuse of monopoly I guess. MS got in trouble for that but whatever let's assume that happens.

    So now lots of ACo users are using a Photoshop competitor behind the scenes. I guess they purchased a subscription addon for that? And I guess ACo has the home team advantage here (anticompetitive and illegal ofc) but other than that why can't Photoshop compete? It just seems like business as usual to me. What am I missing?

    If ACo sells widgets and I also sell widgets, assuming I can get attention from consumers and offer a compelling set of features for a competitive price why can't I get customers exactly? ACo's AI will be able to make use of either widget solution just fine assuming ACo doesn't intentionally sabotage me.

    I think the more likely issue is that at some point the cost of building software falls far enough that it ceases to be a viable product category. You just ask an agent for a one off solution and it hands it to you.

    Projecting out even farther, eventually the agents get good enough that you don't need to ask for software tools in the first place. You request X, the agent realizes that it needs a tool for that, builds the one-off tool, uses it, returns X to you, and the ephemeral purpose-built tool gets disposed of as part of the session history. All of this without the end user ever realizing that a tool to do X was authored to begin with.

    So I guess I agree with your end outcome but disagree about the mechanics and consequences of it.

    > Open models can't compete

    They can though. There's a gap, sure, but this isn't black and white. Plenty of open models are quite useful for a particular task right now.

    twoodfin 45 minutes

    One of the most valuable software products in the world is Instagram. Tens of billions of revenue annually.

    Any of Meta’s competitors could reproduce Instagram “the software” in every meaningful detail for (let’s say) $100M.

    They still don’t have Instagram the product. Reducing that outlay to a few billion tokens doesn’t change that.

    I guess I’ll believe this theory when Anthropic or OpenAI rolls out a search engine with an integrated ad platform that can meaningfully compete with Google. How hard can that be?

  • 33MHz-i486 15 hours

    I think the subtext of the last few weeks is that Anthropic was becoming severely capacity constrained (or approaching that). They seem to have had to sign two somewhat adverse contracts with Amazon and Google in short succession. Suddenly model quality is back up again.

    Sol- 14 hours

    Perhaps the adversity of the contracts cancels out with their sudden success and increase in valuation and it ends up a wash compared to the counterfactual scenario where they would have speculated on high growth early on.

    13 hours

    ux266478 11 hours

    They should probably look at moving away from general-purpose hardware for their actual products, and reserve GP hardware for R&D. You don't need frontier nodes to run circles around GPGPUs; an ASIC made on 28nm is more than enough to embarrass an H100 (and much cheaper).

    AI is in such desperate need to adopt software-hardware co-development practices, it's infuriating watching the industry drag its feet about it. We are wasting so much electricity and absolutely wrecking the "free" market just because these companies are incentivized to work at an unsustainable breakneck speed in getting shit to market.

    dakolli 7 hours

    Another LLM user exhibiting the behavior of a gambling addict: "That table is cold this week," "That machine isn't hot, you won't win on it." Every month LLM users come up with reasons why they think the model is underperforming, based on some occulted reasoning only they perceive. Maybe you're just frying your brain by using LLMs the way you do.

    scoot 12 hours

    > suddenly model quality is back up again

    Is that not down to this? https://www.anthropic.com/engineering/april-23-postmortem

    elAhmo 13 hours

    You really think that, for companies of this size, signing a contract would be immediately reflected in you, as an end user, noticing improved model quality?

    data-ottawa 14 hours

    It takes me 6 minutes minimum to get a response in the last 3 days, I don’t think model capacity is better.

    __turbobrew__ 11 hours

    It seems like they shifted heavily to prioritizing enterprise users. Starting in the last day or two I started getting much faster responses on an enterprise plan.

    mrandish 13 hours

    > suddenly model quality is back up again.

    I agree about the core motivation behind these deals, however I'm skeptical as to how "suddenly" we'll see substantial improvements. Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.

    They're already over-subscribed and waiting for new data centers (and power plants) to come online. I suspect Anthropic will get a modest amount of new capacity right away with more added over coming quarters. These two deals don't change the total amount of AI compute available on planet Earth over the next 18 months. Anthropic parting with high-value equity has now made them the new highest bidder for an already over-bid resource. I suspect the net impact will be Amazon & Google pushing prices even higher on everyone else as they reallocate compute to their new top whale.

    HWR_14 13 hours

    > Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.

    I doubt it was idle capacity. But for a chunk of equity in Anthropic I imagine they are willing to deprioritize other, possibly internal, uses. Certainly anything that's not contractually obligated could be on the chopping block.

    dannyw 8 hours

    When people move from other models (e.g. GPT, Gemini, etc.), the compute that was previously powering that inference becomes available. Of course, I'm certainly doubtful that Google would break commitments and give OpenAI's GPUs to Anthropic, but the underlying effect is present and probably sorted out somehow. It's not completely net-new compute for the world.

    Onavo 15 hours

    Well to a certain extent it also blunts competition, Gemini is less of a threat if their main investor is also backing Anthropic. The issue is when the pyramid scheme collapses...

    ValentineC 15 hours

    Both Amazon and Google provide the Claude models via their Kiro and Antigravity IDEs respectively. It could also be investing in their attempt to own the IDE space.

    inquirerGeneral 13 hours

    [dead]

    tiffanyh 15 hours

    That’s what’s needed when you go from $9B in ARR … to $30B in ARR literally just one quarter later.

    That kind of insane growth & demand is unprecedented at that scale.

    https://www.anthropic.com/news/google-broadcom-partnership-c...

    iLoveOncall 15 hours

    Run-rate revenue is not ARR. For all we know they could have a revenue of $100 and claim a run-rate revenue of $30B.

    Given the fact that both Altman and Amodei are pathological liars, there's absolutely no reason to believe that Anthropic has $30B ARR.

    siva7 15 hours

    the fact!?

    applfanboysbgon 14 hours

    I don't follow Anthropic closely enough to know what claims its CEO has made, but it is factual that Altman is a pathological liar. You can observe this for yourself by reading and listening to the things he says and then comparing them to reality. We have years of evidence to look back on. The chasm between Altman's reality and everyone else's is so large and so well-known that it was one of the chief factors cited by the board when he was fired.

    (I would then argue that he was re-hired specifically because others involved with OpenAI understood that it is literally his job to lie and that OpenAI would not be where it is today as a corporate behemoth rather than a research non-profit without a world-class liar marketing it, but that is merely conjecture.)

    kllrnohj 13 hours

    I mean.. kinda everything about Mythos for example? Anthropic has a good product, but they also pretty consistently say some stupid ass shit if you're being generous, and blatant lies if you aren't

    Danox 10 hours

    Stand clear of the blast crater; not everyone in tech bought the con…

    senordevnyc 15 hours

    > For all we know they could have a revenue of $100 and claim a run-rate revenue of $30B.

    Can you explain how that’d work? What would the $30B figure be based on if they only have $100 in revenue?

    DavidSJ 15 hours

    There are about 30 million seconds in a year. If they made $100 over the last hundred milliseconds, then that’s $30B annualized.

    (That said, their numbers are much realer than that.)

    maplethorpe 14 hours

    If you make a hundred dollars in 0.1 seconds, you could say your annualized revenue is $100 / 0.1 * 60 * 60 * 24 * 365 ≈ $31.5 billion.

    That said, most people would use a monthly or quarterly period to estimate ARR. I'm not sure what Anthropic used. Probably monthly.

    13 hours

    15 hours

    nilkn 15 hours

    They're pointing out that run-rate revenue is based on essentially sampling revenue over some limited time interval, then extrapolating from there assuming revenue always occurs at the same rate (or greater) over all similar intervals in the future. More specifically, they're pointing out that estimates of ARR derived from this kind of sampling are fundamentally prone to error and can be arbitrarily inflated based on how the time interval is sampled.

    stingraycharles 9 hours

    Of course, but the fact of the matter is that the same technique was used for the quarter prior to that, and there’s a 3x increase quarter over quarter.

    sisve 14 hours

    As far as I understand, run-rate revenue is just a fancy way of saying "last month we had X in sales, and if that continues for a year we will have an ARR of $30B." Meaning it's not $30B yet, but the sales numbers indicate they'll get there by continuing to sell at the current speed. But to have revenue of $100 and get $30B in ARR, I guess the period looked at needs to be seconds...

    (Run Rate = Revenue in Period / # of Days in Period x 365)
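    As a sketch of that formula (the dollar figures below are made up purely for illustration):

```python
def run_rate(revenue_in_period: float, days_in_period: float) -> float:
    """Annualize revenue observed over a sampling window."""
    return revenue_in_period / days_in_period * 365

# A month at ~$2.5B of sales extrapolates to roughly $30B of "ARR"...
monthly = run_rate(2_500_000_000, 30)   # ~3.04e10

# ...while an absurdly short window annualizes $100 of actual revenue
# (earned over 100 ms) into the same ballpark.
absurd = run_rate(100, 0.1 / 86_400)    # ~3.15e10
```

    The shorter the sampling window, the easier it is to annualize a peak, which is exactly the inflation concern raised upthread.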

    iLoveOncall 12 hours

    Not even that. It's not based on actual sales in, for example, the past month. It's based on an expected continuous growth based on the growth of the past month (or whatever period you pick).

    It's a forecast.

    sisve 5 hours

    I can't say what all companies do, but my Google searches and ChatGPT don't agree with you on that. They stick to actual sales.

    an0malous 14 hours

    What is all this AI doing? People are spending tens to hundreds of billions, and no service or technology seems better or cheaper. Everything is more expensive and worse.

    amelius 15 minutes

    Yes but help is on the way. I have asked my OpenClaw agent to build a new RAM factory.

    _zoltan_ 5 hours

    Claude is great. I'm never going back. There is no way back.

    I'm at least 5x faster, if not more. With tooling I might be able to get to 10-15x.

    psadauskas 14 hours

    I'm spending a ton of tokens because it insists on manually correcting code that fails the linter, despite the instructions in the AGENTS.md to run the linter with autocorrect.

    And also because the Plan agent generates a huge plan, asks me a couple yes/no questions with an obvious answer, and then regenerates the entire plan again. Then the Build agent gets confused anyway and does something else, and I have to round-trip about 5 times with that full context each time.

    trhway 13 hours

    >What is all this AI doing? People are spending 10’s to 100’s of billions and no service or technology seems better or cheaper. Everything is more expensive and worse.

    That "more expensive" is someone's revenue. Maybe AI is the kind of technology that makes it possible to grow revenue by making things more expensive and worse, rather than by making them better and cheaper.

    bgun 14 hours

    You seem to be under the impression that making services better or cheaper _for the consumer_ is the goal of any corporation. The goal is to make their own operations better and cheaper for them. They are laying off employees and adding features of questionable value as a pretext to raise prices. The playbook has not changed, it has only accelerated.

    ravenstine 6 hours

    Exactly. Software quality has become worse, online media has become even more trash than before, and life is otherwise basically the same, lack of jobs notwithstanding. The legitimately useful things regular people can use AI for would be mostly solved by locally run quantized models. This AI "revolution" may be setting several billion on fire without even 1% of that being real value added to the world.

    Coding velocity doesn't matter if the net result is software that sucks massive schlong. The real world doesn't care if programmers can write code faster.

    pizzly 13 hours

    For myself, it's a massive boost when solo developing. Perhaps this is a different use case than most. It can work across multiple programming languages and frameworks that I had zero experience in. I use my existing knowledge of programming to ensure the new code written is correct. It also really excels at translating from one language/framework to another. I can spend time getting it working well on a platform I know, then just ask it to convert to another platform. It gets it 90% right on the first prompt, then it's just a matter of fine-tuning, reviewing, etc. That last 10% is where I supercharge my learning of those languages/frameworks. To learn all the new languages and frameworks would have taken me months before I was productive. Now, with a single prompt, we get 90% of the way there. That is incredible value for us.

    rnxrx 12 hours

    It's not just code generation, either - more and more people in my own org are using Claude Code for infrastructure automation, devops, etc. Obviously some amount of code in there, but an absolute ton of tokens being consumed just dealing with Kubernetes work at scale.

    notTheLastMan 2 hours

    [dead]

    inquirerGeneral 13 hours

    [dead]

    jonlucc 14 hours

    I can say in one role in my job, I'm getting a lot of use and I know my colleagues are at least trying a lot of things. One use is a first-pass review of animal care and use protocols. The Claude project was given all of the relevant policies and guidelines as well as a fairly long prompt that explains the things we look for in protocol review. It's checking some things that the software we use makes very tedious to check and raising inconsistencies between sections. Some places have a full time "protocol reader" who does this kind of first check, but we've never had that, so it's helpful.

    Another project I'm seeing in the same realm is taking an approved protocol and some study results and checking that the records of what was done match what they said they could do in the approved protocol. It can also make sure that surgical records have all the things they should have. This can help meet one of the requirements from the national accreditation organization to do "post approval monitoring".

    Another way I've used it is to have it collate and compare a particular kind of policy across many institutions who transparently put their policies online. Seeing the commonality between the policies and where some excel helped me rewrite our policy.

    This is work that just wasn't happening before or, more accurately, it was being spread over lots of people, and any improvement in efficiency or consistency is hard to measure.

    _puk 14 hours

    I keep seeing this take.

    And yet.. building shit is no longer the sole domain of the software engineer.

    That's the sea change.

    I've literally had finance and GTM stand things up for themselves in the last few weeks. A few tweaks (obviously around security and access), and they are good to go.

    They've gone from wrangling spreadsheets to smooth automated workflows that allow them to work at a higher level in a matter of months.

    That's what all this AI is doing. The shit we could never get the time to get around to doing.

    uncivilized 14 hours

    Mind sharing what industry you’re seeing this in? I’ve never talked to finance or GTM as an engineer. I’m not sure GTM exists in my industry.

    er2d 14 hours

    So... more 'busy work'.

    The only thing that matters is the impact on the financials. The shareholders (the people who employ you) don't care about any of this if it does not enhance value.

    johanneskanybal 12 hours

    It's a great tool, and at 1/10th or 1/100th the cost of actual developers. In the context of YC, I guess watch out for getting re-disrupted by a smaller team, faster than before. But that's really been the trend for the past 40 years, so nothing is new. Well, maybe the velocity, combined with the US losing its footing at the same time.

    But yea it's not gonna make facebook 20% better tomorrow just that you need 5 people instead of 40 to build the next facebook.

    jazzyjackson 7 hours

    And we all know what a positive impact to quality of life building facebook made.

    xtracto 14 hours

    Haven't you seen all the layoffs? I've been subscribed to r/layoffs for 5+ years, and since a couple of months ago, it's been crazy noisy.

    My hypothesis is that companies don't want to offer cheaper or better services. They only want to cut costs and keep the revenue for investors.

    In other news, TQQQ is pretty high!

    adrithmetiqa 14 hours

    Subscribers will not enable these companies to make their money back. The only way is for them to eat the economy itself

    hmaxwell 14 hours

    I'm wondering whether the layoffs are partly targeting people who haven't adapted to using AI tools, particularly those who are openly dismissive of AI-assisted work.

    dieortin 13 hours

    That’s like firing someone because he uses vim instead of VSCode. Who cares about the tools someone uses if he still does his job well?

    duskdozer 2 hours

    Because the job changed out from under them - it's now to use AI as much as possible and generate so much convoluted content that humans have no chance of keeping up the "velocity" without being entirely dependent on it.

    morserer 9 hours

    Because the job itself has now changed, and they haven't. Their output speed might have been eclipsed by that of the engineers who efficiently adopted the new tooling.

    Where I work, the power dynamics have shifted wildly. There are a number of senior engineers who refuse to touch the stuff, and as a result, they can barely keep up with their peers. Some of our juniors are now running laps around them.

    When a stranger to your craft can teach themselves what you know, how to do your job, and even how to automate your tasks in the span of the same workday as you, all while reliably being able to gauge the inaccuracy of the output they're reading, how much longer do you really hold relevance?

    nly 8 hours

    They aren't teaching themselves anything though, are they? They're basically getting AI to do their work for them.

    jazzyjackson 7 hours

    How do you mean "barely keep up" and "running laps around them" ?

    Are the juniors increasing economic productivity or just pushing lines of code?

    </retired from being measured against a random number generator>

    barnabee 13 hours

    Where I work:

    - Development velocity is very noticeably much higher across the board. Quality is not obviously worse, but it's LLM assisted, not vibe coding (except for experiments and internal tools).

    - Things that would have been tactically built with TypeScript are now Rust apps.

    - Things that would have been small Python scripts are full web apps and dashboards.

    - Vibe coding (with Claude Desktop, nobody is using Replit or any of the others) is the new Excel for non tech people.

    - Every time someone has any idea it's accompanied by a multi page "Clauded" memo explaining why it's a great idea and what exactly should be done (about 20% of which is useful).

    - 80% of what were web searches now go to Claude instead (for at least a significant minority of people, could easily be over 50%).

    - Nobody talks about ChatGPT any more. It's Claude or (sometimes) Gemini.

    - My main job isn't writing code but I try to keep Claude Code (both my personal and corpo accounts) and OpenCode (also almost always Claude, via Copilot) busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities.

    We (~20 people) are probably using 2 orders of magnitude more inference than we were at the start of the year and it's consolidated away from cursor, ChatGPT and Claude to just be almost all Claude (plus a little Gemini as that's part of our Google Whateverspace plan and some people like it, mostly for non-engineering tasks).

    No idea if any of this will make things better, exactly, but I think we'd be at a severe competitive disadvantage if we dropped it all and went back how things were.

    ojr 10 hours

    I am an early Gemini daily-driver type of engineer; it feels like Node, Firefox, React, and Tailwind all over again. Claude Sonnet is 10x more expensive. A quick thought experiment: do you think it takes 10 Gemini prompts to match the quality of one Claude Code prompt? The harness around Gemini is an issue, but I built my own (in Rust).

    realusername 2 hours

    Personally, at my place there hasn't been a noticeable velocity change since the adoption of Claude Code. I'd say it's even slightly worse, as now you have junior frontend engineers making nonsense PRs in the backend.

    Main blockers are still product, legal, management... which Claude Code didn't help with.

    jeremyjh 12 hours

    It sounds very similar to my shop. I have QA people and Product Managers using Claude to develop better integration and reporting tools in Python. Business users are vibe coding all kinds of tools shared as Claude Artifacts, the more ambitious ones are building single page app prototypes. We ported one prototype to Next.js and hosted on Vercel in a couple of days and then handed it back to them with a Devcontainer and Claude Code so they can iterate on it themselves; and we also developed all the security infrastructure, scaffolding, agent instructions & policy required to do this for low stakes apps in a responsible way.

    It hardly seems worth it to try to iterate on design when they can just build a completely functional prototype themselves in a few hours. We're building APIs for internal users in preference to UIs, because they can build the UIs themselves and get exactly what they need for their specific use cases and then share it with whoever wants it.

    We replaced an expensive, proprietary vendor product in a couple of weeks.

    I have no delusions about the scale or complexity limits of these projects. They can help with large, complex systems but mostly at the margins: help with impact analysis, production support, test cases, code review. We generate a lot of code too but we're not vibe coding a new system of record and review standards have actually increased because refactoring is so much cheaper.

    The fact is that ordinary businesses have a LOT of unmet demand for low stakes custom software. The ones that lean into this will not develop superpowers but I do think they will out-compete slow adopters and those companies will be forced to catch up in the next few years.

    I develop presentations now by dumping a bunch of context in a folder with a template and telling Claude Cowork what I want (it does much better than the web version because of its Python and shell tools, and it can iterate, render, review, repeat until it's excellent). The copy is quite good, I rewrite less than a third of it, and the style and graphics are so much better than I could do myself in many hours.

    No one likes reading a bunch of vibe coded slop and cultural norms about this are still evolving; but on balance its worth it by far.

    croes 2 hours

    Jevons paradox comes into play.

    https://en.wikipedia.org/wiki/Jevons_paradox

    In the end only profit matters

    mullingitover 13 hours

    > - Development velocity is very noticeably much higher across the board

    It's an absolute tornado of PRs these days. Everyone making the most of these tools is effectively an engineering team lead.

    MrDarcy 11 hours

    The CTO/VP of engineering role down is now singularly focused on keeping agents fed with a backlog of Linear issues. This is the new normal.

    ttul 11 hours

    This sounds like my office, but we're a bit more tilted toward Codex. I personally use Claude Cowork for drudge-admin work, GPT 5.5-Pro for several big research tasks daily, and the LLMs munge on each other's slop all day as I try my best to wrap my head around what has been produced and get it into our document repository -- all the while being conscious that the enormous volume of stuff I'm producing is a bit overwhelming for everyone.

    We are definitely reaching the point where you need an LLM to deal with the onslaught of LLM-generated content, even if the humans are being judicious about editing everything. We're all just cranking on an inhumanly massive amount of output and it's frankly scary.

    JambalayaJimbo 7 hours

Didn't GPT 5.5 just come out lol. Am I just reading slop on this website?

    am17an 8 hours

    Sounds exhausting. Are your revenue numbers up?

    eieie 8 hours

Incremental cash flow is what we should be observing: you have to net out the LLM costs associated with the activity.

That's just one set of costs, but it's a good starting point.

    xnx 2 hours

    Reducing costs is also a business benefit.

    am17an 2 hours

    The cost being reduced is the cost of your labour. Tokens are only getting more expensive.

    camdenreslink 7 hours

    I am also curious about the correlation between more PRs getting merged faster and actual business outcomes.

My impression has always been that it's more important to build the correct thing (what the customer needs/wants) rather than more stuff faster.

    TeMPOraL 5 hours

    > My impression has always been that it's more important to build the correct thing (what the customer needs/wants) rather than more stuff faster.

    The process of learning what the customer needs/wants is a heavily iterative one, often involving throwing prototypes at them or betting at a solution, then course-correcting based on their reaction. Similarly, the process of building the correct thing is almost always an iterative approximation - correctness is something you discover and arrive at after research and prototypes and trying and getting it wrong.

All of that benefits from any of its steps being done faster - but it's up to the org/team whether they translate this speedup into quality or velocity. For example, if AI lets you knock out prototypes and hypothesis-testing scripts much faster, you can choose whether to finish earlier (and start work on the next thing sooner), or do more thorough research, test more hypotheses, and finish on the usual schedule, but with a better result.

    (Well, at least theoretically. If you're under competitive pressure, the usual market dynamics will take the choice away, but that's another topic.)

    mewpmewp2 3 hours

This, with the ability to research and iterate on prototypes, in my opinion lets you determine the right thing quicker as well. Of course, right now the value is largely intuition-based; there may be some immediate revenue/profit, but most financial gains will take time to follow, so for a period it will be "trust me bro" in at least some cases. I suppose the future will show, since the intuition is so strong about it. You can't have good data about an emerging tech like that.

    jwpapi 11 hours

    I think if you drop this all you will absolutely kill it.

    komali2 10 hours

    I'm not sure. I have a buddy that's one of the better engineers I know personally, and he struggled to maintain an "AI Lent" for even a month. He found he just wasn't productive enough without it.

    He did a writeup: https://buduroiu.com/blog/ai-lent-end/

    svieira 1 hours

    > I delivered more work that I was less confident about, making me more miserable in the process

    Don't leave the kicker out of the story

    davidcann 13 hours

    Is your team measuring how much of your code is being written with claude and comparing amongst the team, like what works best in your codebase? How are you learning from each other?

    I’m making a team version of my buildermark.dev open source project and trying to learn about how teams would like to use it.

    barnabee 12 hours

    Different teams are using it in very different ways so it can be tough to compare meaningfully.

    Backends handling tens to hundreds of thousands of messages per second with extremely high correctness and resilience requirements are necessarily taking a different approach to less critical services that power various ancillary sites/pages or to front end web apps.

    That said there's a lot of very open discussion around tooling, "skills", MCP, etc., harnesses, and approaches and plenty of sharing and cross-pollination of techniques.

    It would be great to find ways to better quantify the actual value add from LLMs and from the various ways of using them, but our experience so far is that the landscape in terms of both model capability and tooling is shifting so fast that that's quite hard to do.

    davidcann 12 hours

    Thanks for the feedback. I agree that it’s changing very fast, which is why my thesis is that this tooling will be needed to help everyone on the team keep up.

    stasomatic 12 hours

I'm a hobbyist playing around. I recently dropped CC (which gave me a sense of awe 2 months ago), but they realized GPUs need CapEx and I want to screw around with pi.dev on a budget. Then I moved on to GH Copilot but couldn't understand their cost structure and ran out of quota half a month in; now I'm on Codex. I don't really see any difference for little stuff. I also have Antigravity through a personal Gmail account with access to Opus et al., and I don't understand if I am paying for it or not. They don't have my credit card, so that's a breather.

    It's all romantic, but a bunch of devs are getting canned left and right, a slice of the population whose disposable income the economy depends on.

    It's too late to be a contrarian pundit, but what's been done besides uncovering some 0-days? The correction will be brutal, worse than the Industrial Revolution. Just the recent news about Meta cuts, SalesForce, Snap, Block, the list is long.

    Have you shipped anything commercially viable because of AI or are you/we just keeping up?

    jameshart 9 hours

    There has always been a gap between the experience of solo/small shop developers, vs. developers who work in teams in a large corporate environment. But thanks to open source, we have for the past twenty years at least mostly all been using the same tools.

    But right now, the difference in developer experience between a dev on a team at a business which has corporate copilot or Claude licenses and bosses encouraging them to maximize token usage, vs a solo dev experimenting once every few months with a consumer grade chat model is vast.

    eieie 8 hours

    Let’s take an extreme example.

Meta seemingly has a constant stream of product managers. If LLMs really augment the productivity of engineers, why isn't Meta launching lots more stuff? I mean, there's no harm in at least launching one new thing.

    What are all those people doing with the so called productivity enhancements?

What I'm calling into question is how much generating more code matters if the bottleneck is creativity/imagination for projects.

    The only thing I’ve seen is a really crummy meta AI thing implemented within WhatsApp.

    bushbaba 6 hours

It's allowed a sludge of internal tools to spin up, and more bloat. The ability to sandbag and overbuild these tools has gotten 2-10x worse.

    Only solution I can think of is to drastically cut headcount so productivity is back to prior levels, and profitability is raised. Big Tech is mostly market constrained with not much room to grow beyond the market itself growing.

    As for startups, seems like AI tools have drastically reduced their time to market and accelerated their growth curves.

    elliotec 7 hours

    Forgive my ignorance, but what exactly is the vast difference? Who's doing more of what, or whatever you're implying? And how do you quantify this?

    jameshart 6 hours

    The difference is (if you'll forgive me recruiting a couple of straw men for the purpose of illustrating the spectrum we are talking about here):

    Hobbyist solo dev, counting tokens, hitting quotas, trying things on little projects, giving up and not seeing what the fuss is about.

    vs

    Corporate developer, increasingly held accountable by their boss for hitting metrics for token usage; being handed every new model as soon as it comes out; working with the tools every day on code changes that impact other developers on other teams all of whom have access to those same tools.

    elliotec 5 hours

    Okay, so just to be clear you're not commenting on productivity? Or what does "changes that impact" mean?

    I might be missing a lot of self-evident assumptions here but I feel like I'm still missing so much context and have no idea what this difference is actually describing.

    jameshart 5 hours

    If you have some objective measure of productivity in mind, feel free to share it, but no that's not what I'm commenting on.

    I'm talking more about why threads like this seem to be full of people saying 'this has completely changed how corporate development works' and other people saying 'I tried it a few times and I don't get the hype'

    fc417fc802 11 hours

    > The correction will be brutal, worse than the Industrial Revolution.

Has it occurred to you that there might not be a correction, and that the outcome would still be brutal, at least on par with the Industrial Revolution?

    SlinkyOnStairs 2 hours

    It won't get that far.

    It's physically impossible to build out the datacenters required for the "AI is actually good and we have mass layoffs" scenario. This Anthropic investment is spurred on because they've already hit a brick wall with capacity.

    $40B goes a long way, but not for datacenters where nearly every single component and service is now backordered. Even if you could build the DC, the power connection won't be there.

    The current oil crisis just makes all of that even worse.

    fallat 46 minutes

    We pretty much already had the layoffs, at least that's my perception.

    The next level of layoffs is probably still 25 years out.

    SlinkyOnStairs 13 minutes

    There's layoffs, certainly.

    But all the economic indicators suggest those are "bad economy" layoffs dressed up as "AI" layoffs to keep the shareholders happy.

    stasomatic 11 hours

Do you mean as in there will be no happy ending / reset, and not another century of prosperity?

    chpatrick 10 hours

    Imagine you're a typesetter and they just invented computerized printing.

    fc417fc802 11 hours

    I mean as in living through the industrial revolution would have been wild. So whether we have an AI revolution or an AI bubble it's bound to be a roller coaster.

    And that's without accounting for the various wars (and resultant economic impacts) that are already in progress. A large part of what drove the meat grinder of WWI was (very approximately) the various actors repeatedly misjudging the overall situation and being overly enthusiastic to try out their shiny new weapons systems. If one or more superpowers decide to have a showdown the only thing that might minimize loss of life this time around is (ironically enough) the rise of autonomous weapons systems. Even in that case as we know from WWII the logical outcome is a decimated economy and manufacturing sector regardless of anything else that might happen.

    rhubarbtree 10 hours

    Bubble or revolution - not a dichotomy.

Bubbles like the AI bubble are a game-theoretic outcome of a revolution. Many players invest heavily to avoid losing, but the market as a whole overinvests. This leads to a bubble.

    Aeolun 5 hours

    > minimize loss of life this time around is (ironically enough) the rise of autonomous weapons systems

    I think that just means the relative civilian loss of life will increase once again.

    fc417fc802 5 hours

    What strategic merit is there in targeting civilians or life critical infrastructure in a fully automated battlebot scenario? Perhaps it's naive but I would expect stockpiles, datacenters, and any key infrastructure on which the local semiconductor fabrication depends to be the primary targets.

    kakacik 2 hours

Look at Ukraine for answers: the russians target almost purely civilian infrastructure and civilians in terror campaigns every single day and night, the same as the nazis did to Britain in WWII. With exactly the same results, but they just double down and send more drones the next day.

russia is really an empire of the dumb and subjugated serfs at this point (again, history repeats), but they are far from the only such place.

Don't expect more; most people are not that nice when SHTF.

    bojan 1 hours

    The current reality doesn't match your expectations. Russia is using automated warfare to strike what are primarily human life-critical targets.

    Jagerbizzle 14 hours

    I'm burning an insane number of tokens 8-12 hours a day for the dramatic improvement of some internal tooling at a big tech company. Using it heavily for an unannounced future project as well.

    I presume I'm not the only one.

    bakugo 13 hours

    I guess that's one way to tout a technology as revolutionary without actually needing to provide any proof of it. Just say you're using it for "internal tooling" and "unannounced projects", that way nobody can look at them and notice they're indistinguishable from the slop that clogs up Show HN nowadays.

It's better than the "here's my code, it's a giant pile of spaghetti but only luddites care about code quality and maintainability anyway" method, at least.

    Daishiman 8 hours

    I'm using it to write frontend code literally 5 times faster. What would have been a shell script is now a GUI backed by an API layer that doesn't require looking up internal documentation to know that it exists.

I've been using it to write tools that drastically simplify spinning up a local k8s cluster with an entire suite of development services that used to take two days to set up in Docker.

    se4u 14 hours

I'd be interested to learn what kind of internal tooling you're improving.

    appplication 14 hours

I'm not them, but we have vastly improved our internal pipeline monitoring/triage/root cause/etc. with a new system whose whole purpose is basically to hook into all of our other systems and consolidate them under a single view, with an emphasis on shortening the time it takes to triage and refine issues.

This would previously have been too ambitious to ever scope, but we've been able to build essentially all of it in just two months. Since it sits on top of our other systems and acts more as a window/pass-through control plane, the fact that it's vibe coded poses little risk, since we still have all the existing infrastructure under it if something goes awry.

    TimTheTinker 14 hours

    Personally, a static analysis PR check to catch some types of preventable runtime production errors in application code
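
    The general shape of such a check, as a minimal sketch (the function name and the specific patterns are my own illustration, not the actual tool): walk the AST and flag constructs that tend to cause preventable runtime errors, like bare `except:` clauses and mutable default arguments.

    ```python
    import ast

    def find_risky_patterns(source: str) -> list[str]:
        """Flag two patterns that commonly cause runtime production errors:
        bare `except:` clauses (which swallow every error, including bugs)
        and mutable default arguments (shared across calls)."""
        findings = []
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.ExceptHandler) and node.type is None:
                findings.append(f"line {node.lineno}: bare except swallows all errors")
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                for default in node.args.defaults:
                    if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                        findings.append(f"line {node.lineno}: mutable default argument")
        return findings

    # Example input a PR check would scan; both patterns appear here.
    sample = """
    def add_item(item, bucket=[]):
        try:
            bucket.append(item)
        except:
            pass
        return bucket
    """
    for finding in find_risky_patterns(sample):
        print(finding)
    ```

    A real PR check would run this over the changed files and fail the build (exit nonzero) when findings is non-empty.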

    Jagerbizzle 14 hours

    We've had a lot of complaints about our review processes, time to submit, etc, and a lot of that boils down to tools no one has time to improve.

    It's now trivial to fix these problems while still doing our day jobs -- shipping a product.

    amluto 13 hours

    I am, oddly, able to get really quite a lot of mileage out of $20/mo of OpenAI plan, and I have never encountered a usage limit. I have gotten warnings that I was close a couple times.

    I wonder what I’m doing differently.

I did spend quite a bit of time, mostly manually, improving development processes such that the agent could effectively check its work. This made the difference between the agent mostly not working and mostly working. Maybe if I had instead spent gobs of money it would have worked without the tooling improvements?

    komali2 10 hours

    I wonder if you're like me? I tried out the MCPs and sub agents and rules and bells and whistles and always just came back to a plain Codex / Claude Code / Cursor Agent terminal window, where I say what I want, @ a few files, let it rip, check the diff, ask for some adjustments, then commit and start the process over after clearing context.

    Haven't found a process that beats this yet and I burn very few tokens this way.

    devmor 8 hours

    I don’t really write code with it at all, and that’s why I burn so many tokens.

    I like writing code, I’m good at writing code. What I hate doing is dredging through logs, filtering out test scenarios and putting together disparate information from knowledge silos - so I have the AI doing that. It’s my research assistant.

    Effectively I’m using it like an automated search engine that indexes anything I want and refines the results by using the statistical near neighbors of how other people explained their searches.

    BloondAndDoom 14 hours

    AI is truly perfect for internal tooling. Security is less or no concern, bugs are more acceptable, performance / scalability rarely a concern. Quickest way to get things done, and speed up production development, MVP development etc.

    jdub 14 hours

    > Security is less or no concern

    [waits for chickens to come home to roost]

    connicpu 14 hours

    Doesn't take long until someone has the bright idea to pipe customer tickets directly into the poorly written internal tool

    2ndorderthought 13 hours

No problems at all, except unauthorized access to a model they were claiming was a weapon that couldn't be released to the public, and having their CLI code leaked, in the last two weeks. Everything's just fine.

    TeMPOraL 2 hours

    If security was the prime concern, there would be no chickens and no coop and no farm - people would still be living in caves. After all, outside is dangerous, and Grug Chief said, smart ass grugs with their smart ass ideas like fire or agriculture just invite complexity and create security vulnerabilities.

    After all (Grug Chief reminds us), the only truly secure computing system is an inert rock.

    overfeed 13 hours

    > [waits for chickens to come home to roost]

"We are writing down X billions over 4 years, and have canceled several ambitious programs related to our AI experiments. We were following standard practice in the industry, so [shareholders] can't blame us for these chickens coming home to roost. If everyone is guilty, is anyone really guilty?"

    sumedh 13 hours

    Anthropic seems to be doing fine :)

    LPisGood 13 hours

    When attackers can move laterally through everything because every internal tool leaks credentials and data there will be issues.

    therealdrag0 9 hours

Internal tool doesn't have credentials. Checkmate ;)

    cobolcomesback 12 hours

    This comment makes me want to scream.

    _zoltan_ 5 hours

    This is not going away.

Even right now, the difference between working with "AI native" developers and regular developers is night and day.

    I certainly wouldn't want a non-clause enabled developer on my team now.

    svieira 1 hours

    > I certainly wouldn't want a non-clause enabled developer on my team now.

    You only want to work with people who are hip with the North Pole?

    lioeters 9 hours

    This is what happens when entire industries go all in on "Move fast and break things." Imagine what they said about software applying to everything else in the world. That's what's coming.

    > Security is less or no concern, bugs are more acceptable, performance / scalability rarely a concern. Quickest way to get things done

    TeMPOraL 2 hours

    > This is what happens when entire industries go all in on "Move fast and break things." Imagine what they said about software applying to everything else in the world. That's what's coming.

This is literally how the rest of the world already works, and always has. We'd still be living in caves otherwise. Fortunately most people (at least outside software) seem to understand that security is a trade-off against usefulness, and not an end goal in itself.

    hellisothers 14 hours

    Same and it is working really well (I say contra to most individual reporting).

    andriy_koval 14 hours

I have a coworker who says something similar. He vibe coded tons of cryptic code, which indeed solves some problems, though it could be far more compact and well structured. Now it's hitting a complexity limit: the LLM can't comprehend it anymore, and a human can't comprehend it by an even larger margin.

    svieira 1 hours

I went through one the other day: a nest of Go code that boiled down to a 10-line shell script.

    vbezhenar 14 hours

    Just wait a month, Opus 4.8 will comprehend it for sure.

    overfeed 13 hours

It will comprehend it well enough to complicate it further into a rat's nest that only Opus 4.9 can comprehend, and so on. Good luck if you run into a bug before the N+1 version launches.

    Jagerbizzle 14 hours

    honest recommendation: nuke and pave after analyzing (w/ AI of course) where it went horribly wrong.

    it's trivial to reimplement a better solution.

    2ndorderthought 13 hours

    The problem was definitely because they didn't use enough AI fast enough. They should just try again

    andriy_koval 14 hours

It's a bit of workplace politics; I would need to call that guy out to say that he is not a hyper-performer, but just pushed lots of low-quality code which will produce lots of negative impact in the long term.

Also, I am not sure it is trivial to reimplement. The code is injected into many scenarios and workflows, so replacement will be painful and risky if the new solution breaks some edge case.

    Jagerbizzle 13 hours

    It sounds like you might have some larger process problems if someone can just inject a bunch of vibe-coded slop into critical workflows while more discerning eyes are dubious of the quality/reliability etc.

    SpicyLemonZest 13 hours

    In some sense, sure. There’s a lot of processes that weren’t previously needed, because sloppy people who couldn’t or wouldn’t think things through were mostly incapable of producing PRs that passed all the existing tests.

    andriy_koval 13 hours

It's partially/largely a management problem. One of the tier-1 productivity metrics in the group is # of LoC created by engineers, so it creates a dynamic of people exchanging favors to push AI slop into the codebase, or be labeled as low performers.

    msy 14 hours

    We suddenly have a proliferation of new internal tools and resources, nearly all of which are barely functional and largely useless with no discernible impact on the overall business trajectory but sure do seem to help come promo time.

Barely an hour goes by without a new 4-page document about something that everyone is apparently meant to read, digest, and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.

    Gigachad 10 hours

    We had a coworker vibecode an internal tool, do a bunch of marketing to the company at how incredible it is. Then got hired somewhere else.

    I just went and deleted it because it's completely broken at every edge case and half of the happy paths too.

    _zoltan_ 5 hours

    That's not on Claude, that's on the authors.

    Claude is a tool. It can be abused, or used in a sloppy way. But it can also be used rigorously.

    I've been beating my team to be more papercut-free in the tooling they develop and it's been rough mostly because of the velocity.

    But overall it's a huge net positive.

    fc417fc802 11 hours

    Sounds like a workplace wide DDoS.

    cobolcomesback 12 hours

    We’re seeing the exact same where I work. Our main Slack channels have become inundated with “new tool announcements!”, multiple per day, often solving duplicate problems or problems that don’t exist. We’ve had to stop using those channels for any real conversation because most people are muting them due to the slop noise.

    And what’s worse is that when someone does build a decent tool, you can’t help but be skeptical because of all the absolute slop that has come out. And everyone thinks their slop doesn’t stink, so you can’t take them at their word when they say it doesn’t. Even in this thread, how are you to know who is talking about building something useful vs something they think is useful?

    A lot of people that have always wanted to be developers but didn’t have the skills are now empowered to go and build… things. But AI hasn’t equipped them with the skill of understanding if it actually makes sense to build a thing, or how to maintain it, or how to evolve it, or how to integrate it with other tools. And then they get upset when you tell them their tool isn’t the best thing since sliced bread. It’s exhausting, and I think we’ve yet to see the true consequences of the slop firehose.

    komali2 10 hours

    > but sure do seem to help come promo time.

I personally noticed this. The speed at which development was happening at one gig I had was impossible to keep up with without agentic development, and serious review wasn't really possible because there wasn't really even time to learn the codebase. We had a huge stack of rules and MCPs to leverage that kinda kept things on the rails, and apps were coming out, but like, for why? It was like we were all just abandoning the idea of good code and caring about the user, and just trying to close tickets and keep management/the client happy. I'm not sure anyone anywhere on the line was measuring real-world outcomes. Apparently the client was thrilled.

    It felt like... You know that story where two economists pass each other fifty bucks back and forth and in doing so skyrocket the local GDP? Felt like that.

    jeremyjh 12 hours

    I'm sorry to hear you have such poor leadership.

    kranke155 14 hours

    Without good management AI is just a new way to make terrible work in unprecedented quantities.

    With good management you will get great work faster.

    The distinguishing feature between organisations competing in the AI era is process. AI can automate a lot of the work but the human side owns process. If it’s no good everything collapses. Functional companies become hyper functional while dysfunctional companies will collapse.

Bad ideas used to be warded off by workers who, in some shape or form of malicious compliance, would just slow down and redirect the work while advocating for better solutions.

    That can’t happen as much anymore as your manager or CEO can vibe code stuff and throw it down the pipeline for the workers to fix.

    If you have bad processes your company will die, or shrivel or stagnate at best. Companies with good process will beat you.

    briansm 9 hours

    [flagged]

    hdndjsbbs 13 hours

    My team has also adopted this - it's much easier to add another layer than to refine or simplify what exists. We have AI skills to help us debug microservices that call microservices that have circular dependencies.

    This was possible before but someone would maybe notice the insane spaghetti. Now it's just "we'll fix it with another layer of noodles".

    layoric 12 hours

    Are you concerned this will just lead to coupling everywhere like microservices tend to do?

    mancerayder 12 hours

    Unfortunately I saw this pre-AI with microservices, where while empowering developers with their beloved microservices, we create intense complexity and deployment headaches. AI will fix the slop with an obscuring layer of complexity on top.

    vineyardmike 10 hours

    That's so interesting because where I work, the push was to "add one more API" to existing services, turning them into near monoliths for the sake of deployment and access. Still a mess of util and helper functions recursively calling each other, but at least it's one binary in one container.

    trhway 13 hours

    >Barely an hour goes by without a new 4-page document about something that everyone is apparently meant to read, digest, and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.

Well, isn't that what AI can be used effectively for - to generate [auto]responses to the AI-generated content?

    duskdozer 3 hours

    What a delightful world we're building.

    Jagerbizzle 14 hours

    I'm sorry to hear that you have people abusing their new superpowers.

    I run a team and am spending my time/tokens on serious pain points.

    casey2 13 hours

    Such as?

    girvo 11 hours

For me/my team, I use it to fix DevProd pain points that I would otherwise never get the investment to go solve. Just swapped Webpack for Rspack, for example. I could easily do it myself, which is why I can prompt it correctly and review the output properly, but I can let it run while I'm in meetings over more important product or architectural decisions.

    Jagerbizzle 13 hours

I answered this in a different comment below, but a lot of the friction is around the amount of time it takes to test/review/submit etc., and a lot of this is centered around tooling that no one has had the time to improve, perf problems in clunky processes that have been around longer than any one individual, and other things of this nature. Addressing these issues is now approachable and doable in one's "spare time".

    casey2 13 hours

    The point of that friction is to keep the human in the loop wrt code quality, it's not meant to be meaningless busywork. It's difficult to believe that you sustain the benefit of those systems. Anthropic and Microsoft publicly failed to keep up code quality. They would probably be in a better spot currently if they used neither, no friction, no AI. But that friction exists for a reason and AI doesn't have the "context length" to benefit from it.

This is the difference between intentional and incidental friction: if your CI/CD pipeline is bad, it should be improved, not sidestepped. The first step in large projects is paving over the lower layer so that all the incidental friction, the kind AI can help with, is removed. If you are constantly going outside that paved area, sure, AI will help, but not with the success of the project, which is more contingent on the fact that you've failed to lay the groundwork correctly.

    tonyarkles 11 hours

I'll throw this out as something where it has saved literally weeks of work: debugging pathological behaviour in third-party code. Prompt example: "Today, when I did U, V, and W, I ended up with X happening. I fixed it by doing Y. The second time I tried, Z happened instead (which was the expected behaviour). Can you work out a plausible explanation for why X happened the first time and why Y fixed it? Please keep track of the specific lines of code where the behaviour difference shows up."

    This is in a real-time stateful system, not a system where I'd necessarily expect the exact same thing to happen every time. I just wanted to understand why it behaved differently because there wasn't any obvious reason, to me, why it would.

    The explanation it came back with was pretty wild. It essentially boiled down to a module not being adequately initialized before it was used the first time and then it maintained its state from then on out. The narrative touched a lot of code, and the source references it provided did an excellent job of walking me through the narrative. I independently validated the explanation using some telemetry data that the LLM didn't have access to. It was correct. This would have taken me a very long time to work out by hand.

    Edit: I have done this multiple times and have been blown away each time.
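
    For anyone curious, the class of bug described reduces to a pattern like this (a hypothetical minimal reproduction in Python, not the actual third-party code; the `RateLimiter` names are invented for illustration): a module-level object used before it is fully initialized, which then keeps its state for the rest of the process lifetime.

    ```python
    class RateLimiter:
        def __init__(self):
            self.limit = 0           # placeholder; real limit set later by configure()
            self.configured = False

        def configure(self, limit: int):
            self.limit = limit
            self.configured = True

        def allow(self, count: int) -> bool:
            # Bug: silently uses limit == 0 if configure() hasn't run yet.
            return count <= self.limit

    # Module-level singleton, as is common in stateful/real-time systems.
    limiter = RateLimiter()

    def handle_request(count: int) -> bool:
        return limiter.allow(count)

    # First use happens before startup code ran configure() -> wrong behaviour (X).
    first = handle_request(1)        # False: request rejected due to uninitialized state
    limiter.configure(100)           # the manual fix (Y) that finally initializes it
    second = handle_request(1)       # True: expected behaviour (Z) from then on
    ```

    The tell-tale signature is exactly what's described above: wrong once, correct forever after, with no obvious reason in the code path itself.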

    zahlman 37 minutes

    > Prompt example: "Today, when I did U, V, and W. I ended up with X happening. I fixed it by doing Y. The second time I tried, Z happened instead (which was the expected behaviour). Can you work out a plausible explanation for why X happened the first time and why Y fixed it? Please keep track of the specific lines of code where the behaviour difference shows up."

    > The explanation it came back with was pretty wild. It essentially boiled down to a module not being adequately initialized before it was used the first time and then it maintained its state from then on out.

    Even without knowing any of the variable values, that explanation doesn't sound wild at all to me. It sounds in fact entirely plausible, and very much like what I'd expect the right answer to sound like.

    jeppebemad 6 hours

    This seems to be a common denominator for what LLMs actually do well: finding bugs and explaining code. Whether they can reliably produce code is a success that remains to be seen.

    nathancahill 13 hours

    Creating stakeholder value

    natpalmer1776 13 hours

    Promoting synergy

    DANmode 6 hours

    Eating a bagel

    paradoxyl 12 hours

    Creating productivity gain narratives

    scottyah 11 hours

    Aligning stakeholders

    serf 13 hours

    >Such as?

    it's crazy that experiences still vary so wildly that people can use this strategy as a 'valid' gotcha.

    AI works for the vast majority of nowhere-near-the-edge CS work -- you know, all the stuff the majority of people have to do every day.

    I don't touch any kind of SQL manually anymore. I don't touch iptables or UFW. I don't touch polkit, dbus, or any other human-hostile IPC anymore. I don't write cron jobs or systemd unit files. I query for documentation rather than slogging through a stupid web wiki or the equivalent. A decent LLM does it all with fairly easy 5-10 word prompts.

    ever do real work with a mic and speech-to-text? It's 50x'd by LLM support. Gone are the days of saying "H T T P COLON FORWARD SLASH FORWARD SLASH W W W".

    this isn't some untested frontier land anymore. People who embrace it find it really empowering except at the edges, and even those state-of-the-art edge people are using it to do the crap work.

    This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.

    Peritract 12 hours

    None of that is concrete though; it's all alleged speed-ups with no discernible (though a lot of claimed) impact.

    > This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.

    People will stop asking for the proof when the dust-eating commences.

    hattmall 8 hours

    People ask for examples because they want to know what other people are doing. Everything you mention here is VERY reasonable. It's exactly the kind of stuff no one is going to be surprised that you are getting good results with the current AI. But none of that is particularly groundbreaking.

    I'm not trying to marginalize your or anyone else's usage of AI. The reason people are saying "such as" is to gauge where the value lies. US GDP is around $30T. Right now there's something like ~$12T reasonably involved in the current AI economy. That's massive company valuations plus data center and infrastructure build-out, a lot of it underpinning and heavily influencing traditional sectors of the economy, which run a real risk of going down the wrong path.

    So the question isn't what AI can do; it can do a lot, and even very cheap models can handle most of what you have listed. The real question is what the cutting-edge, state-of-the-art models can do so much better that the added productive value justifies such a massive economic presence.

    leptons 12 hours

    That's all well and good, but what happens when the price to run these AIs goes up 10x or even 100x?

    It's the same model as Uber, and I can't afford Uber most of the time anymore. It's become cost prohibitive just to take a short ride, but it used to cost like $7.

    It's all fun and games until someone has to pay the bill, and these companies are losing many billions of dollars with no end in sight for the losses.

    I doubt the tech and costs for the tech will improve fast enough to stop the flood of money going out, and I doubt people are going to want to pay what it really costs. That $200/month plan might not look so good when it's $2000/month, or more.

    Jach 12 hours

    It's an important concern for those footing the bill, but I expect companies really facing its impact to do a cost-benefit calculation and use a mix of models. For the sorts of things GP described (iptables, recalling how to scan open ports on the network, the sorts of things you could usually answer for yourself with 10-600 seconds in a manpage / help text / Google search / Stack Overflow thread), local/open-weight models are already good enough and fast enough on a lot of commodity hardware.

    Right now companies might just offload such queries to the frontier $200/mo plan because why not: tokens are plentiful and it's already being paid for. If in the future it goes to $2000/mo with more limited tokens, you might save those tokens for the actually important or latency-sensitive work and use lower-cost local models for the simpler stuff. That lower cost might involve a $2000 GPU to be really usable, but by comparison it pays for itself quickly.

    To use your Uber analogy: people might have used it to get downtown and to the airport, but now it's way more expensive, so they'll take a bus, walk, or drive downtown instead -- but the airport trip, even though it's more expensive than it used to be, is still attractive against competing alternatives like taxis and long-term parking.
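    The trade-off can be put into back-of-envelope numbers (all figures here are the hypothetical ones from the comment, not real pricing):

```python
# Back-of-envelope break-even for buying a local GPU instead of paying a
# (hypothetical) $2000/mo frontier plan. All numbers are illustrative
# assumptions, not real prices.
frontier_monthly = 2000    # assumed future cost of the frontier plan
local_gpu_upfront = 2000   # one-off cost of a workstation GPU
local_monthly_power = 50   # assumed electricity/overhead for local inference

# Months until the GPU is cheaper than staying on the frontier plan,
# assuming local models can absorb the whole workload:
breakeven_months = local_gpu_upfront / (frontier_monthly - local_monthly_power)
print(round(breakeven_months, 2))  # ~1.03
```

Under these (generous) assumptions the hardware pays for itself in about a month; the real calculation depends on how much of the workload local models can actually absorb.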

    nvader 11 hours

    Why not try it yourself? Inference providers like BaseTen and AWS Bedrock have perfectly capable open source models as well as some licensed closed source models they host.

    You can use "API-style" pricing on these providers, which is more transparent about costs. It's very likely to end up at more than $200 a month, but the question is: are you going to see more than that in value?

    For me, the answer is yes.

    leptons 10 hours

    What makes you think I haven't tried it myself?

    The "costs" are subsidized, it's a loss-leader.

    er2d 14 hours

    I'm convinced none of these people have any training in corporate finance. For if they did, they'd realise they were wasting money.

    I guess you gotta look busy. But the stick will come when the shareholders look at the income statement and ask: so I see an increase in operating expenses; let me go calculate the ROIC. Hmm, it's lower, what to do? Oh I know, let's fire the people who caused this (it won't be the C-suite or management who takes the fall) lmao.

    dpark 13 hours

    Do you really think companies have started spending millions on tokens and no one from finance has been involved?

    You could argue that all the spending is wasted (doubtless some is), but insisting that the decision is being made in complete ignorance of financial concerns reeks of that “everyone’s dumb but me” energy.

    wjeje 11 hours

    What a finance team allocates on spend has nothing to do with what the tokens actually get used for.

    Are they peeking over the shoulder of each team and individual? Of course not.

    It can be the case that the spend is absolutely wasteful. Numbers don’t lie.

    hattmall 8 hours

    There is a difference between noticing negative financial outcomes and correctly attributing them. Right now most companies are still adjusting to declining inflation. Their bottom lines are doing quite well because consumer price inflation is much stickier than supply inflation, and we are coming off one of the quickest and largest supply-led inflationary cycles. It may not be immediately apparent to many companies that new expenditures are a drag on profitability.

    The real thing to look at is whether the future outlook for company AI spend is heading up or down.

    casey2 13 hours

    More that there is a poor incentive structure. Just like how PE can make money by leveraged buyouts and running businesses into the ground. Many of the financial instruments that make both that and the current AI bubble possible were legal then made illegal within the lifetimes of the last 16 presidents.

    Round-tripping used to be regulated. SPVs used to be regulated. If you needed a loan you used to have to go to something called a bank; now it comes from who knows where: drug cartels, child traffickers, Blackstone, Russian and Chinese oligarchs. Even assuming it doesn't collapse tomorrow, why should they make double-digit returns on AI datacenters built on the backs of Americans?

    dpark 12 hours

    My issue was not with criticism of the money being spent or how it’s being obtained. I was specifically commenting on this statement:

    > “I'm convinced none of these people have any training in corporate finance. For if they did, they'd realise they were wasting money.”

    This isn’t meaningful criticism. This is a vacuous “those guys are so dumb”.

    temp8830 12 hours

    > Do you really think companies have started spending millions on tokens and no one from finance has been involved?

    Oh, they were involved all right. They ran their analyses and realized that the increase in Acme Corp's share price from becoming "AI-enabled" will pay for the tokens several times over. For today. They plan to be retired before tomorrow.

    dpark 12 hours

    Sounds like they did train in corporate finance.

    wjeje 11 hours

    Sounds like you haven’t had training in corporate finance.

    wjeje 11 hours

    That magic trick only works for publicly traded stocks.

    Most firms are not a Google or a Microsoft. A firm's cash balance can become a strategic weapon in the right environment, so wasting money is not a great idea. Lest we forget dividends.

    Moreover, if you have a budget set for token spend, you have rationing. Therefore the firm should be trying to get the most out of that token spend. If you are wasting tokens on stuff that doesn't create a financial benefit for the firm, then indeed it is not in line with proper corporate financial theory.

    strange_quark 9 hours

    No, it works for any VC-backed company. Something like 60% of VC funding last year went to AI companies. VCs aren't going to give you money unless you're building an agentic AI-native agent platform for agents.

    wjeje 8 hours

    No. Employees of publicly traded firms benefit from short-term gains in the stock price, assuming the jump holds throughout the grant/vesting period.

    People who work at VC-backed firms do not get to enjoy the same degree of liquidity, not even close. There can be some outliers, but that's 0.1% of the total.

    Can't believe simple stuff like this has to be said.

    strange_quark 8 hours

    CFOs or VPs absolutely benefit by hyping their company up to private investors by allowing tokenmaxxing to go on unchecked. Tender offers, acquisitions, and aquihires all exist. Or just good old fashioned resume padding by saying you "enabled AI transformation" or whatever helps you land a big payday at some other company.

    qingcharles 13 hours

    My main use of vibecoding is creating dozens of internal tools that have sped up tasks, or made possible tasks that previously weren't. These tools would have taken weeks to build manually and would have been hard to justify compared with just struggling through the manual process every now and again. AI has been life-changing for creating these kinda janky tools with janky UIs that do everything they're supposed to perfectly, but are ugly as hell.

    Jach 12 hours

    Are you able to describe any of those internal tools in more detail? How important are they on average? (For example, at a prior job I spent a bit of time creating a slackbot command "/wtf acronym" which would query our company's giant glossary of acronyms and return the definition. It wasn't very popular (read: not very useful/important) but it saved me some time looking things up (saving more time than it took to create, I'm sure). I'd expect modern LLMs to be able to recreate it within a few minutes as a one-shot task.)
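    The core of a lookup like that "/wtf" command is indeed only a few lines; a sketch with invented glossary entries, omitting all the Slack plumbing:

```python
# Hypothetical glossary data; a real version would load the company's
# glossary from a document or database.
GLOSSARY = {
    "SLA": "Service Level Agreement",
    "RCA": "Root Cause Analysis",
}

def wtf(acronym: str) -> str:
    """Return the definition for an acronym, case-insensitively."""
    definition = GLOSSARY.get(acronym.strip().upper())
    return definition or f"No entry for {acronym!r} -- try the glossary page."

print(wtf("sla"))  # Service Level Agreement
```

A slash-command handler would just pass the command's text argument through this function and post the result back to the channel.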

    qingcharles 11 hours

    The ones I can mention: one that watches a specific web site until a listed offer expires and then clicks renew (this happens about once a day, there is no automated way in the system to do it, and having the app do it saves the offer being unlisted for hours and saves someone logging in to do it). Several that download specific combinations of documents from several different portals, where the user previously would just suck it up and right-click on each one to save it (this has a bunch of heuristics, because it really required a human before to determine which links to click and in what order, but Claude was able to work out a solid algo for it). Another one that opens PDFs and pulls the titles and dates from the first page of the documents, which again was done manually before, but now sends the docs via the Gemma4 free API on Google to extract the data (the docs are a mess of thousands of different layouts).
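    The date-pulling step in that last tool is the sort of thing a simple heuristic only half-solves, which is presumably why an LLM was needed for the messy layouts. A hedged sketch of the heuristic path (hypothetical helper; assumes the first-page text has already been extracted from the PDF):

```python
import re
from datetime import datetime

# Matches dates like 03/14/2023 or 3-14-2023 (assumes US month/day order,
# which is itself a guess -- one reason varied layouts defeat regexes).
DATE_PATTERN = re.compile(r"\b(\d{1,2})[/-](\d{1,2})[/-](\d{4})\b")

def guess_date(first_page_text: str):
    """Best-effort date extraction from a page of text; None on failure."""
    match = DATE_PATTERN.search(first_page_text)
    if not match:
        return None
    month, day, year = (int(g) for g in match.groups())
    try:
        return datetime(year, month, day).date()
    except ValueError:
        return None  # e.g. 13/32/2023 matched but isn't a real date

print(guess_date("Filed 03/14/2023 in the Circuit Court"))  # 2023-03-14
```

With thousands of layouts, written-out dates ("March 14, 2023"), and scanned pages, a heuristic like this misses often, which is where handing the page text to a model becomes the pragmatic choice.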

    Denzel 9 hours

    None of these projects sounds like weeks' worth of scope without AI.

    shimman 11 hours

    It's almost always a CRUD app or dashboard that no one uses while being extremely overkill for their use case.

    edit: LOL called it, a bunch of useless garbage that no one really cares about but used to justify corporate jobs programs.

    girvo 11 hours

    Ah but it looks cool and I can put it on my stack ranking perf eval

    Daishiman 9 hours

    If it's useless, that's a you problem. In the span of 4-5 days I've been building CRUD apps that would have taken me a month to get perfectly right, and they save an enormous number of human tech support hours.

    shimman 9 hours

    Sorry man, but the software world is littered with CRUD apps; they are called CRUD apps for a reason. They're basically the mass-produced stamped L-bracket of the software world. CRUD-app template generators have also existed for like 30 years now.

    Still useless in the sense that if you died tomorrow and your app was forgotten in a week, the world would carry on. As it should. Utterly useless at pushing humanity forward, but completely competent at creating busy work that doesn't matter (much like 99% of CRUD apps and dashboards).

    But sure yeah, the dashboard for your SMB is amazing.

    Daishiman 7 hours

    The software industry's value proposition for the vast majority of businesses running the world lies in CRUD apps that properly capture business requirements. That's infinitely more relevant in insurance, pharma, banking and logistics than any technological breakthrough of the past 25 years.

    Your rant just shows you don't understand why people pay for software.

    scottyah 11 hours

    I have one that serves a few functions: it tracks certificates and licenses (you can export certs in any of the commonly requested formats), with a dashboard that tells you when licenses and certs are close to expiring; a user count; a notification system for alerts (otherwise it's a mostly buried Teams channel most people miss); a downtime tracker that doesn't require people to input easily calculable fields; and a way for teams to reset their service account password, manage permissions, add, remove, or switch which project is sponsoring which person, edit points of contact, verify project statuses, and a lot more. It even has some quick charts that pull from our Jira helpdesk queue; charts that people used to run once a week for a meeting are just live now in one place. It also has application statuses and links.

    I'd been fighting to make this for two years and kept getting told no. I got claude to make a PoC in a day, then got management support to continue for a couple weeks. It's super beneficial, and targets so many of our pain points that really bog us down.
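    The expiry-alert core of a tool like this is small; a minimal sketch with hypothetical hostnames and dates (the real tool would pull expiry dates from certificate stores, not a literal dict):

```python
from datetime import date, timedelta

# Hypothetical data: hostname -> certificate expiry date.
CERTS = {
    "api.example.com": date(2025, 1, 10),
    "vpn.example.com": date(2026, 6, 1),
}

def expiring_soon(certs, today, window_days=30):
    """Return hostnames whose certificates expire within the window."""
    cutoff = today + timedelta(days=window_days)
    return sorted(name for name, expiry in certs.items() if expiry <= cutoff)

print(expiring_soon(CERTS, today=date(2025, 1, 1)))  # ['api.example.com']
```

The value of the dashboard is less this logic than the plumbing around it: pulling expiry dates from many sources and surfacing them where people actually look.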

    CoolThings 10 hours

    >> a dashboard that tells you when licenses and certs are close to expiring

    Or, Excel > Data > Sort > by the Date column. No dashboard needed, no app needed.

    Daishiman 9 hours

    Why are you ignoring the fact that grabbing data from heterogeneous sources, combining it, and presenting it is rarely a trivial task? This is exactly what LLMs are good for.

    camdenreslink 7 hours

    If you are using an LLM to actually fetch that data, combine it, and present it in an ad hoc way (like running the same prompt every month), I wouldn't trust that at all. It still hallucinates, invents things, and takes shortcuts too often.

    If you are using an LLM to create an application that grabs data from heterogeneous sources, combines it, and presents it, that is much better, but it could also basically be the Excel spreadsheet they are describing.