41 comments

  • rbartelme an hour ago

    I wonder if the same thing happened with--or is happening at--NSF? I know researchers that did not get funding for quantitative ecology fellowships or grants. After back channeling with program managers, it seems that using "diversity"--as in the quantitative ecological measures, metrics, or derived functional values--may have flagged proposals to be rejected.

    https://en.wikipedia.org/wiki/Alpha_diversity https://en.wikipedia.org/wiki/Beta_diversity https://en.wikipedia.org/wiki/Gamma_diversity https://en.wikipedia.org/wiki/Zeta_diversity

    • duxup 44 minutes ago

      The "lists" that were public of science grants identified as DEI absolutely indicated they were just doing a ctrl+f on diversity, women, race at times. Same went for removing people from government websites and so on.

  • derbOac 2 hours ago

    My guess is this will garner attention for use of AI — that's where my attention went as well initially. But there's another layer to this, which is whether a grant should be terminated just because it pertains to DEI, regardless of AI being involved or not.

    My guess is you couldn't get a roomful of experts to agree on what "DEI" means; I doubt AI could do better, and even if it could, I'm not sure I'd want that to be the determining factor about whether it would get funded. To the extent it was, I'm not sure it would be a bad thing.

    • yks 2 hours ago

      > My guess is you couldn't get a roomful of experts to agree on what "DEI" means

      let's not pretend that anyone involved cares one bit

    • amelius an hour ago

      > But there's another layer to this, which is whether a grant should be terminated just because it pertains to DEI, regardless of AI being involved or not.

      Another layer?

      I think it's the same level of stupidity/shadiness.

    • RangerScience an hour ago

      I remember (the one time I snuck into NIPS) a buuunch of papers on "fairness", and it was basically: "We have decided that this input should not affect the outcome. Does it? If so, how?"

      So that seems like a pretty good actual "what's DEI?" - Does race/gender/sexuality/etc affect some outcome? Should it? If it does affect it and shouldn't, what can we do about it?

      That said... yeah, not gonna get a room full of anyone to agree on that. Starting with that "should".

    • EdiX an hour ago

      If you can use a criterion to finance academia [1], you can use the reverse to define what DEI is. Every major corporation had DEI departments, and yet it doesn't exist; it's a ghost.

      I really hope the backlash to this bullshit finally reaches Europe.

      Same song and dance for 14 years straight: there is no SJW, you're just imagining them, it's just called being a decent human being, they say as they kick me repeatedly in the face.

      [1] https://archive.is/UmkH3

    • tastyface an hour ago

      DEI has nothing to do with experts or any legitimate analysis. It’s in the same basket as woke, CRT, SJW, BLM, and so on -- thought-terminating keywords that the right deploys to focus the ire of their base. If one starts losing its mindshare, a new one gets introduced and propagated. But they all point to the same liberal boogeyman.

    • krapp an hour ago

      >My guess is you couldn't get a roomful of experts to agree on what "DEI" means;

      You don't need to, it's clearly defined within existing legal frameworks.

      A lot of people including the current administration seem to believe it means "racism against white men," but those people are simply wrong.

  • speak_plainly 3 hours ago

    It sounds like they stupidly did exactly what was stupidly expected.

    • amelius an hour ago

      That's like calling it just incompetence when it's clearly both incompetence and malice.

  • blibble 3 hours ago

        import random

        def accept_grant(application):
            return random.choice([True, False])

    • layer8 an hour ago

      The article is about grant termination, not the acceptance of applications.

      • blibble an hour ago

        turns out this function works just as well for that too

    • Hamuko 2 hours ago

      Is this the classic model?

      • gs17 2 hours ago

        No, that accepts far too many grants. You'd need random.choices with weights=[1, 100].

  • TZubiri an hour ago

    Fun technical note:

    >"Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation. ”"

    GPT models generate tokens from left to right; they are causal. That prompt causes the model to lock in an answer and then generate the explanation after the fact. This is why you can sometimes see the failure mode "The answer is X because the answer can't be X, so the answer is Y."

    Asking for the Yes/No to be placed at the end would put the CoT before and generate 100% objectively better results.
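
    A minimal sketch of the two orderings (the wording here is only illustrative, not the exact prompt that was run):

        grant_description = "Short description of a grant goes here."  # placeholder

        # Verdict-first, as in the quoted prompt: the model commits to "Yes." or
        # "No." before any reasoning tokens exist.
        answer_first = ("Does the following relate at all to DEI? "
                        "Begin with 'Yes.' or 'No.' followed by a brief explanation.\n\n"
                        + grant_description)

        # Verdict-last: the brief explanation is generated first, so the final
        # Yes/No can be conditioned on it.
        answer_last = ("Does the following relate at all to DEI? "
                       "Give a brief explanation first, then end with a single word: 'Yes' or 'No'.\n\n"
                       + grant_description)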

    I used to think prompt engineering was a bullshit term, like you don't need to be trained at all to use this thing. But apparently you do need to, a little bit.

    So if the idea alone that applications are fed into ChatGPT isn't dumb enough, consider that they failed to even use ChatGPT correctly, which apparently is a thing.

    • viraptor 31 minutes ago

      It matters less these days with the thinking models. They'll automatically inject some extra content before the answer for basically the same purpose. But if you're using something simpler with immediate responses - yeah, the order is important.

  • xg15 2 hours ago

    > We’ve mentioned Cavanaugh here before, for the time when he was head of the US Institute for Peace, and Elon and DOGE falsely labeled a guy who had worked for USIP a member of the Taliban, causing the actual Taliban to kidnap the guy’s family.

    Sorry for the OT, but... what on earth?

    • mc_maurer 2 hours ago

      Yeah the story linked there is absolutely nuts.

  • hsbauauvhabzb 2 hours ago

    I wonder what the economic cost is of DOGE basing policy entirely on whether something is DEI or not. Talk about cutting off your nose to spite your face.

  • jgbuddy 3 hours ago

    Simple, cheap and fast

    • McGlockenshire 3 hours ago

      "Simple, cheap, fast," and somewhere between inaccurate and wrong. From the article:

      > To flag grants for their DEI involvement, Fox entered the following command into ChatGPT: “Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation. Do not use ‘this initiative’ or ‘this description’ in your response.” He then inserted short descriptions of each grant. Fox did nothing to understand ChatGPT’s interpretation of “DEI” as used in the command or to ensure that ChatGPT’s interpretation of “DEI” matched his own.

      The culture warrior understanding of the term "DEI" does not reflect reality. The prompt is trash. Garbage in, garbage out.

      This is somehow even stupider than similar reports of grants being canceled simply for containing specific keywords commonly used in scientific research but also on the culture warrior no-no list.
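
      For what it's worth, scripted against the API, the described workflow amounts to something like the sketch below. This is purely a reconstruction: the article doesn't say whether the web UI or the API was used, and the model name, client setup, and grant list here are assumptions for illustration.

          # Hypothetical reconstruction of the described flagging loop.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          PROMPT = ("Does the following relate at all to DEI? Respond factually in "
                    "less than 120 characters. Begin with 'Yes.' or 'No.' followed "
                    "by a brief explanation. Do not use 'this initiative' or 'this "
                    "description' in your response.\n\n")

          grant_descriptions = ["Short description of each grant goes here."]  # placeholder

          for description in grant_descriptions:
              response = client.chat.completions.create(
                  model="gpt-4o",  # assumed; the article does not name a model
                  messages=[{"role": "user", "content": PROMPT + description}],
              )
              print(response.choices[0].message.content)

      Note that nothing in this loop checks what the model takes "DEI" to mean, which is exactly the problem.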

      • watwut 2 hours ago

        Women or minorities being there or being cared about is DEI.

        Only men naturally matter. White men I mean. Only right wing white men, actually, bonus point if they are aggressive assholes. That makes them proper masculine.

      • tt24 3 hours ago

        In what way does ChatGPT’s understanding of the term DEI not reflect reality?

        • stvltvs 2 hours ago

          The pejorative sense of DEI has probably poisoned the training data. You might be able to prompt around it, but the existing prompt is pretty lazy.

          • fruitworks an hour ago

            the dataset is poisoned with a definition you disagree with

            • stvltvs 11 minutes ago

              Agreement or disagreement is irrelevant if you're asking the LLM for something more precise than generalized racial grievance labeled by the public as DEI.

            • Capricorn2481 an hour ago

              Newsflash: the definition is amorphous to justify whatever people want.

        • eesmith 2 hours ago

          From the article:

          > For example, the AI searches [purportedly related to DEI] flagged .... a film examining how the game of baseball was “instrumental in healing wounds caused by World War I and the 1980s economic standoff between the US and Japan,”

          How at all is that DEI? (Surely that should be WWII, yes? The complaint also says "I".)

          And, is this also DEI?

          > another charting “the rise and reforms of the Native Americans boarding school systems in the U.S. between 1819 and 1934,”

          American football would be impoverished without the contributions of Native Americans from the Carlisle Indian Industrial School, an experimental Native American boarding school.

          Pratt, who founded the school, wrote "If all men are created equal, then why were blacks segregated in separate regiments and Indians segregated on separate tribal reservations? Why weren't all men given equal opportunities and allowed to assume their rightful place in society? Race became a meaningless abstraction in his mind." Is that also DEI?

          Would you care to summarize what DEI means in reality?

        • nancyminusone 2 hours ago

          In the sense that viewpoints of people that use the term DEI do not appear to reflect reality?

          What kind of sentiment do you think you would find in the training material regarding the term DEI?

  • spwa4 2 hours ago

    That's going to be another big problem with AI. The same problem they have with developers.

    Management: "We need to do X"

    AI does X

    Management: "It's not working"

    AI: what do you mean? It does exactly what you asked.

    Management: "I wanted it to do Y, and that's how you do it" (with Y having nothing to do with X whatsoever)

    AI: ...

    Management: I'm hiring the developers back ...

    • hleszek 2 hours ago

      That has always been a thing since the invention of computers. The great thing about computers is that they do exactly what you ask them to do. The problem with computers is that they do exactly what you ask them to do.

      • enlightens 44 minutes ago

        from 1864:

        > On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

        https://en.wikiquote.org/wiki/Charles_Babbage

    • layer8 an hour ago

      The board will solve this by replacing management with AI.

  • mindslight 3 hours ago

    We're going to look back at the second Grump admin as what happens when society enthusiastically embraces ego-stroking hallucinations - from "magic computer" LLMs, hollow TV personalities, and of course good old combative dementia.

  • mrits an hour ago

    It sounds like the author thinks you need a graduate degree from Cornell to become a professional blogger. He probably should have gone the college dropout path as well

  • Mr_Eri_Atlov an hour ago

    I hate it here

  • drivingmenuts an hour ago

    Can we please put these guys on trial for malfeasance?!??!?

  • colinplamondon 3 hours ago

    Ironically... ChatGPT having such a positive attractor basin for DEI probably widened the net here tremendously.