• Jankatarch@lemmy.world · 2 days ago

    Read the article. Their definition of “sabotage” includes not using AI tools.

    I guess the wording says “sabotaging AI strategy” so it’s our fault for the intended misunderstanding?

    • mrgoosmoos@lemmy.ca · edited 15 hours ago

      yeah I guess I fall under that definition as well

      sure, I am an AI skeptic. I work in engineering. I should be critical of any tools. that doesn’t mean that I’m sabotaging the company’s strategy, unless their strategy is to blindly implement AI tools. in which case yeah sure, but like surely that’s not the actual strategy, right?

      anyways, short story time:

      • our company had a demo for an AI drawing creation tool. it was not very impressive and their team couldn’t answer many questions about how it works. it didn’t seem like it would provide any value to us since it couldn’t do complex drawings and simple drawings take little time and effort to create. so the moderate complexity stuff is where it could shine, which coincidentally is also where junior people train their skills to become intermediates and seniors. and like I’m not going to choose to turn our jobs into reviewing AI output, because that’s bad for the company - it leads to poor job satisfaction, poor work quality, and inexperienced team members

      • our CTO is vibe coding a bunch of tools for us right now. his approach is to basically not validate anything and let people find issues. I report these issues when I find them, as well as go looking for them if I suspect something is wrong. does that make me a saboteur? trying to correct something?

      • same guy has also used Claude to create some technical specifications/document summaries, and sent them out externally. the external team had many questions because the documents didn’t make sense. bad look for our company, I think, and lots of wasted time trying to figure out that information and then going back and correcting it later. am I a saboteur because I don’t blindly adopt AI document generation and keep asking who validated certain information, instead of just using it and proceeding with my work?

      • Jankatarch@lemmy.world · 16 hours ago

        Funny enough, they count even “using unapproved AI tools” as sabotage. So I would say your actions fall under high treason?

  • Signtist@bookwyr.me · 2 days ago

    My boss asked people with tech skills to chime in about AI, so I made a quick report of a bunch of things that AI sounds like it’d be useful for in my line of work, and why it wouldn’t be, with examples of times when a real human made the same kind of mistake while I’ve been working there, and how much that mistake cost the company. He decided not to pursue AI. You can call it sabotaging the AI strategy, or you can call it helping keep the company from making a major fuckup, take your pick.

    • kkj@lemmy.dbzer0.com · 2 days ago

      I’d call it helping form the AI strategy. Sabotaging it would be waiting until they make one and then not following it.

  • Sundray@lemmus.org · 3 days ago

    I see they’re laying the groundwork for a “stabbed in the back” narrative for when the AI bubble inevitably bursts.

  • uuj8za@piefed.social · 3 days ago

    Fuck this ad for AI. It’s trying to make it seem like workers don’t use AI because they’re scared. Only 8% said they were scared. The other 92% of us aren’t scared of being replaced by AI. We see how shitty AI is and we don’t like it because it sucks and makes things slower, not faster.

    The super-users we surveyed were around 3x more likely to have received both a promotion and pay raise in the past year, compared to employees who have been slow to adopt these tools

    I do agree with this point. One of my team members recently got a lot of brownie points because he’s been doing AI demos. The execs love him because he’s visibly following orders. Does he generate way more code than everyone else? YES, this is actually a horrible thing, but execs are clueless and think more code == more better. Is he more productive than others? Definitely not. The hot garbage he’s generating is just bug-ridden tech debt.

    I guess I’m sabotaging our AI rollout by getting out of the way. You wanna inject AI everywhere? Fine, do it. I’m not gonna review it though. If you can’t take the time to write something, I’m not going to spend my time reading it.

    • jaybone@lemmy.zip · 3 days ago

      In about five years, there will be so much garbage code with unfixable bugs. It’s difficult for me to imagine what kind of collapse this will cause, or how we will recover from it, which might take another decade. Fortunately we might be fighting each other with spears over fresh water by then, so we will have bigger problems to not solve.

      • drcobaltjedi@programming.dev · 2 days ago

        I’d say be hopeful, but I don’t know.

        I am a software developer, and there have absolutely been times where a temp fix becomes permanent. But I’ve also had times where my boss has told me to clean up tech debt, or where I’ve been able to say “look, this whole chunk of code is both wrong and unmaintainable” (wrong as in it didn’t do the thing correctly, but it looked correct-ish) and I’ve been allowed to just rewrite the broken code from scratch.

        Idk, I feel like at a certain point the code’s bugs might be so obvious and troublesome that companies are forced to actually deal with the problem code, and when that happens will be different for every company and every program.

  • CombatWombat@feddit.online · 3 days ago

    Oh no, it’s so irresponsible of europesays.com to publish this practical list of ways to sabotage your company’s AI rollout. Hopefully no other outlets include longer, more detailed lists, or we might see this kind of behavior start to spread:

    The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools. Some employees report outright refusing to use AI tools. Others have even admitted to tampering with performance reviews or intentionally generating low-output work to make AI appear less effective.

    • searabbit@piefed.social · 3 days ago

      This is amateur work. I’ve seen someone volunteer to head the staff AI training, outline in the presentation how bad AI is (i.e., terrible for the environment, not reliable, all true things), and also put out the most half-assed training rollout. It had the effect of half the staff, intentionally or unintentionally, doing other forms of sabotage.

    • Hackworth@piefed.ca · 3 days ago

      The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools.

      Not sure how that one sabotages the company’s AI strategy. That’s just plain old data insecurity. Posting the same information to a forum would accomplish the same harm.

      • CombatWombat@feddit.online · 3 days ago

        If the data leaks via an LLM, it discredits the LLM. If it leaks via a forum, it discredits the forum.

        • dreamkeeper@literature.cafe · 2 days ago

          Not really imo. People will blame the leakers, not the llm, and they wouldn’t be wrong. There’s nothing you can do to stop people from leaking info to the public other than the threat of job loss and a massive lawsuit.

          What would discredit the llm is if the llm provider violated their contract and used the data for something their customers didn’t agree to.

      • T156@lemmy.world · 2 days ago

        That just sounds like the employees are using AI as asked of them, but the company’s own offerings/tools are bad, or they’re given bad goals, so they turn to one of the major AI companies, like ChatGPT, since it’s all AI anyway. That’s not overt sabotage.

  • Slotos@feddit.nl · 3 days ago

    Some employees report outright refusing to use AI tools.

    So having morals is sabotage now?

  • ThePowerOfGeek@lemmy.world · 3 days ago

    From the article:

    An Anthropic study released last month found AI is already theoretically capable of completing the majority of tasks associated with computer science, law, business, and finance, and other major white-collar fields

    There’s a huge difference between “capable of completing the majority of tasks” and “capable of completing the majority of tasks WELL”.

    Sure, you can have an AI code your web app or mobile app, for example. But it will be riddled with bugs and bloated with inefficient code.

    And from what I’ve seen, it’s not getting noticeably better at that.

    But the AI companies won’t acknowledge that, of course. They will continue selling the snake oil that cures everything that ails you.

    • gravitas_deficiency@sh.itjust.works · 3 days ago

      An Anthropic study says AI can do everything

      Crack dealer says crack is awesome

      🙄

      Honestly, the only reason I use that LLM shit now is because the job market in tech is getting a bit like the hunger games and my employer strongly encourages its use - and even then, the vast majority of my usage is “fancy search engine”. Even the boilerplate it gives me sometimes is like… really weirdly styled and quite often has to be corrected to be fit for purpose. I cannot understand people who just slap that shit in without even bothering to check it.

      • GrindingGears@lemmy.ca · 2 days ago

        I cannot understand people who just slap that shit in without even bothering to check it.

        Really? Because that’s exactly what I do. I’m metered by my spend, so I produce just absolutely wholesale volumes of slop, don’t review it or even look at it, and just go “fine, here you go, this is what you want.” No one else reviews it, it gets piled on top of other slop, and shit’s going to really start piling up and breaking. Execs won’t care, they’ll start firing people anyways, it’ll continue to pile up, problems will multiply…

        I mean the alternative is do nothing, get fired. Or contribute to a downfall, survive a little longer maybe, still get fired. Or fix it, produce really good work at agonizing levels of fixing everything, and still get fired. So what’s not to understand here?

        • gravitas_deficiency@sh.itjust.works · 2 days ago

          Well, I do my best to fight the good fight and make systems that aren’t impossible to maintain, because I work at an oncology biotech and despite all its flaws it’s a very compelling and worthwhile mission to me overall. So I do my best to not treat things too transactionally.

  • TheDoctorDonna@piefed.ca · 3 days ago

    The company I work for is pushing Claude. After noticing the lack of use, they set up a bunch of training sessions and are instructing us to plan to attend one. I keep ignoring every push they make. Why the hell would I train this hallucinating liar to take my job? It’s crazy to expect us to use the thing they want to eventually replace us with.

    • GrindingGears@lemmy.ca · 2 days ago

      Use it. Fuck it. You are cooked either way. It’s not going to replace you; it’s basically the next evolution of an Excel formula for tasks (Claude’s task handling, like matching, actually does work pretty well), and an evolution of a search engine. I am unaware of any successful companies that are solely staffed by fancy search engines and next-gen Excel formulas. You still need a human in the middle. This whole wave is like when they tried to offshore everyone a decade ago. Customers left in droves because you couldn’t understand the guy on the other side of the conversation, and someone being paid 30 cents an hour, or whatever crap those poor people are exposed to, isn’t going to have any due care or attention to detail.

      Execs continue to funnel money and burn everything to the ground. This whole AI thing might actually be a good thing in the long run, in a roundabout way: it’s going to cause a financial catastrophe, and everyone is going to get so burned by all this overhype and over-promising that they’ll be terrified to spend money on stuff like this for quite some time. Plus it might force a day of reckoning where the world finally wakes up to what the value of a modern executive is (near zero).

      • TheDoctorDonna@piefed.ca · 2 days ago

        Trying to use AI to do my job would honestly only make my job more stressful. It is a very detail-oriented job, and I do not trust AI to do it for me. I’m not willing to risk my job over having it done wrong; at least if I’m replaced I get severance and shit, but if I’m fired I get squat.