
  • First thing you should check is whether the school offers VDI (Virtual Desktop Infrastructure).

    My college has VDI, where you can access a GPU-accelerated Windows machine from your browser, preinstalled with tools like AutoCAD, Photoshop, and other stuff.

    If your school doesn’t, then you should look at options like VMs. The problem, however, is that CAD and a lot of other software is GPU intensive, and simply running it in a VM might be too slow for practical use.




  • If you’re not on Arch Linux, you should probably switch. It has the latest packages of everything, and the Arch User Repository (AUR) essentially compiles whatever program you want from source, in one command.

    You should also be careful about things like installing .deb/.rpm packages directly from websites, because that’s how you can break your system. I also suspect you installed pip packages into the system Python itself, which can break your system too.
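    A safer pattern than installing pip packages system-wide is a per-project virtual environment, which keeps everything out of the distro-managed site-packages. A minimal sketch using only the standard library (the directory name is just illustrative):

```python
import tempfile
import venv
from pathlib import Path

# Create an isolated environment instead of writing into the system's
# site-packages, which the distro's package manager owns.
env_dir = Path(tempfile.mkdtemp()) / "mytools-venv"  # illustrative location
venv.create(env_dir, with_pip=False)  # with_pip=True would also bootstrap pip

# The venv has its own interpreter; running `<env_dir>/bin/pip install ...`
# then installs packages here, leaving distro-managed files untouched.
print((env_dir / "bin" / "python").exists())
```

    In day-to-day use you’d just run `python3 -m venv ~/.venvs/mytools` and install into that; newer distros even refuse system-wide pip installs (the "externally managed environment" error) for exactly this reason.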

    Anyway, Mesa, being a “system” package, is definitely more challenging, since it needs to be deeply integrated into the system. If you actually need a newer version of it, the easiest route is to switch to a distro that ships one, or, if you only need the userspace part, you can use it within a container like the ones offered by distrobox or junest.

    If you wanted a newer version of an “application”, Flatpak would probably be good enough to get it onto your system. “Applications” don’t need to be as integrated with the rest of your system.

    As a rebuttal to your post though, there is a very good reason why Linux does packaging the way it does. Installing a program on Windows is nowhere near as simple as it may seem to you.

    You probably have an adblocker, use a non-Google search engine, and know your way around websites. But consider the average user’s actual process of installing a program on Windows. It looks more like:

    • Search Google for the program
    • Click the first link. Oh wait, that’s a sponsored link that leads to malware.
    • Click the second link. Oh wait, that one isn’t an ad, but it’s probably malware too
    • Navigate through “You’ve got a virus on your PC” popups
    • Go back to Google
    • Find the real link. Click through the ads on that site, because of course it has ads.
    • Download the real software

    Of course, for you the process probably takes 15 seconds. But for a real average, non-advanced user, this experience is fraught with risk. If they click the wrong link, they get malware on their computer. Compare this to installing software on Linux from a distro’s repos:

    • Open app store / package manager GUI
    • Find program. Click install. Enter password.
    • Don’t think about things like program versions, and just be happy you now have Krita or whatever program you want.

    No risk. No pain. Simple.

    There are also very good reasons why distro repos carry older packages. Two main ones:

    The first is stability. “Stable” vs “unstable” says nothing about system reliability; it is about lack of change. I like to say that a stable-release distro doesn’t just mean older packages, it means you get the same system behavior over a period of time. Instead of a constantly changing set of bugs, you deal with the same set.

    I like Arch. I like new packages. I can find workarounds for whatever the annoying bug is this update cycle. But the average user probably doesn’t want to deal with that. They probably don’t want the bug of the week; they would rather have some predictable bug that stays put for a few years, one they already know their way around.

    I remember watching a Twitch streamer hit this, actually. They were complaining about old packages, and I pointed out that the reason older packages are there is to have the same predictable set of bugs instead of a changing set. They dismissed me, claiming they needed new packages, which is understandable. But then they (an Arch Linux user) immediately hit an issue with Dolphin (the KDE file manager) where the top bar / UI wouldn’t load at all, and got really frustrated. I didn’t say anything, but I did laugh to myself and feel vindicated when it happened. Of course, eventually that bug will be fixed. But new ones will come along.

    The second reason is supply chain security. Debian and Red Hat Enterprise Linux were not affected by the XZ Utils backdoor, thanks to a policy of only shipping carefully cherry-picked security updates. I won’t go into detail here, but I have another comment about it.







  • a grand scale with the XZ backdoor

    The XZ backdoor affected far fewer machines than you might think. It did not affect:

    • Debian Stable, or Red Hat Enterprise Linux (and its rebuilds)
    • Linux distros that did not integrate sshd with systemd
    • Linux distros that do not use systemd

    The malicious code never made it into RHEL or Debian. Both of those distros have a model of freezing packages at a specific version. They then only push manually reviewed security updates, ignoring feature updates or bugfixes to the programs they are packaging. This ensures maximum stability for enterprise use cases, but the fact that the changes are small and reviewable also lets them dodge supply chain attacks like XZ (it also enables these distros to offer stable auto-update features, which I will mention later). And those distros make up a HUGE family of enterprise Linux machines that were simply untouched by this supply chain attack.
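    The freeze shows up right in the version strings: a stable-release security update bumps only the packaging suffix, while the upstream version stays pinned. A toy illustration (the version strings below are hypothetical, not a real package’s history):

```python
# Debian-style version: <upstream>-<packaging revision>.
# A stable security update changes only the revision part; the frozen
# upstream version, i.e. the actual program code, stays the same.
def upstream(version: str) -> str:
    return version.rsplit("-", 1)[0]

before = "5.4.1-0.2"          # hypothetical frozen stable package
after  = "5.4.1-0.2+deb12u1"  # hypothetical reviewed security update

# Same upstream code, just a small reviewed patch on top:
print(upstream(before) == upstream(after))
```

    That small, reviewable delta is exactly what makes it hard to smuggle a backdoor in through a “routine” new upstream release.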

    As for Linux distros that don’t integrate sshd with systemd, or non-systemd distros: the malicious code did make it there, but it stayed inactive in those scenarios. I wonder whether that was sloppiness on the part of the malware’s author, or intentional, having it activate less often as a way of avoiding detection?
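    Loosely sketched, the payload gated itself on its host process: it only mattered inside an sshd whose process image had pulled in liblzma at all, which happens via the libsystemd patch some distros apply to OpenSSH. A toy model of that gating (heavily simplified; the real backdoor’s checks were far more elaborate):

```python
# Toy model only, to show why non-systemd setups stayed dormant.
def payload_arms(process_name: str, loaded_libs: set) -> bool:
    # Dormant everywhere except an sshd that links libsystemd,
    # since that linkage is what drags liblzma into the ssh daemon.
    return process_name == "sshd" and "libsystemd" in loaded_libs

print(payload_arms("sshd", {"libsystemd", "liblzma"}))  # systemd-patched sshd
print(payload_arms("sshd", {"liblzma"}))                # no systemd integration
print(payload_arms("xz",   {"liblzma"}))                # ordinary xz usage
```

    Every machine outside that narrow condition carried the code but never executed the malicious path, which is why the blast radius was smaller than the headlines suggested.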

    Regardless, comparing the XZ backdoor to the recent NPM and other programming-language package manager supply chain attacks is a huge false analogy. They aren’t comparable at all. Enterprise Linux distros have excellent supply chain security, whereas programming language package managers have basically none. To copy from another comment of mine about them:

    Debian Linux, and many other Linux distros, have extensive measures to protect their supply chain. Packages are signed and verified by multiple developers before being built reproducibly (I can build and verify an identical binary/package). The build system has layers, such that if only a single layer is compromised, nothing happens and nobody flinches.

    Programming-language-specific package repos have no such protections. A single developer has their key/token/account, and then they can push packages, which are often built on their own devices. There are no reproducible builds to ensure the binaries come from the published source code, and no multi-party signing to ensure that multiple devs would need to be compromised in order to compromise the package.
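    Those two missing protections are simple to sketch: reproducible builds mean two independent builders hash the artifact to the same digest, and multi-party signing means one compromised account is not enough to publish. A toy version (the byte strings, maintainer names, and 2-signature threshold are all hypothetical, not any repo’s real policy):

```python
import hashlib

def digest(artifact: bytes) -> str:
    # Content hash of a built package; reproducible builds require that
    # independent rebuilds of the same source produce the same digest.
    return hashlib.sha256(artifact).hexdigest()

build_a = b"\x7fELF...identical artifact bytes"  # stand-in for builder A's output
build_b = b"\x7fELF...identical artifact bytes"  # independent rebuild, builder B
print(digest(build_a) == digest(build_b))        # mismatch would mean tampering

# Multi-party signing: require at least k distinct maintainer signatures,
# so a single phished account cannot push a package alone.
def enough_signers(signers: set, threshold: int = 2) -> bool:
    return len(signers) >= threshold

print(enough_signers({"maintainer-a", "maintainer-b"}))  # accepted
print(enough_signers({"phished-dev"}))                   # rejected
```

    A language-package-registry compromise defeats exactly the second check: one leaked token is one “signer”, and that is all it takes today.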

    So what probably happened is that some developer got phished or hacked and gave up their API key. The package they maintained was popular and frequently ran unsandboxed on devs’ personal devices, so when other developers downloaded the latest version, they got hacked too. The attackers then used those devices to push more malicious packages to the repo, and the cycle repeated.

    And that’s why supply chain attacks are now a daily occurrence.

    And then this:

    You should probably turn off Dependabot. In my experience, we get more problems from automatic updates than we would by staying on the old versions until needed.

    This drives me insane as well. It’s a form of survivorship bias: people only notice when automatic updates cause problems, but they completely ignore the many issues that automatic security updates prevent. Nobody cares about some organization NOT getting ransomwared because its webserver was automatically patched. That doesn’t make the news the way auto-updates breaking things does. To copy from yet another comment of mine:

    If your software updates break between stable releases, the root cause is the vendor, not auto-updating. Many projects manage to auto-update without causing problems. Debian, for example, doesn’t even ship new features or bugfixes between stable releases, only security patches, for maximum compatibility.

    CrowdStrike’s auto-updates also had issues on Linux, even before the big Windows BSOD incident.

    https://www.neowin.net/news/crowdstrike-broke-debian-and-rocky-linux-months-ago-but-no-one-noticed/

    It’s not the fault of the auto-update process, but the lack of QA at CrowdStrike. And it’s the responsibility of system administrators to vet their software vendors and ensure the update models in use don’t cause issues like this. Thousands of orgs were happily using Debian/Rocky/RHEL with auto-updates, because those distros ship minimal feature changes or bugfixes and only security patches, giving no-fuss security auto-updates for around a decade for each stable release, on software that has already been extensively tested. Stories of those breaking are few and far between.
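    The policy those distros follow amounts to a one-line filter: apply updates from the security suite unattended, and hold everything else until the next stable release. A toy version (the update records below are hypothetical; a real system reads this from apt/dnf metadata):

```python
# Hypothetical pending-update feed for a stable-release machine.
available = [
    {"pkg": "openssl", "origin": "security", "version": "3.0.11-1~deb12u2"},
    {"pkg": "krita",   "origin": "updates",  "version": "5.2.0"},  # feature bump
    {"pkg": "openssh", "origin": "security", "version": "9.2p1-2+deb12u3"},
]

# Security-only policy: the patches land unattended, the feature bump waits.
auto_apply = [u["pkg"] for u in available if u["origin"] == "security"]
print(auto_apply)
```

    On Debian this is what unattended-upgrades does when pointed only at the security origin: low-risk patches flow automatically, while anything that could change behavior waits for a human.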

    I would rather pay attention to the success stories than the failures, because in a world without automatic security updates, millions of lazy organizations would be running vulnerable software unknowingly. This already happens, since not all software auto-updates. But some is better than none, and a world where all software is vulnerable by default until a human manually touches it is simply a nightmare to me.