Sounds like you just need to keep the data on your server and use samba or NFS and a network mount on the other devices.
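As a rough sketch of the NFS route (the host name and paths here are made up; adjust to your setup), you export the directory on the server and mount it from each client:

```
# On the server, in /etc/exports - share /srv/data with the local network:
/srv/data    192.168.1.0/24(rw,sync)
# then apply it with: exportfs -ra

# On each client, in /etc/fstab - mount the share at boot:
server:/srv/data    /mnt/data    nfs    defaults,_netdev    0 0
```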
It doesn’t technically have drivers at all or go missing. All supporting kernel modules for hardware are always present at the configuration level.
This isn’t true? The Linux kernel has a lot of drivers in the kernel source tree, but not all of them - notably, NVIDIA’s proprietary drivers have never been included. And even the included drivers may or may not be compiled into the kernel image. They can be, and generally are, compiled alongside the kernel but as separate modules that are loaded at runtime. These days few drivers are compiled in and most are dynamically loaded depending on what hardware is present on the system. Distros can also opt to split these drivers up into different packages that you may or may not have installed - which is common for less common hardware.
Though with the way most distros ship drivers, they don’t tend to spontaneously stop working. Well, with the exception of Arch Linux, which deletes the old kernel and its modules during an upgrade. That means the currently running kernel can no longer find its modules and stops being able to dynamically load them - which often results in hotplug devices like USB drives not working if you plug them in after the modules get unloaded (and it needs a reboot to fix, as that boots into the latest kernel, which does have its modules present).
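You can see the symptom for yourself on Arch after a kernel upgrade but before a reboot - a rough sketch (the kernel version shown is just an example):

```
# The running kernel loads modules from /lib/modules/$(uname -r)
uname -r                        # still reports the old kernel, e.g. 6.9.3-arch1-1
ls /lib/modules/"$(uname -r)"   # fails: the upgrade removed the old kernel's module tree
lsmod                           # modules already loaded keep working; nothing new can be loaded
```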
Or refactored at a later date.
I don’t get it? They seem to be arguing in favor of bootc over systemd because bootc supports both a split /usr and the /usr merge? But systemd is the same. There is really nothing in systemd that requires it one way or the other - even the linked post about systemd says:
Note that this page discusses a topic that is actually independent of systemd. systemd supports both systems with split and with merged /usr, and the /usr merge also makes sense for systemd-less systems.
I don’t really get his points for it either. It basically boils down to not liking a mutable root filesystem because the symlinks are so load bearing… but most distros before the /usr merge had a writable /bin anyway, and nothing is stopping you from mounting the root fs as read-only on a merged-/usr distro.
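For reference, on a typical merged-/usr distro those load-bearing paths are just symlinks into /usr - roughly like this (exact targets vary a bit between distros):

```
# The top-level dirs are symlinks into /usr on a merged system:
$ readlink /bin /lib
usr/bin
usr/lib
# And a merged-/usr root can still be mounted read-only, e.g.:
$ sudo mount -o remount,ro /
```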
And their main argument is that /opt and similar don’t follow the /usr merge, and neither do things like docker. But /opt is just a dumping ground for things that don’t fit the filesystem hierarchy, and with docker containers you can do what you want - like with any package, really; nothing needs to follow the unix filesystem hierarchy. I don’t get what any of that has to do with bootc or the /usr merge at all.
TL;DR: yes, it does affect security. But quite likely not by any meaningful amount - not enough to be worth worrying about.
Any extra package you install is extra code on your system that has a chance to include vulnerabilities and thus could be an extra attack vector on your system. But the chances that they will affect you are minuscule at best. Unless you have some form of higher threat model, I would not worry about it. There are far more things you would want to tackle first to increase your security that have far larger effects than a second desktop environment being installed.
So, why wait for Windows 10 EOL, if you are already mostly on Linux and are planning on getting rid of the last bits anyway? You can always reinstall Windows on a second disk or in a VM later on if you really need to - no real need to do that preemptively if you don’t plan on using it.
If you have everything you need backed up, you can reinstall on a new hard drive and restore from there. So you should not be completely fucked - just an inconvenience you might have to go through. You will lose anything not backed up, so if any of that is a pain to get again, restoring everything might be more painful.
Others have said some things you might want to try. But having a spare disk you can swap to is never a bad idea. Disks do fail and you should plan for what to do when they do. Backing up your data is a good first step.
I would say it is not a bad idea to just get a new disk now and go through the process of restoring everything anyway - you can treat it like your disk has failed and do what you would need to do to restore. With the ability to swap back when you need to.
This is a good way to find things you might have missed in your backups.
Not if you have backed up your data. You have a backup of your data right?
Why wait? Start using Linux-friendly software in your day-to-day workflows. Then start to dual boot Linux with your current system and use it more and more. By the time Windows 10 reaches EOL you will know whether you still need a Windows install or not.
I don’t mind ads so much. What I don’t want is invasive tracking and the collection of every scrap of data they can get to push ads on you. Give me some dumb ads based on the damned contents of the page and I would be fine. But no, ads are basically a synonym for tracking these days.
With teams like MS and Apple also working on it, I expect this to be figured out on a faster timeline. Months/ few years.
That assumes they will share a meaningful amount of work. I do not see what Apple have done to help much at all - completely closed ecosystem with their own custom chips that they are not going to want to share.
MS have done a really bad job of getting ARM to take off as well, and have not been putting a huge effort into it that I have seen. And especially since Valve is doing this in part to get away from MS systems, why would MS help Valve with that goal?
So yeah, if they did put in and share the effort it would take less time. But I don’t see them doing that. Plus, it takes years to develop a product like this, and all evidence ATM suggests they have barely, if at all, started on the next version. Which does suggest that the next deck is likely more than a year away, probably two. Which does increase the chances that it could be ARM based.
I wouldn’t be so sure. There are a lot of blockers to getting a working ARM based Steam Deck. First, Arch Linux (which SteamOS is based on) does not offer official ARM binaries. This would mean they would need a new base OS or would have to work on getting Arch Linux to support ARM. Their recent donations to Arch Linux were focused on unblocking some issues with supporting Arch on ARM (notably stuff needed for better automated builds), which would suggest they want to stick with Arch.
Next you need good emulation layers for x86 and x64, as that is what all games are built for. There are leaks that say they are working on this as well.
But those are two big blockers that could take years to solve. So it all comes down to when they want to release the next deck. Within a couple of years and I don’t think it will be ARM based. After that the chances go up quite a bit.
And given some recent news about Valve working on an ARM emulator and funding Arch Linux to help them start supporting ARM as well, they might be working towards that. Though whether that is for the Deck 2 or something else further in the future is yet to be seen.
The only other major thing for repairability/upgradability would be less glue on the battery and threaded inserts, which doesn’t add size.
The glue was reduced on later versions and especially on the OLED version which also got threaded inserts. So those are already done and I doubt the next version would regress in that regard.
And then relabel all the old standards when they create a new one, so every generation you need to figure out what all the new names mean.
Oh they care. They care a lot. Particularly that you don’t have any so they can sell all your details to any bidder.
This is irrelevant with Steam though. Steam offers a runtime with preconfigured versions of everything that is needed, to give devs a consistent environment for their games to run in no matter how fragmented the Linux install base might be. This runtime is also what Proton uses to ship its different versions.
Linux makes up exactly one package on a so-called Linux system.
True, it was a poor proxy for what I really meant - which was the amount of code that my system runs. Linux as a project is growing quite fast these days and is getting bigger and bigger. But the number of GNU tools I use (and thus their code that I use) is growing smaller and smaller.
Musl, systemd, Freedesktop, etc. were never OS projects. GNU and Linux are OSes.
What the hell makes a project an OS project? What even is an OS - that is a debate as old as computers. What makes GNU more of an OS than systemd or musl or anything else? GNU is not a complete OS on its own; it has failed to meet that goal for decades. Is it just because it claims that title? Are the other projects just not ambitious enough? Hell, why are we not raising pitchforks at GNU for being an all-encompassing project that wants to consume everything, like everyone complains systemd is trying to do?
The lines drawn here are meaningless and arbitrary. GNU is no more important to my systems than any other project mentioned here and makes up no more of my system than they do. I don’t see why so many are obsessed with singling out GNU and explicitly excluding everything else. It is a pointless distinction created by a guy who was pissy that his pet project was not getting the attention he thought it deserved.
Whaaatt!?!!? That sounds like you don’t use git? You should use git. It is a requirement for basically any job and there is no reason not to use it on every project. Then you can keep your projects on a server somewhere - on your NAS if you want, or else somewhere like GitHub/GitLab/Bitbucket etc. That way your local copies don’t really matter much, only what is on the remote, and with decent backups of that you don’t need to constantly archive things from your local machine.
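If you have never set that up, a remote on your NAS is just a bare repo you push to over ssh - a rough sketch (the host name and paths here are made up; adjust to your setup):

```
# On the NAS: create a bare repository to push to
ssh nas 'git init --bare /srv/git/myproject.git'

# On your machine: put the project under git and push it to the NAS
cd ~/projects/myproject
git init
git add .
git commit -m "Initial commit"
git branch -M main
git remote add origin nas:/srv/git/myproject.git
git push -u origin main
```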