• stinky@redlemmy.com · 7 days ago

    Took me a few days, but I fully switched to Firefox. My computer finally runs the way it should.

    • AugustWest@lemm.ee · 6 days ago

      I was really hesitant to switch to Firefox. It took a long time for me to finally leave the Mozilla suite and accept that Firefox was the one going forward.

    • MonkeMischief@lemmy.today · 6 days ago

      It gets even better when you add:

      - Tab Stash
      - Auto Tab Discard

      Tab Stash lets me stash a big ridiculous research or shopping session I’d want to return to, under a nice collection label for later.

      And Auto Tab Discard will essentially unload open tabs you haven’t touched in a while, so they’ll load from scratch when you “wake them up”, but they’re not hogging all your RAM. It’s fantastic.

  • MNByChoice@midwest.social · 7 days ago

    Linux and FreeBSD systems? Happy and snappy.

    Work Windows system filled with crap corp security software? Open electron apps and wait for them to load.

    Personal Windows system? Master of Orion, the remake.

      • ayyy@sh.itjust.works · 7 days ago

        The file system Windows uses (NTFS) has a lot of neat features, but ends up being astronomically slow in unexpected ways for some file operations as a result.

        • JordanZ@lemmy.world · 7 days ago

          I remember playing around with NTFS streams. They’re usually used to store random metadata about a file, and their size doesn’t appear in the normal file size calculation/display in Windows. So you can have a 2 KB text file with an alternate stream holding a zip of a band’s entire discography stuffed into it. Longest 2 KB file transfer ever. Another gotcha: the second you copy that file to a file system that doesn’t support alternate streams, they just vanish. So all of a sudden that long file transfer is super quick.
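
          If you want to see how sneaky that is, here’s a minimal Python sketch of the trick (assuming Windows and an NTFS volume; “payload.zip” is just a hypothetical file to hide):

          ```python
          # Minimal ADS demo -- Windows + NTFS only; "payload.zip" is a placeholder file.
          import os

          with open("note.txt", "w") as f:                 # the small, visible file
              f.write("just a note\n")

          with open("payload.zip", "rb") as src, \
               open("note.txt:hidden.zip", "wb") as ads:   # "file:stream" targets an alternate data stream
              ads.write(src.read())

          print(os.path.getsize("note.txt"))               # still reports only the main stream's size
          # "dir /r" in cmd is one of the few places the hidden stream shows up
          ```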

          • DahGangalang@infosec.pub · 6 days ago

            See, I’ve never really seen the purpose of NTFS streams. In a cyber security course I was warned to look out for Alternate Data Streams, but I got an unsatisfactory answer when I prodded the instructor for more (it was apparent they didn’t have anything beyond a surface-level understanding of them).

            Your link was informative in grasping what they are, but I still don’t think I’m clear on how they’re used in the “real world”. Like, what (and how) would one use them for a legitimate purpose?

            • JordanZ@lemmy.world · 6 days ago

              It’s been a few years since I last looked at them, but I believe one of the most notable uses was icons, e.g. a custom icon for an application or the thumbnail image for a photo.
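
              Another real-world one I’m fairly sure of: Windows itself uses an ADS called Zone.Identifier (the “mark of the web”) to tag files you download, which is what triggers the “this file came from the internet” warnings. A rough sketch of peeking at it (the path is just a placeholder):

              ```python
              # Read the "mark of the web" stream on a downloaded file (Windows only).
              # The path below is a placeholder -- point it at something in your Downloads folder.
              path = r"C:\Users\me\Downloads\installer.exe"
              try:
                  with open(path + ":Zone.Identifier") as f:
                      print(f.read())          # typically "[ZoneTransfer]" followed by "ZoneId=3"
              except FileNotFoundError:
                  print("No Zone.Identifier stream on this file")
              ```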

  • Dimi Fisher@lemmy.world · 6 days ago

    I remember not long ago showing my Linux desktop on Reddit, and everybody was going crazy because I was still using Firefox when “Chrome is the browser to use nowadays” and all that crap. I guess times have changed for the better. And yes, I still use Firefox.

    • daniskarma@lemmy.dbzer0.com · 6 days ago

      I started using Firefox back in 2007.

      I have never looked back. If someday there’s a better alternative I’d switch, but Chromium-based browsers are certainly not that alternative.

  • suicidaleggroll@lemm.ee · 7 days ago

    I’d be in trouble, since between ZFS and my various VMs, my system idles at ~170 GB RAM used. With only 32 I’d have to shut basically everything down.

    My previous system had 64 GB, and while it wasn’t great, I got by. Then one of the motherboard slots died and dropped me to 48 GB, which seriously hurt. That’s when I decided to rebuild and went to 256.

    • jaschen@lemm.ee · 7 days ago

      Real question: doesn’t the computer actually slow down when you have that much memory? Doesn’t the CPU need to seek through a bigger space vs. a smaller memory set?

      Or is this an old-school way of thinking?

      • IHawkMike@lemmy.world · 7 days ago

        No, that’s not how it works. Handling a larger address space (e.g., 32-bit vs. 64-bit) could maybe affect speed between same-sized modules on a very old CPU, but I’m not sure that’s even the case by any noticeable margin.

        The RA in RAM stands for random access; there is no seeking necessary.

        Technically, at a very low level, size probably affects speed, but not to any degree you’d notice. RAM speed is actually positively correlated with size, but that’s more because newer memory modules are generally both bigger and faster.

        • The_Decryptor@aussie.zone · 7 days ago

          The RA in RAM stands for random access; there is no seeking necessary.

          Well, there is: CPUs need to map virtual addresses to physical ones, and the more RAM you have, the more management of that memory you need to do (e.g. modern Intel and AMD CPUs can have 5 levels of indirection between a virtual and a physical address).

          But the CPU also caches those address mappings, so as long as your TLB is happy, you’re happy. An alternative is to use larger page sizes (a page being the smallest unit of RAM the CPU can map): the larger the page, the less you need to recurse into the page tables to actually find said page, but you can also end up wasting RAM if you’re not careful.
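
          If you want to play with the bigger-page idea, a rough sketch on Linux (assuming Python 3.8+ and transparent huge pages enabled) is just to hint the kernel on a large anonymous mapping:

          ```python
          # Rough sketch: ask Linux to back a big anonymous mapping with transparent huge pages.
          # Assumes Linux, Python 3.8+, and THP set to "madvise" or "always".
          import mmap

          buf = mmap.mmap(-1, 256 * 1024 * 1024)   # 256 MiB anonymous mapping
          buf.madvise(mmap.MADV_HUGEPAGE)          # hint: back it with 2 MiB pages -> far fewer TLB entries
          buf[:] = b"\x00" * len(buf)              # touch it so pages are actually allocated
          ```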

          • IHawkMike@lemmy.world · 7 days ago

            You clearly know more than me, but wouldn’t everything from 4GB to 1TB have the same number of walks? And one more walk gets you up to 256TB?

            • The_Decryptor@aussie.zone · 7 days ago

              So one of the problems is the size of a “physical page”: on a stock x86 system that’s only 4 KiB. If you allocate just 1 MiB of RAM you need to back that with 256 “page table entries”, and to then load a virtual address within that allocation you need to walk that list of 256 entries to find the physical address in RAM that the CPU needs to request.

              Of course, these days an app is more likely to use 1 GiB of RAM; that’s a mere 262,144 page table entries to scan through on each memory load.

              Oh, but then we’re also not running a single process. There are multiple processes on the system, so there will be several million of these entries, each one indexed by address (which can be duplicated, since each process has its own private view of the address space) and then by process ID to disambiguate which entry belongs to which process.

              That’s where the TLB comes in handy, to avoid the million or so indexing operations on each and every memory load.

              But caching alone can’t solve everything; you need a smarter way to perform the bookkeeping than simply using a flat list for when you don’t have a cached result. So the OS breaks those mappings down into smaller chunks and then provides a table that maps address ranges to those chunks. An OS might cap a list of PTEs at 4096 and have another table index that, so to resolve an address the CPU checks which block of PTEs to load from the first table and then only has to scan the list it points to.

              For example, here’s the 2-level scheme that Intel CPUs used before the Pentium Pro (iirc): the top 10 bits of an address select an entry in the “page directory”; the CPU loads that and uses the next 10 bits to select the group of PTEs it points to; following that link, it finds the actual PTEs that describe the mappings, and it can then scan that list for the specific entry that describes the physical address to load (and it promptly caches the result to avoid doing all of that again).

              So yes, for a given page size and CPU you have a fixed number of walks regardless of where the address lives in memory, but we also have more memory now. And much like a hoarder, the more space we have to store things, the more things we store, and the more disorganised it gets. Even if you do clear a spot, the next thing you want to store might not fit there, and you end up storing it someplace else. If you end up bouncing around looking for things, you end up thrashing the TLB, throwing out cached entries you still need, so you have to perform the entire table walk again (just to invariably throw that result away soon after).

              Basically, you need to defrag your RAM periodically so that the mappings don’t get too complex and slow things down (the same is true for SSDs, by the way: you still need to defrag them to clean up the filesystem metadata itself, just less often than HDDs). Meta have been working on improvements to how Linux handles all this (page table layout and memory compaction) for a while, because they were seeing some of their long-lived servers wasting about 20% of CPU time on repetitive table walks due to a highly fragmented address space.
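
              To make those numbers concrete, here’s a toy Python sketch (purely illustrative, using the classic 10/10/12 split described above rather than any real OS’s layout) of how an address gets carved up and how many 4 KiB entries back an allocation:

              ```python
              # Toy illustration of the old 2-level x86 split: 10-bit directory index,
              # 10-bit table index, 12-bit page offset. Purely illustrative, no real OS involved.
              PAGE_SIZE = 4096                          # 4 KiB pages -> 12-bit offset field

              def split_address(vaddr: int):
                  """Carve a 32-bit virtual address into (directory index, table index, offset)."""
                  offset   = vaddr & 0xFFF              # low 12 bits: byte within the page
                  table_ix = (vaddr >> 12) & 0x3FF      # next 10 bits: which PTE in the page table
                  dir_ix   = (vaddr >> 22) & 0x3FF      # top 10 bits: which page directory entry
                  return dir_ix, table_ix, offset

              def ptes_needed(alloc_bytes: int) -> int:
                  """How many 4 KiB page table entries are needed to back an allocation."""
                  return (alloc_bytes + PAGE_SIZE - 1) // PAGE_SIZE

              print(ptes_needed(1 * 1024 * 1024))        # 1 MiB -> 256 entries
              print(ptes_needed(1 * 1024 * 1024 * 1024)) # 1 GiB -> 262,144 entries
              print(split_address(0xDEADBEEF))           # (890, 731, 3823)
              ```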

      • suicidaleggroll@lemm.ee · 7 days ago

        That’s a complicated question. More memory can be split across more banks, which can mean more precharge penalties if the data you need to access is spread out between them.

        But big memory systems generally use workstation or server processors, which means more memory channels, which means the system can access multiple regions of memory simultaneously. Mini-PCs and laptops generally only have one memory controller, higher end laptops and desktops usually have two, workstations often have 4, and big servers can have 8+. That’s huge for parallel workflows and virtualization.

  • DFX4509B@lemmy.org · 7 days ago

    I have 32 GB, and for most of what I do (normal desktop stuff, gaming, and occasionally messing with VMs) it’s fine, if not overkill.

    • hansolo@lemm.ee · 6 days ago

      I once had a machine with 4 MB of RAM. It was fine for Word 5.5 and Windows 3.1. Needed a boot disk to run Doom. Upgraded to 8 MB and it was fine, if not overkill.

      Son, have you tried just pulling your computer up by its bootstraps and telling it that it only needs 8 MB of RAM, because that was fine 35 years ago? /s

  • MissJinx@lemmy.world · 7 days ago

    Why is it that Chrome works like shit on my personal laptop but works wonders on my work laptop?! The only difference between them is that on my work laptop we have thousands of different security apps. It’s almost like Chrome is shit because it invades your computer like a virus lol

  • laranis@lemmy.zip · 6 days ago

    Damn straight. I always wanted to do that, man. And I think if I had 32G of RAM I could hook that up, cause apps dig a dude with memory.

    Well, not all apps.

    Well the kind of apps that’d let me watch two chicks at the same time do.

    Good point.