• 0 Posts
  • 520 Comments
Joined 2 years ago
Cake day: July 16th, 2023

  • LeFantome@programming.dev to Linux@lemmy.ml · “And so it begins”

    Mint is very boring and middle of the road, exactly as a default recommendation should be. They are also very protective of the user experience. They are unlikely to embarrass me.

    Mint has a familiar UX if you are new to Linux. It is not nearly as foreign or locked down as GNOME. It is not as configurable and complex as KDE. There are good GUI tools for most common tasks.

    Mint does not change too rapidly or have too many updates but the desktop and tools are kept up-to-date.

    They are being very conservative with the Wayland transition, but nobody on Mint is moaning that Wayland is not ready.

    And there is really no desktop use case that Mint is not suitable for.

    I do not use Mint but it is a very solid recommendation for “normal” users.

    I think Pop!_OS is back to being that too, and COSMIC is Wayland-only (so there is no future transition to manage).




  • CachyOS will work on older hardware as well. There are four repositories for x86-64 v1, v2, v3, and v4. If you have newer hardware, the v3 or v4 packages will theoretically give you better performance. That is probably what you are talking about.

    That said, the v1 repos will work on x86-64 machines going back to 2003. Not exactly bleeding edge.
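    Those feature levels can be checked directly against the CPU flags. A minimal POSIX shell sketch (the flag sets tested here are a representative subset of the official level definitions, not the complete lists):

```shell
# Report the highest x86-64 microarchitecture level a CPU flag list supports.
# v1 is the 2003 baseline; v2/v3/v4 add SSE4.2, AVX2, and AVX-512 era features.
x86_64_level() {
  flags=" $1 "
  has() { case "$flags" in *" $1 "*) return 0 ;; *) return 1 ;; esac; }
  level=1
  if has cx16 && has popcnt && has sse4_2 && has ssse3; then level=2; fi
  if [ "$level" -eq 2 ] && has avx2 && has bmi2 && has fma && has movbe; then level=3; fi
  if [ "$level" -eq 3 ] && has avx512f && has avx512bw; then level=4; fi
  echo "x86-64-v$level"
}

# On a live system, feed it the flags line from /proc/cpuinfo:
x86_64_level "$(awk '/^flags/ {print; exit}' /proc/cpuinfo)"
```

    Glibc's dynamic loader can report the same thing (`/usr/lib/ld-linux-x86-64.so.2 --help` lists which levels are "supported"), but the flag check works anywhere.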

    The only thing that I have noticed is that packages are not all in sync between repos with v1 lagging behind v3. For example, I think Cachy is already on the 6.18 kernel but the v1 repos still only have 6.17. I have seen svt-av1 lag as well.

    I am not a CachyOS user so apologies if any of my info is dated.

    I will never say anything bad about EndeavourOS.


  • Thank you for the suggestion. I am ashamed to confess that a temporary PATH variable had not occurred to me.

    I first ran into these issues creating package templates. Chimera has a beautiful package build system where packages get built in containers and source code gets downloaded into the container and built against a clean environment. As you point out, creating a package that creates the symlinks as a dependency (along with the GNU utils) could be a viable approach here. Maybe even just in /usr/local. The GNU utils get installed to /usr/bin in Chimera and the container gets recycled for every new package. The distro would never accept such hacky packages but I can use them myself.

    For just generally working in the distro at the command-line, your temporary path idea could work well.
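    A sketch of that idea, assuming the g-prefixed GNU tools from the Chimera repos (gsed, gtar, and friends) are already installed:

```shell
# Populate a private bin dir with plain-named symlinks to the g-prefixed
# GNU utils, then put it at the front of PATH for this session only.
# Nothing under /usr/bin is touched.
mkdir -p "$HOME/.local/gnubin"
for tool in sed tar find xargs grep; do
  gpath=$(command -v "g$tool") || continue   # skip tools that are not installed
  ln -sf "$gpath" "$HOME/.local/gnubin/$tool"
done
export PATH="$HOME/.local/gnubin:$PATH"
```

    Third-party scripts run in that session resolve plain sed to GNU sed without any edits; a fresh shell drops straight back to the BSD tools.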

    Thanks again. I appreciate it!




    First, I use either Niri or KDE Plasma on Chimera Linux. Both are just an “apk add” away. You do not have to use GNOME. There is even a KDE live image, so you never have to run GNOME at all if you do not want to.

    I really like the BSD utils and have come to prefer them. Well written. Sleek. Well documented. The man pages are a walk through UNIX history. They feel “right” to me.

    That said, the BSD userland is frequently a pain when interacting with the rest of the Linux universe. You cannot even build a stock kernel.org kernel without running into compatibility problems. The first time I built the COSMIC desktop on Chimera, I had to edit a dozen files to make them “BSD” compatible.

    Sed, find, tar, xargs, and grep have all caused me problems. And you need bash, obviously. But bash is no big deal because it is installed under its own name, so scripts find it.

    The key GNU utils are available in the Chimera repos. But you get files named gfind, gtar, gxargs, gsed, etc. so scripts will not find them.

    You often have to either add the ‘g’ to the beginning of utilities in scripts or edit the arguments to work with the BSD versions.

    I mean, most things are compatible and I bet most of the command-line switches you actually use will work with the BSD utils. But I would be lying if I did not say third-party scripts are a hassle.

    If I could do Chimera all over again, I would make it bsdtar and bsdsed (or bsed maybe) for the BSD versions.

    Maybe the regular names could be symlinks, with sed pointing to bsdsed by default, but you could point it to gsed instead if you want. The system Chimera scripts and tools could use the longer names (e.g. bsdsed) instead of the symlinks. The GNU tools could be absent by default, like they are now. That would be the best of both worlds: the base system would keep the advantages of the BSD tools (like the easier builds outlined on the Chimera site), the system could stay GNU-free if you want, and third-party scripts would work out of the box more often.
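    A toy model of that scheme in a scratch directory (bsdsed and gsed here are stand-in scripts, not real Chimera paths):

```shell
# Long, implementation-specific names for each tool, plus a plain-name
# symlink that the user can repoint without touching either implementation.
bin=$(mktemp -d)
printf '#!/bin/sh\necho bsd\n' > "$bin/bsdsed"
printf '#!/bin/sh\necho gnu\n' > "$bin/gsed"
chmod +x "$bin/bsdsed" "$bin/gsed"

ln -s bsdsed "$bin/sed"   # default: the plain name resolves to the BSD tool
"$bin/sed"                # prints "bsd"

ln -sf gsed "$bin/sed"    # user flips the default to GNU
"$bin/sed"                # prints "gnu"
```

    System scripts that call bsdsed explicitly never notice the flip; only the plain-name symlink moves.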

    It pains me to say this. I would prefer not to use the GNU stuff but the GNU tools are the de facto standard on Linux and many, many things assume them. No wonder UUtils aims for 100% compatibility.

    Anyway, even with what I say above, Chimera is my favourite distro. The dev can be a little prickly, but they do nice work.


  • Chimera Linux is great. APK and cports are so good I cannot imagine going back to anything else.

    Bash is not the default shell though. Chimera uses the Almquist shell from FreeBSD. Other Linux distros ship “dash”, which is also an Almquist variant.

    Almquist is lighter than fish and fish is not POSIX compatible.

    Bash is available in the Chimera Linux repos of course and is required for many common scripts.
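    The practical symptom: a script with a #!/bin/sh shebang runs under the Almquist shell here, so bash-only syntax fails. A small example of a common bashism and its portable equivalent:

```shell
# bash-only prefix test -- under ash/dash this dies with "[[: not found":
#   if [[ $name == h* ]]; then echo match; fi
# POSIX equivalent that works in any Almquist shell (and in bash):
name="hello"
case $name in
  h*) echo match ;;
  *)  echo "no match" ;;
esac
```

    Arrays, `local -n`, and `${var^^}`-style expansions are other bashisms that trip up scripts ported to an ash-based /bin/sh.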

    “Not run by fascists”. Sometimes I wonder.


  • Ah thank you. You likely guessed the reason for the question.

    Many popular projects written in Rust, including the UUtils coreutils rewrite, are MIT licensed, as Rust itself is. There have been people who purposely confuse things by saying that “the Rust community” is undermining the GPL. I can see how that may lead somebody to believe that there is some kind of inherent licence problem with code written in Rust.

    Code written in Rust can of course be licensed however you want from AGPL to fully proprietary.

    I personally perceive a shift in license popularity towards more permissive licenses, at least with the “younger generation”. The fact that so many Rust projects are permissively licensed is just a consequence of those licenses being more popular with the kinds of “modern” programmers who would choose Rust as a language to begin with. Those programmers would choose the same licenses even if they used the GCC toolchain. But the “modern” languages they have to choose from are things like Rust, Swift, Zig, Go, or Gleam (all permissively licensed). Python and TypeScript are also still trendy (also permissively licensed).

    Looking at that list, it is pretty silly to focus on Rust’s license. Most of the popular programming languages released over the past 20 years are permissively licensed.


  • I have never heard the licensing of Rust being raised as a concern for the Linux kernel.

    As Charles Babbage would say, “I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

    The distro I use builds the entire Linux kernel with Clang, which uses the same license as Rust. Linux remains under the same GPLv2 license (with its syscall exception) regardless of what compiler I use to build it.

    The compiler has no impact on the license applied to the code you build with that compiler. You can use closed source tools to build open source software and vice versa.

    And, of course, Rust’s own license is fully open source: the toolchain is dual-licensed under MIT and Apache 2.0. Apache 2.0 even provides patent guarantees, which can matter for something like a compiler.

    If you prefer to use GPL tools yourself, you may want to keep an eye on gccrs.

    https://rust-gcc.github.io/

    A legitimate concern about Rust may be that LLVM (Rust) supports a different list of hardware than GCC does. The gccrs project addresses that.






  • It is funny. You and I landed in different places but for almost the same reasons.

    I use a rolling release because I want my system to work. “Tinkering with my tech stuff” is an activity I want to do when I want and not something I want thrust upon me.

    On “stable” distros, I was always working around gaps in the repo or dealing with issues that others had already fixed. And everything I did myself was something I had to maintain and, since I did not really maintain it, my systems became less and less stable and more bloated over time.

    With a rolling distro, I leave everything to the package manager. When I run my software, most of the issues I read other people complaining about have already been fixed.

    And updates on “stable” distros are stressful because they are fragile. On my rolling distro, I can update every day and never have to tinker with anything beyond the update command itself. On the rare occasion that something additional needs to be done, it is localized to a few packages at most and easy to understand.

    Anyway, there is no right or wrong as long as it works for you.


  • Where did the idea come from that rolling releases are about hardware?

    Hardware support is almost entirely about the kernel.

    Many distros, even non-rolling ones like Mint and Ubuntu, offer alternative kernels with support for newer hardware. These are often updated frequently. Even incredibly “stable” distros like Red Hat Enterprise Linux regularly release kernels with updated hardware support.

    And you can compile the kernel yourself to whatever version you want or even use a kernel from a different distro.

    Rolling releases are more about the other 80,000 packages that are not the kernel.


    “I would say”

    Is this based on experience? Or are you guessing?

    I ask because my lived experience is that rolling releases break less in practice.

    Before I used rolling releases, I spent more time dealing with bugs in old versions than I do fixing breakages in my rolling distro.

    And non-rolling “upgrades” were always fraught with peril whereas I update my rolling release without any concern at all.


  • I use ancient hardware (as old as 2008 iMacs) and I greatly prefer rolling releases.

    Open Source software is always improving and I like to have the best available as it makes my life easier.

    In my experience, things just work better. I have spent years now reading complaints online about how Wayland does not work, the bugs in certain software, and features that are missing. Almost always I wonder what versions they are running because I have none of those problems. Lots of Wayland complaints from people using systems that freeze software versions for years. They have no idea what they are missing. This is just an example of software that is rapidly evolving. There are many more.

    Next is performance. Performance improvements can really be felt on old hardware. Improvements in scheduling, network, and memory handling really stand out. It is surprising how often improvements appear for even very old hardware. Old Intel GPUs get updates for example. Webcams get better support, etc.

    Some kinds of software see dramatic improvements. I work with the AV1 video codec. New releases can bring 20% speed improvements that translate to saving many minutes or even hours on certain jobs. I want those on the next job I run.

    I work on my computer every day and, on any given day, I may want or enjoy a feature that was just added. This has happened to me many times with software like GIMP, where a job became dramatically easier (for example, the text improvements that appeared in GIMP 3).

    If you do software development, it is common to need or want some recently developed component. It is common for these to require support from fairly recent libraries. Doing dev on distros like Debian or RHEL was always a nightmare of the installed versions being too old.

    And that brings me to stability.

    On systems that update infrequently, I find myself working against the software repos. I may install third-party repos. I may build things myself. I may use Flatpak or AppImage. And all of that makes my system a house of cards that is LESS stable. Over time, stuff my distro does not maintain gets strewn everywhere. Eventually, it makes sense to just wipe it all and start fresh. From what I see online, a lot of people have this experience.

    One of the biggest reasons I prefer rolling releases with large repos is because, in my experience, they result in much more stable systems in practice. And if everything comes from the repo, everything stays much more manageable and sustainable.

    I use Debian Stable on servers and in containers all the time. But, to single it out, I find that actually using it as a desktop is a disaster for all of the above reasons but especially that it becomes an unstable mess of software cobbled together from dozens of sources. Rolling releases are easier to manage. This is the opposite of what some others say, I realize.

    In fact, if I do have to use a “more stable” distro, I usually install an Arch Linux Distrobox and use that to get access to a larger repo of more frequently updated packages.


    I use EndeavourOS on Mac hardware from very similar years.

    Wifi (Broadcom-wl on the older stuff and brcmfmac_wcc on the newest) works well on all of them.

    Webcams work well on all of them as well. Most are just USB cams, but some use the FaceTimeHD module, which builds with DKMS and works very well for me.

    I cannot remember if I had to install the FaceTimeHD driver or if it was auto-installed by EOS. Even if not, it is in the repos and one line to install the package.