<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom"><title>frankie-tales</title><id>https://lovergine.com/feeds/tags/distributions.xml</id><subtitle>Tag: distributions</subtitle><updated>2026-02-25T15:33:03Z</updated><link href="https://lovergine.com/feeds/tags/distributions.xml" rel="self" /><link href="https://lovergine.com" /><entry><title>The perfect desktop is a matter of points of view, or not?</title><id>https://lovergine.com/the-perfect-desktop-is-a-matter-of-points-of-view-or-not.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2026-01-22T19:40:00Z</updated><link href="https://lovergine.com/the-perfect-desktop-is-a-matter-of-points-of-view-or-not.html" rel="alternate" /><content type="html">&lt;p&gt;I recently learned about an opinionated flavor of the Arch distribution called
&lt;a href=&quot;https://omarchy.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Omarchy&lt;/a&gt;, which is basically a collection of desktop
packages built on top of a rolling Arch base. Nothing special, except that the
vocal original author of the scripting work behind this flavor happens to be,
like many an old-school self-centered geek out there, the much-discussed DHH. I
will not go into the reasons for the dubious
fame of David &amp;quot;DHH&amp;quot; Heinemeier Hansson, which stem largely from some of his
past posts on X/Twitter and some of his questionable ideas.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/wm-vs-de.png&quot; alt=&quot;The great fight between WMs and DEs&quot; /&gt;&lt;/p&gt;&lt;p&gt;I’m not interested in that here. I’m more interested in some spontaneous
thoughts about the hype (well, at least among the very restricted niche group of
Linux desktop fans) around this desktop flavor. It is not something new; the
Hyprland UX is basically an &lt;a href=&quot;https://i3wm.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;i3&lt;/a&gt;-like
&lt;a href=&quot;https://en.wikipedia.org/wiki/Tiling_window_manager&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;tiling window manager&lt;/a&gt; on steroids,
built on Wayland rather than Xorg, with a few bells and
whistles.&lt;/p&gt;&lt;p&gt;I have been a long-time Linux desktop user since the 90s, and a tiling window
manager (specifically one of the suckless-derived incarnations, &lt;a href=&quot;https://awesomewm.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Awesome WM&lt;/a&gt;) has been my
main desktop for quite a few years. Some years ago, I abandoned such a paradigm
when I finally realized that a pure tiling window manager is a great idea until
it isn't. Basically, most of its &lt;em&gt;pros&lt;/em&gt; (one application per virtual desktop, easy
tiling on big displays, keyboard-driven navigation) can be easily replicated in
a capable desktop environment like any current Gnome version. This has the big
advantage of being ready for use right after installation and of being easily
and fully customizable via plugins. The &lt;em&gt;cons&lt;/em&gt; of a tiling WM, on the other hand, are always present,
depending on your workflow, and there are generally no easy workarounds. The biggest is
the need to find tricks and third-party tools to cover use cases that are not
always trivial (or worse, that are trivial on a DE).&lt;/p&gt;&lt;p&gt;A DE has the indisputable advantage of coming with batteries included for widgets and
customization tools, whereas most (if not all) WMs require third-party tooling to
manage many disparate configuration tasks: Bluetooth, Wi-Fi, hot-plug devices,
auto-sensing of projectors, dynamic multi-display setups, fast binding of container apps,
accessibility features, and many others. Too often, such WMs also require a
command-line tool or a workaround to perform tasks that are simply part of the common DE experience.&lt;/p&gt;&lt;p&gt;I also remember the pain of using the multiwindow GUI of GRASS GIS under Awesome,
which at the time was just another application designed for a floating window
manager. When an application opens a new window for every module in
use, the UX can become a nightmare under a tiling WM unless you are
using a 43-inch display. The same goes for virtualized desktops: when the
guest and host compete for the keyboard, the continuous switching of control can
rapidly lead to madness. Those are just two examples to conclude that the
coolness of a desktop implementation is often a matter of perspective and
personal workflows, and I have consistently found that a mandatory tiling WM paradigm
is simply less flexible in some practical cases.&lt;/p&gt;&lt;p&gt;To be honest, I find the Omarchy UX to be the typical incarnation of a canonical
WM-based interface for fresh Linux desktop users. Such users are divided into
two classes:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The class of people who are searching for an exact replica of Windows/macOS
GUI. A hopeless group: if something has to look exactly like Windows, with
the same policies, the same applications, and even the same icons, well,
they should probably just stay with Windows, simple as that. They are the most
critical and vocal complainers, for whom the Linux-on-the-desktop era will
never come.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The class of people who look for something radically different and discover
the keyboard-driven interface as something almost magical (without fully
realizing that such an experience can mostly be replicated easily with
environment shortcuts and a few simple plugins). A trivial secret, I would say.
They are the most enthusiastic about this kind of desktop, but they are also regular
distro-hoppers (yes, I mean it as an insult: distro-hopping is for gamers, not
for workers who need to complete primary tasks daily). More often than not,
they will never admit they are simply playing around, and solving self-inflicted problems
is part of their game.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Of course, the WM-based desktop paradigm still has its own use cases,
which I would group into a few limited cases:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;You are using a strictly rolling distribution, such as Debian sid/unstable,
Arch, Guix, Gentoo, or one of the in-development flavors of mainstream
distributions, including Fedora and openSUSE. On such distributions, avoiding
desktop environments reduces the likelihood of encountering temporary problems
after daily upgrades, as some transient (in the order of days/weeks) breakages
can occur. But who is the user of such platforms today? Seriously, I think only
someone actively involved in development and testing should be interested in
such distributions. Today, most desktop apps are distributed as containerized
packages via one of the multiple available hubs for Flatpak, AppImage, Snap,
Docker/Podman. I can’t see the practical advantage of using an unstable
distribution on a daily basis. If you think differently, dude, you have a
problem, and it is not the distribution you are using, but what you see in
the mirror. If you are not a YouTuber who needs to produce monetizable videos,
well, you are probably just using the wrong distribution pointlessly (and
creating your own problems from time to time, which are perfectly part of the
rolling experience, as per the manual).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;You are using an old, low-resource box with limited RAM and cores. A platform
that simply cannot run current desktop environments. I seriously
doubt it is still usable for general computing at all. Nowadays, even a web
browser is a resource hog on such platforms. I mean a dual-core box with 4 GB of
RAM that could be more than 15 years old. If this is your platform, well, a
window manager is perfectly legitimate, but it probably could not run
Hyprland either. And to do what, exactly? Other than installing it and telling your
friends...&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;You are an old-style lazy geek, anchored to your own configurations, refined
over decades, with very few reasons to change. That’s perfectly
legitimate, but most of those configurations are probably out of date by now. I
know, you are still adjusting the Modelines in your Xorg configuration. Well,
dude, it’s probably time to climb down from the tree.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Of course, I also tried to install Omarchy on an old box of mine (an 11-year-old
Lenovo ThinkPad L540 with a dual-core i5 and 8GB RAM) that runs perfectly with
the current Debian 13 and Gnome 48. Sadly, it did not even boot the installer:
just a black screen. Good, but not too good, dudes.&lt;/p&gt;&lt;p&gt;And this leads me to the elephant-in-the-room argument of this post. Most users
need stability and, occasionally, up-to-date applications. The average user
needs the certainty of being able to easily install an OS on most platforms and enjoy a
stable UX for a decently long period (let’s say 2-5 years without any
reinstallation in between). The more users, the more stability matters. The simpler, the
more effective, too. And that’s the real point most devs (or wannabe experts)
have probably missed in the meantime.
The desktop is a mere tool; it should not demand an addiction to expertise.&lt;/p&gt;&lt;p&gt;It is not a matter of DE vs WM, but of homogeneity and generality versus good,
but not enough for all. If one has to reinvent the wheel to manage a
configurable tool that, in a DE, is a point-and-click away, that is a failure
of general UX. Of course, even DEs are far from perfect, but too often, the WM UX is
far from even being basically complete.&lt;/p&gt;&lt;p&gt;For instance, I can easily manage my full clipboard history with inter-session
persistence thanks to a simple Gnome plugin (namely, Clipboard Indicator). There is no
equivalent widget in most WMs, whose users need a third-party tool to
get something almost equivalent, but often incomplete. Well,
Houston, we have a problem! That’s just one example, but the general point is
clear: if one has to constantly sacrifice immediate, good-enough implementations
to adopt half-finished tools or workarounds to solve basic GUI workflows, WMs
become not accelerators of productivity but defective implementations, and that
has been my constant experience in that regard. At some point, one has
to set priorities, and after years, my priority has become not to waste time
reinventing the wheel for desktop GUIs. Sorry, guys. There is more than one way
to implement a desktop interface, but many of them can simply become a pain
because they are inflexible or incomplete, resulting in continuous
adjustments and workarounds to get something decently working.&lt;/p&gt;&lt;p&gt;And yes, this is another damn opinionated post about
the current &lt;em&gt;Year of Linux on Desktop&lt;/em&gt;. Don't take it too seriously...&lt;/p&gt;</content></entry><entry><title>About computing environments for reproducible science</title><id>https://lovergine.com/about-computing-environments-for-reproducible-science.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-12-09T13:00:00Z</updated><link href="https://lovergine.com/about-computing-environments-for-reproducible-science.html" rel="alternate" /><content type="html">&lt;p&gt;A few weeks ago I gave a lecture for the &lt;a href=&quot;https://spatial-ecology.net/course-geocomputation-machine-learning-for-environmental-applications-intermediate-level-2025/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Spatial Ecology
course&lt;/a&gt;
to introduce a handful of junior and not-so-junior researchers from various
domains to the not-so-nice world of scientific computing environments.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/galileo.png&quot; alt=&quot;Poor Galileo working on modern computer&quot; /&gt;&lt;/p&gt;&lt;p&gt;For people interested,
&lt;a href=&quot;https://spatial-ecology.net/docs/source/lectures/lect_20252511_dependency_management_in_data_science.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;here&lt;/a&gt;
are my slides on the topic. They are somewhat specialized for the Python
ecosystem (which has nowadays become the language of choice for
scientific computing in multiple contexts), where, in the last few years, a lot
has evolved in the management of dependencies and of
the computing environment. This problem is amplified in the HPC
context (I already wrote &lt;a href=&quot;/does-hpc-mean-high-pain-computing.html&quot;&gt;a semi-serious post&lt;/a&gt; on this topic).&lt;/p&gt;&lt;p&gt;I also cited &lt;a href=&quot;https://guix.gnu.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;&lt;em&gt;guix&lt;/em&gt;&lt;/a&gt; without further details (it was impossible to cover all
sub-topics in the lecture, and I know that several listeners already had
difficulties fully understanding the matter).&lt;/p&gt;&lt;p&gt;Thinking it over, it is not a silly idea to write some blog notes on the
whole topic. First of all, what is the context? &lt;a href=&quot;https://pmc.ncbi.nlm.nih.gov/articles/PMC2981311/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Reproducible science&lt;/a&gt;
is not a novel matter. Any scientific experiment should be reproducible, starting from the same
data and giving comparable results: this is the basis of the &lt;a href=&quot;https://en.wikipedia.org/wiki/Scientific_method&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;scientific method&lt;/a&gt;
(Galileo docet). In the
context of scientific computing, that implies that the whole execution
environment should be fully reproducible, in order to ensure that executions
can be replicated with the same outputs starting from the same
inputs, possibly later on, running on the same platform or after deployment
on a new, completely different system.&lt;/p&gt;&lt;p&gt;The key point is that the long-term reproducibility of such results on current
platforms and with current languages is minimal, to be generous. Having the
full source code of a Python notebook, a git repository, or anything
comparable is only the starting point. The sad reality is that, in
practice, the source code too often has a lifetime of a few months,
because the average scholar underestimates the problem. By
following a few good practices, such a lifetime can be extended to a few years,
maybe.&lt;/p&gt;&lt;p&gt;When I wrote my thesis, too many years ago, I developed the whole C source for
execution on a parallel computer of the time. It was a &lt;a href=&quot;https://en.wikipedia.org/wiki/Meiko_Scientific&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Meiko Computing Surface&lt;/a&gt;,
a MIMD platform based on &lt;a href=&quot;https://en.wikipedia.org/wiki/Transputer&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;INMOS Transputers&lt;/a&gt;. The C code
used a proprietary message-passing library, CSTools, to enable communication
among T-800 processors (unfortunately, there is no relation with the Terminator
series, sorry). Now, it is to be expected that code based on a dead
proprietary library running on dead hardware could have
reproducibility issues today, after more than 30 years.&lt;/p&gt;&lt;p&gt;What is unexpected is that one can have the same reproducibility problems
after 30 months or, in some limited cases, after 30 days. I mean at both the
binary and the source level, often. Now, part of the problem is due to the &lt;a href=&quot;https://www.merriam-webster.com/slang/fafo&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;FAFO&lt;/a&gt;
attitude of some development communities. Not all teams are like the GDAL
one, which has been capable of maintaining the same well-refined APIs for
decades. More commonly, new versions of libraries and tools
introduce expected or unexpected breakages against past versions and APIs, which
backfire on programs that use them. In other cases, new versions can fix and/or
introduce bugs of primary interest for dependent software. Those are the main
reasons to meticulously annotate and document every single version of direct and
indirect dependencies. This is somewhat solved by dependency resolvers, as
explained in my lecture. But that's only part of the whole chain.&lt;/p&gt;&lt;p&gt;Unfortunately, nowadays this chain of dependencies extends beyond a single language
and collides with system-level dependencies, including the whole operating
system, with its compilers, interpreters, and libraries. This problem is
amplified in a fully containerized world, which is nowadays used intensively.
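&lt;/p&gt;&lt;p&gt;To make the fragility concrete, a container recipe is reproducible only as long as its inputs are pinned. A minimal sketch (the digest placeholder and package versions below are purely illustrative, not from the original text):&lt;/p&gt;&lt;pre&gt;&lt;code&gt;# Illustrative sketch: PINNED_DIGEST and the version numbers are
# placeholders. A floating tag like python:3.12 moves over time;
# an immutable digest and exact package versions do not.
printf '%s\n' \
    'FROM python@sha256:PINNED_DIGEST' \
    'RUN pip install --no-cache-dir numpy==1.26.4 pandas==2.2.2' \
    &amp;gt; Dockerfile
docker build -t myanalysis:1.0 .
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;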
Depending on a third-party binary image taken from some hub out there is
not a safer approach. Such images can disappear overnight or have a
limited lifetime, so the conscientious scholar should also build his/her own from
scratch, a task that is often well outside the skill set of the average
scholar.&lt;/p&gt;&lt;p&gt;This is exactly where Guix tries to give an answer. Guix is a source-level
package manager with a set of full descriptions written in Guile Scheme for the
whole chain of dependencies, up to the kernel level. Combining such an analytical
description of the system for any built artifact, all the way back in the timeline to the
starting point (derivations), with the possibility of using build systems to
cache binary artifacts (substitutes) and of installing any software at the user
level, allows the creation of a source-level definition of a full execution
environment.&lt;/p&gt;&lt;p&gt;Such an ambitious goal is not without problems, as masterfully summarized by
Ludovic Courtès
&lt;a href=&quot;https://hpc.guix.info/blog/2024/03/adventures-on-the-quest-for-long-term-reproducible-deployment/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;here&lt;/a&gt;,
but anyway it is light-years ahead of the average
deployment system, which instead needs continuous babysitting in order to ensure a
working environment.&lt;/p&gt;&lt;p&gt;What would probably also be of general interest is a consistent additional
security tagging of derivations, in order to quickly identify sources with known
CVE-affected versions in the chain of dependencies. That would increase the level
of awareness when the Guix time machine is used to go back in time and
pick some sources from Pandora's box. It would also be of considerable interest
in the &lt;a href=&quot;https://en.wikipedia.org/wiki/Software_supply_chain&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;SBOM&lt;/a&gt; context
outside the perimeter of scientific computing.&lt;/p&gt;&lt;p&gt;So, Guix is not perfect, but it is again a sure advancement towards reproducible computing
environments, which are currently lacking in one way or another in the science domain
(and not only that).&lt;/p&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;p&gt;[1] &lt;a href=&quot;https://hpc.guix.info/blog/2023/06/a-guide-to-reproducible-research-papers/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;A guide to reproducible research papers&lt;/a&gt;&lt;/p&gt;&lt;p&gt;[2] &lt;a href=&quot;https://zenodo.org/records/7088068&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Guix as a tool for reproducible science&lt;/a&gt;&lt;/p&gt;&lt;p&gt;[3] &lt;a href=&quot;https://inria.hal.science/hal-04776900/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Using Guix for managing reproducible, flexible, and collaborative environments in a PhD thesis&lt;/a&gt;&lt;/p&gt;&lt;p&gt;[4] &lt;a href=&quot;https://doi.org/10.1101/29865o3&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Reproducible genomics analysis pipelines with GNU Guix&lt;/a&gt;&lt;/p&gt;&lt;p&gt;[5] &lt;a href=&quot;https://en.wikipedia.org/wiki/Replication_crisis&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Replication crisis&lt;/a&gt;&lt;/p&gt;</content></entry><entry><title>DebianGis anniversary and the power of being a community</title><id>https://lovergine.com/debiangis-anniversary-and-the-power-of-being-a-community.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-10-16T18:15:00Z</updated><link href="https://lovergine.com/debiangis-anniversary-and-the-power-of-being-a-community.html" rel="alternate" /><content type="html">&lt;p&gt;A few days before today, 21 years ago, I sent
&lt;a href=&quot;https://lists.debian.org/debian-devel-announce/2004/10/msg00007.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;this message&lt;/a&gt;
to the &lt;em&gt;debian-devel-announce&lt;/em&gt; mailing list to solicit helpers in packaging and
to oversee the geospatial software stack included in the main Debian archive.
After so many years, still there.&lt;/p&gt;&lt;p&gt;At that time, the Debian project had already been around for more than 10 years
and even had a few releases behind it, but the typical maintenance of software was
still a one-person show, except for a few large and complex pieces of software.
Even the kernel was managed by a single developer, Herbert Xu. Each developer
was responsible for implementing changes, fixing bugs, and releasing updates,
and was often jealous of such prerogatives and of their declared ownership of the package
or task. While there was already a quality assurance team and an orphaning
process for abandoned packages (or inactive developers), these processes were
not widely exercised and were relatively slow and imperfect.&lt;/p&gt;&lt;p&gt;Since then, team-based maintenance has become the standard approach in the
Debian community for properly managing the most complicated software
collections, at least to ensure proper long-term maintenance, because
the average developer can provide only limited continuity of effort,
and real life is generally more complicated than the digital one. It is not a
secondary point that packaging tasks can be tedious in the long term, and it is easy
to lose motivation. The presence of a working team can reduce the risk of
burnout and allow each developer to step down when needed. The most effective
team should be small enough to coordinate easily and avoid lagging behind
changes and migrations, but not too small: presenting a one-person show as
team work is not a great idea. Then again, too many people on the same team is equally
not a great idea, for the opposite reason.&lt;/p&gt;&lt;p&gt;In the specific case of Debian, the system is so modular and has so few
interdependencies that it favors the creation of fully independent management
groups for hundreds of components and package ecosystems. It is no coincidence
that most bugs and inconsistencies are concentrated where too many parts
need to interact appropriately with perfectly aligned programming interfaces.&lt;/p&gt;&lt;p&gt;Talking about DebianGis and the geospatial software, the key motivation at the
time was the lack of a coordinated effort to build and collect, with
consistency, a lot of different libraries and programs that were (and still
are) highly specialized and based on hundreds of dependencies, often outside
the area of competence of the geodata user. At that time, piling up the
software stack of a typical geospatial application was no task for the
faint of heart, and most (all?) Linux distributions fell short in one way or
another.&lt;/p&gt;&lt;p&gt;Anecdotally, I remember how, about 15 years ago, a company that sold us a SUSE
Enterprise-based solution for a geospatial information system had so many
problems completing the required setup that I finally created a chroot-based
environment with a plain Debian stable install to run a working PostGIS DBMS.
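&lt;/p&gt;&lt;p&gt;For the curious, a rough sketch of how such a chroot-based environment can be assembled (paths and package names are illustrative, not the original setup; requires root):&lt;/p&gt;&lt;pre&gt;&lt;code&gt;# Populate a minimal Debian stable tree under the host filesystem
debootstrap stable /srv/debian-chroot http://deb.debian.org/debian
# Install the PostGIS stack inside the chroot, isolated from the host
chroot /srv/debian-chroot apt-get update
chroot /srv/debian-chroot apt-get install -y postgresql postgis
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;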
That was the time when containerized solutions were still far from being
supported, so a chroot environment was the most immediate solution to the
problem. A little win for a community-based distribution and its tiny
geospatial team, and a measure of the state of things at the time. A giant
step for the whole FOSS concept.&lt;/p&gt;&lt;p&gt;I'm currently much less active in packaging tasks, but still seeing the current
team alive and capable of releasing well-supported products more than twenty
years later gives me reason to be proud of such a community.&lt;/p&gt;</content></entry><entry><title>Does HPC mean High-Pain Computing?</title><id>https://lovergine.com/does-hpc-mean-high-pain-computing.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-09-06T19:40:00Z</updated><link href="https://lovergine.com/does-hpc-mean-high-pain-computing.html" rel="alternate" /><content type="html">&lt;p&gt;Please, forgive the silly joke in the title of this semi-serious post, but
lately I have been thinking about the strange fate of an area of general
computing in which I have spent more and more time recently, as I did in the near and
far past. For my job, I have used a series of scientific HPC clusters
worldwide to solve computing problems efficiently by distributing
computation across numerous nodes. Over the last thirty years, all such
platforms have consistently shared the same common characteristics, which
invariably pose a problem for their use by the average scientist
(often a junior researcher dedicated to a short-term project) in any
application domain.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/high-pain-computing.jpg&quot; alt=&quot;HPC means high-pain computing&quot; /&gt;&lt;/p&gt;&lt;p&gt;To use Fred Brooks' definition, HPC technologies have both intrinsic and
incidental difficulties for such a category of users. The intrinsic one is due to the inherent
complexity of creating a parallel and distributed solution to any problem,
possibly in a way that does not harm the final implementation due to the
increase in communication time among computational agents. This is already a
relevant problem &lt;em&gt;per se&lt;/em&gt;, often beyond the abilities, knowledge, and
interests of the average researcher in bioinformatics, physics, mathematics,
remote sensing, or whatever other research domain.&lt;/p&gt;&lt;p&gt;The incidental difficulty is instead due to the accessibility of the platforms and the
technologies used for their implementation. By and large, all such HPC clusters are
large pools of multi-core hosts with plenty of memory, connected through
multiple high-speed networks implementing some sort of multi-tier
distributed POSIX file system and/or object storage. Users can log in on a
limited number of such hosts that are connected to all others and run some type
of scheduling system (e.g., Slurm or HTCondor) through which multiple computational nodes can
be reserved for a limited period of time to execute batch jobs or even
interactive ones (mainly for debugging). In most cases, such clusters can also be
used with some MPI/OpenMP implementations for proper parallel computational
modeling based on message passing among computing agents that run on multiple
cores and hosts, with or without multi-threading. Alternatively, GPUs can also
be reserved and exploited via CUDA/OpenCL. In many cases, such implementations
are vendor-oriented and trigger the need to adopt specific libraries and
compilers that add another layer of complexity to implementations.&lt;/p&gt;&lt;p&gt;The incidental problems start when the casual user discovers that all such computing
nodes invariably run some legacy enterprise Linux distribution, maintained
for ten years or even more, until a full reinstallation of the whole
cluster. On top of such legacy systems (which are, for
any practical use, simply unusable as such), these scientific clusters offer
essentially a few different mechanisms for creating a general computational
environment:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://modules.readthedocs.io/en/latest/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Environment Modules&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Containers (&lt;a href=&quot;https://sylabs.io/singularity/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Singularity&lt;/a&gt; or &lt;a href=&quot;https://apptainer.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Apptainer&lt;/a&gt;)&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.anaconda.com/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Anaconda/Miniconda&lt;/a&gt;-like environment (or free forks like &lt;a href=&quot;https://github.com/conda-forge/miniforge&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Miniforge&lt;/a&gt;)&lt;/li&gt;&lt;li&gt;Some specific software/application to run&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;But for containers, the other solutions are all binary-based hubs, which could
expose users to possible breakages when the application being developed needs
exotic language bindings for extensions, and the poor users enter the mysterious
and dangerous world of ABI violations and chains of broken dependencies. Moreover,
such hubs are not always consistent, and any upgrade by the admin team
can expose users to sudden overnight breakages.&lt;/p&gt;&lt;p&gt;The final solution (or apparently so) nowadays is containerization: a
target environment where the user code can find all and only the correct
dependencies and versions for the whole software stack of the application. This
works, at least, as long as the third-party hubs of base distributions and languages ensure
complete consistency and retain past binaries and versions for any
medium/long-term need. Of course, a full source-based stack with proper version
tracking &lt;em&gt;a la&lt;/em&gt; &lt;a href=&quot;https://lovergine.com/tags/guix.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Guix&lt;/a&gt; would help to avoid
dependencies on external binary hubs and seems the way to go. Indeed, a small
group interested in such a solution has existed for a few years, but I am
not aware of many HPC clusters that consistently offer this kind of
implementation to users. That said, writing Guile Scheme descriptors for
preparing an execution environment may not be within the reach of the average
researcher in biochemistry or astrophysics.&lt;/p&gt;&lt;p&gt;Unfortunately, as I wrote
&lt;a href=&quot;https://lovergine.com/are-distributions-still-relevant.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;in a past post&lt;/a&gt;
on this digital site, this moves the
whole responsibility for software stack maintenance onto the shoulders of the
final users, who are often the infamous junior profiles I mentioned before.
These are non-IT specialists who are expected to adopt such HPC platforms to implement
solutions as part of their daily job in their own scientific domain.&lt;/p&gt;&lt;p&gt;The result, to be honest, is that the average researcher simply tries to avoid
the whole thing as soon as possible because of the significant complexity that
the entire thing involves, while the private sector has introduced the specialist
roles of data and software engineers to manage such problems properly (which is
the only reasonable approach, indeed). Adding insult to injury, in some
academic areas, such interests in HPC are also viewed with contempt or as a
waste of time, if not openly discouraged.&lt;/p&gt;&lt;p&gt;All this explains why a tour around any of the major HPC clusters
worldwide often guarantees hilarious experiences in terms of who is doing what
and how.&lt;/p&gt;&lt;p&gt;Sometimes, I almost feel like I can hear them swearing...&lt;/p&gt;</content></entry><entry><title>Guix for geeks</title><id>https://lovergine.com/guix-for-geeks.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-09-03T19:40:00Z</updated><link href="https://lovergine.com/guix-for-geeks.html" rel="alternate" /><content type="html">&lt;p&gt;In the last few months, I have installed and upgraded my second preferred
GNU/Linux system, GNU Guix, on multiple boxes. Regarding that system, I have
already &lt;a href=&quot;https://lovergine.com/tags/guix.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;written a few introductory posts&lt;/a&gt;
in the recent past. This is an update
about my experiences as a user and developer. I still think Guix is a giant
step forward in packaging and management compared with Debian and other
distributions, for its elegance and inner coherence.&lt;/p&gt;&lt;p&gt;On the negative side, I can confirm that the most important aspects to consider
in order to adopt Guix for daily use are as follows.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Guix is essentially a
rolling type of distribution with a limited user base, so some caution is
required at upgrade time because instabilities are somewhat expected. The
integrated time machine can be used to step back in case of problems, anyway.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The development team and other support teams are tiny enough that I would avoid
using Guix for publicly exposed services. Also, it can lag on specific
applications and issues. Teams are not structured around specific tasks and goals,
so there is no proper security team, but I guess all maintainers can
participate to cover the required tasks.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Packages are not always up-to-date
with the latest upstream versions. Some packages are even older than in Debian
stable, so if you are looking for the latest and coolest products, that's not
the place.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Some choices for the core distribution are not mainstream, such as
using shepherd as its init system instead of systemd. This implies some delays
because patches are sometimes needed for complex software, such as Gnome, which
introduced strict dependencies on systemd in recent years. This is the reason
why, as of the current date, Gnome is still at version 46, while Debian stable
is at version 48.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The geospatial tools and libraries available are still quite
limited, which may require a more in-depth analysis to estimate the level of
lag. See the section below.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The readiness for immediate use is not comparable
to the main distributions. For instance, if one absolutely needs a non-free
Linux kernel due to firmware constraints, the nonguix add-on repository is
available; however, one must build and install an ad hoc ISO installer by hand.
Nothing transcendental, but complicated enough to discourage most users. &lt;a href=&quot;https://github.com/fpl/guix-installer/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Here&lt;/a&gt;
is my fork of the &lt;a href=&quot;https://systemcrafters.net/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;System Crafters&lt;/a&gt; repository to prepare such an ISO image,
updated for use of the current main Guix (now hosted on Codeberg) and NonGuix
channels. That could be easily prepared on any foreign distribution that
includes the &lt;code&gt;guix&lt;/code&gt; package management software, such as Debian.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;As of the current date, I have installed it on a few laptops and VMs,
including a Dell Inspiron 15, an Asus EEE 1215P, and an old Acer Travelmate. I
strongly suggest using at least a box with 8GB of RAM and a 512GB SSD, because
building from sources can be overwhelming. My dual-core Atom EEEpc has only 2
GB of RAM, and it is quite slow, but it has the big advantage of being fully
supported by the official GNU Linux-libre kernel.&lt;/p&gt;&lt;p&gt;Additionally, I believe that having a local build host for substitutes within
the home/work LAN is the most efficient solution. That is because the
officially provided worldwide substitute servers can be heavily
loaded and occasionally fail (and when that happens, recovery is on a
best-effort basis).&lt;/p&gt;&lt;p&gt;Of course, it is also entirely possible to use Guix only as a package
management system on a foreign distribution, having most of the advantages of a
reproducible environment, as well as within a container or a virtual machine.
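On Debian, for instance, the foreign-distribution route is short; a sketch (assuming a Debian release recent enough to carry a &lt;code&gt;guix&lt;/code&gt; package in its archive):

```shell
# Install the Guix package manager from the Debian archive
sudo apt install guix
# Fetch the per-user copy of the package tree
guix pull
# Install something into the default per-user profile (~/.guix-profile)
guix install hello
```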
In that case, one has to consider that the host system is typically based on
the systemd init system, which can cause a few headaches when porting existing
package descriptions for services taken from the Guix repository. Probably the
best compromise is still using a foreign distribution on the physical box and a
Guix-based container to prepare an execution environment, something I have already
experimented with, as explained in previous posts.&lt;/p&gt;&lt;h2 id=&quot;guix-and-geospatial-software&quot;&gt;Guix and geospatial software&lt;/h2&gt;&lt;p&gt;In 2004, Paolo Cavallini and I started a subproject within the Debian community
of developers and users to improve the status of geospatial software in Debian.
That was the birth of what is today the
&lt;a href=&quot;https://www.debian.org/blends/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;DebianGIS pure blend&lt;/a&gt;, with a mildly
coordinated team of developers and maintainers that work together in Debian
(and derivative distributions) on a set of essential libraries and tools for
the geospatial community. For people interested in the history of FOSS, I wrote
a &lt;a href=&quot;http://atti.asita.it/Asita2009/Pdf/069.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;contribution to the ASITA
conference in 2009 about that&lt;/a&gt;. That was and still is grunt work, often
neglected and at the time perceived as a secondary effort by upstream
developers. Yet it is an essential task, because properly assembling a complete,
well-built stack of software for a geospatial application is not trivial at all.&lt;/p&gt;&lt;p&gt;One such base library is GDAL, which can be built with multiple optional
dependencies and plugins. As on other platforms, some of those optional
dependencies are missing in the Guix package, and that should be taken into
consideration. Some tools, such as MapServer and others, are missing entirely.
Of course, some packages can be patched here and there, as in Debian, or may
require Guix-specific patching (because of the peculiarities of such a
system, which does not respect the Linux FHS), and that can definitely determine
whether a package is in good shape or not. That is to say, the Guix
ecosystem needs serious help with packaging such niche software; but where most
people see a lack, I see an opportunity. After all, when I started my life in
the Debian project, it was because I did not trust an operating environment
that I could not develop and adjust personally for whatever reason. That's still the
way, and Guix seems a brilliant example of a vibrant community in that regard.&lt;/p&gt;</content></entry><entry><title>The Guix system, take two</title><id>https://lovergine.com/the-guix-system-take-two.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2024-09-02T20:00:00Z</updated><link href="https://lovergine.com/the-guix-system-take-two.html" rel="alternate" /><content type="html">&lt;p&gt;Let's give a second look at &lt;code&gt;Guix-the-system&lt;/code&gt;, the main GNU Project distribution
I dealt with in &lt;a href=&quot;http://lovergine.com/an-initial-dive-into-guix.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;a previous
post&lt;/a&gt;. This post is not
specifically limited to the distribution; it is also of interest when using Guix
on a foreign distribution, even if some configuration details change.&lt;/p&gt;&lt;h2 id=&quot;substitutes-and-grafts&quot;&gt;Substitutes and grafts&lt;/h2&gt;&lt;p&gt;As said previously, the daily use of a store-based rolling distribution adds
some overhead to the system at both upgrade and installation time. This pain
(which reminds me of the old times of the source-based &lt;a href=&quot;https://gentoo.org&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Gentoo
distribution&lt;/a&gt; with its &lt;code&gt;emerge&lt;/code&gt; tool) is specifically
alleviated by the use of the so-called &lt;code&gt;substitute servers&lt;/code&gt;, which provide
pre-built binaries for the base system, as well as for alternative/unofficial
packages.&lt;/p&gt;&lt;p&gt;The &lt;em&gt;fall-back&lt;/em&gt; alternative is based on regular build-from-sources on the host
system, which could imply long times for both installations and distribution
upgrades. The official Guix system comes with a couple of official substitutes
(i.e., &lt;code&gt;ci.guix.gnu.org&lt;/code&gt; and &lt;code&gt;bordeaux.guix.gnu.org&lt;/code&gt;) but others can be added,
including possibly any suitable user's server in the LAN.&lt;/p&gt;&lt;p&gt;Another trick in Guix for alleviating the need for long &lt;em&gt;local&lt;/em&gt; rebuilds is the
use of &lt;em&gt;grafts&lt;/em&gt;, which are a sort of &lt;em&gt;in-place&lt;/em&gt; replacement for binary
dependencies, expressed at the source level. In brief, if a package &lt;code&gt;A@1&lt;/code&gt;
has been replaced by &lt;code&gt;A@2&lt;/code&gt; and they both maintain the same ABI, any &lt;em&gt;reverse
dependency&lt;/em&gt; &lt;code&gt;B&lt;/code&gt; can avoid being rebuilt. This is called a &lt;em&gt;graft&lt;/em&gt; in Guix, and
greatly simplifies the long chains of forced rebuilds in many cases (for
instance, in case of security upgrades). Specifically, the &lt;code&gt;@&lt;/code&gt; in a Guix package
is the version separator.&lt;/p&gt;&lt;h2 id=&quot;a-monorepo-for-the-whole-distribution&quot;&gt;A monorepo for the whole distribution&lt;/h2&gt;&lt;p&gt;The Guix source archive is strictly based on &lt;code&gt;git&lt;/code&gt; and distributed development by
means of a &lt;a href=&quot;https://en.wikipedia.org/wiki/Monorepo&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;monorepo&lt;/a&gt;,
which, along with the need to represent the tree of dependencies for any
package and its updates at run-time, requires some operations that are specific
to the Guix package management approach:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;pulling&lt;/li&gt;&lt;li&gt;garbage collection&lt;/li&gt;&lt;li&gt;branching&lt;/li&gt;&lt;li&gt;using channels&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The pull phase in particular can take ages (hours on my old low-end Asus
EEEpc), so it is an operation that should typically be run in batch mode, and often
enough to keep each run short. It is not a potentially destructive
operation - an &lt;code&gt;apt update&lt;/code&gt; on steroids, so to speak - so it is better
to perform it every few days. At the end of the day, it does not
alter the status of the system, so it is safe enough for background execution.&lt;/p&gt;&lt;p&gt;The &lt;code&gt;gc&lt;/code&gt; operation works directly on the store to purge &lt;em&gt;obsolete&lt;/em&gt; (out of
the current status tree of dependencies) entries. If you later jump back to a past status,
anything purged must be fetched or rebuilt again, which would impact bandwidth and CPU loads, of course.&lt;/p&gt;&lt;p&gt;The &lt;code&gt;branch&lt;/code&gt; specification plays the same role as in any sane
organization of distributed code. It can be used to pull from separate branches of
archives, instead of following the default one, &lt;code&gt;latest&lt;/code&gt;. This feature can be
paired with the Guix &lt;em&gt;time-machine&lt;/em&gt; to jump to any past tree of packages in the
chronology of the archive sources.&lt;/p&gt;&lt;p&gt;Finally, a &lt;em&gt;channel&lt;/em&gt; is simply an alternative archive of sources that is prepared
by third-party teams to complement the official Guix one. For instance, a few independent
packages are offered by &lt;a href=&quot;https://www.inria.fr/en&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;INRIA&lt;/a&gt; and other institutions for
&lt;a href=&quot;https://hpc.guix.info/about/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;the HPC community&lt;/a&gt;, as well as a handful of non-free
packages hosted on &lt;a href=&quot;https://gitlab.com/nonguix/nonguix&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;the nonguix repository&lt;/a&gt; to
solve the well-known hostage dilemma of users depending on closed-source firmware
and a few other proprietary bits.&lt;/p&gt;&lt;h2 id=&quot;one-scheme-to-rule-them-all&quot;&gt;One Scheme to Rule Them All&lt;/h2&gt;&lt;p&gt;An exciting feature of Guix-the-System is the use of &lt;a href=&quot;https://www.gnu.org/software/guile/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Guile&lt;/a&gt;
to describe the whole system, including the core, all services, and even users.
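A heavily trimmed sketch of such a declaration (every field value here is illustrative):

```scheme
;; Sketch of a Guix system declaration; host name, devices, and user are made up.
(use-modules (gnu))
(operating-system
  (host-name "testbox")
  (timezone "Europe/Rome")
  (bootloader (bootloader-configuration
                (bootloader grub-bootloader)
                (targets '("/dev/sda"))))
  (file-systems (cons (file-system
                        (mount-point "/")
                        (device "/dev/sda1")
                        (type "ext4"))
                      %base-file-systems))
  (users (cons (user-account
                 (name "frankie")
                 (group "users")
                 (supplementary-groups '("wheel")))
               %base-user-accounts))
  (services %base-services))
```

A declaration like this is what &lt;code&gt;guix system reconfigure&lt;/code&gt; consumes to realize the whole box.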
In perspective, all installed packages could also be fully configured using functional
Guile snippets of code. This is something currently done in Debian only in
a limited way, through &lt;code&gt;debconf templates&lt;/code&gt; and &lt;code&gt;dpkg selections&lt;/code&gt;.
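In Guix, by contrast, a service and its settings form a single Scheme value; a sketch for one service (the values are illustrative):

```scheme
;; Sketch: configuring the OpenSSH service declaratively; values are illustrative.
(services
  (cons (service openssh-service-type
                 (openssh-configuration
                   (port-number 2222)
                   (permit-root-login #f)))
        %base-services))
```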
In practice, today, one has to use &lt;code&gt;ansible&lt;/code&gt; or similar tools to declare
configurations in some &lt;em&gt;ad hoc&lt;/em&gt; DSL, using
tons of plugins and templates, case by case. In a word, it is a mess.&lt;/p&gt;&lt;p&gt;This is, at least for me, the most intriguing feature and open exciting possibilities.&lt;/p&gt;&lt;p&gt;Currently, any developer with a reasonably decent computer can easily use Guix to rebuild and customize
Guix itself by starting from a monorepo fork, changing its main configuration and
adding/modifying packages in a totally independent and self-consistent way. Something that
in traditional distributions is done by a plethora of tools and interfaces, written in multiple
general-purpose or specific languages, often not wholly documented, and held together
with glue.&lt;/p&gt;&lt;p&gt;Once a layer of general-purpose configurators for common packages is added, the
generation of the whole distribution could become fully self-consistent and complete for
any host or set of boxes. Isn't that an exciting challenge?&lt;/p&gt;&lt;h2 id=&quot;guix-in-foreign-distributions&quot;&gt;Guix in foreign distributions&lt;/h2&gt;&lt;p&gt;Using Guix as an additional package manager in a foreign distribution
has more limitations, of course. First of all, one must
deal with &lt;code&gt;systemd&lt;/code&gt; as the typical init system. Therefore, the general
configuration of the host cannot be expressed concisely as a
Scheme script. That said, it is perfectly possible to use Guix as
a development environment for multiple language ecosystems, thanks
to various Guile build modules. It is even possible to run Guix-the-system
in a &lt;em&gt;container&lt;/em&gt; (or a &lt;em&gt;virtual machine&lt;/em&gt;), to use the host system just as a basic
platform and create Guix-based services and applications on top.&lt;/p&gt;&lt;p&gt;But those will be the topics for other posts...&lt;/p&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://guix.gnu.org/manual/en/html_node/Substitutes.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://guix.gnu.org/manual/en/html_node/Substitutes.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://guix.gnu.org/en/blog/2020/grafts-continued/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://guix.gnu.org/en/blog/2020/grafts-continued/&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://guix.gnu.org/en/manual/devel/en/html_node/Managing-Patches-and-Branches.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://guix.gnu.org/en/manual/devel/en/html_node/Managing-Patches-and-Branches.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://guix.gnu.org/manual/en/html_node/Complex-Configurations.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://guix.gnu.org/manual/en/html_node/Complex-Configurations.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.futurile.net/2023/05/01/guix-publish-caching-substitution-server/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://www.futurile.net/2023/05/01/guix-publish-caching-substitution-server/&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;</content></entry><entry><title>An initial dive into Guix</title><id>https://lovergine.com/an-initial-dive-into-guix.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2024-08-18T19:00:00Z</updated><link href="https://lovergine.com/an-initial-dive-into-guix.html" rel="alternate" /><content type="html">&lt;p&gt;In the last few days, I got familiar with &lt;code&gt;Guix&lt;/code&gt;, which is both a modern package
management system and the main GNU Project distribution for Linux and Hurd (&lt;code&gt;the Guix system&lt;/code&gt;).
As a package management system, it can be installed on most &lt;em&gt;foreign distributions&lt;/em&gt;,
including Debian and any other, as an alternative/additional packaging system.&lt;/p&gt;&lt;p&gt;I both installed the Guix system natively on a small ancient laptop of mine
(an ASUS EEEpc 1215N), and
the Guix package manager on one of my Debian stable boxes. An interesting variant could
be installing the whole system under a container in a non-interactive
mode, but that may be a task for the future. Indeed, the last one could
be the most exciting application of Guix for reproducible deployment.&lt;/p&gt;&lt;h2 id=&quot;guix-the-package-manager&quot;&gt;Guix, the package manager&lt;/h2&gt;&lt;p&gt;Guix (the package management system) is the most interesting part. It is a &lt;em&gt;modern&lt;/em&gt;
system with multiple advanced features, inspired by &lt;a href=&quot;https://nixos.wiki/wiki/Nix_package_manager&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Nix&lt;/a&gt;,
a pre-existing system built around its own functional DSL, the Nix expression language.
Nix and Guix are both declarative and functional package managers with similar goals
for software maintenance.&lt;/p&gt;&lt;p&gt;Both of them claim to have the largest collection of FOSS packages in the world; in any case,
both currently have &lt;em&gt;hubs&lt;/em&gt; with tens of thousands of binary packages.
Maybe not the largest, but respectable.
Of course, Guix is an FSF project and therefore highly choosy about the software
that can be distributed within the Guix archives. That's not so different
from the Debian approach, apart from the derogation that the Debian Project historically
granted for limited proprietary-but-distributable stuff (the non-free+contrib sections).&lt;/p&gt;&lt;p&gt;One interesting aspect of Guix (at least for me) is that it is specifically
based on Guile, an extensible Scheme dialect which is the main extension language of the GNU ecosystem.
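For instance, a hedged sketch of what a package definition looks like (name, version, URL, and hash are purely illustrative):

```scheme
;; Sketch of a Guix package definition; every value here is made up.
(define-public hello-example
  (package
    (name "hello-example")
    (version "1.0")
    (source (origin
              (method url-fetch)
              (uri (string-append "https://example.org/hello-example-"
                                  version ".tar.gz"))
              ;; Placeholder, not a real hash:
              (sha256 (base32 "0000000000000000000000000000000000000000000000000000"))))
    (build-system gnu-build-system)
    (synopsis "Illustrative example package")
    (description "A made-up package showing the shape of a Guix definition.")
    (home-page "https://example.org/")
    (license license:gpl3+)))
```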
All packages are expressed as small snippets of Guile code that declare
dependencies, as well as the build, installation, and test phases of each piece of software.&lt;/p&gt;&lt;p&gt;Anyone who has worked on software packaging in Debian from the beginning knows that
the mythical &lt;code&gt;debian/rules&lt;/code&gt; is essentially a &lt;em&gt;Makefile&lt;/em&gt; on steroids, accompanied
by a handful of declarative files, lately simplified by the use of some frameworks,
such as &lt;a href=&quot;https://joeyh.name/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Joey Hess's&lt;/a&gt; &lt;code&gt;debhelper&lt;/code&gt; or others commonly used
in the past. Maybe not the most elegant approach to packaging and configuring, let me say.
Probably, at the time - 30 years ago or so - it was the most pragmatic one, for sure. And it has worked for many years.&lt;/p&gt;&lt;p&gt;Compared with traditional system-wide packaging, the Nix/Guix approach has several
interesting features:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;transactional, per-user, multi-profile capabilities&lt;/li&gt;&lt;li&gt;a rolling-back capability at the system and user level&lt;/li&gt;&lt;li&gt;an all-in-one way of packaging software together with its dependencies&lt;/li&gt;&lt;li&gt;a single expressive way of defining software configurations at both system-wide and per-user levels&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Guix &lt;em&gt;per se&lt;/em&gt; adds the use of Guile to all the rest, so that all configurations are
concisely expressed as S-expressions, without the need to learn yet another DSL to
describe the software chains of dependencies.&lt;/p&gt;&lt;h2 id=&quot;guix-the-system&quot;&gt;Guix, the system&lt;/h2&gt;&lt;p&gt;The Guix free-software-only system has some interesting characteristics, including the use
of &lt;code&gt;shepherd&lt;/code&gt; as an alternative Guile-based &lt;em&gt;init system&lt;/em&gt; and the rolling-release distribution
style. The non-FHS organization of the filesystem could also pose some problems
when installing software that strictly depends on it, which is for sure a good reason
to use Guix-on-Debian instead of Guix-the-system alone. That issue is also partially mitigated
by a combination of container technology support and an FHS emulation layer.&lt;/p&gt;&lt;p&gt;In my opinion, the whole thing is quite interesting for building development environments
and exploring reproducible deployment systems.&lt;/p&gt;&lt;h2 id=&quot;the-gnu-touch&quot;&gt;The GNU touch&lt;/h2&gt;&lt;p&gt;Apart from the FSF strictly-free approach to collecting software (including the missing firmware blobs
for the Linux-libre kernel, as in Debian until version 12), the Guix system has some typical
geek-only pillars for its ecosystem and community:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;strictly read the reference manuals and info files (i.e., do your homework)&lt;/li&gt;&lt;li&gt;use mailing lists&lt;/li&gt;&lt;li&gt;use IRC dedicated channels&lt;/li&gt;&lt;li&gt;use your brain and experience to solve issues&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Nothing different from the traditional Debian project community and approach to personal computing.
Therefore, nowadays something for geeky folk mainly, I guess.&lt;/p&gt;&lt;h2 id=&quot;issues-and-challenges&quot;&gt;Issues and challenges&lt;/h2&gt;&lt;p&gt;For sure, the user workflow to install and run software changes radically.
One needs to get familiar with the Guix CLI and its mode of operation.
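A sketch of the everyday command set (the package name is just an example):

```shell
guix pull                 # update the per-user copy of the package tree
guix search gdal          # search the archive
guix install gdal         # install into the default user profile
guix upgrade              # upgrade the packages in the profile
guix package --roll-back  # undo the last profile transaction
guix gc                   # garbage-collect unused store items
```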
The Guix approach to deployment and maintenance adds an evident overhead to the system
(for both storage and CPUs), partially
mitigated by the use of substitutes/hubs to reduce building from source requirements.
Not the best thing for old boxes, I guess.&lt;/p&gt;&lt;p&gt;Anyway, it is possible to use local &lt;em&gt;substitutes&lt;/em&gt; to reduce the load for
average systems. As a rolling distribution and/or software hub, I found it reasonably updated, but
that largely depends on applications and domains. Nobody works miracles in those regards:
&lt;em&gt;DebianGIS&lt;/em&gt; packages in &lt;em&gt;testing/unstable&lt;/em&gt; are more up-to-date for geospatial apps,
and probably even more flexible. Sorry, no silver bullet, guys.
Guix also does not (yet?) have a dedicated security team, so I would currently recommend it
only for personal/development use, not for servers.&lt;/p&gt;&lt;p&gt;An important feature of Guix is its support for reproducible software deployment, as an
alternative or complement to the ubiquitous containers, an aspect worth exploring in depth.&lt;/p&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;p&gt;The main resources about Guix are &lt;a href=&quot;https://guix.gnu.org/manual/en/html_node/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;the Reference Manual&lt;/a&gt;
and &lt;a href=&quot;https://guix.gnu.org/en/cookbook/en/html_node/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;The Cookbook&lt;/a&gt;, of course.
I found some interesting non-trivial articles about Guix and its internals on some &lt;em&gt;indie sites&lt;/em&gt;
here listed:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://systemcrafters.net/craft-your-system-with-guix/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://systemcrafters.net/craft-your-system-with-guix/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://www.futurile.net/archives.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://www.futurile.net/archives.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://github.com/techenthusiastsorg/awesome-guix&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://github.com/techenthusiastsorg/awesome-guix&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://hpc.guix.info/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://hpc.guix.info/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;</content></entry><entry><title>Are distributions still relevant?</title><id>https://lovergine.com/are-distributions-still-relevant.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2024-07-29T20:00:00Z</updated><link href="https://lovergine.com/are-distributions-still-relevant.html" rel="alternate" /><content type="html">&lt;p&gt;In principle and the traditional vision, the roles were clear enough. Upstream
developers had to create and support their own projects, including multiple
libraries, tools and modules, possibly for multiple operating systems.
Distribution maintainers had the responsibility of collecting a significant software
set, porting it to various architectures, choosing versions of each piece of software
that work well together, patching for coherence with well-established
policies, and finally providing a build and installation system for the end
users. At the end of the day, quite a complicated and multifaceted work that
many people out there do for fun, others as a full-time job.&lt;/p&gt;&lt;p&gt;Some distributions have been around since the
beginning of the 90s and still release new versions regularly, including Debian GNU/Linux,
an ecosystem where I have lived and collaborated for almost 25 years.
That was an ideal workflow, managed differently and with diverse goals
by multiple non-profit associations and companies, including the Debian project.
Distributions aimed to provide the most stable and dependable daily-use system,
especially for servers and enterprise ecosystems.&lt;/p&gt;&lt;p&gt;That was until 15 years ago or less, when virtual machines and later containers
changed the game and the whole cloud computing revolution started. In
retrospect, that has not been the only driver of change. Another important
aspect has been the great relevance that &lt;em&gt;dynamic languages&lt;/em&gt; and their ecosystems
assumed during the same period.&lt;/p&gt;&lt;h2 id=&quot;the-new-world-of-hubs&quot;&gt;The new world of hubs&lt;/h2&gt;&lt;p&gt;Hubs, hubs everywhere and for anyone. For programming languages, as well as for
containers and virtual machines. Starting from Perl and its CPAN archive, all
currently used languages have their own ecosystems of packages/modules hosted
on some third-party delivery networks. Most modern applications are based on
the distributed efforts of thousands of developers who create and maintain
thousands of tiny or large modules to solve some very specific goals, which
inevitably live in their respective language hubs.&lt;/p&gt;&lt;p&gt;Your latest applications, almost for sure, could only exist thanks to dozens - or
hundreds - of &lt;code&gt;include/require/use&lt;/code&gt; clauses written in some of the prime-time dynamic languages
that currently ride the wave of popularity. Sub-modules that are often developed by
small independent teams or even single developers, packages that are mostly
&lt;em&gt;open source&lt;/em&gt; and come with no warranties for their use and destination.&lt;/p&gt;&lt;h2 id=&quot;hubs-for-developers&quot;&gt;Hubs for developers&lt;/h2&gt;&lt;p&gt;In recent years, developers have learned to ship their own laptops along with their applications: let me
simplify. Instead of creating minimal, well-engineered traditional packages for some target platforms,
and possibly installers, we are now distributing container images on some cloud computing resources
with all the required software piled up within them. Whether those images are based on Docker, Podman,
Lilipod, or Apptainer is entirely secondary. Often, most applications are written
in a dynamic language and install gigs of dependency modules and libraries (often
in multiple versions) altogether. Giant software blobs stacked on top of some
very tiny operating system layer. All that started from &lt;em&gt;continuous integration&lt;/em&gt;
platforms for testing and development and exploded in a plethora of subsystems and tools used for
deployment to the end users.&lt;/p&gt;&lt;h2 id=&quot;hubs-for-end-users&quot;&gt;Hubs for end-users&lt;/h2&gt;&lt;p&gt;Talking about common users: thanks to new container-based systems designed for that purpose - such as
flatpak, snap, or appimage - even ordinary users can now run programs that are no longer
strictly dependent on distribution package managers. Those are the equivalent of Windows
installers in the Linux environment for end-users. Installing new programs or sub-systems is now
simplified: no building from sources, backporting, or other more technical workflows. Just install
and update the latest product, kindly made available by multiple upstream teams, as
some containerized image with only a minor runtime overhead.&lt;/p&gt;&lt;p&gt;It seems like a new ideal development and user world for Linux ecosystems.
Probably it is so for advanced and self-aware users, but ...&lt;/p&gt;&lt;p&gt;&lt;em&gt;I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror and
were suddenly silenced. I fear something terrible has happened (Obi-Wan Kenobi).&lt;/em&gt;&lt;/p&gt;&lt;h2 id=&quot;the-splintering-of-the-linux-ecosystems&quot;&gt;The splintering of the Linux ecosystems&lt;/h2&gt;&lt;p&gt;The Linux ecosystem has always been extremely fragmented from the point of view of a Windows (or Macos) user.
There are too many distros, too many desktop environments, and too many programs that often do almost the same things,
but differently in some regards, always in the name of freedom of choice. However, all this is nothing
compared with what is coming. Distributions are going to become less and less relevant for running
applications in the cloud and even on the user's desktop. For instance, in the case of Ubuntu and many derivative distributions, even a big
part of the desktop system is now based on &lt;em&gt;snap&lt;/em&gt; and its Ubuntu-specific containerized hub.
Soon, all distributions could become very compact core systems with most of the system applications moved
onto multiple external hubs with different frequencies of update.&lt;/p&gt;&lt;h2 id=&quot;who-is-responsible-of-the-whole-supply-chain&quot;&gt;Who is responsible for the whole supply chain?&lt;/h2&gt;&lt;p&gt;At the moment, one of the main challenges for the security of applications - which have become more and more
complicated and dependent on &lt;em&gt;a plethora&lt;/em&gt; of different third-party software - is certifying the whole
supply chain. A &lt;em&gt;software bill of materials&lt;/em&gt; is nowadays required in multiple contexts, but guess what?
A splintering of the whole software stack management responsibility among multiple third-party hubs, development
teams, and end-users is a game changer.
As a user or developer, you will be &lt;em&gt;directly&lt;/em&gt; responsible for updating all
your applications and keeping them stable. No longer your distribution, but you.
Most hubs do not have clear, well-established, and just-in-time policies for security updates.
In most cases, it is a task every development team has to manage, re-collecting all the
required pieces - in a consistent way - and rebuilding containerized images when needed.&lt;/p&gt;&lt;p&gt;Sure, there are &lt;em&gt;continuous integration&lt;/em&gt; workflows available in some cases. Still, I don't know so many
application teams that are seriously interested in reacting promptly to security reports: starting from
some obscure CVEs, patching, updating multiple source trees, and even ensuring nothing breaks,
thanks to accurate testing platforms. Grunt work done by distribution security teams until now.&lt;/p&gt;&lt;p&gt;Even if basic container images were &lt;em&gt;bricks&lt;/em&gt; accurately managed and updated, the final build and
deployment of application images is a different matter, and while the current approach can
be helpful to keep the most recent software installed on end-user desktops, I'm pretty sure
that most of the thousands of embedded devices that now populate our homes are in a pathetic
condition for security. Many of them have opaque and short-term support, and the use
of multi-hub sources can only make the whole thing worse: we are all potentially living with multiple
little time bombs permanently connected to the net.&lt;/p&gt;&lt;p&gt;This has been so since the very beginning in embedded environments; now we are changing things
in the same way for desktops and cloud applications.&lt;/p&gt;&lt;p&gt;&lt;em&gt;Once you start down the dark path, forever will it dominate your destiny (Yoda)&lt;/em&gt;&lt;/p&gt;&lt;p&gt;We live in interesting times...&lt;/p&gt;</content></entry></feed>