Linux: Setting the default GDM login monitor in a multi-monitor setup using GNOME display settings

If you’re using a multi-monitor setup with GDM (GNOME Display Manager) and the login screen consistently appears on the wrong monitor, this article presents a tested solution, which can be applied manually or automated, to make the GDM monitor configuration match that of your primary user setup.

The issue

Consider this configuration: Two monitors connected to an Nvidia graphics card. Despite setting the primary monitor correctly in GNOME, the GDM login screen appears on the secondary monitor. Even if that monitor is turned off, GDM continues to display the login prompt there because it is using the wrong monitor as the default.

The solution

The login screen configuration for GDM can be influenced by copying the monitor layout from your GNOME user session.

This is done by copying the ~/.config/monitors.xml file from your user configuration to GDM’s configuration directory.

Step 1: Configure your display using GNOME

First, configure your display layout as desired using GNOME’s display settings:

gnome-control-center display

Step 2: Copy the resulting monitor configuration to GDM’s configuration directory

Copy the resulting monitor configuration ~/.config/monitors.xml to GDM’s configuration directory:

sudo install -o root -m 644 ~/.config/monitors.xml ~gdm/.config/

(the shell will automatically interpret ~gdm as the home directory of the gdm user)

Step 3: Restart GDM

Restart GDM with the following command:

sudo systemctl restart gdm

How to automatically copy monitors.xml to the GDM configuration directory?

The copying of the monitors.xml file can be automated by defining a systemd service override for the GDM service.

First, create the following directory using:

sudo mkdir -p /etc/systemd/system/gdm.service.d/

Then, create the file /etc/systemd/system/gdm.service.d/override.conf with the following contents:

[Service]
ExecStartPre=/bin/sh -c 'install -o root -m 644 /home/YOUR_USER/.config/monitors.xml ~gdm/.config/monitors.xml || true'

Ensure to:

  • Replace YOUR_USER with your actual desktop user name.
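  • Run sudo systemctl daemon-reload after creating or modifying the override file, so that systemd picks up the change.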

Conclusion

Copying the GNOME display settings to the GDM configuration directory ensures that GDM adopts the same monitor layout as your GNOME session, causing the login screen to appear on your preferred monitor.

Configuring Linux on a ThinkPad T420s or T420 Laptop for Optimal Performance

ThinkPad T420s and T420 laptops, despite their age, remain reliable for web browsing, word processing, and various other tasks. Linux can breathe new life into such dated computers, allowing them to perform efficiently.

I configured one of these laptops for my son, and he is now able to do his homework using it. With proper configuration, ThinkPad laptops can achieve optimal performance and power management. This article provides a guide to configuring X11/Xorg, kernel parameters, firmware, fan control, and power management settings to optimize the ThinkPad T420s or T420 for modern use.

Some instructions in this article are specific to Debian/Ubuntu-based distributions, but they can easily be adapted to other distributions such as Red Hat, Fedora, Arch Linux, Gentoo, and others.

X11/Xorg

To ensure proper functionality of Intel graphics and avoid issues such as black screens after waking from sleep, use the Intel driver instead of modesetting.

Create the configuration file /etc/X11/xorg.conf.d/30-intel.conf:

Section "Device"
  Identifier "Intel Graphics"
  Driver "intel"

  Option "Backlight" "intel_backlight"
EndSection

Ensure the intel driver is installed:

sudo apt-get install xserver-xorg-video-intel

IMPORTANT: Ensure that the integrated graphics are set as the default video card in the ThinkPad BIOS.

Kernel parameters

To ensure backlight control is working (increasing and decreasing brightness), add the following kernel parameter: acpi_backlight=native

On a Debian/Ubuntu based distribution, this can be appended to the kernel command line in the bootloader configuration, typically in /etc/default/grub (for GRUB users):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=native"

After modifying this file, update GRUB with:

sudo update-grub

Fan control

Without software to control the fan, it may run at maximum speed. To enable fan control, create the file /etc/modprobe.d/thinkpad_acpi.conf:

options thinkpad_acpi fan_control=1

After that, install zcfan. On a Debian/Ubuntu based distribution:

sudo apt-get install zcfan
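
Depending on the packaging, zcfan usually ships with a systemd unit; if so, it can be enabled and started with:

sudo systemctl enable --now zcfan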

Packages

Ensure the necessary firmware for Atheros and Realtek network devices is installed. On Debian/Ubuntu-based distributions, install the following packages:

sudo apt install firmware-atheros firmware-realtek

It is also recommended to install essential packages for hardware encoding and decoding, including intel-microcode for the latest processor updates, which improve system stability and performance:

sudo apt-get install intel-microcode intel-media-va-driver-non-free i965-va-driver

TLP

TLP is a power management tool that optimizes battery life. Install and configure it as follows:

sudo apt install tlp

Create the configuration file /etc/tlp.d/00-base.conf:

DEVICES_TO_DISABLE_ON_BAT="bluetooth wwan"
DEVICES_TO_ENABLE_ON_STARTUP="wifi"
DEVICES_TO_DISABLE_ON_LAN_CONNECT="wifi wwan"

TLP_DEFAULT_MODE=BAT

CPU_SCALING_GOVERNOR_ON_AC=performance
CPU_SCALING_GOVERNOR_ON_BAT=schedutil

# PCIe Active State Power Management (ASPM):
PCIE_ASPM_ON_AC=performance

# Set Intel CPU performance: 0..100 (%). Limit the
# max/min to control the power dissipation of the CPU.
# Values are stated as a percentage of the available
# performance.
CPU_MIN_PERF_ON_AC=70
CPU_MAX_PERF_ON_AC=100

# [default] performance powersave powersupersave. on=disable
RUNTIME_PM_ON_AC=on
RUNTIME_PM_ON_BAT=powersupersave
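
After creating the configuration file, enable and start the TLP service, and verify its status using the tlp-stat tool that ships with TLP:

sudo systemctl enable --now tlp
sudo tlp start
sudo tlp-stat -s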

Conclusion

The ThinkPad T420s and T420, though older models, remain reliable machines for everyday tasks. With the right configuration, these laptops can be revitalized, making them well-suited for modern use.

jc-dotfiles – A collection of configuration files for UNIX/Linux systems

The jamescherti/jc-dotfiles repository is a collection of configuration files. You can either install them directly or use them as inspiration for your own dotfiles. These dotfiles provide configurations for various tools, enhancing productivity and usability. This article will explore the contents of the jc-dotfiles repository, highlight the main configurations, and guide you through the installation process.

In addition to the jc-dotfiles repository, the author also recommends the following repositories:

  • jamescherti/jc-firefox-settings: Provides the user.js file, which holds settings to customize the Firefox web browser to enhance the user experience and security.
  • jamescherti/jc-xfce-settings: A script that can customize the XFCE desktop environment programmatically, including window management, notifications, keyboard settings, and more, to enhance the user experience.
  • jamescherti/jc-gnome-settings: A script that can customize the GNOME desktop environment programmatically, including window management, notifications, and more.
  • jamescherti/bash-stdops: A collection of Bash helper shell scripts.
  • jamescherti/jc-gentoo-portage: Provides configuration files for customizing Gentoo Linux Portage, including package management, USE flags, and system-wide settings.
  • jamescherti/minimal-emacs.d: A lightweight and optimized Emacs base (init.el and early-init.el) that gives you full control over your configuration. It provides better defaults, an optimized startup, and a clean foundation for building your own vanilla Emacs setup.
  • jamescherti/jc-vimrc: The jc-vimrc project is a customizable Vim base that provides better Vim defaults, intended to serve as a solid foundation for a Vim configuration.

The jc-dotfiles Repository Overview

The repository includes configurations for several components of a UNIX/Linux system, ranging from shell settings to terminal multiplexer and input customizations. Here’s a breakdown of the key configurations included:

1. Shell Configuration (.bashrc, .profile, .bash_profile)

The shell configuration files are optimized for efficient Bash shell settings:

  • Efficient Command Execution: Predefined aliases, functions, and environment variables to speed up common tasks.
  • Interactive Sessions: Custom prompt configurations and other interactive settings that improve the overall shell experience.

2. Terminal Multiplexer Configuration (.tmux.conf)

Tmux is a tool for managing multiple terminal sessions, and this configuration enhances your Tmux setup. It includes:

  • Session Management: Custom keybindings and layout configurations for improved session switching.
  • Productivity Boosters: Enhanced usability and efficiency for handling multiple tasks within terminal windows.

3. Readline Configuration (.inputrc)

The Readline configuration improves interactive shell input by allowing additional keybindings:

  • Custom Keybindings: Use Alt-h, Alt-j, Alt-k, and Alt-l for cursor movement, enabling Vim-style HJKL navigation within the terminal.
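
As a rough illustration (not necessarily the exact bindings shipped in the repository), such Vim-style movement can be defined in ~/.inputrc along these lines:

# Illustrative ~/.inputrc snippet: Alt-h/j/k/l cursor movement
"\eh": backward-char
"\ej": next-history
"\ek": previous-history
"\el": forward-char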

4. Other Configurations

The repository also includes a variety of other configuration files and scripts:

  • .gitconfig: Custom Git settings to improve version control workflow. (Here is an article about .gitconfig: Optimized Git configuration ~/.gitconfig for performance and usability)
  • ranger: File manager configuration for enhanced navigation and file handling.
  • .fdignore: Ignore rules for fd, the fast search tool, to filter out unwanted files during searches.
  • .wcalcrc: Configuration for wcalc, a command-line calculator.
  • mpv: Media player settings for a better viewing experience.
  • picom: Window compositor configuration for optimized display and transparency effects.
  • feh: Image viewer configuration with useful options and custom keybindings.

Conclusion

The jamescherti/jc-dotfiles configuration scripts can enhance your workflow. Whether you install them directly or use them as inspiration for your own dotfiles, you’ll gain access to optimized settings for your shell, terminal, Git, and various other tools, all aimed at boosting productivity and efficiency.

Running Large Language Models locally with Ollama (compatible with Linux, macOS, and Windows)

Running Large Language Models on your machine can enhance your projects, but the setup is often complex. Ollama simplifies this by packaging everything needed to run a Large Language Model. Here’s a concise guide on using Ollama to run LLMs locally.

Requirements

  • CPU: Aim for a CPU that supports AVX512, which accelerates the matrix multiplication operations essential for LLMs. (If your CPU does not support AVX, see Ollama Issue #2187: Support GPU runners on CPUs without AVX.) A quick way to check your CPU’s support is shown after this list.
  • RAM: A minimum of 16GB is recommended for a decent experience when running models with 7 billion parameters.
  • Disk Space: A practical minimum of 40GB of disk space is advisable.
  • GPU: While a GPU is not mandatory, it is recommended for enhanced performance in model inference. Refer to the list of GPUs that are compatible with Ollama. For running quantized models, GPUs that support 4-bit quantized formats can handle large models more efficiently, with approximate VRAM requirements as follows: ~4 GB of VRAM for a 7B model, ~8 GB for a 13B model, ~16 GB for a 30B model, and ~32 GB for a 65B model.
  • For NVIDIA GPUs: Ollama requires CUDA, a parallel computing platform and API developed by NVIDIA. You can find the instructions to install CUDA on Debian here. Similar instructions can be found in your Linux distribution’s wiki.
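
To check which AVX variants your CPU supports, you can inspect the CPU flags, for example:

grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u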

Step 1: Install Ollama

Download and install Ollama for Linux using:

curl -fsSL https://ollama.com/install.sh | sh

Step 2: Download a Large Language Model

Download a specific large language model using the Ollama command:

ollama pull gemma2:2b

The command above downloads the Gemma2 model by Google DeepMind. You can find other models by visiting the Ollama Library.

(Downloading “gemma2:2b” actually downloads “gemma2:2b-instruct-q4_0”, meaning it retrieves a quantized version of the 2 billion parameter model specifically optimized for instruction-following tasks such as chatbots. The quantization process reduces the model’s precision from the original 16- or 32-bit floating-point representation to a more compact 4-bit format, thereby significantly lowering memory usage and enhancing inference speed. However, this quantization can lead to a slight decrease in accuracy compared to the full-precision floating-point model.)

Step 3: Chat with the model

Run the large language model:

ollama run gemma2:2b

This launches an interactive REPL where you can interact with the model.
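
You can also pass a prompt directly on the command line, or query the local Ollama HTTP API (which listens on port 11434 by default). For example:

ollama run gemma2:2b "Explain what quantization means in one sentence."
curl http://localhost:11434/api/generate -d '{"model": "gemma2:2b", "prompt": "Hello", "stream": false}'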

Step 4: Install open-webui (web interface)

Open-webui offers a user-friendly interface for interacting with large language models downloaded via Ollama. It enables users to run and customize models without requiring extensive programming knowledge.

It can be installed using pip within a Python virtual environment:

mkdir -p ~/.python-venv/open-webui
python -m venv ~/.python-venv/open-webui
source ~/.python-venv/open-webui/bin/activate
pip install open-webui

Finally, execute the following command to start the open-webui server:

~/.python-venv/open-webui/bin/open-webui serve

Ollama must also be running as a server alongside open-webui:

ollama serve
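
(If Ollama was installed with the official install script, the Ollama server may already be running as a systemd service. Once both are running, the open-webui interface should be reachable at http://localhost:8080, its default port, while Ollama listens on http://localhost:11434.)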

Conclusion

With Ollama, you can quickly run Large Language Models (LLMs) locally and integrate them into your projects. Additionally, open-webui provides a user-friendly interface for interacting with these models, making it easier to customize and deploy them without extensive programming knowledge.

Links

  • Ollama Library: A collection of language models available for download through Ollama.
  • Ollama @Github: The Ollama Git repository.
  • Compile Ollama: For users who prefer to compile Ollama instead of using the binary.

Emulating Cherry MX Blue Mechanical Keyboard Sounds on Linux

For people nostalgic for the era of tactile and audible feedback from typing on a mechanical keyboard, Cherrybuckle allows simulating the sounds of a mechanical keyboard with Cherry MX Blue key switches.

Cherrybuckle operates as a background process within a computer system, capturing and emitting a sound for each key pressed and released. It is a fork of the Bucklespring project that adds Cherry MX sounds to the default Bucklespring keyboard sounds.

Installing dependencies

The dependencies can be installed on a Debian or Ubuntu system using the following commands:

sudo apt-get install build-essential git
sudo apt-get install libalure-dev libx11-dev libxtst-dev pkg-config

Compiling and running Cherrybuckle on Debian/Ubuntu

Retrieve the project source code from the Git repository:

git clone https://github.com/jamescherti/cherrybuckle

Change the current working directory to “cherrybuckle”:

cd cherrybuckle

Compile the source code into an executable program:

make

Finally, execute Cherrybuckle:

./cherrybuckle

Creating and Restoring a Gzip Compressed Disk Image with dd on UNIX/Linux

Creating and restoring disk images are essential tasks for developers, system administrators, and users who want to safeguard their data or replicate systems efficiently. One useful tool for this purpose is dd, which allows for low-level copying of data. In this article, we will explore how to clone and restore a partition from a compressed disk image in a UNIX/Linux operating system.

IMPORTANT: There is a risk of data loss if a mistake is made. The dd command can be dangerous if not used carefully. Specifying the wrong input or output device can result in data loss. Users should exercise caution and double-check their commands before executing them.

Cloning a Partition into a Compressed Disk Image

To clone a partition into a compressed disk image, you can use the dd and gzip commands:

dd if=/dev/SOURCE conv=sync bs=64K | gzip --stdout > /path/to/file.gz

This command copies the content of the block device /dev/SOURCE to the compressed file /path/to/file.gz, 64 kilobytes at a time.

Restoring a Partition from a Compressed Disk Image

To restore a partition from a file containing a compressed disk image, use the following command:

gunzip --stdout /path/to/file.gz | dd of=/dev/DESTINATION bs=64K

This command decompresses the content of the compressed file located at /path/to/file.gz and copies it to the block device /dev/DESTINATION, 64 kilobytes at a time.

More information about the dd command options

Here are additional details about the dd command options:

  • The status=progress option makes dd display transfer statistics progressively.
  • The conv=noerror option instructs dd to continue despite read errors. However, ignoring errors might result in data corruption in the copied image: the image could be incomplete or corrupted, especially if errors occur in critical parts of the data.
  • The conv=sync option pads each incomplete input block with zeros up to the block size, which keeps the output offsets aligned when read errors occur. In situations where data integrity is less critical, combining conv=noerror,sync can help recover as much data as possible from a source with occasional errors.
  • Finally, the bs=64K option instructs dd to read or write up to the specified number of bytes at a time (in this case, 64 kilobytes). The default value is 512 bytes, which is relatively small; consider using 64K or even the larger 128K. However, note that while a larger block size speeds up the transfer, a smaller block size enhances transfer reliability. A command combining these options is shown after this list.
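
As an illustration combining the options described above (adjust the source device and destination path to your setup):

dd if=/dev/SOURCE bs=64K conv=noerror,sync status=progress | gzip --stdout > /path/to/file.gz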

Ensuring Data Integrity

Although dd reports the number of full and partial records read and written at the end of the transfer, it is prudent to further confirm the integrity of the copied data after completing the dd operation.

To achieve this, follow these steps:

Generate the md5sum of the source block device:

dd if=/dev/SOURCE | md5sum

Next, generate the md5sum of the gzip-compressed file:

gunzip --stdout /path/to/file.gz | md5sum

Ensure that the two md5sum fingerprints are equal. This additional verification step adds an extra layer of assurance regarding the accuracy and integrity of the copied data.

Installing Debian onto a separate partition without using the Debian installer

There are various scenarios in which one might need to install a Debian-based system (e.g., Debian, Ubuntu, etc.) from another distribution (e.g., Arch Linux, Gentoo, Debian/Ubuntu distributions, Fedora, etc.). One common reason is when a user wants to set up a Debian-based system alongside an existing distribution. This could be for the purpose of testing software compatibility, development, or simply to have a dual-boot.

A Debian-based distribution can be installed from any other distribution using debootstrap. The debootstrap command-line tool allows installing a Debian or Ubuntu base system within a subdirectory of an existing, installed system. Unlike traditional installation methods using a CD or a USB Key, debootstrap only requires access to a Debian repository.

There are several reasons why this approach is advantageous:

  • No need for a bootable USB/CD: Install Debian without external installation media.
  • Dual-boot without reinstalling the host OS: Easily add Debian alongside another Linux system.
  • Minimal and customizable installation: Install only essential packages for a lightweight system (the installer sometimes installs more than necessary).
  • Remote server installations: Install Debian on a remote machine without physical access.
  • System recovery: Reinstall or repair a broken Debian system from another Linux distribution.
  • Automated and scripted deployments: Useful for mass deployments in enterprise environments.
  • Maintaining a multi-distro workflow: Run both a stable Debian system and a rolling-release distribution.

Step 1: Create a new LVM partition, format it, and mount it

# Create the root LVM partition
lvcreate  -L 20G -n debian_root VOL_NAME

# Format the partition
mkfs.ext4 /dev/VOL_NAME/debian_root

# Mount the partition
mkdir /mnt/debian_root
mount /dev/VOL_NAME/debian_root /mnt/debian_root

Step 2: Install the debootstrap command-line tool

On Arch Linux, debootstrap can be installed using:

pacman -Sy debian-archive-keyring debootstrap

On Gentoo, it can be installed using:

emerge -a dev-util/debootstrap

On Debian/Ubuntu based distributions:

apt-get install debootstrap

Step 3: Install the Debian base system

Use the debootstrap command to install Debian into the target directory:

debootstrap --arch=amd64 stable /mnt/debian_root http://deb.debian.org/debian

You can replace stable with another Debian release like testing or unstable if desired. You can also add the flag --force-check-gpg to force checking Release file signatures.

In the above example, it will install the Debian-based system from the repository http://deb.debian.org/debian into the local directory /mnt/debian_root.

Step 4: Chroot into the Debian system

Since you are installing a Debian-based system inside another distribution (Arch Linux, Gentoo, etc.), you’ll need to ensure that the directory where the Debian system is mounted is ready. You can achieve this by mounting certain directories and chrooting into the Debian system:

sudo mount --bind /dev /mnt/debian_root/dev
sudo mount --bind /proc /mnt/debian_root/proc
sudo mount --bind /sys /mnt/debian_root/sys
sudo mount --bind /boot /mnt/debian_root/boot
sudo cp /etc/resolv.conf /mnt/debian_root/etc/resolv.conf
sudo cp /etc/fstab /mnt/debian_root/etc/fstab
sudo chroot /mnt/debian_root /bin/bash -l

The chroot command will open a new shell in the Debian environment.

Step 5: Configure the Debian-based system

Now that you’re inside the Debian-based system, you can configure it as desired. You can install packages, modify configurations, set up users, etc.

Here is an example:

apt-get update

# Install the Linux Kernel
apt-get install linux-image-amd64 firmware-linux-free firmware-misc-nonfree 

# Install cryptsetup if you are using a LUKS encrypted partition
apt-get install cryptsetup cryptsetup-initramfs

# Install misc packages
apt-get install console-setup vim lvm2 sudo

# Reconfigure locales
dpkg-reconfigure locales

# Configure the host name and the time zone
echo yourhostname > /etc/hostname
ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime
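
While still inside the chroot, it is also advisable to set a root password and create a regular user (replace YOUR_USER with the desired user name), for example:

# Set the root password
passwd

# Create a user and allow it to use sudo
useradd -m -s /bin/bash -G sudo YOUR_USER
passwd YOUR_USER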

Do not forget to:

  • Modify /mnt/debian_root/etc/fstab (The mount point “/” has to point to the Debian system)
  • Modify /mnt/debian_root/etc/crypttab (If you are using a LUKS encrypted partition)

How to boot into the newly installed system?

You can, for example, configure GRUB to boot your newly configured operating system.

You can either use the new Debian-based system’s GRUB as the default (replace the existing one) or configure additional GRUB entries in an existing system. For instance, if your base system is Debian-based, you can add your entry using /etc/grub.d/40_custom:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Debian 2' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'debian_fallback' {
        insmod part_gpt
        insmod fat
        search --no-floppy --fs-uuid --set=root 00000000-000a-00a0-a0a0-000000a0000a
        linux /backup/vmlinuz-6.12.12+bpo-amd64 root=/dev/MY_LVM_VOLUME/debian ro fsck.mode=auto fsck.repair=yes nowatchdog apparmor=1 acpi_backlight=native
        initrd /initrd.img-6.12.12+bpo-amd64
}

(Replace /dev/MY_LVM_VOLUME/debian with your root partition and 00000000-000a-00a0-a0a0-000000a0000a with your actual UUID, which you can find using the command: lsblk -o +UUID)
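
After adding the custom entry, regenerate the GRUB configuration so that it is taken into account:

sudo update-grub
# or, equivalently:
# sudo grub-mkconfig -o /boot/grub/grub.cfg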

In my case, I am using systemd-boot (managed with bootctl), which I installed on Gentoo. I simply added /boot/loader/entries/debian.conf with the following configuration:

title Debian
linux /vmlinuz-6.12.12+bpo-amd64
initrd /initrd.img-6.12.12+bpo-amd64
options rw root=/dev/volume1/debian

Congratulations! You have successfully installed a Debian-based system using debootstrap from another distribution such as Arch Linux, Gentoo, etc.

Gentoo: How to Speed Up emerge --sync

Synchronizing with the Gentoo Portage ebuild repository using emerge --sync can be slow when utilizing the rsync protocol. However, an effective solution exists that can greatly improve the synchronization speed: Configuring emerge --sync to synchronize using Git instead.

In this article, we will explore how to set up emerge to synchronize from the official Gentoo ebuild Git repository and save valuable time during the synchronizing process.

Step 1: Install Git using the following command:

sudo emerge -a dev-vcs/gitCode language: plaintext (plaintext)

Step 2: Remove any file from the directory /etc/portage/repos.conf/ that configures the emerge command to use rsync.

Step 3: Create the file /etc/portage/repos.conf/gentoo.conf containing:

[DEFAULT]
main-repo = gentoo

[gentoo]

# The sync-depth=1 option speeds up initial pull by fetching 
# only the latest Git commit and its immediate ancestors, 
# reducing the amount of downloaded Git history.
sync-depth = 1

sync-type = git
auto-sync = yes
location = /var/db/repos/gentoo
sync-git-verify-commit-signature = yes
sync-openpgp-key-path = /usr/share/openpgp-keys/gentoo-release.asc
sync-uri = https://github.com/gentoo-mirror/gentoo.git

Step 4: Finally, run the following command to synchronize with the Gentoo ebuild repository using Git:

sudo emerge --sync

The initial download of the entire Git repository will cause the first emerge --sync command to take some time. However, subsequent synchronizations will be significantly quicker, taking only a few seconds.

Using Git is a great way to speed up synchronization with the Gentoo ebuild repository. By following the steps outlined in this article, you clone the Portage repository once and then keep it up to date by fetching only the latest changes, which can save a significant amount of time on every sync.

A Docker container for Oddmuse, a Wiki engine that does not require a database

Oddmuse is a wiki engine. Unlike other wiki engines that rely on local or remote databases to store and modify content, Oddmuse utilizes the local file system. This means that users can create and manage Wiki pages on their local machine and easily transfer them to other locations or servers. By leveraging the local file system, Oddmuse eliminates the need for complex and costly database setups. Oddmuse is used by many websites around the world, including the website emacswiki.org.

To make it even easier to use Oddmuse, I have created a Docker container that includes the wiki engine and all of its dependencies. This container can be downloaded and run on any machine that has Docker installed.

Pull and run the Oddmuse Docker container

The Oddmuse Docker container can be pulled from the Docker hub repository jamescherti/oddmuse using the following command:

docker pull jamescherti/oddmuse

And here is an example of how to run the Docker container:

docker run --rm \
  -v /local/path/oddmuse_data:/data \
  -p 8080:80 \
  --env ODDMUSE_URL_PATH=/wiki \
  jamescherti/oddmuse

Once the container is up and running (in the example above, the wiki should be reachable at http://localhost:8080/wiki), you can start using Oddmuse to create and manage your own wiki pages.

Alternative method: Compile the Oddmuse Docker container

Alternatively, you can build the Oddmuse Docker container using the Dockerfile that is hosted in the GitHub repository jamescherti/docker-oddmuse:

git clone https://github.com/jamescherti/docker-oddmuse
docker build -t jamescherti/oddmuse docker-oddmuse

The Oddmuse Docker container is a convenient and efficient way to use the Oddmuse Wiki engine without the need for complex setups.

Arch Linux: Preserving the kernel modules of the currently running kernel during and after an upgrade

One potential issue when upgrading the Arch Linux kernel is that the modules of the currently running kernel may be deleted. This can lead to a number of problems, including unexpected behavior, system crashes, or the inability to mount certain file systems (e.g. the kernel fails to mount a vfat file system due to the unavailability of the vfat kernel module).

The Arch Linux package linux-keep-modules (also available on AUR: linux-keep-modules @AUR), written by James Cherti, provides a solution to ensure that the modules of the currently running Linux kernel remain available until the operating system is restarted. Additionally, after a system restart, the script automatically removes any unnecessary kernel modules that might have been left behind by previous upgrades (e.g. the kernel modules that are not owned by any Arch Linux package and are not required by the currently running kernel).

The linux-keep-modules package keeps your system running smoothly and maintains stability even during major Linux kernel upgrades.

Make and install the linux-keep-modules package

Clone the repository and change the current directory to ‘archlinux-linux-keep-modules/’:

$ git clone https://github.com/jamescherti/archlinux-linux-keep-modules.git
$ cd archlinux-linux-keep-modules/

Use makepkg to build the linux-keep-modules package:

$ makepkg -f

Install the linux-keep-modules package:

$ sudo pacman -U linux-keep-modules-*-any.pkg.tar.*

Finally, enable the cleanup-linux-modules service:

$ sudo systemctl enable cleanup-linux-modules

(The cleanup-linux-modules service will delete, at boot time, the Linux kernel modules that are not owned by any package)

The linux-keep-modules Arch Linux package offers a solution to preserve kernel modules during and after upgrades, ensuring that the necessary modules for the currently running kernel remain present in the system even after the kernel is upgraded. This solution keeps your system running smoothly and maintains stability even during major upgrades.


Helper script to upgrade Arch Linux

In this article, we will be sharing a Python script, written by James Cherti, that can be used to upgrade Arch Linux. It is designed to make the process of upgrading the Arch Linux system as easy and efficient as possible.

The helper script to upgrade Arch Linux can:

  • Delete the ‘/var/lib/pacman/db.lck’ when pacman is not running,
  • upgrade archlinux-keyring,
  • upgrade specific packages,
  • download packages,
  • upgrade all packages,
  • remove from the cache the pacman packages that are no longer installed.

The script provides a variety of options and is perfect for those who want to automate the process of upgrading their Arch Linux system (e.g. execute it from cron) and ensure that their system is always up to date.

Requirements: psutil
Python script name: archlinux-update.py

#!/usr/bin/env python
# Author: James Cherti
# License: MIT
# URL: https://www.jamescherti.com/script-update-arch-linux/
"""Helper script to upgrade Arch Linux."""

import argparse
import logging
import os
import re
import subprocess
import sys
import time

import psutil


class ArchUpgrade:
    """Upgrade Arch Linux."""

    def __init__(self, no_refresh: bool):
        self._download_package_db = no_refresh
        self._keyring_and_pacman_upgraded = False
        self._delete_pacman_db_lck()

    @staticmethod
    def _delete_pacman_db_lck():
        """Delete '/var/lib/pacman/db.lck' when pacman is not running."""
        pacman_running = False
        for pid in psutil.pids():
            try:
                process = psutil.Process(pid)
                if process.name() == "pacman":
                    pacman_running = True
                    break
            except psutil.Error:
                pass

        if pacman_running:
            print("Error: pacman is already running.", file=sys.stderr)
            sys.exit(1)

        lockfile = "/var/lib/pacman/db.lck"
        if os.path.isfile(lockfile):
            os.unlink(lockfile)

    def upgrade_specific_packages(self, package_list: list) -> list:
        """Upgrade the packages that are in 'package_list'."""
        outdated_packages = self._outdated_packages(package_list)
        if outdated_packages:
            cmd = ["pacman", "--noconfirm", "-S"] + outdated_packages
            self.run(cmd)

        return outdated_packages

    def _outdated_packages(self, package_list: list) -> list:
        """Return the 'package_list' packages that are outdated."""
        outdated_packages = []
        try:
            output = subprocess.check_output(["pacman", "-Qu"])
        except subprocess.CalledProcessError:
            output = b""

        for line in output.splitlines():
            line = line.strip()
            pkg_match = re.match(r"^([^\s]*)\s", line.decode())
            if not pkg_match:
                continue

            pkg_name = pkg_match.group(1)
            if pkg_name in package_list:
                outdated_packages += [pkg_name]

        return outdated_packages

    @staticmethod
    def upgrade_all_packages():
        """Upgrade all packages."""
        ArchUpgrade.run(["pacman", "--noconfirm", "-Su"])

    def download_all_packages(self):
        """Download all packages."""
        self.download_package_db()
        self.run(["pacman", "--noconfirm", "-Suw"])

    def download_package_db(self):
        """Download the package database."""
        if self._download_package_db:
            return

        print("[INFO] Download the package database...")
        ArchUpgrade.run(["pacman", "--noconfirm", "-Sy"])
        self._download_package_db = True

    def upgrade_keyring_and_pacman(self):
        self.download_package_db()

        if not self._keyring_and_pacman_upgraded:
            self.upgrade_specific_packages(["archlinux-keyring"])
            self._keyring_and_pacman_upgraded = True

    def clean_package_cache(self):
        """Remove packages that are no longer installed from the cache."""
        self.run(["pacman", "--noconfirm", "-Scc"])

    @staticmethod
    def run(cmd, *args, print_command=True, **kwargs):
        """Execute the command 'cmd'."""
        if print_command:
            print()
            print("[RUN] " + subprocess.list2cmdline(cmd))

        subprocess.check_call(
            cmd,
            *args,
            **kwargs,
        )

    def wait_download_package_db(self):
        """Wait until the package database is downloaded."""
        successful = False
        minutes = 60
        hours = 60 * 60
        seconds_between_tests = 15 * minutes
        for _ in range(int((10 * hours) / seconds_between_tests)):
            try:
                self.download_package_db()
            except subprocess.CalledProcessError:
                minutes = int(seconds_between_tests / 60)
                print(
                    f"[INFO] Waiting {minutes} minutes before downloading "
                    "the package database...",
                    file=sys.stderr,
                )
                time.sleep(seconds_between_tests)
                continue
            else:
                successful = True
                break

        if not successful:
            print("Error: failed to download the package database...",
                  file=sys.stderr)
            sys.exit(1)


def parse_args():
    """Parse the command-line arguments."""
    usage = "%(prog)s [--option] [args]"
    parser = argparse.ArgumentParser(description=__doc__.splitlines()[0],
                                     usage=usage)
    parser.add_argument("packages",
                        metavar="N",
                        nargs="*",
                        help="Upgrade specific packages.")

    parser.add_argument(
        "-u",
        "--upgrade-packages",
        default=False,
        action="store_true",
        required=False,
        help="Upgrade all packages.",
    )

    parser.add_argument(
        "-d",
        "--download-packages",
        default=False,
        action="store_true",
        required=False,
        help="Download the packages that need to be upgraded.",
    )

    parser.add_argument(
        "-c",
        "--clean",
        default=False,
        action="store_true",
        required=False,
        help=("Remove packages that are no longer installed from "
              "the cache."),
    )

    parser.add_argument(
        "-n",
        "--no-refresh",
        default=False,
        action="store_true",
        required=False,
        help=("Do not download the package database (pacman -Sy)."),
    )

    parser.add_argument(
        "-w",
        "--wait-refresh",
        default=False,
        action="store_true",
        required=False,
        help=("Wait for a successful download of the package database "
              "(pacman -Sy)."),
    )

    return parser.parse_args()


def command_line_interface():
    """The command-line interface."""
    logging.basicConfig(level=logging.INFO, stream=sys.stdout,
                        format="%(asctime)s %(name)s: %(message)s")

    if os.getuid() != 0:
        print("Error: you cannot perform this operation unless you are root.",
              file=sys.stderr)
        sys.exit(1)

    nothing_to_do = True
    args = parse_args()
    upgrade = ArchUpgrade(no_refresh=args.no_refresh)

    if args.wait_refresh:
        upgrade.wait_download_package_db()
        nothing_to_do = False

    if args.packages:
        print("[INFO] Upgrade the packages:", ", ".join(args.packages))
        upgrade.upgrade_keyring_and_pacman()
        if not upgrade.upgrade_specific_packages(args.packages):
            print()
            print("[INFO] The following packages are already up-to-date:",
                  ", ".join(args.packages))
        nothing_to_do = False

    if args.download_packages:
        print("[INFO] Download all packages...")
        upgrade.download_all_packages()
        nothing_to_do = False

    if args.upgrade_packages:
        print("[INFO] Upgrade all packages...")
        upgrade.upgrade_keyring_and_pacman()
        upgrade.upgrade_all_packages()

        nothing_to_do = False

    if args.clean:
        print("[INFO] Remove packages that are no longer installed "
              "from the cache...")
        upgrade.clean_package_cache()
        nothing_to_do = False

    if nothing_to_do:
        print("Nothing to do.")
        print()

    sys.exit(0)


def main():
    try:
        command_line_interface()
    except subprocess.CalledProcessError as err:
        print(f"[ERROR] Error {err.returncode} returned by the command: "
              f"{subprocess.list2cmdline(err.cmd)}",
              file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    main()
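
For example, the script can be run as root with the options described above, or scheduled from cron for unattended upgrades (the path /usr/local/bin/archlinux-update.py below is only an illustration; adjust it to wherever you saved the script):

# Refresh the package database, upgrade all packages, and clean the cache
sudo python3 archlinux-update.py --upgrade-packages --clean

# Example root crontab entry: wait for a successful database refresh,
# then upgrade all packages and clean the cache every day at 03:30
30 3 * * * /usr/bin/python3 /usr/local/bin/archlinux-update.py -w -u -c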

Gentoo Linux: Unlocking a LUKS Encrypted LVM Root Partition at Boot Time using a Key File stored on an External USB Drive

Gentoo can be configured to use a key file stored on an external USB drive to unlock a LUKS encrypted LVM root partition.

We will explore in this article the general steps involved in configuring Gentoo to use an external USB drive as a key file to unlock a LUKS encrypted LVM root partition.

1. Create a key file on the USB stick and add it to the LUKS encrypted partition

Generate a key file on a mounted ext4 or vfat partition of a USB stick, which will be used by initramfs to unlock the LUKS partition:

dd if=/dev/urandom of=/PATH/TO/USBSTICK/keyfile bs=1024 count=4

Ensure that the partition on the USB drive has a label, as the initramfs will use this label to find where the key file is located.

Afterward, add the key file to the LUKS partition to enable decryption of the partition using that key file:

cryptsetup luksAddKey /dev/PART1 /PATH/TO/USBSTICK/keyfile

In this example, “/dev/PART1” is the partition where the LUKS encryption is enabled, and “/PATH/TO/USBSTICK/keyfile” is the location of the keyfile.

2. Find the UUID of the encrypted partition and the label of the USB drive

Use the lsblk command to find the UUID of the encrypted partition and the label of the USB drive:

lsblk -o +UUID,LABEL

3. Configure the boot loader (such as Systemd-boot, GRUB, Syslinux…)

Add to the boot loader configuration the following initramfs kernel parameters:

  • crypt_root=UUID=A1111111-A1AA-11A1-AAAA-111AA11A1111
  • root=/dev/LVMVOLUME/root
  • root_keydev=/dev/disk/by-label/LABELNAME
  • root_key=keyfile

Here is an example for Systemd-boot:

options dolvm crypt_root=UUID=A1111111-A1AA-11A1-AAAA-111AA11A1111 root=/dev/LVMVOLUME/root root_keydev=/dev/disk/by-label/LABELNAME root_key=keyfile

To ensure proper setup:

  • Customize the initramfs options for LVMVOLUME, LABELNAME, and UUID=A1111111-A1AA-11A1-AAAA-111AA11A1111 to match your specific case.
  • Verify that the ext4 or vfat partition of the USB drive that is labeled “LABELNAME” contains a file named “keyfile”.
  • Make sure that the modules “dm_mod” and “usb_storage” are included in the initramfs.
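
If the initramfs is generated with genkernel (which understands the crypt_root, root_key, and root_keydev parameters used above), LVM and LUKS support can typically be included with something along these lines:

genkernel --lvm --luks --install initramfs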

This method offers a convenient way to unlock a LUKS encrypted root LVM partition. The implementation process is well-documented, making it a suitable choice for those looking to secure their Gentoo Linux systems.

Gentoo Linux: Printer driver for the Brother QL-1110NWB

Installing the printer driver for the Brother QL-1110NWB on Gentoo Linux can be a bit tricky, but thanks to a helpful ebuild written by James Cherti, the process becomes a breeze. The ebuild automates the whole process of downloading and installing the appropriate driver for the Brother QL-1110NWB on Gentoo Linux.

Brother QL-1110NWB Driver installation on Gentoo

Create the file /etc/portage/repos.conf/motley-overlay.conf containing:

[motley-overlay]
location = /usr/local/portage/motley-overlay
sync-type = git
sync-uri = https://github.com/jamescherti/motley-overlay
priority = 9999

Update the repository:

emerge --sync motley-overlay

Install the Brother QL-1110NWB printer driver:

emerge -av net-print/brother-ql1110nwb-bin

The ebuild will automatically download the necessary driver package from Brother and install it on your system.

Finally, restart CUPS with:

systemctl restart cups

You can now register your new printer using the web interface at: http://localhost:631/

(Please add a star to the Git repository jamescherti/motley-overlay to support the project!)

Configure XFCE 4 programmatically with the help of watch-xfce-xfconf

Configuring XFCE 4 programmatically is useful if you wish to have the same XFCE 4 settings on several computers.

The command-line tool jamescherti/watch-xfce-xfconf (previously named: monitor-xfconf-changes) will help you to create shell scripts that can configure XFCE 4 programmatically. It will display the xfconf-query commands of all the Xfconf settings that are being modified by XFCE 4 programs (like xfce4-settings-manager, thunar, ristretto, etc.).

Please star watch-xfce-xfconf on GitHub to support the project!

How to use the command-line tool watch-xfce-xfconf?

The watch-xfce-xfconf command-line tool can be installed locally, in ~/.local/bin/watch-xfce-xfconf, using pip:

pip install --user watch-xfce-xfconf

Run xfce4-settings-manager in the background:

xfce4-settings-manager &

Execute watch-xfce-xfconf:

~/.local/bin/watch-xfce-xfconf

The xfconf-query commands will be displayed by watch-xfce-xfconf on the terminal every time an XFCE 4 setting is changed (with xfce4-settings-manager, thunar, ristretto, etc.).

You can then use the xfconf-query commands to configure XFCE 4 programmatically.
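
For instance, a generated configuration script may end up containing xfconf-query invocations similar to the following (the channels, properties, and values here are only illustrative):

xfconf-query -c xsettings -p /Net/ThemeName -s "Adwaita"
xfconf-query -c xfwm4 -p /general/workspace_count -s 4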
