Configuring Linux on a ThinkPad T420s or T420 Laptop for Optimal Performance

ThinkPad T420s and T420 laptops, despite their age, remain reliable for web browsing, word processing, and various other tasks. Linux can breathe new life into such dated computers, allowing them to perform efficiently.

I configured one of these laptops for my son, and he is now able to do his homework using it. With proper configuration, ThinkPad laptops can achieve optimal performance and power management. This article provides a guide to configuring X11/Xorg, kernel parameters, firmware, fan control, and power management settings to optimize the ThinkPad T420s or T420 for modern use.

Some instructions in this article are specific to Debian/Ubuntu-based distributions, but they can easily be adapted to other distributions such as Red Hat, Fedora, Arch Linux, Gentoo, and others.

X11/Xorg

To ensure proper functionality of Intel graphics and avoid issues such as black screens after waking from sleep, use the Intel driver instead of modesetting.

Create the configuration file /etc/X11/xorg.conf.d/30-intel.conf:

Section "Device"
  Identifier "Intel Graphics"
  Driver "intel"

  Option "Backlight" "intel_backlight"
EndSection

Ensure the intel driver is installed:

sudo apt-get install xserver-xorg-video-intel

IMPORTANT: Ensure that the integrated graphics are set as the default video card in the ThinkPad BIOS.
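After restarting the X session, you can verify that Xorg picked up the intel driver by searching its log (the log path may vary depending on the distribution and display manager):

grep -i "intel" /var/log/Xorg.0.log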

Kernel parameters

To ensure backlight control is working (increasing and decreasing brightness), add the following kernel parameter: acpi_backlight=native

On a Debian/Ubuntu based distribution, this can be appended to the kernel command line in the bootloader configuration, typically in /etc/default/grub (for GRUB users):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=native"

After modifying this file, update GRUB with:

sudo update-grub
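After rebooting, you can confirm that the parameter is active and that a backlight interface is exposed:

cat /proc/cmdline
ls /sys/class/backlight/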

Fan control

Without software to control the fan, it may run at maximum speed. To enable fan control, create the file /etc/modprobe.d/thinkpad_acpi.conf:

options thinkpad_acpi fan_control=1
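For the option to take effect, reload the thinkpad_acpi module (or simply reboot):

sudo modprobe -r thinkpad_acpi && sudo modprobe thinkpad_acpi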

After that, install zcfan. On a Debian/Ubuntu based distribution:

sudo apt-get install zcfan
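Depending on the distribution, the zcfan systemd service may need to be enabled and started manually:

sudo systemctl enable --now zcfan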

Packages

Ensure the necessary firmware for Atheros and Realtek network devices is installed. On Debian/Ubuntu-based distributions, install the following packages:

sudo apt install firmware-atheros firmware-realtek

It is also recommended to install essential packages for hardware encoding and decoding, including intel-microcode for the latest processor updates, which improve system stability and performance:

sudo apt-get install intel-microcode intel-media-va-driver-non-free i965-va-driver
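You can verify that VA-API hardware acceleration is available with the vainfo utility, which lists the supported decode/encode profiles:

sudo apt-get install vainfo
vainfo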

TLP

TLP is a power management tool that optimizes battery life. Install and configure it as follows:

sudo apt install tlp

Create the configuration file /etc/tlp.d/00-base.conf:

DEVICES_TO_DISABLE_ON_BAT="bluetooth wwan"
DEVICES_TO_ENABLE_ON_STARTUP="wifi"
DEVICES_TO_DISABLE_ON_LAN_CONNECT="wifi wwan"

TLP_DEFAULT_MODE=BAT

CPU_SCALING_GOVERNOR_ON_AC=performance
CPU_SCALING_GOVERNOR_ON_BAT=schedutil

# PCIe Active State Power Management (ASPM):
PCIE_ASPM_ON_AC=performance

# Set Intel CPU performance: 0..100 (%). Limit the
# max/min to control the power dissipation of the CPU.
# Values are stated as a percentage of the available
# performance.
CPU_MIN_PERF_ON_AC=70
CPU_MAX_PERF_ON_AC=100

# PCIe ASPM on battery: [default] performance powersave powersupersave
PCIE_ASPM_ON_BAT=powersupersave

# Runtime power management for PCI(e) bus devices: on=disable, auto=enable
RUNTIME_PM_ON_AC=on
RUNTIME_PM_ON_BAT=auto
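Enable and start TLP, then check that the settings have been applied:

sudo systemctl enable tlp
sudo tlp start
sudo tlp-stat -s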

Conclusion

The ThinkPad T420s and T420, though older models, remain reliable machines for everyday tasks. With the right configuration, these laptops can be revitalized, making them well-suited for modern use.

jc-dotfiles – A collection of configuration files for UNIX/Linux systems

The jamescherti/jc-dotfiles repository is a collection of configuration files. You can either install them directly or use them as inspiration for your own dotfiles. These dotfiles provide configurations for various tools, enhancing productivity and usability. This article will explore the contents of the jc-dotfiles repository, highlight the main configurations, and guide you through the installation process.

In addition to the jc-dotfiles repository, the author also recommends the following repositories:

  • jamescherti/jc-firefox-settings: Provides the user.js file, which holds settings to customize the Firefox web browser to enhance the user experience and security.
  • jamescherti/jc-xfce-settings: A script that can customize the XFCE desktop environment programmatically, including window management, notifications, keyboard settings, and more, to enhance the user experience.
  • jamescherti/jc-gnome-settings: A script that can customize the GNOME desktop environment programmatically, including window management, notifications, and more.
  • jamescherti/bash-stdops: A collection of Bash helper shell scripts.
  • jamescherti/jc-gentoo-portage: Provides configuration files for customizing Gentoo Linux Portage, including package management, USE flags, and system-wide settings.
  • jamescherti/minimal-emacs.d: A lightweight and optimized Emacs base (init.el and early-init.el) that gives you full control over your configuration. It provides better defaults, an optimized startup, and a clean foundation for building your own vanilla Emacs setup.
  • jamescherti/jc-vimrc: The jc-vimrc project is a customizable Vim base that provides better Vim defaults, intended to serve as a solid foundation for a Vim configuration.

The jc-dotfiles Repository Overview

The repository includes configurations for several components of a UNIX/Linux system, ranging from shell settings to terminal multiplexer and input customizations. Here’s a breakdown of the key configurations included:

1. Shell Configuration (.bashrc, .profile, .bash_profile)

The shell configuration files are optimized for efficient Bash shell settings:

  • Efficient Command Execution: Predefined aliases, functions, and environment variables to speed up common tasks.
  • Interactive Sessions: Custom prompt configurations and other interactive settings that improve the overall shell experience.

2. Terminal Multiplexer Configuration (.tmux.conf)

Tmux is a tool for managing multiple terminal sessions, and this configuration enhances your Tmux setup. It includes:

  • Session Management: Custom keybindings and layout configurations for improved session switching.
  • Productivity Boosters: Enhanced usability and efficiency for handling multiple tasks within terminal windows.

3. Readline Configuration (.inputrc)

The Readline configuration improves interactive shell input by allowing additional keybindings:

  • Custom Keybindings: Use Alt-h, Alt-j, Alt-k, and Alt-l for cursor movement, enabling Vim-style HJKL navigation within the terminal.
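As a minimal sketch, such bindings could be expressed in an .inputrc as follows (the exact bindings shipped in the repository may differ):

"\eh": backward-char
"\ej": next-history
"\ek": previous-history
"\el": forward-char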

4. Other Configurations

The repository also includes a variety of other configuration files and scripts:

  • .gitconfig: Custom Git settings to improve version control workflow. (Here is an article about .gitconfig: Optimized Git configuration ~/.gitconfig for performance and usability)
  • ranger: File manager configuration for enhanced navigation and file handling.
  • .fdignore: Ignore rules for fd, the fast search tool, to filter out unwanted files during searches.
  • .wcalcrc: Configuration for wcalc, a command-line calculator.
  • mpv: Media player settings for a better viewing experience.
  • picom: Window compositor configuration for optimized display and transparency effects.
  • feh: Image viewer configuration with useful options and custom keybindings.

Conclusion

The jamescherti/jc-dotfiles configuration scripts can enhance your workflow. Whether you install them directly or use them as inspiration for your own dotfiles, you’ll gain access to optimized settings for your shell, terminal, Git, and various other tools, all aimed at boosting productivity and efficiency.

bash-stdops – A collection of useful Bash Shell Scripts

The jamescherti/bash-stdops project is a collection of helpful Bash scripts that simplify various operations, including file searching, text replacement, and content modification.

I use these scripts in conjunction with text editors like Emacs and Vim to automate tasks, including managing Tmux sessions, replacing text across a Git repository, securely copying and pasting from the clipboard (prompting the user before executing commands in Tmux), and fixing permissions, among other operations.

The bash-stdops Script Collection Overview

Files, Paths, and Strings

  • walk: Recursively lists files from the specified directory.
  • walk-run: Executes a command on all files.
  • sre: Replaces occurrences of a specified string or regular expression pattern, with support for case-insensitive matching.
  • git-sre: Executes sre at the root of a Git repository to replace text within files.
  • path-tr, path-uppercase, path-lowercase: Processes a file path to convert the filename to uppercase or lowercase.
  • autoperm: Sets appropriate permissions for files or directories (e.g., 755 for directories).
  • path-is: Prints the path and exits with status 0 if the file is binary or text.

Git

  • git-dcommit: Automates the process of adding, reviewing, and committing changes in Git.
  • git-squash: Squashes new Git commits between the current branch and a specified branch.

SSH

  • esa: Manages the SSH agent, including starting, stopping, adding keys, and executing commands.
  • sshwait: Waits for an SSH server to become available on a specified host.

Tmux

  • tmux-cbpaste: Pastes clipboard content into the current tmux window with user confirmation.
  • tmux-run: Executes a command in a new tmux window. If inside tmux, it adds a new window to the current session; otherwise, it creates a window in the first available tmux session.
  • tmux-session: Attaches to an existing tmux session or creates a new one if it doesn’t exist.

X11/Wayland

  • xocrshot: Captures a screenshot, performs OCR on it, displays the extracted text, and copies it to the clipboard.

Misc

  • haide: Uses AIDE to monitor the file integrity of the user’s home directory.
  • cbcopy, cbpaste, cbwatch: Manages clipboard content by copying, pasting, or monitoring for changes.
  • outonerror: Redirects the command output to stderr only if the command fails.
  • over: Displays a notification once a command completes execution.
  • largs: Executes a command for each line of input from stdin, replacing {} with the line.

Conclusion

The jamescherti/bash-stdops scripts simplify a variety of tasks, including file manipulation, Git management, and SSH automation, improving efficiency and usability.

Running Large Language Models locally with Ollama (compatible with Linux, macOS, and Windows)

Running Large Language Models on your machine can enhance your projects, but the setup is often complex. Ollama simplifies this by packaging everything needed to run an LLM. Here’s a concise guide on using Ollama to run LLMs locally.

Requirements

  • CPU: Aim for a CPU that supports AVX-512, which accelerates the matrix multiplication operations essential for LLM inference; a command to check AVX support follows this list. (If your CPU does not support AVX, see Ollama Issue #2187: Support GPU runners on CPUs without AVX.)
  • RAM: A minimum of 16GB is recommended for a decent experience when running models with 7 billion parameters.
  • Disk Space: A practical minimum of 40GB of disk space is advisable.
  • GPU: While a GPU is not mandatory, it is recommended for enhanced performance in model inference. Refer to the list of GPUs that are compatible with Ollama. For running quantized models, GPUs that support 4-bit quantized formats can handle large models more efficiently, with approximate VRAM requirements as follows: ~4 GB for a 7B model, ~8 GB for a 13B model, ~16 GB for a 30B model, and ~32 GB for a 65B model.
  • For NVIDIA GPUs: Ollama requires CUDA, a parallel computing platform and API developed by NVIDIA. You can find the instructions to install CUDA on Debian here. Similar instructions can be found in your Linux distribution’s wiki.
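On Linux, you can check which AVX extensions your CPU supports by inspecting /proc/cpuinfo:

grep -oE 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u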

Step 1: Install Ollama

Download and install Ollama for Linux using:

curl -fsSL https://ollama.com/install.sh | sh

Step 2: Download a Large Language Model

Download a specific large language model using the Ollama command:

ollama pull gemma2:2b

The command above downloads the Gemma2 model by Google DeepMind. You can find other models by visiting the Ollama Library.

(Downloading “gemma2:2b” actually downloads “gemma2:2b-instruct-q4_0”, a quantized version of the 2-billion-parameter model optimized for instruction-following tasks such as chatbots. Quantization reduces the model’s precision from the original 16- or 32-bit floating-point representation to a more compact 4-bit format, significantly lowering memory usage and enhancing inference speed. However, this quantization can lead to a slight decrease in accuracy compared to the full-precision model.)

Step 3: Chat with the model

Run the large language model:

ollama run gemma2:2b

This launches an interactive REPL where you can interact with the model.
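You can also pass a prompt directly on the command line instead of using the interactive REPL:

ollama run gemma2:2b "Explain what a quantized model is in one sentence."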

Step 4: Install open-webui (web interface)

Open-webui offers a user-friendly interface for interacting with large language models downloaded via Ollama. It enables users to run and customize models without requiring extensive programming knowledge.

It can be installed using pip within a Python virtual environment:

mkdir -p ~/.python-venv/open-webui
python -m venv ~/.python-venv/open-webui
source ~/.python-venv/open-webui/bin/activate
pip install open-webui

Finally, execute the following command to start the open-webui server:

~/.python-venv/open-webui/bin/open-webui serve

You will also need to run the Ollama server alongside open-webui:

ollama serve
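By default, open-webui listens on http://localhost:8080 and expects the Ollama API to be reachable at http://localhost:11434; both defaults can be changed if needed.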

Conclusion

With Ollama, you can quickly run Large Language Models (LLMs) locally and integrate them into your projects. Additionally, open-webui provides a user-friendly interface for interacting with these models, making it easier to customize and deploy them without extensive programming knowledge.

Links

  • Ollama Library: A collection of language models available for download through Ollama.
  • Ollama @Github: The Ollama Git repository.
  • Compile Ollama: For users who prefer to compile Ollama instead of using the binary.

Making ‘cron’ notify the user about a failed command by redirecting its output to stderr only when it fails (non-zero exit code)

Monitoring the success or failure of cron jobs can be challenging, especially when multiple jobs cause cron to send emails even when they don’t fail. A more effective approach to handling cron job errors is to use a Bash script that directs the output to stderr only if the job fails, causing cron to notify the user about the error only when the script fails.

Script overview

The script provided below ensures that any command passed to it will redirect its output to a temporary file. If the command fails (i.e., returns a non-zero exit code), the script sends the content of the temporary file to stderr, which causes cron to notify the user.

Here is the complete script to be installed in /usr/local/bin/outonerror:

#!/usr/bin/env bash
# Description:
# Redirect the command's output to stderr only if the command fails (non-zero
# exit code). No output is shown when the command succeeds.
#
# Author: James Cherti
# License: MIT
# URL: https://www.jamescherti.com/cron-email-output-failed-commands-only/

set -euf -o pipefail

if [[ "$#" -eq 0 ]]; then
  echo "Usage: $0 <command> [args...]" >&2
  exit 1
fi

cleanup() {
  if [[ "$OUTPUT_TMP" != "" ]] && [[ -f "$OUTPUT_TMP" ]]; then
    rm -f "$OUTPUT_TMP"
  fi

  OUTPUT_TMP=""
}

OUTPUT_TMP=$(mktemp --suffix=.outonerror)
trap 'cleanup' INT TERM EXIT

ERRNO=0
"$@" >"$OUTPUT_TMP" 2>&1 || ERRNO="$?"
if [[ "$ERRNO" -ne 0 ]]; then
  cat "$OUTPUT_TMP" >&2
  echo "$0: '$*' exited with status $ERRNO" >&2
fi

cleanup
exit "$ERRNO"

To use this script with a cron job, save it as /usr/local/bin/outonerror and make it executable by running:

chmod +x /usr/local/bin/outonerror

Integration with Cron

Cron sends an email by default whenever a job produces output, whether standard output or error output. This is typically configured through the MAILTO environment variable in the crontab file. If MAILTO is not set, cron sends emails to the user account under which the cron job runs. Cron will only be able to send emails if the mail transfer agent (e.g., Postfix, Exim, Sendmail) is configured properly.

Here is how to schedule the cron job to use the outonerror script:

MAILTO="your-email@example.com"
* * * * * /usr/local/bin/outonerror your_command_here

With this setup, the cron job will execute your_command_here and only send an email if it fails, thanks to cron’s default behavior of emailing stderr output to the user.
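Before scheduling it, you can verify the behavior manually:

/usr/local/bin/outonerror true             # succeeds: no output
/usr/local/bin/outonerror ls /nonexistent  # fails: output sent to stderr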

Conclusion

This script is a simple yet effective solution for improving cron jobs by ensuring that the user is notified only when something goes wrong. It reduces unnecessary notifications for successful executions and provides clear error messages when failures occur.

Ansible: Installing and configuring Gitolite using Ansible for secure Git repository management

EDIT: The latest version of the code is available in the repository:
jamescherti/ansible-role-gitolite

Gitolite provides a way to manage Git repositories, control access to those repositories, and maintain a central configuration using simple configuration files and SSH keys.

Automating Gitolite Installation with Ansible

The Ansible tasks outlined in this article are designed to simplify the installation and configuration of Gitolite on your server. These tasks can automatically handle the entire setup process, including prerequisites like installing necessary packages and configuring system users and groups.

This automation significantly reduces the risk of human error and ensures a consistent setup across different environments.

The Ansible tasks:

---
# Automating Gitolite Installation with Ansible
# License: MIT
# Author: James Cherti
# URL: https://www.jamescherti.com/ansible-install-gitolite-linux/

- name: Install Gitolite
  block:
    - name: Check if the Operating System is supported
      fail:
        msg: "Operating System family is not supported: {{ ansible_os_family }}"
      when: ansible_os_family not in ["Debian", "RedHat"]

    - name: Install Gitolite on Debian-based Systems
      apt:
        name: gitolite3
      when: ansible_os_family == "Debian"

    - name: Install Gitolite on RedHat-based Systems
      yum:
        name: gitolite3
      when: ansible_os_family == "RedHat"

    - name: Create Gitolite system group
      group:
        name: "{{ gitolite_group }}"
        system: true

    - name: Create Gitolite system user
      user:
        name: "{{ gitolite_user }}"
        group: "{{ gitolite_group }}"
        home: "{{ gitolite_home }}"
        shell: "{{ gitolite_shell }}"
        create_home: true
        system: true

    - name: Ensure Gitolite home directory exists with proper permissions
      file:
        state: directory
        path: "{{ gitolite_home }}"
        owner: "{{ gitolite_user }}"
        group: "{{ gitolite_group }}"
        mode: 0700

- name: Configure Gitolite SSH key
  block:
    - name: Generate Gitolite SSH key pair if it does not exist
      become: true
      become_user: "{{ gitolite_user }}"
      command: ssh-keygen -t rsa -b 4096 -f {{ gitolite_ssh_key_path | quote }} -N ""
      args:
        creates: "{{ gitolite_ssh_key_path }}"

    - name: Set permissions for the Gitolite .ssh directory
      file:
        path: "{{ gitolite_ssh_directory }}"
        owner: "{{ gitolite_user }}"
        group: "{{ gitolite_group }}"
        mode: 0700

    - name: Set permissions for the SSH public key
      file:
        path: "{{ gitolite_ssh_key_path }}.pub"
        owner: "{{ gitolite_user }}"
        group: "{{ gitolite_group }}"
        mode: 0644

    - name: Set permissions for the SSH private key
      file:
        path: "{{ gitolite_ssh_key_path }}"
        owner: "{{ gitolite_user }}"
        group: "{{ gitolite_group }}"
        mode: 0600

- name: Setup Gitolite
  block:
    - name: Initialize Gitolite with the admin public key
      become: true
      become_user: "{{ gitolite_user }}"
      command:
        argv:
          - "gitolite"
          - "setup"
          - "-pk"
          - "{{ gitolite_ssh_public_key_path }}"
      args:
        creates: "{{ gitolite_home }}/repositories/gitolite-admin.git"

The required Ansible variables:

---
# Automating Gitolite Installation with Ansible
# License: MIT
# Author: James Cherti
# URL: https://www.jamescherti.com/ansible-install-gitolite-linux/

gitolite_user: gitolite
gitolite_group: gitolite
gitolite_shell: /bin/bash
gitolite_home: "/var/lib/{{ gitolite_user }}"
gitolite_ssh_directory: "{{ gitolite_home }}/.ssh"
gitolite_ssh_key_path: "{{ gitolite_ssh_directory }}/id_rsa"
gitolite_ssh_public_key_path: "{{ gitolite_ssh_directory }}/id_rsa.pub"
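As a sketch, assuming the tasks and variables above are saved as tasks/gitolite.yml and vars/gitolite.yml (hypothetical paths), they could be applied from a playbook as follows:

---
- hosts: git_servers
  become: true
  vars_files:
    - vars/gitolite.yml
  tasks:
    - import_tasks: tasks/gitolite.yml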


Ansible: Reintegrating /etc/rc.local in Linux systems that use Systemd as their init system

For years, /etc/rc.local has been a staple in Linux administration, providing a straightforward means to execute scripts or commands automatically upon system startup. However, with the transition to newer init systems like systemd, the /etc/rc.local script is no longer executed at boot time.

Ansible tasks that restore the /etc/rc.local script

The following Ansible tasks will create and configure /etc/rc.local and also ensure its execution by Systemd at boot time.

---
# Description: Reintegrate /etc/rc.local in Linux systems that use Systemd 
#              as their init system.
# Author: James Cherti
# License: MIT
# URL: https://www.jamescherti.com/ansible-config-etc-rc-local-linux-systemd/

- name: Check if /etc/rc.local exists
  stat:
    path: "/etc/rc.local"
  register: etc_rc_local_file

- name: Create the file /etc/rc.local should it not already exist
  copy:
    dest: /etc/rc.local
    owner: root
    group: root
    mode: 0750
    content: |
      #!/usr/bin/env bash
  when: not etc_rc_local_file.stat.exists

- name: Create the systemd service rc-local.service
  register: rc_local
  copy:
    dest: /etc/systemd/system/rc-local.service
    owner: root
    group: root
    mode: 0644
    content: |
      [Unit]
      Description=/etc/rc.local compatibility

      [Service]
      Type=oneshot
      ExecStart=/etc/rc.local
      TimeoutSec=0
      RemainAfterExit=yes

      [Install]
      WantedBy=multi-user.target

- name: Reload systemd daemon
  systemd:
    daemon_reload: yes
  when: rc_local.changed|bool

- name: Enable rc-local.service
  systemd:
    name: rc-local
    enabled: true
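To verify the setup, you can append a test command to /etc/rc.local, start the service, and check the system journal:

echo 'logger "rc.local executed"' | sudo tee -a /etc/rc.local
sudo systemctl start rc-local
journalctl -b | grep "rc.local executed"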

Emulating Cherry MX Blue Mechanical Keyboard Sounds on Linux

For people nostalgic for the tactile and audible feedback of typing on a mechanical keyboard, Cherrybuckle allows simulating the sounds of a mechanical keyboard with Cherry MX Blue key switches.

Cherrybuckle operates as a background process within a computer system, capturing and emitting a sound for each key pressed and released. It is a fork of the Bucklespring project that adds Cherry MX sounds to the default Bucklespring keyboard sounds.

Installing dependencies

The dependencies can be installed on a Debian or Ubuntu system using the following commands:

sudo apt-get install build-essential git
sudo apt-get install libalure-dev libx11-dev libxtst-dev pkg-config

Compiling and running Cherrybuckle on Debian/Ubuntu

Retrieve the project source code from the Git repository:

git clone https://github.com/jamescherti/cherrybuckle

Change the current working directory to “cherrybuckle”:

cd cherrybuckle

Compile the source code into an executable program:

make

Finally, execute Cherrybuckle:

./cherrybuckle

Creating and Restoring a Gzip Compressed Disk Image with dd on UNIX/Linux

Creating and restoring disk images are essential tasks for developers, system administrators, and users who want to safeguard their data or replicate systems efficiently. One useful tool for this purpose is dd, which allows for low-level copying of data. In this article, we will explore how to clone and restore a partition from a compressed disk image in a UNIX/Linux operating system.

IMPORTANT: There is a risk of data loss if a mistake is made. The dd command can be dangerous if not used carefully. Specifying the wrong input or output device can result in data loss. Users should exercise caution and double-check their commands before executing them.

Cloning a Partition into a Compressed Disk Image

To clone a partition into a compressed disk image, you can use the dd and gzip commands:

dd if=/dev/SOURCE conv=sync bs=64K | gzip --stdout > /path/to/file.gz

This command copies the content of the block device /dev/SOURCE to the compressed file /path/to/file.gz, 64 kilobytes at a time.

Restoring a Partition from a Compressed Disk Image

To restore a partition from a file containing a compressed disk image, use the following command:

gunzip --stdout /path/to/file.gz | dd of=/dev/DESTINATION bs=64K

This command decompresses the content of the compressed file located at /path/to/file.gz and copies it to the block device /dev/DESTINATION, 64 kilobytes at a time.

More information about the dd command options

Here are additional details about the dd command options:

  • The status=progress option makes dd display transfer statistics progressively.
  • The conv=noerror option instructs dd to continue despite read errors. However, ignoring errors might result in an incomplete or corrupted image, especially if errors occur in critical parts of the data.
  • The conv=sync option makes dd pad every input block with null bytes up to the block size. Combined with noerror (conv=noerror,sync), it keeps data offsets aligned in the image and helps recover as much data as possible from a source with occasional read errors.
  • Finally, the bs=64K option instructs dd to read or write up to the specified number of bytes at a time (in this case, 64 kilobytes). The default value is 512 bytes, which is relatively small. It is advisable to consider using 64K or even the larger 128K. However, while a larger block size speeds up the transfer, a smaller block size enhances transfer reliability.
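Combining these options, a more robust variant of the cloning command could look like this:

dd if=/dev/SOURCE conv=sync,noerror bs=64K status=progress | gzip --stdout > /path/to/file.gz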

Ensuring Data Integrity

Although the dd command automatically verifies that the input and output block sizes match during each block copy operation, it is prudent to further confirm the integrity of the copied data after completing the dd operation.

To achieve this, follow these steps:

Generate the md5sum of the source block device:

dd if=/dev/SOURCE | md5sum

Next, generate the md5sum of the gzip-compressed file:

gunzip --stdout /path/to/file.gz | md5sum

Ensure that the two md5sum fingerprints are equal. This additional verification step adds an extra layer of assurance regarding the accuracy and integrity of the copied data.
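Note that if the size of the source device is not an exact multiple of the block size, conv=sync pads the final block with null bytes, and the two fingerprints will then differ; in that case, compare checksums of a clone made without conv=sync.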

Gentoo: How to Speed Up emerge --sync

Synchronizing with the Gentoo Portage ebuild repository using emerge --sync can be slow when utilizing the rsync protocol. However, an effective solution exists that can greatly improve the synchronization speed: Configuring emerge --sync to synchronize using Git instead.

In this article, we will explore how to set up emerge to synchronize from the official Gentoo ebuild Git repository and save valuable time during the synchronizing process.

Step 1: Install Git using the following command:

sudo emerge -a dev-vcs/git

Step 2: Remove any file from the directory /etc/portage/repos.conf/ that configures the emerge command to use rsync.
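If the repository located at /var/db/repos/gentoo was previously synchronized using rsync, the directory may need to be removed first so that the initial Git synchronization can perform a clean clone:

sudo rm -rf /var/db/repos/gentoo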

Step 3: Create the file /etc/portage/repos.conf/gentoo.conf containing:

[DEFAULT]
main-repo = gentoo

[gentoo]

# The sync-depth=1 option speeds up initial pull by fetching 
# only the latest Git commit and its immediate ancestors, 
# reducing the amount of downloaded Git history.
sync-depth = 1

sync-type = git
auto-sync = yes
location = /var/db/repos/gentoo
sync-git-verify-commit-signature = yes
sync-openpgp-key-path = /usr/share/openpgp-keys/gentoo-release.asc
sync-uri = https://github.com/gentoo-mirror/gentoo.git

Step 4: Finally, run the following command to synchronize with the Gentoo ebuild repository using Git:

sudo emerge --sync

The initial Git clone will cause the first emerge --sync command to take some time. However, subsequent synchronizations will be significantly quicker, taking only a few seconds.

Using Git can be a great way to speed up synchronization with the Gentoo ebuild repository. By following the steps outlined in this article, you can clone the Portage repository to your local machine and keep it up to date with the latest changes, saving a significant amount of time on every sync.

Arch Linux: Preserving the kernel modules of the currently running kernel during and after an upgrade

One potential issue when upgrading the Arch Linux kernel is that the modules of the currently running kernel may be deleted. This can lead to a number of problems, including unexpected behavior, system crashes, or the inability to mount certain file systems (e.g. the kernel fails to mount a vfat file system due to the unavailability of the vfat kernel module).

The Arch Linux package linux-keep-modules (also available on AUR: linux-keep-modules @AUR), written by James Cherti, provides a solution to ensure that the modules of the currently running Linux kernel remain available until the operating system is restarted. Additionally, after a system restart, the script automatically removes any unnecessary kernel modules that might have been left behind by previous upgrades (e.g. the kernel modules that are not owned by any Arch Linux package and are not required by the currently running kernel).

The linux-keep-modules package keeps your system running smoothly and maintains stability even during major Linux kernel upgrades.

Make and install the linux-keep-modules package

Clone the repository and change the current directory to ‘archlinux-linux-keep-modules/’:

$ git clone https://github.com/jamescherti/archlinux-linux-keep-modules.git
$ cd archlinux-linux-keep-modules/

Use makepkg to make linux-keep-modules package:

$ makepkg -f

Install the linux-keep-modules package:

$ sudo pacman -U linux-keep-modules-*-any.pkg.tar.*

Finally, enable the cleanup-linux-modules service:

$ sudo systemctl enable cleanup-linux-modules

(The cleanup-linux-modules service deletes, at boot time, the Linux kernel modules that are not owned by any package.)

The linux-keep-modules Arch Linux package offers a solution to preserve kernel modules during and after upgrades, ensuring that the necessary modules for the currently running kernel remain present in the system even after the kernel is upgraded. This solution keeps your system running smoothly and maintains stability even during major upgrades.


Helper script to upgrade Arch Linux

In this article, we will be sharing a Python script, written by James Cherti, that can be used to upgrade Arch Linux. It is designed to make the process of upgrading the Arch Linux system as easy and efficient as possible.

The helper script to upgrade Arch Linux can:

  • Delete the /var/lib/pacman/db.lck lock file when pacman is not running,
  • Upgrade archlinux-keyring,
  • Upgrade specific packages,
  • Download packages,
  • Upgrade all packages,
  • Remove from the cache the pacman packages that are no longer installed.

The script provides a variety of options and is perfect for those who want to automate the process of upgrading their Arch Linux system (e.g. execute it from cron) and ensure that their system is always up to date.

Requirements: psutil
Python script name: archlinux-update.py

#!/usr/bin/env python
# Author: James Cherti
# License: MIT
# URL: https://www.jamescherti.com/script-update-arch-linux/
"""Helper script to upgrade Arch Linux."""

import argparse
import logging
import os
import re
import subprocess
import sys
import time

import psutil


class ArchUpgrade:
    """Upgrade Arch Linux."""

    def __init__(self, no_refresh: bool):
        self._download_package_db = no_refresh
        self._keyring_and_pacman_upgraded = False
        self._delete_pacman_db_lck()

    @staticmethod
    def _delete_pacman_db_lck():
        """Delete '/var/lib/pacman/db.lck' when pacman is not running."""
        pacman_running = False
        for pid in psutil.pids():
            try:
                process = psutil.Process(pid)
                if process.name() == "pacman":
                    pacman_running = True
                    break
            except psutil.Error:
                pass

        if pacman_running:
            print("Error: pacman is already running.", file=sys.stderr)
            sys.exit(1)

        lockfile = "/var/lib/pacman/db.lck"
        if os.path.isfile(lockfile):
            os.unlink(lockfile)

    def upgrade_specific_packages(self, package_list: list) -> list:
        """Upgrade the packages that are in 'package_list'."""
        outdated_packages = self._outdated_packages(package_list)
        if outdated_packages:
            cmd = ["pacman", "--noconfirm", "-S"] + outdated_packages
            self.run(cmd)

        return outdated_packages

    def _outdated_packages(self, package_list: list) -> list:
        """Return the 'package_list' packages that are outdated."""
        outdated_packages = []
        try:
            output = subprocess.check_output(["pacman", "-Qu"])
        except subprocess.CalledProcessError:
            output = b""

        for line in output.splitlines():
            line = line.strip()
            pkg_match = re.match(r"^([^\s]*)\s", line.decode())
            if not pkg_match:
                continue

            pkg_name = pkg_match.group(1)
            if pkg_name in package_list:
                outdated_packages += [pkg_name]

        return outdated_packages

    @staticmethod
    def upgrade_all_packages():
        """Upgrade all packages."""
        ArchUpgrade.run(["pacman", "--noconfirm", "-Su"])

    def download_all_packages(self):
        """Download all packages."""
        self.download_package_db()
        self.run(["pacman", "--noconfirm", "-Suw"])

    def download_package_db(self):
        """Download the package database."""
        if self._download_package_db:
            return

        print("[INFO] Download the package database...")
        ArchUpgrade.run(["pacman", "--noconfirm", "-Sy"])
        self._download_package_db = True

    def upgrade_keyring_and_pacman(self):
        self.download_package_db()

        if not self._keyring_and_pacman_upgraded:
            self.upgrade_specific_packages(["archlinux-keyring"])
            self._keyring_and_pacman_upgraded = True

    def clean_package_cache(self):
        """Remove packages that are no longer installed from the cache."""
        self.run(["pacman", "--noconfirm", "-Scc"])

    @staticmethod
    def run(cmd, *args, print_command=True, **kwargs):
        """Execute the command 'cmd'."""
        if print_command:
            print()
            print("[RUN] " + subprocess.list2cmdline(cmd))

        subprocess.check_call(
            cmd,
            *args,
            **kwargs,
        )

    def wait_download_package_db(self):
        """Wait until the package database is downloaded."""
        successful = False
        minutes = 60
        hours = 60 * 60
        seconds_between_tests = 15 * minutes
        for _ in range(int((10 * hours) / seconds_between_tests)):
            try:
                self.download_package_db()
            except subprocess.CalledProcessError:
                minutes = int(seconds_between_tests / 60)
                print(
                    f"[INFO] Waiting {minutes} minutes before downloading "
                    "the package database...",
                    file=sys.stderr,
                )
                time.sleep(seconds_between_tests)
                continue
            else:
                successful = True
                break

        if not successful:
            print("Error: failed to download the package database...",
                  file=sys.stderr)
            sys.exit(1)


def parse_args():
    """Parse the command-line arguments."""
    usage = "%(prog)s [--option] [args]"
    parser = argparse.ArgumentParser(description=__doc__.splitlines()[0],
                                     usage=usage)
    parser.add_argument("packages",
                        metavar="N",
                        nargs="*",
                        help="Upgrade specific packages.")

    parser.add_argument(
        "-u",
        "--upgrade-packages",
        default=False,
        action="store_true",
        required=False,
        help="Upgrade all packages.",
    )

    parser.add_argument(
        "-d",
        "--download-packages",
        default=False,
        action="store_true",
        required=False,
        help="Download the packages that need to be upgraded.",
    )

    parser.add_argument(
        "-c",
        "--clean",
        default=False,
        action="store_true",
        required=False,
        help=("Remove packages that are no longer installed from "
              "the cache."),
    )

    parser.add_argument(
        "-n",
        "--no-refresh",
        default=False,
        action="store_true",
        required=False,
        help=("Do not download the package database (pacman -Sy)."),
    )

    parser.add_argument(
        "-w",
        "--wait-refresh",
        default=False,
        action="store_true",
        required=False,
        help=("Wait for a successful download of the package database "
              "(pacman -Sy)."),
    )

    return parser.parse_args()


def command_line_interface():
    """The command-line interface."""
    logging.basicConfig(level=logging.INFO, stream=sys.stdout,
                        format="%(asctime)s %(name)s: %(message)s")

    if os.getuid() != 0:
        print("Error: you cannot perform this operation unless you are root.",
              file=sys.stderr)
        sys.exit(1)

    nothing_to_do = True
    args = parse_args()
    upgrade = ArchUpgrade(no_refresh=args.no_refresh)

    if args.wait_refresh:
        upgrade.wait_download_package_db()
        nothing_to_do = False

    if args.packages:
        print("[INFO] Upgrade the packages:", ", ".join(args.packages))
        upgrade.upgrade_keyring_and_pacman()
        if not upgrade.upgrade_specific_packages(args.packages):
            print()
            print("[INFO] The following packages are already up-to-date:",
                  ", ".join(args.packages))
        nothing_to_do = False

    if args.download_packages:
        print("[INFO] Download all packages...")
        upgrade.download_all_packages()
        nothing_to_do = False

    if args.upgrade_packages:
        print("[INFO] Upgrade all packages...")
        upgrade.upgrade_keyring_and_pacman()
        upgrade.upgrade_all_packages()

        nothing_to_do = False

    if args.clean:
        print("[INFO] Remove packages that are no longer installed "
              "from the cache...")
        upgrade.clean_package_cache()
        nothing_to_do = False

    if nothing_to_do:
        print("Nothing to do.")
        print()

    sys.exit(0)


def main():
    try:
        command_line_interface()
    except subprocess.CalledProcessError as err:
        print(f"[ERROR] Error {err.returncode} returned by the command: "
              f"{subprocess.list2cmdline(err.cmd)}",
              file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    main()
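For example, to refresh the package database, upgrade all packages, and remove obsolete packages from the cache, run the script as root:

sudo python archlinux-update.py --upgrade-packages --clean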

A tool to Execute a Command in a new Tmux Window

The Python script tmux-run.py allows executing a command in a new tmux window. A tmux window is similar to a tab in other software.

If the script is executed from within a tmux session, it creates a tmux window in the same tmux session. However, if the script is executed from outside of a tmux session, it creates a new tmux window in the first available tmux session.

(Requirement: libtmux)

The Python script: tmux-run.py

#!/usr/bin/env python
# License: MIT
# Author: James Cherti
# URL: https://www.jamescherti.com/python-script-run-command-new-tmux-window/
"""Execute a command in a new tmux window.

This script allows executing a command in a new tmux window (a tmux window is
similar to a tab in other software).

- If it is executed from within a tmux session, it creates a tmux window
in the same tmux session.
- However, if the script is executed from outside of a tmux
session, it creates a new tmux window in the first available tmux session.

"""

import os
import shlex
import shutil
import sys

import libtmux


SCRIPT_NAME = os.path.basename(sys.argv[0])


def parse_args():
    if len(sys.argv) < 2:
        print(f"Usage: {SCRIPT_NAME} <command> [args...]",
              file=sys.stderr)
        sys.exit(1)

    args = sys.argv[1:]
    # Resolve the command to its full path; keep the original name for the
    # error message if it cannot be found in PATH.
    command_path = shutil.which(args[0])
    if command_path is None:
        print(f"{SCRIPT_NAME}: no {args[0]} in "
              f"({os.environ.get('PATH', '')})", file=sys.stderr)
        sys.exit(1)

    args[0] = command_path
    return args


def get_tmux_session():
    tmux_server = libtmux.Server()
    if not tmux_server.sessions:
        print(f"{SCRIPT_NAME}: the tmux session was not found",
              file=sys.stderr)
        sys.exit(1)

    tmux_session_id = os.environ.get("TMUX", "").split(",")[-1]
    if tmux_session_id:
        try:
            return tmux_server.sessions.get(id=f"${tmux_session_id}")
        except Exception:  # pylint: disable=broad-except
            pass

    return tmux_server.sessions[0]


def run_in_tmux_window():
    try:
        command_args = parse_args()
        tmux_session = get_tmux_session()
        command_str = shlex.join(command_args)
        tmux_session.new_window(attach=True, window_shell=command_str)
    except libtmux.exc.LibTmuxException as err:
        print(f"Error: {err}.", file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    run_in_tmux_window()
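For example, assuming a tmux server with at least one session is running, the following opens htop in a new tmux window:

python tmux-run.py htop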

Gentoo Linux: Unlocking a LUKS Encrypted LVM Root Partition at Boot Time using a Key File stored on an External USB Drive

Gentoo can be configured to use a key file stored on an external USB drive to unlock a LUKS encrypted LVM root partition.

We will explore in this article the general steps involved in configuring Gentoo to use an external USB drive as a key file to unlock a LUKS encrypted LVM root partition.

1. Create a key file on the USB stick and add it to the LUKS encrypted partition

Generate a key file on a mounted ext4 or vfat partition of a USB stick, which will be used by initramfs to unlock the LUKS partition:

dd if=/dev/urandom of=/PATH/TO/USBSTICK/keyfile bs=1024 count=4

Ensure that the partition on the USB drive has a label, as the initramfs will use this label to find where the key file is located.

Afterward, add the key file to the LUKS partition to enable decryption of the partition using that key file:

cryptsetup luksAddKey /dev/PART1 /PATH/TO/USBSTICK/keyfile

In this example, “/dev/PART1” is the partition where the LUKS encryption is enabled, and “/PATH/TO/USBSTICK/keyfile” is the location of the keyfile.
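You can confirm that the key file was added by listing the LUKS key slots:

cryptsetup luksDump /dev/PART1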

2. Find the UUID of the encrypted partition and the label of the USB drive

Use the lsblk command to find the UUID of the encrypted partition and the label of the USB drive:

lsblk -o +UUID,LABEL

3. Configure the boot loader (such as Systemd-boot, GRUB, Syslinux…)

Add to the boot loader configuration the following initramfs kernel parameters:

  • crypt_root=UUID=A1111111-A1AA-11A1-AAAA-111AA11A1111
  • root=/dev/LVMVOLUME/root
  • root_keydev=/dev/disk/by-label/LABELNAME
  • root_key=keyfile

Here is an example for Systemd-boot:

options dolvm crypt_root=UUID=A1111111-A1AA-11A1-AAAA-111AA11A1111 root=/dev/LVMVOLUME/root root_keydev=/dev/disk/by-label/LABELNAME root_key=keyfile

To ensure proper setup:

  • Customize the initramfs options for LVMVOLUME, LABELNAME, and UUID=A1111111-A1AA-11A1-AAAA-111AA11A1111 to match your specific case.
  • Verify that the ext4 or vfat partition of the USB drive that is labeled “LABELNAME” contains a file named “keyfile”.
  • Make sure that the modules “dm_mod” and “usb_storage” are included in the initramfs.
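If the initramfs is built with genkernel, for instance, LUKS and LVM support can be included with a command along these lines:

sudo genkernel --luks --lvm --install initramfs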

This method offers a convenient way to unlock a LUKS encrypted root LVM partition. The implementation process is well-documented, making it a suitable choice for those looking to secure their Gentoo Linux systems.

How to make Vim edit/diff files from outside of Vim? (e.g. from a shell like Bash, Zsh, Fish..)

The Vim editor offers the ability to connect to a Vim server and make it perform various tasks from outside of Vim. The command-line tools vim-client-edit, vim-client-diff and the vim_client Python module, written by James Cherti, can be used to easily find and connect to a Vim server and make it perform the following tasks:

  • Edit files or directories in new tabs (The command-line tool vim-client-edit),
  • Diff/Compare up to eight files (The command-line tool vim-client-diff),
  • Evaluate expressions and return their result (The Python module vim_client),
  • Send commands and expressions to Vim (The Python module vim_client).

The command-line tools vim-client-edit and vim-client-diff are especially useful when a quick edit or comparison needs to be performed on a file from outside of Vim (e.g. from a shell like Bash, Zsh, Fish, etc.).

Additionally, the vim_client Python module allows running expressions on a Vim server and retrieving their output, which can be useful for automating tasks or scripting. For example, you can use vim-client to run a search and replace operation on a file or directory, or to perform a complex diff operation between two files.

Overall, vim-client is a powerful tool for interacting with Vim from outside the editor through the vim-client-edit and vim-client-diff command-line tools. The vim_client Python module can also be used to run Vim expressions and retrieve their output, which can help automate various tasks.

Please star vim-client on GitHub to support the project!

Requirements

To use vim-client, you will need to have Vim and Python installed on your system.

Installation

The vim-client package can be installed with pip:

$ sudo pip install vim-client

Execute Vim server

The Vim editor must be started with the option "--servername", which enables the Vim server feature that allows clients to connect and send commands to Vim:

$ vim --servername SERVERNAME

Make Vim server edit multiple files in tabs

Editing a list of files in new tabs:

$ vim-client-edit file1 file2 file3 

Make Vim server diff files (like vimdiff)

Comparing/diff up to eight files:

$ vim-client-diff file1 file2

Useful ~/.bashrc aliases:

Adding the following aliases to ~/.bashrc is recommended as it makes it easy to execute the command-line tools vim-client-edit and vim-client-diff:

alias gvim=vim-client-edit
alias vim=vim-client-edit
alias vi=vim-client-edit
alias vimdiff=vim-client-diff
