Sunday, March 17, 2024

New APT-offline-GUI icon


5/5/24 (see Notes)

===================

Warning

There are potential security risks involved in connecting a phone (or any device with wireless capability) to an "air gap" PC. Even if the phone's wireless capability is supposedly disabled via software, malware could re-enable it, perhaps in burst-mode, and even if it remained disabled, malware on the phone could still access data on the PC. To be certain that hackers and malware can't reach the PC via the phone, a "dumb" storage device such as a flash drive or SD-card should be used to relay data between the PC and the phone. Many if not most smartphones can interface directly with external drives, and if it's not convenient to move or copy files or folders with your phone's default file manager, you can install a dual-pane manager such as X-plore. Another possibility would be to use an MX-Linux installation just for making changes, and duplicate installations for accessing/processing data (details below); since there would be no data on the installation being changed, you could connect the phone directly to the PC.

====================

My original APT-offline-GUI icon was too abstract and looked crude by icon standards, so I devised a replacement which expresses the essential aspects of using APT-offline with a phone as the internet-access device, using a combination of standard icons and shapes to give it a more polished appearance. It's more complex than the typical icon, but APT-offline is so unique that it can't be lumped in with anything else and symbolized by a simple icon. The details are too small to be seen at typical icon sizes, but they could be seen by increasing the icon size, which is an option wherever icons are used.

So, in the unlikely case that the icon were incorporated into APT-offline-GUI, and APT-offline were included in a Linux release, users could examine the icon and get a basic idea of what APT-offline does. They might realize that it would allow them to create a totally offline, totally secure installation, and to make changes to it without compromising its security. [1]   

In order to automatically appear in a panel or dock when the GUI is launched, and to be pinned in place to create a launcher in the panel or dock, the icon would have to be assigned an official name to avoid conflicts with other icon-names, the name would have to be incorporated into the APT-offline-GUI source code, and the icon would have to be placed in the appropriate icon-directory as part of the APT-offline-GUI installation-process. As it is, a generic icon is used.
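For the curious, panel/dock icons on Linux desktops are wired up through the freedesktop.org conventions, so the process would look roughly like the sketch below. This is purely hypothetical - the icon name "apt-offline-gui" and the paths are my assumptions, not part of the actual APT-offline-GUI package:

    # Hypothetical wiring (not part of the real package):
    # 1. Install the icon, under its official name, into the standard
    #    hicolor theme directory:
    cp apt-offline-gui.png /usr/share/icons/hicolor/48x48/apps/apt-offline-gui.png
    # 2. Reference that name in the app's .desktop file, e.g.
    #    /usr/share/applications/apt-offline-gui.desktop:
    #        [Desktop Entry]
    #        Type=Application
    #        Name=APT-offline GUI
    #        Exec=apt-offline-gui
    #        Icon=apt-offline-gui
    # 3. Refresh the icon cache so panels and docks pick up the new icon:
    gtk-update-icon-cache /usr/share/icons/hicolor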

In the new icon, the black arrow symbolizes the exchange of data between the PC and phone. This data consists of a) "signature files" (SFs), which contain lists of URLs of files to download, plus information which allows the downloaded files to be validated [2]; and b) the files themselves, which are downloaded, validated, and stored in a folder reserved for the files listed in the corresponding SF, by the phone's installation of APT-offline running in "get" mode, perhaps after taking the phone to a suitable access point, such as an area with 5G coverage or a fast wifi hotspot.
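To make get mode concrete: on the phone, a get-op boils down to a single command along these lines (a minimal sketch - the paths are placeholders, and the thread count is just an illustration):

    # Run in the phone's Linux terminal (the Userland app - see this site's
    # main article). Download and validate every file listed in the SF,
    # storing the results in the download-destination folder:
    apt-offline get /path/to/apt-offline.sig -d /path/to/DDF --threads 4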

Error messages about files not being found are common during the download process, but they're typically insignificant, because signature files contain redundant listings to compensate for differences in server organization: if a file isn't found in one location on a particular server, it's found in another.

The downloaded files consist of software modules/packages, and/or "package index" files. A complete set of package index files is known as a package index, which contains information on all of the packages in the relevant online repositories, including which packages are required by each app. In some types of Linux, certain subsets of the online package index can be downloaded and installed, to reduce the amount of data required for an update, although I would caution against de-selecting any sections from the default selections unless you really know what you're doing, because you might end up needing the de-selected sections to make the desired changes to the PC. For example, I once de-selected the updates sections (an option in Ubuntu at the time) because I was just installing apps, and it caused a problem: one of the apps I was installing required packages which were listed in one or more of the updates sections. So I restored the updates sections, without knowing whether it would solve the problem, and fortunately it did.

In Linux, a software-installation cycle consists of downloading package index files, then installing them, then specifying the desired software (such as apps or a software update, which updates the OS and apps, and can require hundreds of MB). This causes the software/package manager to refer to the "local" (installed) package index and generate a list of packages (software modules) which must be downloaded in order to install the specified software. Some of the modules might already be installed, and wouldn't have to be installed again, which is partly why the list must be generated on the installation of interest; the package manager takes many other things into consideration when generating this list. Once the list is complete, the packages would be downloaded and installed.
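You can watch APT generate such a list without downloading anything, which makes the concept concrete. On any Debian-based installation (the package name "gimp" is just an example):

    # Ask APT to print the download list for installing GIMP, based on the
    # local package index, without actually downloading anything:
    apt-get install --print-uris -qq gimp
    # Each output line gives a URL, a filename, a size, and a checksum -
    # essentially the information recorded in an APT-offline signature file.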

With a direct high-speed internet connection, it's easy to complete this cycle in a single day, while the package index reflects the latest state of the repository. But when using APT-offline, the interval between downloading package-index files and downloading packages might exceed a week, and the package index used for generating the list of packages to download would no longer reflect the state of the repository when the packages are downloaded. So, in previous revisions, I warned of a potential scenario in which packages might be deleted from the repository during the interval between downloading package index files and using an SF based on those files in an attempt to download packages. But then it occurred to me that software modules are probably retained in the repository for at least a few weeks after being replaced by new revisions and de-listed from the package index, which would eliminate the potential for such a scenario as long as the installation cycle is performed in a reasonable amount of time.

In case you're wondering, the online package index can't be used directly, for various reasons, which would be very difficult to explain because APT (Advanced Packaging Tool, the software/package manager) and APT-offline are the result of years of development during which many complex factors had to be considered. Various ways to accomplish the same thing (installing software on a Debian-based system without a direct internet connection) have been tried, and the combination of APT and APT-offline was found to be the optimal approach.

After downloading the files, the phone (or the storage device, such as an SD-card, being used for passing data between the phone and the PC - see warning above) would be connected to the PC, and the PC's installation of APT-offline-GUI would be used in its install mode to navigate to the aforementioned folder reserved for the files listed in the corresponding SF, and to initiate an install-op on those files. (There are other ways to use APT-offline to accomplish the same thing, but this is the most convenient and efficient.) This would completely install the files, except for app-packages being installed for the first time on that installation, in which case APT-offline places them in /var/cache/apt/archives, where packages are placed for installation during a normal installation process. To completely install them, the PC's software manager (Advanced Packaging Tool, or APT) would then be given a command to install the apps for which the packages were obtained, such as by entering "sudo apt install <app name>" for each app. This ensures that nothing gets installed which isn't part of an app, such as a piece of malware that somehow got slipped in with the requested files - which is unlikely, although you should protect the downloaded files until they're installed, and install them at the first opportunity. To make it easy to install the apps, it might help to keep a list of the apps which were requested during the set-operation which ultimately produced the downloaded files. (The name of the corresponding CNF [2] is a logical place for such a list, in an abbreviated form.) You could sift through the downloaded files to remember which apps you had requested, but there might be hundreds of files.
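For those who prefer the terminal, the command-line equivalent of the install-op looks like this (a sketch - the DDF path is a placeholder, and "gimp" is just an example app):

    # Feed the downloaded, validated files to the local APT system:
    sudo apt-offline install /path/to/DDF
    # For apps being installed for the first time, the packages now sit in
    # /var/cache/apt/archives; complete the installation with APT:
    sudo apt install gimp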

The sky-blue arrows in the icon represent internet connections, although the PC's arrow has a negation-symbol superimposed to indicate the lack of an internet connection, or at least of one with sufficient bandwidth to download the amount of data typically required to make a change to a Linux installation (updating the package index alone requires a download of 25-80MB, and installing some large apps can require over 100MB).

The list-icon on the phone-icon symbolizes a signature file stored on the phone. The lock-icon symbolizes APT-offline's "bullet-proof" level of security guaranteed to prevent corrupted files or malware from being installed [3].

An icon for Debian packages is used as the icon's background, to symbolize the ultimate goal, which is to install Debian packages.


Notes

Revisions

5/5/24 - Added an instruction to take a photo of the wi-fi card before removing it, and deleted previous revision-notes.


[1] To create an "air gap" (EM gap) PC, get a barebones mini-PC with a wifi card, take a photo of the wifi card in case you ever decide to re-install it, and remove the card. (I assume that if the wifi is simply disabled via software, there is some way to activate it with malware and use the wifi in burst-mode. Call me paranoid, but it doesn't hurt to be paranoid when it comes to data security.) Use long-nose pliers to pull the tiny gold-plated RF connectors off (assuming that the wireless card is horizontal, you would pull the connectors straight up), and cover the connectors with tape to prevent any potential shorting. Install memory, but don't install any internal storage (use external storage devices with encrypted and/or unencrypted partitions for storage and backup), and avoid any peripherals with wireless capability, because hackers might be able to use them to get into the OS. Never connect the PC to the internet when using an installation which has been used for accessing sensitive data, just to be on the safe side. However, a nonpersistent "live" installation which has been shut down probably wouldn't have any session data on it, and would probably be safe to use. For the keyboard, I use an E-SDS Waterproof Industrial Keyboard with Touchpad, and to share my keyboard and monitor with up to four PCs, I use a CKLau-64H2ua KM switch. Neither has wireless capability.

Hackers want us to believe that they can spy on air-gap PCs via the power lines, but this is BS intended to discourage the use of air-gap PCs, which can't be hacked, at least without extraordinary measures which require physical access to the PC. There is no way that any useful data can make it out to the power lines. There isn't even any useful data on the power-supply pins of any of the chips in a PC, and even if there were, it wouldn't make it through the power supply to the power line, which is typically noisy anyways.

Authentication-key updates

Each installation contains public keys which allow the digital signatures on InRelease files (which contain checksums for package-index files) to be verified, and the keys are updated periodically (because the signing keys are changed periodically); with a direct internet connection, this takes place during package index updates. However, key updates aren't strictly required, as explained in the relevant entry of this blog. If you use a recent release to create an installation, the keys will probably be current for a while, and if you install all of the software right after creating the installation, you might be able to avoid the "cannot authenticate" messages which appear when you install software while the keys are out of date.
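If you want to check how long your keys will remain current, you can inspect the archive keyring's expiry dates (a sketch - this is the standard keyring location on Debian, and it varies between distributions):

    # List the archive signing keys, including their expiry dates:
    gpg --show-keys /usr/share/keyrings/debian-archive-keyring.gpg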

Types of installations for air-gap PCs

You could run the air-gap PC with a "full" installation (preferably encrypted, to protect the data which it would retain) on a flash drive, and conceal the drive when not in use. (However, even if a full installation were encrypted, any data on it would be accessible to hackers while it's running, and the warning at the top of this page would apply.) To create a full installation, select a log-in password (or two, if you intend to create an encrypted installation), boot the PC from a live installation, run its installation program (which would include an encryption option), and use a freshly-formatted USB3 flash drive as the destination for the full installation, because full installations require USB3 drives, and some installers won't overwrite a previous installation. In case there's malware on the drive, perform a slow format to overwrite the drive; this can take hours for large flash drives, and should be done at your convenience before performing the installation process. In my experience, Sandisk Ultra-Fits and Ultra-Luxes are the only flash drives that consistently work well for full installations, although I've tried just a few major brands. Another option would be to use a small nonvolatile memory module, as long as it's easy to conceal when it's not in use.
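If you prefer the command line, the overwrite can be done with dd (a sketch - /dev/sdX is a placeholder, and this destroys everything on the target drive, so triple-check the device name first):

    # Identify the flash drive by its size and lack of mountpoints:
    lsblk
    # Overwrite the entire drive with zeros (DESTRUCTIVE - verify /dev/sdX!):
    sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress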

MX-Linux, Snapshots, and NP "live" installations

I prefer to use MX-Linux (which is Debian-based, and thus compatible with APT-offline), because it has a "killer app" known as Snapshot, which makes it easy to turn an MX-Linux installation, with all of its settings and added software, into an ISO, which can then be used to create duplicate installations of various types. So, the full installation could be used as a "master," used only for making changes and Snapshots, and the Snapshots could be used for creating nonpersistent flash-drive installations (NPFDIs), a.k.a. nonpersistent "live" installations, which are ideal for running air-gap PCs, because when they're shut down, they don't retain any trace of data accessed with them. Another advantage of this type of installation is that it's easy to replace if it fails, or gets lost or stolen. I put them on USB2 drives, because they're adequate for this purpose, they're cheap, and they run cool, indicating low power consumption.

For storage, I use separate USB3 flash drives with encrypted and unencrypted partitions, and several backups scattered around. My system consists of a) a low-security drive (used on online and offline PCs) with a large unencrypted partition mainly for downloads, and a small encrypted one for text files that don't contain sensitive information; and b) a high-security drive (used on air-gap PCs only) with an encrypted partition. The system also includes a primary backup for each of these drives, and secondary backups which contain copies of the low- and high-security drives (but not all downloads - only the ones which I consider to be reference material and want to archive), plus, in some cases, a copy of my latest Snapshot-ISO, or a nonpersistent live installation of my latest Snapshot-ISO (which requires the entire drive). The primary backups are used for backing up everything as soon as it's downloaded, created or revised. The secondary backups are updated weekly where appropriate.

The high-security drives contain a "heap" folder (named "disseminate") which contains a) a copy of everything which has been added to the high-security drives since the last time the heap-folder was cleared (as described below), and b) a weekly-updated folder which contains a copy of the encrypted partition on the low-security drives. After updating b), the previous week's copy of the heap-folder is deleted from each of the secondary backups, and the current heap-folder is copied to each of them.
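I do this with a file manager, but the weekly copy could also be scripted; a sketch, assuming the drives mount at these hypothetical paths:

    # Mirror the heap-folder to a secondary backup; --delete removes files
    # which no longer exist in the source, which takes care of deleting the
    # previous week's copy in the same pass:
    rsync -a --delete /media/high-security/disseminate/ /media/secondary-backup/disseminate/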

When the heap-folder becomes too large and takes too long to copy to the secondary backups on a weekly basis, the heap folder on the main high-security drive is cleared by disseminating its contents to suitable locations in the high-security drive's main file system, and by deleting what I no longer consider to be worth saving. The main high-security drive is then copied to each of the secondary backups. Assuming that it's been over a year since doing this, I also reformat all of the drives in my storage/backup system and refresh/update their contents. (I've considered using SD-cards instead of flash drives, to speed this up.) Each user would adapt this system to their needs.

Creating Snapshot-ISOs and nonpersistent live installations

When I'm running a full MX-Linux installation on my air-gap PC and using the MX-Linux Snapshot tool to create an ISO of the full installation, I use an SD-card to store the ISO, which saves considerable time, because Snapshot ISOs are typically huge, and SD cards have a very fast write-speed. Even so, creating a Snapshot takes about ten minutes. Creating NPFDIs is just a matter of copying an ISO to a flash drive in a special format, using a USB-installer program such as Etcher or MX Live USB Maker, an MX Tool on MX-Linux. This process is simple and takes about ten minutes.

The disadvantage of using NPFDIs to run the PC is that changing a setting, and making it persistent, requires changing the setting on the full installation, then creating a new ISO and a new NPFDI. So, I usually wait until I have to make a few changes, or an important one, before going through this process.

MX-Linux "master" installations: no data to hack

Another advantage of using the MX-Linux full installation as a "master," i.e. for nothing except making changes and Snapshot-ISOs, is that it can be connected directly to the internet to update the package index and install software, without having to worry about compromising any data. Using a direct connection to update the package index also ensures that the installation's keys (which ultimately authenticate packages) are up-to-date, although this isn't strictly necessary if a major server is used, since any problems with it would be discovered quickly (many others would be constantly connecting to it directly), and the chances that someone would be able to spoof a repository are negligible.

If a master-installation fails (or if it's inadvertently used to run the PC and might contain sensitive data, and you want to be able to connect it directly to the internet), it could be re-created easily from the latest Snapshot-ISO. The new installation wouldn't have a package index, but it wouldn't need one until installing more software, by which time it might need to be updated anyways.

I also keep a record of every change (including every setting) that I make to the MX-Linux master installation, to minimize the number of revision-cycles (i.e. make changes to the master installation, then generate a Snapshot-ISO, then create a working-installation, and then use it for a while and realize that more changes are required) before arriving at a stable installation with a new MX-Linux release.

MX-Linux PERSISTENT installations

MX-Linux uses the "Antix live system," which allows for the creation of PERSISTENT live installations which could probably be used for running an air-gap PC without compromising security, because they can be set up to have "root" (software) persistence (so that you could make persistent changes to the installation), without "home" (data) persistence (so that the installation wouldn't retain data after being shut down), and they can be encrypted.

However, persistent live MX installations are difficult to use (although better-written instructions/explanations, and experience, would help). If a persistent live installation were used for running the PC all the time, it would be more likely to fail than a full installation used just for making changes and Snapshots, and if you wanted a current backup, you'd still have to make a Snapshot every time you made changes. You'd avoid the step of making a new NP live installation, but making NP live installations is simple and takes only about ten minutes. So, I'm sticking with my system of using a full installation just to make changes and Snapshots, and NP live installations made from the Snapshots to run the PC. But for certain applications, the flexibility of persistent MX live installations is a big advantage.

[2] APT-offline signature files (SFs) are text files, but with a .sig filename extension, which are generated by the PC's installation of APT-offline running in "set" mode. In this mode, APT-offline uses the PC's software/package manager (Advanced Packaging Tool, or APT) to perform the most complex aspects of the set-process. The information which allows the downloaded files to be validated consists of reference checksums and file sizes (both obtained from the "local" or installed package index), and names to assign to files if they are found to be valid. For packages, the replacement name is simply the original name; by using the original name as the replacement name, APT-offline can replace the original name with the replacement name whenever a file is found to be valid, regardless of whether the file in question is a package-index file or a package.
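On the command line, set mode looks like this (a minimal sketch - the filename is APT-offline's default, and "gimp" and "marble" are just example packages):

    # Generate an SF for a package-index update:
    sudo apt-offline set apt-offline.sig --update
    # Or generate an SF for installing specific apps:
    sudo apt-offline set apt-offline.sig --install-packages gimp marble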

APT-offline names every SF "apt-offline.sig" by default. As part of a system I devised to organize and simplify the overall process of using APT-offline, the default name is used for every signature file, but each SF is placed in a user-created "change-name folder" (CNF), along with a user-created folder named DDF (download-destination folder), which is reserved for the files listed in the SF and receives them as they are downloaded and validated. The purpose of CNFs is to identify the purpose of the corresponding SF/DDF pair, and to separate SF/DDF pairs from each other, since they're all named alike and would clash if placed in the same directory. (The reason for using generic names for SFs and DDFs will become clear later.) A CNF's name might include a description of the target installation, the date, and an abbreviated description of the changes which the folder's contents are intended to produce. For example, a CNF might be named "CNF-Brix-Xub2404-dd-mm-yy-update," or "CNF-Brix-Xub2404-dd-mm-yy-gimp-marble-update," where Brix is the name of a mini-PC, Xub2404 is the Xubuntu ("zoo-boon-tu") 24.04 installation on the Brix (some PCs have multiple installations on them), and GIMP and Marble are large popular apps. (GIMP is a powerful image-processor, and Marble is a virtual-globe app like Google Earth - not as sophisticated, but it can be used without an internet connection.)
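Setting up a CNF is just a couple of folder operations; as a sketch (using the example name above):

    # Create the change-name folder with its empty download-destination folder:
    mkdir -p "CNF-Brix-Xub2404-dd-mm-yy-gimp-marble-update/DDF"
    # Move the freshly-generated SF into the CNF, next to the DDF:
    mv apt-offline.sig "CNF-Brix-Xub2404-dd-mm-yy-gimp-marble-update/"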

The basic idea behind my system is to reduce the process of initiating a get-op to a series of screen taps which can be performed on a phone even in a distracting environment, and to be able to consecutively perform multiple get-ops in a single download session, while keeping track of each SF/DDF pair's purpose. So, at home, the phone would be connected to the PC, and the PC's file manager would be used to move each CNF into a separate get-op "hopper" directory (a get-process directory, or GPD) on the phone. Then, each SF/DDF pair would be moved out of its CNF and directly into the corresponding GPD. This allows a fixed get-command (which translated to English is "go to GPDn, get the files listed in apt-offline.sig, and put them in DDF") to be used for initiating a get-op on the SF/DDF pair in the corresponding GPD (GPD1, GPD2, etc.). If the SF/DDF pair were left in the CNF, or if the SF and DDF had unique names, a new command would be required for every get-op, which would be highly impractical. So, if you have three PCs, you might want to perform three get-ops in a single download session, and you would have three GPDs, each with a fixed get-command; the resulting layout is sketched below.
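Just before a download session, the phone's hopper directories would look something like this (the exact CNF names don't matter; the layout does):

    GPD1/
        apt-offline.sig      (moved out of the CNF below)
        DDF/                 (empty; receives the downloads)
        CNF-Brix-Xub2404-dd-mm-yy-update/   (now empty; identifies the pair's purpose)
    GPD2/
        (a second PC's SF/DDF pair and emptied CNF)
    GPD3/
        (a third PC's SF/DDF pair and emptied CNF)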

================

Note

Getting the path for the get-commands is tricky; you might know of a handy technique. I ended up installing a file manager named X-plore on the phone, because when X-plore is used for navigating to some directory in the phone's file system, it displays the path and allows it to be copied and pasted. There are other file managers that do this, and by the time you try it, the PC's file manager or the phone's pre-installed file manager might be able to do this. I pasted the path into a text file on the phone and moved the file to the PC to process the text to create the get-commands, because I couldn't do it on the phone, but you might be able to do it on yours. So, take the following instructions as a suggested approach, and feel free to modify them if you have a better one.

================

To create the get-commands, you would use a PC to create a text file named "get-command-1.txt" containing the basic get-command ("apt-offline get path/GPD1/apt-offline.sig -d path/GPD1/DDF," where "path" would be left empty for the time being), and move the file to the phone. Then, on the phone, you would install a file-manager app, such as X-plore, which allows you to navigate to a directory and copy its path so that it can be pasted where required. Then you would navigate to a certain directory (specified in the main article in this blog) which is the optimal location for GPDs, copy its path, paste it into the aforementioned text file, and move the text file back to the PC, because it's far easier to perform the next steps on a PC than on a phone - at least it was for me when using my phone. Then you would open the text file on the PC, insert the path into the command in two places, delete everything besides the command from the file, and make two copies of the text file (one named "get-command-2.txt" and one named "get-command-3.txt"), so that there's one file for each command, to make it easy to copy each command to a clipboard app on the phone (I tried selecting individual commands from a single text file on my phone, and gave up). Finally, you would substitute the correct number into each occurrence of GPD1 in each of the copies, and you would have the required get-commands.
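The finished files would contain commands along these lines, where <path> stands for the GPD location copied from the phone (left as a placeholder here, since it depends on your phone and setup):

    # get-command-1.txt:
    apt-offline get <path>/GPD1/apt-offline.sig -d <path>/GPD1/DDF
    # get-command-2.txt:
    apt-offline get <path>/GPD2/apt-offline.sig -d <path>/GPD2/DDF
    # get-command-3.txt:
    apt-offline get <path>/GPD3/apt-offline.sig -d <path>/GPD3/DDF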

Then you would copy the get-command files to the phone, install a clipboard app on the phone, and open each text file (one at a time) and copy its command, which would enter the command into the clipboard app. Then each command could be copied into the terminal with a few taps, and entered. (The terminal is part of the Userland app, which must be installed on the phone in order to install APT-offline on the phone. This site's main article provides the details.)

After the download session, the downloaded files would be installed by using the PC's installation of APT-offline-GUI in install mode to navigate to the DDF of interest and initiate an install-op. If the changes are being made to an MX-Linux installation which is being used as a "master" installation (explained elsewhere in this article), the phone could be connected directly to the PC without creating security risks, although it might be necessary to use the phone's file manager to move the relevant SF/DDF pair back into its CNF, so that the PC's installation of APT-offline can find the DDF. Otherwise, the DDF containing the downloaded files, along with the associated SF, would be returned to their CNF using the phone's file manager, and the CNF would then be copied to a flash drive or SD-card, which would then be transferred to the PC, to eliminate the security risks.

This might sound like too much work, but once the system is set up, it's very easy to use.

If you want to install software, and it's been a while since the package index was updated, you would generate an SF for a package-index update, then use the SF in a get-op to get the package-index files. Then you would install them, generate an SF for the desired apps, and use that SF in a get-op. You might want to consider downloading package-index files on a weekly basis, if convenient, just to have fairly fresh ones on hand in case you need to install them.
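In command form, that two-step sequence would look roughly like this (a sketch; <app names> is a placeholder):

    # Step 1: generate an SF for a package-index update, run the get-op on
    # the phone, then install the package-index files on the PC:
    sudo apt-offline set apt-offline.sig --update
    sudo apt-offline install /path/to/DDF
    # Step 2: with a fresh index installed, generate an SF for the apps and
    # repeat the get/install cycle:
    sudo apt-offline set apt-offline.sig --install-packages <app names>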

[3] Never create an installation from an ISO which you have obtained from an outside source, without first authenticating the ISO. This is done by calculating the ISO's checksum (a function built into some file managers, sometimes initiated by right-clicking on the ISO and selecting the calculate-checksum function in the resulting menu), and comparing the calculated checksum to the reference checksum. I use SHA256 checksums, which are the highest-security reference checksums typically provided. If the checksums are different, the difference won't be subtle. In my experience, the easiest way to find the reference checksum of the latest release of some type of Linux is to visit the official website, but if you need a checksum for an older release, I suggest Googling "checksum Linux <type version>." Distrowatch has checksums for older releases, but it can be difficult to find them.
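On the command line, the calculation is a single command (the ISO filename is just an example):

    # Compute the SHA256 checksum of the downloaded ISO:
    sha256sum MX-23_x64.iso
    # Compare the output, by eye, against the reference checksum from the
    # distribution's official website; any mismatch means don't use the ISO.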