secluded/blog.org


Meta   @Meta

TODO Focus intentionally

I am too easily distracted. Sitting at my desk on an average day, I have movies, TV shows, and YouTube videos; about a million different chat applications; my email client; Steam; and a web browser with yet more chat apps plus social media, all within a couple keystrokes' reach. On top of whatever primary task I've set out to do, I also have a really chaotic brain with about a million other tasks bouncing around in it.

Putting all of this together results in terrible productivity and incessant procrastination if my primary task is something I'm less-than-motivated to accomplish.

In a recent episode of The Art of Manliness, Brett McKay interviews Dr. BJ Fogg about his new book, Tiny Habits: The Small Changes That Change Everything. One of Dr. Fogg's statements in this episode stuck with me. When training yourself to adopt new behaviours, there are three factors that determine your success: motivation, ability, and a prompt. A prompt is just something that reminds you of the behaviour you're trying to adopt. The act of brushing your teeth might be the prompt for flossing. The act of flossing might be the prompt for making coffee. In my situation, the prompt is just needing to get work done, so we'll ignore that factor. Ability refers to how simple you find the task and motivation is how motivated you are to accomplish it. These last two must balance each other out; if your motivation to complete the task is low, your ability must be high (the task must be easy), while if your ability is low, your motivation must be rather high.

Notes

  • Close primary browser with a million tabs open and use Epiphany
  • Close all the chat apps
  • Disable notifications/enable Do Not Disturb mode on both your phone and computer
  • Take off your smart/fitness watch
  • If you listen to music, make it something calming, not eclectic

TODO Email can be pleasant, but like all good things, it takes work

Technology   @Technology

DONE (Ab)using mesh networks for easy remote support   Mesh__networking Open__source Remote__support

CLOSED: [2021-11-01 Mon 02:51]

One of the things many of us struggle with when setting friends and family up with Linux is remote support. Commercial solutions like RealVNC and RustDesk do exist and function very well, but are often more expensive than we would like for answering the odd "I can't get Facebook open!" support call. I've been on the lookout for suitable alternatives for a couple years but nothing has been satisfying. Because of this, I have held off on setting others up with any Linux distribution, even the particularly user-friendly options such as Linux Mint and elementary OS; if I'm going to drop someone in an unfamiliar environment, I want to be able to help with any issue within a couple hours, not days and certainly not weeks.

Episode 421 of LINUX Unplugged gave me an awesome idea: combine Nebula (a networking tool created by Slack), x11vnc (a very minimal VNC server), and Remmina (a libre remote access tool available in pretty much every Linux distribution) into a scalable, secure, and simple setup reminiscent of products like RealVNC.

Nebula

The first part of our stack is Nebula, the tool that creates a network between all of our devices. With traditional VPNs, you have a client with a persistent connection to a central VPN server and other clients can communicate with the first by going through that central server. This works wonderfully in most situations, but there are a lot of latency and bandwidth restrictions that would make remote support an unpleasant experience. Instead of this model, what we want is a mesh network, where each client can connect directly to one another without going through a central system and slowing things down. This is where Nebula comes in.

In Nebula's terminology, clients are referred to as nodes and central servers are referred to as lighthouses, so those are the terms I'll use going forward.

Mesh networks are usually only possible when dealing with devices that have static IP addresses. Each node has to know how to connect with the other nodes; John can't meet up with Bob when Bob moves every other day without notifying anyone of his new address. This wouldn't be a problem if Bob phoned Jill and told her where he was moving; John would call Jill, Jill would tell him where Bob is, and the two would be able to find each other.

With Nebula, nodes are Bob and John and Jill is a lighthouse. Each node connects to a lighthouse and the lighthouse tells the nodes how to connect with one another when they ask. It facilitates the P2P connection then backs out of the way so the two nodes can communicate directly with each other.

It allows any node to connect with any other node on any network from anywhere in the world, as long as one lighthouse is accessible that knows the connection details for both peers.

Getting started

The best resource is the official documentation, but I'll describe the process here as well.

After installing the required packages, make sure you have a VPS with a static IP address to use as a lighthouse. If you want something dirt cheap, I would recommend one of the small plans from BuyVM. I do have a referral link if you want them to kick me a few dollars for your purchase. Hetzner (referral: ckGrk4J45WdN) or netcup (referral: 36nc15758387844) would also be very good options; I've used them all and am very comfortable recommending them.

Creating a Certificate Authority

After picking a device with a static IP address, it needs to be set up as a lighthouse. This is done by first creating a Certificate Authority (CA) that will be used for signing keys and certificates that allow our other devices into the network. The .key file produced by the following command is incredibly sensitive; with it, anyone can authorise a new device and give it access to your network. Store it in a safe, preferably encrypted location.

  nebula-cert ca -name "nebula.example.com"

I'll explain why we used a Fully-Qualified Domain Name (FQDN) as the CA's name in a later section. If you have your own domain, feel free to use that instead; it doesn't really matter what domain is used as long as the format is valid.

Generating lighthouse credentials

Now that we have the CA's .crt and .key files, we can create and sign keys and certificates for the lighthouse.

  nebula-cert sign -name "buyvm.lh.nebula.example.com" -ip "192.168.100.1/24"

Here, we're using a FQDN for the same reason as we did in the CA. You can use whatever naming scheme you like; I just prefer <vps-host>.lh.nebula... for my lighthouses. The IP address can be on any of the following private IP ranges; I just happened to use 192.168.100.X for my network.

  IP Range                         Number of addresses
  10.0.0.0 – 10.255.255.255        16,777,216
  172.16.0.0 – 172.31.255.255      1,048,576
  192.168.0.0 – 192.168.255.255    65,536

Creating a config file

The next step is creating our lighthouse's config file. The reference config can be found in Nebula's repo. We only need to change a few of the lines for the lighthouse to work properly. If I don't mention a specific section here, I've left the default values.

The section below is where we'll define certificates and keys. ca.crt will remain ca.crt when we copy it over but I like to leave the node's cert and key files named as they were when generated; this makes it easy to identify nodes by their configs. Once we copy everything over to the server, we'll add the proper paths to the cert and key fields.

  pki:
    ca: /etc/nebula/ca.crt
    cert: /etc/nebula/
    key: /etc/nebula/

The next section is for identifying and mapping your lighthouses. This needs to be present in all of the configs on all nodes; otherwise, they won't know how to reach the lighthouses and will never actually join the network. Make sure you replace XX.XX.XX.XX with whatever your VPS's public IP address is. If you've used a different private network range, those changes need to be reflected here as well.

  static_host_map:
    "192.168.100.1": ["XX.XX.XX.XX:4242"]

Below, we're specifying how the node should behave. It is a lighthouse, it should answer DNS requests, the DNS server should listen on all interfaces on port 53, it sends its IP address to lighthouses every 60 seconds (this option doesn't actually have any effect when am_lighthouse is set to true though), and this lighthouse should not send reports to other lighthouses. The bit about DNS will be discussed later.

  lighthouse:
    am_lighthouse: true
    serve_dns: true
    dns:
      host: 0.0.0.0
      port: 53
    interval: 60
    hosts:

The next bit is about hole punching, also called NAT punching, NAT busting, and a few other variations. Make sure you read the comments for better explanations than I'll give here. punch: true enables hole punching. I also like to enable respond just in case nodes are on particularly troublesome networks; because we're using this as a support system, we have no idea what networks our nodes will actually be connected to. We want to make sure devices are available no matter where they are.

  punchy:
    punch: true
    respond: true
    delay: 1s

cipher is a big one. The value must be identical on all nodes and lighthouses. chachapoly is more compatible so it's used by default. The devices I want to connect to are all x86 Linux, so I can switch to aes and benefit from a small performance boost. Unless you know for sure that you won't need to work with anything else, I recommend leaving it set to chachapoly.

  cipher: chachapoly

The last bit I modify is the firewall section. I leave most everything default but remove the bits after port: 443. I don't need the laptop and home groups (groups will be explained later) to access port 443 on this node, so I shouldn't include the statement. If you have different needs, take a look at the comment explaining how the firewall portion works and make those changes.

Again, I remove the following bit from the config.

    - port: 443
      proto: tcp
      groups:
        - laptop
        - home
Setting the lighthouse up

We've got the config, the certificates, and the keys. Now we're ready to actually set it up. After SSHing into the server, grab the latest release of Nebula for your platform, unpack it, make the nebula binary executable, then move it to /usr/local/bin (or some other location fitting for your platform).

  wget https://github.com/slackhq/nebula/releases/download/vX.X.X/nebula-PLATFORM-ARCH.tar.gz
  tar -xvf nebula-*
  chmod +x nebula
  mv nebula /usr/local/bin/
  rm nebula-*

Now we need a place to store our config file, keys, and certificates.

  mkdir /etc/nebula/

The next step is copying the config, keys, and certificates to the server. I use rsync but you can use whatever you're comfortable with. The following four files need to be uploaded to the server.

  • config.yml
  • ca.crt
  • buyvm.lh.nebula.example.com.crt
  • buyvm.lh.nebula.example.com.key

With rsync, that would look something like this. Make sure rsync is also installed on the VPS before attempting to run the commands though; you'll get an error otherwise.

  rsync -avmzz ca.crt user@example.com:
  rsync -avmzz config.yml user@example.com:
  rsync -avmzz buyvm.lh.* user@example.com:

SSH back into the server and move everything to /etc/nebula/.

  mv ca.crt /etc/nebula/
  mv config.yml /etc/nebula/
  mv buyvm.lh* /etc/nebula/

Edit the config file and ensure the pki: section looks something like this, modified to match your hostnames of course.

  pki:
    ca: /etc/nebula/ca.crt
    cert: /etc/nebula/buyvm.lh.nebula.example.com.crt
    key: /etc/nebula/buyvm.lh.nebula.example.com.key

Run the following command to make sure everything works properly.

  nebula -config /etc/nebula/config.yml

The last step is daemonizing Nebula so it runs every time the server boots. If you're on a machine using systemd, dropping the following snippet into /etc/systemd/system/nebula.service should be sufficient. If you're using something else, check the examples directory for more options.

  [Unit]
  Description=nebula
  Wants=basic.target
  After=basic.target network.target
  Before=sshd.service

  [Service]
  SyslogIdentifier=nebula
  ExecReload=/bin/kill -HUP $MAINPID
  ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yml
  Restart=always

  [Install]
  WantedBy=multi-user.target
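Once the unit file is in place, it's just the standard systemd routine to reload, enable, and start the service; nothing here is Nebula-specific, but double-check the paths if you installed the binary or config somewhere else.

  systemctl daemon-reload
  systemctl enable --now nebula
  systemctl status nebula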

We're almost done!

Setting individual nodes up

This process is almost exactly the same as setting lighthouses up. All you'll need to do is generate a couple of certs and keys then tweak the configs a bit.

The following command creates a new cert/key for USER's node with the IP address 192.168.100.2. The resulting files go on the remote node, not yours. Replace HOST and USER with fitting values.

  nebula-cert sign -name "HOST.USER.nebula.example.com" -ip "192.168.100.2/24"

The following command will create a similar cert/key but it will be part of the support group. The files resulting from this should go on your nodes; note that every device needs its own unique Nebula IP, so don't reuse the address you just assigned to the remote node. With the config we'll create next, nodes in the support group will be able to VNC and SSH into other nodes. Your nodes need to be in the support group so you'll have access to the others.

  nebula-cert sign -name "HOST.USER.nebula.example.com" -ip "192.168.100.3/24" -groups "support"

On to the config now. This tells the node that it is not a lighthouse, it should not resolve DNS requests, it should ping the lighthouses and tell them its IP address every 60 seconds, and the node at 192.168.100.1 is one of the lighthouses it should report to and query from. If you have more than one lighthouse, add them to the list as well.

  lighthouse:
    am_lighthouse: false
    #serve_dns: false
    #dns:
      #host: 0.0.0.0
      #port: 53
    interval: 60
    hosts:
      - "192.168.100.1"

The other bit that should be modified is the firewall: section and this is where the groups we created earlier are important. Review its comments and make sure you understand how it works before proceeding.

We want to allow inbound connections on ports 5900, the standard port for VNC, and 22, the standard for SSH. Additionally, we only want to allow connections from nodes in the support group. Any other nodes should be denied access.

Note that including this section is not necessary on your nodes, those in the support group. It's only necessary on the remote nodes that you'll be connecting to. As long as the outbound: section in the config on your node allows any outbound connection, you'll be able to access other nodes.

    - port: 5900
      proto: tcp
      groups:
        - support

    - port: 22
      proto: tcp
      groups:
        - support

The certs, key, config, binary, and systemd service should all be copied to the same places on all of these nodes as on the lighthouse.

X11vnc

Alright. The hardest part is finished. Now on to setting x11vnc up on the nodes you'll be supporting.

All you should need to do is install x11vnc using the package manager your distro ships with, generate a 20-character password with pwgen -s 20 1, run the following command, paste the password, wait for x11vnc to start up, make sure it's running correctly, press Ctrl + C, then add the command to the DE's startup applications!

  x11vnc --loop -usepw -listen <nebula-ip> -display :0

--loop tells x11vnc to restart once you disconnect from the session. -usepw is pretty self-explanatory. -listen <nebula-ip> is important; it tells x11vnc to only listen on the node's Nebula IP address. This prevents randos in a coffee shop from seeing an open VNC port and trying to brute-force the credentials. -display :0 just defines which X11 server display to connect to.

Some distributions, like elementary OS and those that use KDE and GNOME, will surface a dialogue for managing startup applications if you just press the Windows (Super) key and type startup. If that doesn't work, you'll have to root around in the settings menus, consult the distribution's documentation, or ask someone else that might know.

After adding it to the startup applications, log out and back in to make sure it's running in the background.
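If you'd rather double-check from a terminal instead, something like the following should show x11vnc running and listening only on the node's Nebula address; pgrep and ss are common utilities, though they may need to be installed on some distributions.

  pgrep -a x11vnc
  ss -tlnp | grep 5900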

Remmina

Now that our network is functioning properly and the VNC server is set up, we need something that connects to the VNC server over the fancy mesh network. Enter Remmina. This one goes on your nodes.

Remmina is a multi-protocol remote access tool available in pretty much every distribution's package archive as remmina. Install it, launch it, add a new connection profile in the top left, give the profile a friendly name (I like to use the name of the person I'll be supporting), assign it to a group, such as Family or Friends, set the Protocol to Remmina VNC Plugin, enter the node's Nebula IP address in the Server field, then enter their username and the 20-character password you generated earlier. I recommend setting the quality to Poor, but Nebula is generally performant enough that any of the options are suitable. I just don't want to have to disconnect and reconnect with a lower quality if the other person happens to be on a slow network.

Save and test the connection!

If all goes well and you see the other device's desktop, you're done with the VNC section! Now on to SSH.

SSH

First off, make sure openssh-server is installed on the remote node; openssh-client would also be good to have, but from what I can tell, it's not strictly necessary. You will need openssh-client on your node, however. If you already have an SSH key, copy it over to ~/.ssh/authorized_keys on the remote node. If you don't, generate one with ssh-keygen -t ed25519. This will create an Ed25519 SSH key pair. Ed25519 keys are shorter and faster than RSA and more secure than ECDSA or DSA. If that means nothing to you, don't worry about it. Just note that this key might not interact well with older SSH servers; you'll know if you need to stick with the default RSA. Otherwise, Ed25519 is the better option. After key generation has finished, copy ~/.ssh/id_ed25519.pub (note the .pub extension) from your node to ~/.ssh/authorized_keys on the remote node. The file without .pub is your private key. Like the Nebula CA key we generated earlier, this is extremely sensitive and should never be shared with anyone else.
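To recap the key steps in one place, the exchange from your node looks roughly like this; ssh-copy-id assumes password authentication is still enabled on the remote node at this point, otherwise append the contents of the .pub file to its ~/.ssh/authorized_keys by hand.

  ssh-keygen -t ed25519
  ssh-copy-id -i ~/.ssh/id_ed25519.pub USER@<nebula-ip>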

Next is configuring SSH to only listen on Nebula's interface; as with x11vnc, this prevents randos in a coffee shop from seeing an open SSH port and trying to brute-force their way in. Set the ListenAddress option in /etc/ssh/sshd_config to the remote node's Nebula IP address. If you want to take security a step further, search for PasswordAuthentication and set it to no. This means your SSH key is required for gaining access via SSH. If you mess up Nebula's firewall rules and accidentally give other Nebula devices access to this machine, they still won't be able to get in unless they have your SSH key. I personally recommend disabling password authentication, but it's not absolutely necessary. After making these changes, run systemctl restart sshd to apply them.
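For concreteness, the two relevant lines in /etc/ssh/sshd_config would end up looking something like this; 192.168.100.2 is just the example node address from earlier, so use the machine's actual Nebula IP.

  ListenAddress 192.168.100.2
  PasswordAuthentication no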

Now that the SSH server is listening on Nebula's interface, it will actually fail to start when the machine (re)boots. The SSH server starts faster than Nebula does, so it will look for the interface before Nebula has even had a chance to connect. We need to make sure systemd waits for Nebula to start up and connect before it tells SSH to start; run systemctl edit --full sshd and add the following line in the [Unit] section, above [Service].

  After=nebula.service

Even now, there's still a bit of a hiccup. Systemd won't start SSH until Nebula is up and running, which is good. Unfortunately, even after Nebula has started, it still takes a minute to bring the interface up, causing SSH to crash. To fix this, add the following line directly below [Service].

  ExecStartPre=/usr/bin/sleep 30

If the sleep executable is stored in a different location, make sure you use that path instead. You can check by running which sleep.

When the SSH service starts up, it will now wait an additional 30 seconds before actually starting the SSH daemon. It's a bit of a hacky solution but it works™. If you come up with something better, please send it to me and I'll include it in the post! My contact information is at the bottom of this site's home page.

After you've made these changes, run systemctl daemon-reload to make sure systemd picks up on the modified service file, then run systemctl restart sshd. You should be able to connect to the remote node from your node using the following command.

  ssh USER@<nebula-ip>

If you want to make the command a little simpler so you don't have to remember the IP every time, create ~/.ssh/config on your node and add these lines to it.

  Host USER
    Hostname <nebula-ip>
    User USER

Now you can just run ssh USER to get in. If you duplicate the above block for all of the remote nodes you need to support, you'll only have to remember the person's username to SSH into their machine.

Going further with Nebula

This section explains why we used FQDNs in the certs and why the DNS resolver is enabled on the lighthouse.

Nebula ships with a built-in resolver meant specifically for mapping Nebula node hostnames to their Nebula IP addresses. Running a public DNS resolver is very much discouraged because it can be abused in terrible ways. However, the Nebula resolver mitigates this risk because it only answers queries for Nebula nodes. It doesn't forward requests to any other servers nor does it attempt to resolve any domain other than what was defined in its certificate. If you use the example I gave above, that would be nebula.example.com; the lighthouse will attempt to resolve any subdomain of nebula.example.com but it will just ignore example.com, nebula.duckduckgo.com, live.secluded.site, etc.

Taking advantage of this resolver requires setting it as your secondary resolver on any device you want to be able to resolve hostnames from. If you were to add the lighthouse's IP address as your secondary resolver on your PC, you could enter host.user.nebula.example.com in Remmina's server settings instead of 192.168.100.2.

But how you do so is beyond the scope of this post!
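If you just want to confirm the resolver is answering before touching anything, a one-off query against the lighthouse will do it; this assumes the lighthouse is at 192.168.100.1 and that dig is installed on your machine.

  dig +short host.user.nebula.example.com @192.168.100.1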

If you're up for some more shenanigans later on down the line, you could set up a Pi-Hole instance backed by Unbound and configure Nebula as Unbound's secondary resolver. With this setup, you'd get DNS-level ad blocking and the ability to resolve Nebula hostnames. Pi-Hole would query Unbound for host.user.nebula.example.com, Unbound would receive no answer from the root servers because the domain doesn't exist outside of your VPN, Unbound would fall back to Nebula, Nebula would give it an answer, Unbound would cache the answer and tell Pi-Hole, Pi-Hole would cache the answer and tell your device, your device would cache the answer, and you can now resolve any Nebula host!

Exactly how you do that is definitely beyond the scope of this post :P

If you set any of this up, I would be interested to hear how it goes! As stated earlier, my contact information is at the bottom of the site's home page :)

TODO FreeBSD quirks on the Framework laptop   FreeBSD Framework

This is primarily intended for people new to FreeBSD. If you're already familiar with it, the wiki page will probably tell you everything you need. I had no idea what I was doing so I had no idea what I was looking for! I had been beating my head against a wall for about three hours before I decided to join #freebsd on Libera.Chat; the people there were friendly, helpful, and gave me tons of great advice. I highly recommend popping in if you have any issues!

The Handbook

Open the handbook. Follow the handbook. Read the whole handbook. The developers spend a lot of time making sure it's the best resource available for learning FreeBSD. In most cases, it will have an answer for any question related to FreeBSD.

That said, the Framework laptop is so new that it's not fully supported by the current stable release, so for now, we'll need to diverge a bit. This guide is really only applicable until the release of FreeBSD 13.1 and until drm-kmod hits version 5.5+. Once those two criteria are met, following the handbook should be entirely sufficient!

The Source

In section 2.5.3 of the handbook/installer, make sure you tick the src box to download the FreeBSD source code. It'll be necessary for building our graphics drivers later on.

The Graphics

This is where things are less-than-ideal at the moment. Usually, installing graphics/drm-kmod would be sufficient, but the version in both FreeBSD's package repos and in the ports tree is too old. At the time of writing, it's compatible with Linux kernel 5.4 while the Framework's drivers are in Linux kernel 5.5+. We'll need to clone the sources for graphics/drm-kmod, check out a more recent branch, build the drivers, and use those instead.

I'm not 100% certain whether the first step here is necessary but I don't feel like reinstalling to check.

  1. Install graphics/drm-kmod with pkg install drm-kmod
  2. Install devel/git with pkg install git
  3. Clone drm-kmod's source with

    git clone https://github.com/freebsd/drm-kmod
  4. Check out the 5.7-stable branch with

    git checkout -b 5.7-stable --track remotes/origin/5.7-stable
  5. Build the package with make
  6. Uninstall drm-kmod and all of its dependencies with pkg remove drm-kmod followed by pkg autoremove
  7. Install the more up-to-date drivers with make install
  8. Make sure the module works as expected with kldload /boot/modules/i915kms.ko
  9. If you suddenly see grey in your terminal, it works! Go ahead and add it to your boot config by appending the following line to /etc/rc.conf

    kld_load="/boot/modules/i915kms.ko"
  10. Reboot and you should be able to start Xorg as the handbook describes!

Again, all of this information is available on the FreeBSD wiki page for the Framework laptop. The Graphics row in section 2 says "requires DRM-KMOD 5.5 or higher. Fails to initialize with DRM-KMOD 5.4." That's in reference to the package we just built and installed.

Hope this helps!

TODO A perfect email setup (for me)   Email Workflow

I've never been satisfied with any of the email clients most people use. I've tried Thunderbird, Evolution, Mailspring, Mail.app, Roundcube, SOGo, Geary, and many more. None of them handle multiple accounts particularly well because all of the emails associated with an account are bound within it. Sure, you can make a new folder somewhere called TODO and move all of your actionable emails to that folder, but when you go to move actionable emails from another account into that folder, you'll likely find that the client simply doesn't let you. If it does, when you reply, it will likely be sent from the wrong account. This is a limitation of the IMAP protocol; everything is managed locally, changes are pushed to the remote server, and mixing things the way I want leads to broken setups.

Before I go any further, these are a few characteristics of my ideal email tool.

  • Support for multiple accounts (obviously)
  • Native desktop application (not Electron)
  • Has stellar keyboard shortcuts
  • Doesn't require internet connectivity (other than downloading and sending of course)
  • Organisation can be done with tags

Why tags?

Because they're better. Hierarchies are useful for prose and code but not for files, emails, notes, or anything where an item may fit within multiple categories. Imagine you get an email from your Computer Science professor that includes test dates, homework, and information about another assignment. In that same email, he asks every student to reply with something they learned from the previous class as a form of attendance. In a hierarchy, the best place for this might just be a TODO folder even though it would also fit under School, CS, Dates, To read, and Homework. Maybe you have a few minutes and want to clear out some emails that don't require any interaction. In a tag-based workflow, this would be a good time to open To read, get that email out of the way, and remove the To read tag. It would still show up under the other tags so you can find it later and take the time to fully answer the professor's question, add those dates to your calendar, and add the homework assignments to your TODO list. Hierarchies can be quite cumbersome to work with, especially when one folder ends up getting all the data. Tags ensure that you only see what you want when you want it. Tags are more efficient and they will remain my organisation system of choice.

The tools

In short, the tools we will be using are…

  • mbsync to download our emails
  • notmuch, the primary way emails will be organised
  • afew to apply initial notmuch tags based on subject, sender, recipient, etc.
  • NeoMutt to interact with those emails, reply, compose, add/remove tags, etc.
  • msmtp for relaying our replies and compositions to our mail provider

Yes, it's a lot. Yes, it's time-consuming to set up. Yes, it's worth it (in my opinion).

mbsync

As I said above, IMAP is limiting; we need to use some other method of downloading our emails. There's an awesome piece of software called mbsync which is built for exactly this purpose. Its configuration can be rather daunting if you have as many accounts as I do (19) but it's not terrible.

The following sections are named Far, Near, and Sync. Near and Far are terms mbsync uses to describe how your emails are stored, where they're stored, and how to interact with them. In this guide, Far will be our mail provider's IMAP server and Near will be our local Maildir.

Far

  IMAPAccount amo_ema
  Host imap.nixnet.email
  CertificateFile /etc/ssl/certs/ca-certificates.crt
  SSLType STARTTLS
  User amolith@nixnet.email
  PassCmd "secret-tool lookup Title amolith@nixnet.email"

  IMAPStore amo_ema-remote
  Account amo_ema

Near

  MaildirStore amo_ema-local
  SubFolders Verbatim
  Path ~/new-mail/amo_ema/
  Inbox ~/new-mail/amo_ema/INBOX/

In the first block, IMAPAccount defines how to connect to and authenticate with the provider's IMAP server, and IMAPStore gives that remote store a name we can reference later. amo_ema is an arbitrary naming scheme I use to differentiate between the various local and remote accounts. It can easily be swapped with something else.

Sync

  Channel amo_ema
  Far :amo_ema-remote:
  Near :amo_ema-local:
  SyncState *
  Patterns *
  Create Both

The store sections describe how the emails are stored and retrieved. In the local block, you'll notice that the store is a Maildir. In this format, each email is given a unique filename and stored in a hierarchy of folders within your account. This is often how your emails are stored on your provider's mail server as well.

pythonfile is used here to authenticate with the remote server. This can be complicated and depends entirely on how you manage your passwords. I use KeePassXC and love it. When I set OfflineIMAP up, however, it didn't have libsecret compatibility. This would have made setup significantly easier but, as it already just works™, I don't really see a reason to change it.

This new feature allows libsecret-based applications to query KeePassXC for your passwords or store them there on your behalf. CLI/TUI applications that need a secure mechanism for background authentication can use secret-tool lookup Title "TITLE_OF_PASSWORD" as the password command. See the pull request for more details. Because this wasn't a feature when I first set it up, I put my passwords in plaintext files and encrypted them with the GPG key stored on my YubiKey. As long as my key is plugged in, OfflineIMAP can authenticate and download all my emails just fine. The process for using a GPG key not stored on a hardware token is pretty much the same and I'll talk about that process instead.

These are the contents of my ~/.offlineimap.py.

  #! /usr/bin/env python3
  from os.path import expanduser
  from subprocess import check_output

  def get_pass(account):
      # Decrypt ~/.mail_pass/<account>.gpg and return the password without its trailing newline
      return check_output(["gpg", "-dq", expanduser(f"~/.mail_pass/{account}.gpg")]).decode().strip("\n")

This runs gpg -dq ~/.mail_pass/use_exa.gpg then strips the newline character before returning it to OfflineIMAP. -d tells GPG that you're passing it a file you want decrypted and -q tells it not to give any output other than the file's contents. For a setup that works with this Python script, put your passwords in plaintext files with the account name as the file name (e.g. use_exa). You'll then encrypt it with gpg -er <YOUR_KEY_ID> use_exa. Running gpg -dq use_exa.gpg should display your password. Repeat for every account and store the resulting files in ~/.mail_pass/.
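Putting that together for a single account, the whole dance looks roughly like this; use_exa is just the example account name from above and <YOUR_KEY_ID> is whichever key you want to encrypt to.

  printf '%s' 'hunter2' > ~/.mail_pass/use_exa   # plaintext password, only temporarily
  gpg -er <YOUR_KEY_ID> ~/.mail_pass/use_exa     # produces use_exa.gpg alongside it
  rm ~/.mail_pass/use_exa                        # get rid of the plaintext copy
  gpg -dq ~/.mail_pass/use_exa.gpg               # should print the password back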

The other option, sync_deletes, is whether or not to delete remote emails that have been deleted locally. I enabled that because I want to have easy control over how much remote storage is used.

Here's the next block again so you don't have to scroll up:

  [Repository use_exa-remote]
  type = IMAP
  remotehost = imap.example.com
  starttls = yes
  ssl = no
  remoteport = 143
  remoteuser = user@example.com
  remotepasseval = get_pass("use_exa")
  auth_mechanisms = GSSAPI, XOAUTH2, CRAM-MD5, PLAIN, LOGIN
  maxconnections = 1
  createfolders = True
  sync_deletes = yes

This one's pretty self-explanatory. type, remotehost, starttls, ssl, and remoteport should all be somewhere in your provider's documentation. remoteuser is your email address and remotepasseval is the function that will return your password and allow OfflineIMAP to authenticate. You'll want to enter the name of your password file without the .gpg extension; the script takes care of adding that. Leave auth_mechanisms alone, and do the same for maxconnections unless you know your provider won't rate limit you or something for opening multiple connections. sync_deletes is the same as in the previous block.

Copy those three blocks for as many accounts as you want emails downloaded from. I have 510 lines just for Account and Repository blocks due to the number of addresses I'm keeping track of.
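Once the config is in place, the initial download is a single command; with mbsync (the tool listed at the top of this section), -a syncs every channel you've defined, or you can name one channel to test a single account first.

  mbsync -a          # sync every channel
  mbsync amo_ema     # or just the one channel from the example above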

notmuch

notmuch is a fast, global-search, and tag-based email system. This is what does all of our organisation as well as what provides the "virtual" mailboxes NeoMutt will display later on. Configuration is incredibly simple. This file goes in ~/.notmuch-config.

  [database]
  path=/home/user/mail/

  [user]
  name=Amolith
  primary_email=user@example.com

  [new]
  tags=unread;new;
  ignore=Trash;

  [search]
  exclude_tags=deleted;spam;

  [maildir]
  synchronize_flags=true

The first section is the path to where all of your mail lives, the [user] section is where you list your name and email addresses, [new] adds tags to mail notmuch hasn't indexed yet and skips indexing the Trash folder, and [search] excludes mail tagged with deleted or spam from search results. The final section tells notmuch to add maildir flags which correspond with notmuch tags. These flags will be synced to the remote server the next time your mail syncs and things will be somewhat organised in your webmail interface.

After creating the configuration file, run notmuch new and wait for all of your mail to be indexed. This could take anywhere from a few seconds to an hour or more, depending on how many emails you have. After it's finished, you'll be able to run queries and see matching emails:

  $ notmuch search from:user@example.com
  thread:0000000000002e9d  December 28 [1/1] Example User; Random subject that means nothing

This is not terribly useful in and of itself because you can't read it or reply to it or anything. That's where the Mail User Agent (MUA) comes in.

afew

afew is an initial tagging script for notmuch. After calling notmuch new, afew will add tags based on headers such as From:, To:, Subject:, etc. as well as handle killed threads and spam. The official quickstart guide is probably the best resource on getting started but I'll include a few tips here as well.
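As a rough sketch, a minimal ~/.config/afew/config that just enables afew's stock filters looks something like the following; the section names are filters afew ships with, and running afew -tn afterwards tags everything notmuch marked as new.

  [SpamFilter]
  [KillThreadsFilter]
  [ListMailsFilter]
  [ArchiveSentMailsFilter]
  [InboxFilter]

With that in place, a typical sync becomes something like mbsync -a && notmuch new && afew -tn.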

NeoMutt

msmtp

msmtp is what's known as a Mail Transfer Agent (MTA). You throw it an email and it relays that email to your mail provider's SMTP server so the proper headers can be attached for authentication, it can be sent from the proper domain, and all the other measures can be applied that keep your email from going directly to spam or from being rejected outright.

msmtp's configuration is also fairly simple if a bit long, just like OfflineIMAP's.

  # Set default values for all following accounts.
  defaults

  # Use the mail submission port 587 instead of the SMTP port 25.
  port 587

  # Always use TLS.
  tls on

This section just sets the defaults. It uses port 587 (STARTTLS) for all SMTP servers unless otherwise specified and enables TLS.

  account user@example.com
  host smtp.example.com
  from user@example.com
  auth on
  user user@example.com
  passwordeval secret-tool lookup Title "user@example.com"

This section is where things get tedious. When passing an email to msmtp, it looks at the From: header and searches for a block with a matching from line. If it finds one, it will use those configuration options to relay the email. host is simply your mail provider's SMTP server; sometimes this is mail.example.com, sometimes smtp.example.com, etc. I've already explained from, auth simply says that a username and password will have to be provided, user is that username, and passwordeval is a method to obtain the password.

When I got to configuring msmtp, KeePassXC had just released their libsecret integration and I wanted to try it. secret-tool is a command line tool used to store and retrieve passwords from whatever keyring you're using. I think KDE has kwallet and GNOME has gnome-keyring if you already have those set up and want to use them; the process should be quite similar regardless.

As mentioned above, secret-tool stores and retrieves passwords. For retrieval, it expects the command to look like this.

  secret-tool lookup {attribute} {value} ...

I don't know what kwallet and gnome-keyring's attributes are but this can be used with KeePassXC by specifying the Title attribute. If the password to your email account is stored in KeePassXC with the address as the entry title, you can retrieve it by simply running…

  secret-tool lookup Title "user@example.com"

If you have a different naming system, you'll have to experiment and try different things; I don't know what KeePassXC's other attributes are so I can't give other examples. If you'd rather skip the keyring entirely and use GPG-encrypted files like in the OfflineIMAP section, passwordeval can call gpg directly.

  passwordeval gpg -dq ~/.mail_pass/use_exa.gpg

Now that the whole block is assembled, copy/paste/edit for as many accounts as you want to send email from.
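Before wiring it into a mail client, you can test an account block by piping a raw message straight to msmtp and letting it pick the account from the From: header as described above; --read-envelope-from and -t (read recipients from the headers) are standard msmtp flags, and the addresses here are obviously placeholders.

  printf 'From: user@example.com\nTo: friend@example.net\nSubject: msmtp test\n\nIt works!\n' \
    | msmtp --read-envelope-from -t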

Summary

TODO Pong fluffy when finished

TODO Audacity and the telemetry pull request   Open__source__culture Audio__editing Music Drama

Five days ago at the time of writing, Dmitry Vedenko opened a Pull Request (PR) in Audacity's GitHub repository entitled Basic telemetry for the Audacity. About two days later, all hell broke loose. That PR now has over 3.3 thousand downvotes and more than one thousand comments from nearly 400 individuals. I started reading the posts shortly after they began and kept up with them over the following days, reading every single new post. I recognise that few people are going to feel like wading through over 1k comments so this is my attempt to provide a summary of the PR itself using the community's code reviews along with a summary of the various opinions conveyed in the comments.

When I reference comments, I'll provide a footnote that includes a link to the comment and a link to a screenshot just in case it's removed or edited in the future.

Audacity's acquisition

I haven't been able to find much information in this area so forgive me if I'm scant on details.

On 30 April, a company called Muse Group acquired Audacity. According to their website, Muse is the parent company behind many musical applications and tools. It was founded by Eugeny Naidenov just days before it acquired Audacity. Before all of this, Eugeny Naidenov founded Ultimate Guitar (UG) in 1998. The service grew rather quickly and now has over 300 million users. UG acquired Dean Zelinsky Guitars in 2012, Agile Partners in 2013, MuseScore in 2017, and Crescendo in 2018. Muse Group was established in 2021 and it seems as if all of the services UG acquired were (or will be) transferred to Muse Group, as well as UG itself. Immediately following its establishment, Muse not only acquired Audacity but also StaffPad.

I say 30 April because that's when Muse published their press release and when Martin Keary (Tantacrul) published a video entitled I'm now in charge of Audacity. Seriously. According to his comment,1 Martin will help with proposing Audacity's roadmap and many of its future features as well as working with the community. This has been his role with MuseScore since he joined that project and he will be continuing it here.

-----BEGIN PERSONAL OPINION-----

Looking at his website, I also suspect he will play a large role in redesigning Audacity's interface. Considering that he was instrumental in designing the best mobile interface I've ever had the absolute pleasure of experiencing, I have high hopes that this is the case.

------END PERSONAL OPINION------

Telemetry implementation

Implementation Basics

A few days after the acquisition, a PR was opened that adds Basic telemetry for the Audacity. This implementation collects "application opened" events and sends them to Yandex to estimate the number of Audacity users. It also collects session start and end events, errors for debugging, file formats used for import and export, OS and Audacity versions, and the use of effects, generators, and analysis tools so they can prioritise future improvements. Sending this data would be optional and the user would be presented with a dialogue the first time they launch the application after installation or after updating to a release that includes the feature. This description was mostly copied directly from the PR description itself.

Frontend Implementation

This is fairly straightforward and a pretty standard UI for prompting users to consent to analytics and crash logging. This section is included because the community has strong opinions regarding the language used and its design, but that will be discussed later. The screenshot below is copied directly from the PR.

[Image: the consent dialogue presented on first launch (consentdialogue.png)]

Backend Implementation

Many of the code reviews include the reviewer's personal opinion so I will summarise the comment, provide the code block in question, and link directly to the comment in a footnote.2

  if (!inputFile.Write (wxString::FromUTF8 (ClientID + "\n")))
    return false;

Lines 199-200 of TelemetryManager.cpp save the user's unique client ID to a file.3 This allows the analytics tool (in this case, Google Analytics) to aggregate data produced by a single user.

  def_vars()

    set( CURL_DIR "${_INTDIR}/libcurl" )
    set( CURL_TAG "curl-7_76_0")

Lines 3-6 of CMakeLists.txt "vendor in" libcurl.4 This is when an application directly includes the sources for a utility rather than making use of utilities provided by the system itself.

  ExternalProject_Add(curl
     PREFIX "${CURL_DIR}"
     INSTALL_DIR "${CURL_DIR}"
     GIT_REPOSITORY https://github.com/curl/curl
     GIT_TAG ${CURL_TAG}
     GIT_SHALLOW Yes
     CMAKE_CACHE_ARGS ${CURL_CMAKE_ARGS}
  )

Lines 29-36 of CMakeLists.txt add curl as a remote dependency.5 This means that the machine building Audacity from its source code has to download curl during that build.

  S.Id (wxID_NO).AddButton (rejectButtonTitle);
  S.Id (wxID_YES).AddButton (acceptButtonTitle)->SetDefault ();

Lines 93-94 of TelemetryDialog.cpp add buttons to the dialogue asking the user whether they consent to data collection.6 SetDefault focuses the button indicating that the user does consent. This means that if the user doesn't really look at the dialogue and presses Spacebar or Enter, or if they do so accidentally by simply bumping the key, they unintentionally consent to data collection. If the user desires, this can later be changed in the settings menu. However, if they weren't aware what they were consenting to or that they did consent, they won't know to go back and opt out.

There are other problems with the code that include simple mistakes, styling that's inconsistent with the rest of the project, unhandled return values resulting in skewed data, use of inappropriate functions, and spelling errors in the comments. I believe these are less important than those above so they won't be discussed.

Community opinions

There were many strong opinions regarding both the frontend and backend implementations of this PR, from the wording of the dialogue and highlighting the consent button to devices running something other than Windows and macOS not being able to send telemetry and thus skewing the data that was collected.

Opinions on the frontend

Really, the only frontend here is the consent dialogue. However, there are many comments about it, the most common of which is probably that the wording is not only too vague7 but also inaccurate.8 The assertion that Google Analytics are not anonymous and any data sent can be trivially de-anonymised (or de-pseudonymised) is repeated many times over. Below are a few links to comments stating such. I searched for the term "anonymous", copied relevant links, and stopped when my scrollbar reached halfway down the page.

The next most pervasive comment is regarding the consent buttons at the bottom of the dialogue where users opt in or out.9 Many individuals call this design a dark pattern. Harry Brignull, a UX specialist focusing on deceptive interface practices, describes dark patterns as tricks used in websites and apps that make you do things that you didn't mean to. The dark pattern in this situation is the opt-in button being highlighted. Many community members assert that users will see the big blue button and click it without actually reading the dialogue's contents. They just want to record their audio and this window is a distraction that prevents them from doing so; it needs to get out of the way and the quickest way to dismiss it is clicking that blue button. Below is a list of some comments criticising this design.

Another issue that was brought up by a couple of individuals was the lack of a privacy policy.10 The consent dialogue links to one, but, at the time of writing, one does not exist at the provided URL. I have archived the state of the page in case that changes in the future.

Opinions on the backend

  if (!inputFile.Write (wxString::FromUTF8 (ClientID + "\n")))
    return false;

The issue many individuals take with this snippet is saving the ClientID. Say an individual has an odd file that causes Audacity to crash any time they try to open it. Say they attempt to open it a hundred times. Without giving the client a unique ID, it could look like there are 100 people having an issue opening a file instead of just the one. However, by virtue of each installation having an entirely unique ID, this telemetry is not anonymous. Anonymity would be sending statistics in such a way that connecting those failed attempts to a single user would be impossible. At best, this implementation is pseudonymous: the client is given a random ID rather than having to sign in with an account.

  def_vars()

    set( CURL_DIR "${_INTDIR}/libcurl" )
    set( CURL_TAG "curl-7_76_0")

Timothe Litt's comment gives a good description of why "vendoring in" libcurl is a bad idea11 and Tyler True's comment gives a good overview of the pros and cons of doing so.12 Many people take issue with this specifically because it's libcurl. Security flaws in it are very common and Audacity's copy would need to be manually kept up to date with every upstream release to ensure none of its vulnerabilities can be leveraged to compromise users. If the Audacity team was going to stay on top of all of the security fixes, they would need to release a new version every week or so.

  ExternalProject_Add(curl
     PREFIX "${CURL_DIR}"
     INSTALL_DIR "${CURL_DIR}"
     GIT_REPOSITORY https://github.com/curl/curl
     GIT_TAG ${CURL_TAG}
     GIT_SHALLOW Yes
     CMAKE_CACHE_ARGS ${CURL_CMAKE_ARGS}
  )

The problem with downloading curl at build-time is that it's simply disallowed for many Linux- and BSD-based operating systems. When a distribution builds an application from source, its build dependencies are often downloaded ahead of time and, as a security measure, the build machine is cut off from the internet to prevent any interference. Because this is disallowed, the build will fail and the application won't be available on those operating systems.

Note, however, that these build machines would have the option to disable telemetry at build-time. This means the machine wouldn't attempt to download curl from GitHub and the build would succeed but, again, telemetry would be disabled for anyone not on Windows or macOS. This defeats the whole purpose of adding telemetry in the first place.

  S.Id (wxID_NO).AddButton (rejectButtonTitle);
  S.Id (wxID_YES).AddButton (acceptButtonTitle)->SetDefault ();

There was a lot of feedback about the decision to highlight the consent button but that was mentioned up in the frontend section; I won't rehash it here.

Broader and particularly well-structured comments

These are simply some comments I feel deserve particular attention.

From SndChaser…

The Audacity team's response


TODO Catchy title about Supernote being "the new paper"   Supernote Writing Productivity Organisation

I like writing things down. I like the feel of the pen (preferably a fountain pen) gliding smoothly over the paper, that nice solid feeling of the tip against the table, seeing the ink dry as it flows from the nib, accidentally swiping my hand through it before it's finished and smearing a bit of ink across the page, then cursing under my breath as I dab it up with a handkerchief or a napkin or something else nearby. I also love that writing things by hand has an impact on memory and improves retention.

The problem

Unfortunately, I don't love keeping up with that paper. Across many different classes, even with dedicated folders for each one, something important inevitably gets lost. Notebooks are also bulky and can take up a lot of space. I tried bullet journalling for about a month earlier this year and, while the process was enjoyable, the maintenance was not. My brain moves faster than my pen (even though I have terrible handwriting) and I inevitably forget letters or even whole words. This is a problem while writing in pen because white-out looks ugly and I dislike wasting whole pages because of a couple mistakes.

The obvious solution here is to get an iPad with an Apple Pen, right? Right??

Wrong because Apple bad.13

The solution

Enter the world of … what are they even called? E-ink notebooks? Paper tablets? E-R/W?14 Do they even have a "device category" yet? I don't know, but they solve my problem in a wonderful way.

As the names suggest, these are devices that can usually open and read e-books (EPUBs, PDFs, etc.), annotate them, and create standalone pages of notes as if they were full notebooks. The most well-known of these devices is likely the reMarkable. They had a hugely successful crowdfunding campaign and produced the reMarkable 1, followed by the reMarkable 2 in 2020. There are a few devices like these by now but we'll look at the reMarkable first.

The reMarkable

This device boasts all of the features I was looking for. It renders digital content, from books and manuals to comics and manga, allows you to mark those documents up as you would if they were physical media, create full notebooks of handwritten text, organise them, search, and, if your handwriting is legible enough (mine certainly is not), perform OCR on your notes and email a transcription to yourself. It even runs Linux and the developers have opened SSH up so you can remote in and tinker with it as much as you like. Because of this, there's a pretty awesome community of people creating third-party tools and integrations that add even further functionality. My favourite is probably rMview, a really fast VNC client for the reMarkable that allows you to view your device's screen on any computer.

After watching all of MyDeepGuide's extensive playlist on the reMarkable, however, I decided to go with a different product.

Enter the Supernote A5X

The Supernote A5X has all of the basic features the reMarkable has: reading documents, writing notes, and organising your content. Its implementation, on the other hand, seems to be much more polished. It also lacks some features from the reMarkable while adding others.

Operating System

While the reMarkable runs Codex,15 a "custom Linux-based OS optimised for low-latency e-paper", the Supernote just runs Android. There are both benefits and detriments to this; on one hand, they're running all of Android, bloated as it is, on a very lightweight tablet. On the other, they don't have to develop and maintain a custom operating system. This allows them to focus on other aspects that are arguably more important, so I don't actually mind that it runs Android.

The only place that Android stands out is in system operations; file transfer uses MTP and, when you swipe down from the top of the device, a small bar appears similar to what was in early Android. This lets you change WiFi networks, sync with the Supernote Cloud, take a screenshot, search, and access the system settings. Nothing else about the device really screams Android to me.

Community

I don't usually browse Reddit but the Supernote community there is fascinating. I haven't looked around enough to know exactly what his relationship is with the company, but one of the members, u/hex2asc, seems to represent Supernote in something of an official capacity. He's incredibly active and usually responds to posts and questions within a day or two.

Before I purchased a Supernote, I wrote a post asking about a couple of things that concerned me: sync targets, open document formats, and cross-note links. I don't really plan to write full documents on the device but having the option to do so would still be nice. The other features are absolutely killer for me as I would like to maintain a Zettelkasten (I wrote about using Vim to do so last year but didn't end up sticking with it) and manage document synchronisation with my own Nextcloud server. The community was quick to respond and confirm that Zettelkasten functionality would be implemented soon™. u/hex2asc responded the day after and said that WebDAV would be supported but not earlier than May (September update: it's still not supported), ODF would likely not be supported, and cross-note links were definitely a possibility. Another community member has been avidly following the subreddit and even put together an unofficial roadmap.

Interfaces

Home & Organisation
TODO Record very short video about home/organisation
Settings
TODO Record very short video about settings
Writing & Annotating

The following images are screenshots of the full page above with the possible UI variations while reading a book. This first one is the default, with the editing bar at the top. It is exactly the same as what's displayed on the blank pages for writing full notes by hand. From left to right are the Table of Contents toggle, the pen tools (fineliner, "fountain" pen,16 and highlighter), the erasers, the lasso select tool, undo/redo, the context menu, the palm rejection toggle, previous page, goto page, next page, and exit.

[Image: the reading UI with the default toolbar at the top of the page (supernote-reader-default.png)]

You can hold your finger on that bar and drag it down to detach it from the top. The default width exposes all the tools without whitespace. You can move it around the screen by dragging the circle with a straight line through the middle on the far left.

[Image: the toolbar detached from the top of the screen at its default width (supernote-reader-medium.png)]

If you tap that circle, the width shrinks and everything except the pens, erasers, and undo/redo buttons is hidden. It can be dragged the same way as in the previous image and tapping that circle will expand the bar again.

[Image: the toolbar collapsed to show only the pens, erasers, and undo/redo buttons (supernote-reader-small.png)]

The last mode is with the bar completely hidden. You achieve this just by dragging it to the right edge of the screen. Once hidden, you can swipe right to left from the edge and it will be revealed flush with the right edge.

[Image: the page with the toolbar completely hidden (supernote-reader-minimal.png)]

Experience

Reading content

I love e-ink. I think it looks beautiful and would love to have an e-ink monitor.17 That said, the Supernote has an especially nice display with 226 PPI (pixels per inch). The image below was taken with my phone's camera so it's not very good. However, if you zoom in a bit, you can see that the curved edges of some letters are slightly pixellated. Viewing with my naked eye at a comfortable distance, it does look better to me than some of my print books, however.

[Image: close-up photo of text rendered on the Supernote's display (supernote-resolution.png)]

At the moment, I am pretty disappointed with Table of Contents detection for ePUBs. A great many of my books seem to use a legacy ToC format that the Supernote sees and tries/fails to read before attempting to read the more up-to-date one. This is easily remedied by editing the ePUB in Calibre, going to Tools → Upgrade Book Internals → Remove the legacy Table of Contents in NCX format. You might need to make a small change to one of the HTML files and revert it before the save button is enabled. After that, just copy it back over to the Supernote and everything should work properly.

Writing notes

I write notes as often as, if not more often than, I read and annotate books. It's the main reason I purchased the device and I love the experience. The Supernote doesn't really feel like paper despite what their marketing materials claim, though it doesn't feel bad either. It's hard to describe, but I would say it's something like writing with a rollerball pen on high-quality paper with a marble counter underneath: incredibly smooth but with a little bit of texture so it doesn't feel like writing on a glass display.

While writing latency18 is noticeable, I really don't have a huge issue with it. I write very quickly but find that the slight latency actually makes writing more enjoyable. It sounds weird and I'm not sure why, but I really like writing on the Supernote; it's wonderfully smooth, pressure-sensitive, the latency makes things interesting, and the Heart of Metal pen feels good in my hand.

Surfacing Content

While organisation is done using a regular filesystem hierarchy, the Supernote does have other ways to search for and surface your notes. As you're writing, you can use the lasso select tool and encircle a word. A little dialogue pops up and gives you a few buttons for things you can do with that selection: copy, move to another page, cut, add it to the Table of Contents, or mark it as a key word. If you select the key word icon, the Supernote does some incredible OCR19 on it and displays a dialogue where you can add it to the note file as a tag. This dialogue allows you to edit the word before adding it just in case the OCR was wonky. Even with my terrible handwriting, I've found that it works very well and I rarely have to make edits.

TODO Ping Isi and Volpeon when finished

TODO Making yourself overly available

Notes

Get rid of information that isn't important
Escalate the info that is
Set clear boundaries for when you are available
Enforce those with automatic DnD rules or use timers
With groups…
Specialisation is good and should be encouraged
All of the above points apply with coworkers as well

TODO Pong Jake when finished

TODO Setting LXC up for local "cloud" development

TODO Stop using Gmail!

Education   @Education

TODO Homeschooling

Music   @Music

Pipe Smoking   @Pipe__Smoking

Dungeons & Dragons   @Dungeons__and__Dragons

Footnotes


2

Note that because I am not a C programmer, these reviews might not be entirely accurate and I wouldn't be able to catch the reviewer's error. I am relying on other community members to catch issues and comment on them; none of the reviews I link to have such comments so I'm assuming they are correct.

8

The links to the comment and the screenshot are the same as in the previous footnote.

13

I dislike Apple's operating system, their hardware, business model, privacy practices, and much of what they stand for as a company. Don't @ me.

14

E-R/W is a play on media commonly being labelled as R/W when you can read from it and write to it.

15

Taken from their support page about the reMarkable 2; search the page for operating system and it should show up.

16

It's not really a fountain pen even though that's what they call it; it's just pressure-sensitive.

17

There does seem to be a group of people interested in just such a thing: Challenges Building an Open-Source E Ink Laptop

18

In this situation, latency refers to how long it takes for "ink" to show up on the "page" after writing something.

19

Optical Character Recognition: the program looks at your handwriting and tries to turn it into text.