- Meta
- Technology
- Education
- Music
- Pipe Smoking
- Dungeons & Dragons
- Footnotes
Meta @Meta
Technology @Technology
TODO Audacity and the telemetry pull request
Five days ago at the time of writing, Dmitry Vedenko opened a Pull Request (PR) in Audacity's GitHub repository entitled Basic telemetry for the Audacity. About two days later, all hell broke loose. That PR now has over 3.3 thousand downvotes and more than one thousand comments from nearly 400 individuals. I started reading the posts shortly after they began and kept up with them over the following days, reading every single new post. I recognise that few people are going to feel like wading through over 1k comments so this is my attempt to provide a summary of the PR itself using the community's code reviews along with a summary of the various opinions conveyed in the comments.
When I reference comments, I'll provide a footnote that includes a link to the comment and a link to a screenshot just in case it's removed or edited in the future.
Audacity's acquisition
I haven't been able to find much information in this area so forgive me if I'm scant on details.
On 30 April, a company called Muse Group acquired Audacity. According to their website, Muse is the parent company behind many musical applications and tools. It was founded by Eugeny Naidenov just days before it acquired Audacity. Before all of this, Eugeny Naidenov founded Ultimate Guitar (UG) in 1998. The service grew rather quickly and now has over 300 million users. UG acquired Dean Zelinsky Guitars in 2012, Agile Partners in 2013, MuseScore in 2017, and Crescendo in 2018. Muse Group was established in 2021 and it seems as if all of the services UG acquired were (or will be) transferred to Muse Group, as well as UG itself. Immediately following its establishment, Muse not only acquired Audacity but also StaffPad.
I say 30 April because that's when Muse published their press release and when Martin Keary (Tantacrul) published a video entitled I’m now in charge of Audacity. Seriously. According to his comment,1 Martin will help with proposing Audacity's roadmap and many of its future features as well as working with the community. This has been his role with MuseScore since he joined that project and he will be continuing it here.
-----BEGIN PERSONAL OPINION-----
Looking at his website, I also suspect he will play a large role in redesigning Audacity's interface. Considering that he was instrumental in designing the best mobile interface I've ever had the absolute pleasure of experiencing, I have high hopes that this is the case.
------END PERSONAL OPINION------
Telemetry implementation
Implementation Basics
A few days after the acquisition, a PR was opened that adds Basic telemetry for the Audacity. This implementation collects "application opened" events and sends those to Yandex to estimate the number of Audacity users. It also collects session start and end events, errors for debugging, the file formats used for import and export, OS and Audacity versions, and the use of effects, generators, and analysis tools so the team can prioritise future improvements. Sending this data would be optional and the user would be presented with a dialogue the first time they launch the application after installation or after updating to a release that includes it. This description was mostly copied directly from the PR description itself.
Frontend Implementation
This is fairly straightforward and a pretty standard UI for prompting users to consent to analytics and crash logging. This section is included because the community has strong opinions regarding the language used and its design, but that will be discussed later. The screenshot below is copied directly from the PR.
Backend Implementation
Many of the code reviews include the reviewer's personal opinion so I will summarise the comment, provide the code block in question, and link directly to the comment in a footnote.2
if (!inputFile.Write (wxString::FromUTF8 (ClientID + "\n")))
    return false;
Lines 199-200 of TelemetryManager.cpp save the user's unique client ID to a file.3 This allows the analytics tool (in this case, Google Analytics) to aggregate data produced by a single user.
def_vars()
set( CURL_DIR "${_INTDIR}/libcurl" )
set( CURL_TAG "curl-7_76_0")
Lines 3-6 of CMakeLists.txt "vendor in" libcurl.4 This is when an application directly includes the sources for a utility rather than making use of utilities provided by the system itself.
ExternalProject_Add(curl
PREFIX "${CURL_DIR}"
INSTALL_DIR "${CURL_DIR}"
GIT_REPOSITORY https://github.com/curl/curl
GIT_TAG ${CURL_TAG}
GIT_SHALLOW Yes
CMAKE_CACHE_ARGS ${CURL_CMAKE_ARGS}
)
Lines 29-36 of CMakeLists.txt add curl as a remote dependency.5 This means that the machine building Audacity from its source code has to download curl during that build.
S.Id (wxID_NO).AddButton (rejectButtonTitle);
S.Id (wxID_YES).AddButton (acceptButtonTitle)->SetDefault ();
Lines 93-94 of TelemetryDialog.cpp add buttons to the dialogue asking
the user whether they consent to data collection.6 SetDefault
focuses the button indicating that the user does consent. This means
that if the user doesn't really look at the dialogue and presses
Spacebar or Enter, or if they do so accidentally by simply bumping the
key, they unintentionally consent to data collection. If the user
desires, this can later be changed in the settings menu. However, if
they weren't aware what they were consenting to or that they did
consent, they won't know to go back and opt out.
There are other problems with the code that include simple mistakes, styling that's inconsistent with the rest of the project, unhandled return values resulting in skewed data, use of inappropriate functions, and spelling errors in the comments. I believe these are less important than those above so they won't be discussed.
Community opinions
There were many strong opinions regarding both the frontend and backend implementations of this PR, ranging from the wording of the dialogue and the highlighted consent button to the fact that devices running something other than Windows or macOS wouldn't be able to send telemetry at all, skewing the data that was collected.
Opinions on the frontend
Really, the only frontend here is the consent dialogue. However, there are many comments about it, the most common of which is probably that the wording is not only too vague7 but also inaccurate8. The assertion that Google Analytics is not anonymous and that any data sent can be trivially de-anonymised (or de-pseudonymised) is repeated many times over. Below are a few links to comments stating as much. I searched for the term "anonymous", copied relevant links, and stopped when my scrollbar reached halfway down the page.
- r628156527
- 833969780
- 833969933
- r627995927
- 834358022
- 834377549
- 834382007
- 834385463
- 834405825
- 834531779
- 834546874
- 834638000
The next most pervasive comment is regarding the consent buttons at the bottom of the dialogue where users opt in or out.9 Many individuals call this design a dark pattern. Harry Brignull, a UX specialist focusing on deceptive interface practices, describes dark patterns as tricks used in websites and apps that make you do things you didn't mean to. The dark pattern in this situation is the opt-in button being highlighted. Many community members assert that users will see the big blue button and click it without actually reading the dialogue's contents. They just want to record their audio and this window is a distraction that prevents them from doing so; it needs to get out of the way and the quickest way to dismiss it is clicking that blue button. Below is a list of some comments criticising this design.
Another issue that was brought up by a couple of individuals was the lack of a privacy policy.10 The consent dialogue links to one, but, at the time of writing, one does not exist at the provided URL. I have archived the state of the page in case that changes in the future.
Opinions on the backend
if (!inputFile.Write (wxString::FromUTF8 (ClientID + "\n")))
    return false;
The issue many individuals take with this snippet is saving the ClientID. Say an individual has an odd file that causes Audacity to crash any time they try to open it, and say they attempt to open it a hundred times. Without giving the client a unique ID, it could look like there are 100 people having an issue opening a file instead of just the one. However, by virtue of each installation having an entirely unique ID, this telemetry is not anonymous. Anonymity would mean sending statistics in such a way that connecting those failed attempts to a single user would be impossible. At best, this implementation is pseudonymous: the client is given a random ID rather than one tied to an account the user signs in with.
def_vars()
set( CURL_DIR "${_INTDIR}/libcurl" )
set( CURL_TAG "curl-7_76_0")
Timothe Litt's comment gives a good description of why "vendoring in" libcurl is a bad idea11 and Tyler True's comment gives a good overview of the pros and cons of doing so.12 Many people take issue with this specifically because it's libcurl. Security flaws are found in libcurl fairly regularly, and Audacity's copy would need to be manually kept up to date with every upstream release to ensure none of its vulnerabilities could be leveraged to compromise users. If the Audacity team were to stay on top of all of the security fixes, they would need to release a new version every week or so.
ExternalProject_Add(curl
PREFIX "${CURL_DIR}"
INSTALL_DIR "${CURL_DIR}"
GIT_REPOSITORY https://github.com/curl/curl
GIT_TAG ${CURL_TAG}
GIT_SHALLOW Yes
CMAKE_CACHE_ARGS ${CURL_CMAKE_ARGS}
)
The problem with downloading curl at build time is that it's simply disallowed for many Linux- and BSD-based operating systems. When a distribution builds an application from source, its build dependencies are often downloaded ahead of time and, as a security measure, the build machine is cut off from the internet to prevent any interference. Because the download is disallowed, the build will fail and the application won't be available on those operating systems.
Note, however, that these build machines would have the option to disable telemetry at build-time. This means the machine wouldn't attempt to download curl from GitHub and the build would succeed but, again, telemetry would be disabled for anyone not on Windows or macOS. This defeats the whole purpose of adding telemetry in the first place.
S.Id (wxID_NO).AddButton (rejectButtonTitle);
S.Id (wxID_YES).AddButton (acceptButtonTitle)->SetDefault ();
There was a lot of feedback about the decision to highlight the consent button, but that was covered above in the frontend section; I won't rehash it here.
Broader and particularly well-structured comments
The Audacity team's response
My opinions
Can't decide whether to include this section or not. If you make it all the way down here, let me know what you think.
TODO Catchy title about Supernote being "the new paper" Supernote Writing Productivity Organisation
I like writing things down. I like the feel of the pen (preferably a fountain pen) gliding smoothly over the paper, that nice solid feeling of the tip against the table, seeing the ink dry as it flows from the nib, accidentally swiping my hand through it before it's finished and smearing a bit of ink across the page, cursing under my breath as I dab it up with a handkerchief or a napkin or something else nearby. I also love that writing things by hand has an impact on memory and improves retention.
The problem
Unfortunately, I don't love keeping up with that paper. Across many different classes, even with dedicated folders for each one, something important inevitably gets lost. Notebooks are also bulky and can take up a lot of space. I tried bullet journalling for about a month earlier this year and, while the process was enjoyable, the maintenance was not. My brain moves faster than my pen (even though I have terrible handwriting) and I inevitably forget letters or even whole words. This is a problem while writing in pen because white-out looks ugly and I dislike wasting whole pages because of a couple of mistakes.
The obvious solution here is to get an iPad with an Apple Pen, right? Right??
Wrong because Apple bad13.
The solution
Enter the world of … what are they even called? E-ink notebooks? Paper tablets? E-R/W14? Do they even have a "device category" yet? I don't know but they solve my problem in a wonderful way.
As the names suggest, these are devices that can usually open and read e-books (EPUBs, PDFs, etc.), annotate them, and create standalone pages of notes as if they were full notebooks. The most well-known of these devices is likely the reMarkable. They had a hugely successful crowdfunding campaign and produced the reMarkable 1, followed by the reMarkable 2 in 2020. There are a few others by now but we'll look at the reMarkable first.
The reMarkable
This device boasts all of the features I was looking for. It renders digital content, from books and manuals to comics and light novels, lets you mark those documents up as you would physical media, lets you create full notebooks of handwritten text, organise them, and search them, and, if your handwriting is legible enough (mine certainly is not), it can perform OCR on your notes and email a transcription to yourself. It even runs Linux and the developers have opened SSH up so you can remote in and tinker with it as much as you like. Because of this, there's a pretty awesome community of people creating third-party tools and integrations that add even further functionality. My favourite is probably rMview, a really fast VNC client for the reMarkable that allows you to view your device's screen on any computer.
After watching all of MyDeepGuide's extensive playlist on the reMarkable, however, I decided to go with a different product.
Enter the Supernote A5X
The Supernote A5X has all of the basic features the reMarkable has: reading documents, writing notes, and organising your content. Its implementation, on the other hand, seems to be much more polished. It also lacks some features from the reMarkable while adding others.
Operating System
While the reMarkable runs Codex15, a "custom Linux-based OS optimised for low-latency e-paper", the Supernote just runs Android. There are both benefits and detriments to this; on one hand, they're running all of Android, bloated as it is, on a very lightweight tablet. On the other, they don't have to develop and maintain a custom operating system. This allows them to focus on other aspects that are arguably more important so I don't actually mind that it runs Android.
The only place that Android stands out is in system operations; file transfer uses MTP and, when you swipe down from the top of the device, a small bar appears similar to what was in early Android. This lets you change WiFi networks, sync with the companion app on your LAN or with the remote servers, take a screenshot, search, and access the system settings. Nothing else about the device really makes me think of Android.
Community
I don't usually browse Reddit but the Supernote community there is fascinating. I haven't looked around enough to know exactly what his relationship is with the company, but one of the members, u/hex2asc, seems to represent Supernote in something of an official capacity. He's incredibly active and usually responds to posts and questions within a couple of days.
Before I purchased one, I wrote a post asking about a couple of things that concerned me: sync targets, open document formats, and cross-note links. I don't ever plan to write full documents with a keyboard on the Supernote but it would still be nice. The other features would be absolutely killer for me as I would like to maintain a Zettelkasten (I wrote about using Vim to do so last year but didn't end up sticking with it) and manage document synchronisation with my own Nextcloud server. The community was quick to respond and confirm that Zettelkasten functionality would be implemented soon™. u/hex2asc responded the day after and said that WebDAV would be supported but not earlier than May, ODF would likely not be supported, and cross-note links were definitely a possibility. Another community member has been avidly following the subreddit and even put together an unofficial roadmap.
Interfaces
Home & Organisation
TODO Record very short videos
Settings
TODO Record very short videos
Writing & Annotating
The following images are screenshots of the full page above with the possible UI variations while reading a book. This first one is the default, with the editing bar at the top. It is exactly the same as what's displayed on the blank pages for writing full notes by hand. From left to right are the Table of Contents toggle, the pen tools (fineliner, "fountain" pen16, and highlighter), the erasers, the lasso select tool, undo/redo, the context menu, the palm rejection toggle, previous page, goto page, next page, and exit.
You can hold your finger on that bar and drag it down to detach it from the top. The default width exposes all the tools without whitespace. You can move it around the screen by dragging the circle with a straight line through the middle on the far left.
If you tap that circle, the width shrinks and everything except the pens, erasers, and undo/redo buttons is hidden. It can be dragged the same way as in the previous image and tapping that circle will expand the bar again.
The last mode is with the bar completely hidden. You achieve this just by dragging it to the right edge of the screen. Once hidden, you can swipe right to left from the edge and it will be revealed flush with the right edge.
Experience
Reading content
I love e-ink. I think it looks beautiful and would love to have an e-ink monitor17. That said, the Supernote has an especially nice display with 226 PPI (pixels per inch). The image below was taken with my phone's camera so it's not very good. However, if you zoom in a bit, you can see that the curved edges of some letters are slightly pixellated. Viewing with my naked eye at a comfortable distance, it does look better to me than some of my print books.
At the moment, I am pretty disappointed with Table of Contents detection for ePUBs. A great many of my books seem to use a legacy ToC format that the Supernote sees and tries/fails to read before attempting to read the more up-to-date one. This is easily remedied by editing the ePUB in Calibre, going to Tools → Upgrade Book Internals → Remove the legacy Table of Contents in NCX format. You might need to make a small change to one of the HTML files and revert it before the save button is enabled. After that, just copy it back over to the Supernote and everything should work properly.
Writing notes
I write notes as often as, if not more often than, I read and annotate books. It's the main reason I purchased the device and I love the experience. The Supernote doesn't really feel like paper despite what their marketing materials claim, though it doesn't feel bad either. It's hard to describe but I would say it's something like writing with a rollerball pen on high-quality paper with a marble counter underneath: incredibly smooth but with a little bit of texture so it doesn't feel like writing on a glass display.
While writing latency18 is noticeable, I really don't have a huge issue with it. I write very quickly but find that the slight latency actually makes writing more enjoyable. It sounds weird and I'm not sure why, but I really like writing on the Supernote; it's wonderfully smooth, pressure-sensitive, the latency makes things interesting, and the Heart of Metal pen feels good in my hand.
Surfacing Content
While organisation is done using a regular filesystem hierarchy, the Supernote does have other ways to search for and surface your notes. As you're writing, you can use the lasso select tool and encircle a word. A little dialogue pops up and gives you a few buttons for things you can do with that selection: copy, move to another page, cut, add it to the Table of Contents, or mark it as a key word. If you select the key word icon, the Supernote does some incredible OCR19 on it and displays a dialogue where you can add it to the note file as a tag. This dialogue allows you to edit the word before adding it just in case the OCR was wonky. Even with my terrible handwriting, I've found that it works very well and I rarely have to make edits.
TODO Pong Isi and Volpeon when finished
TODO Migrating repositories between git hosts
TODO A perfect email setup (for me)
I've never been satisfied with any of the email clients most people use. I've tried Thunderbird, Evolution, Mailspring, Mail.app, Roundcube, SOGo, Geary, and many more. None of them handle multiple accounts particularly well because every email stays bound to the account it belongs to. Sure, you can make a new folder somewhere called TODO and move all of your actionable emails to that folder but, when you go to move actionable emails from another account into that folder, you'll likely find that the client simply doesn't let you. If it does, your replies will likely be sent from the wrong account. This is a limitation of the IMAP protocol; everything is managed locally but changes are pushed to the remote server, and mixing things the way I want leads to broken setups.
Before I go any further, these are a few characteristics of my ideal email tool.
- Support for multiple accounts (obviously)
- Native desktop application (not Electron)
- Has stellar keyboard shortcuts
- Doesn't require internet connectivity (other than downloading and sending of course)
- Organisation can be done with tags
Why tags?
Because they're better. Hierarchies are useful for prose and code but not for files, emails, notes, or anything where an item may fit within multiple categories. Imagine you get an email from your Computer Science professor that includes test dates, homework, and information about another assignment. In that same email, he asks every student to reply with something they learned from the previous class as a form of attendance. In a hierarchy, the best place for this might just be a TODO folder even though it would also fit under School, CS, Dates, To read, and Homework. Maybe you have a few minutes and want to clear out some emails that don't require any interaction. In a tag-based workflow, this would be a good time to open To read, get that email out of the way, and remove the To read tag. It would still show up under the other tags so you can find it later and take the time to fully answer the professor's question, add those dates to your calendar, and add the homework assignments to your TODO list. Hierarchies can be quite cumbersome to work with, especially when one folder ends up getting all the data. Tags ensure that you only see what you want when you want it. Tags are more efficient and they will remain my organisation system of choice.
The tools
In short, the tools we will be using are…
- OfflineIMAP to download our emails
- notmuch, the primary way emails will be organised
- afew to apply initial notmuch tags based on subject, sender, recipient, etc.
- NeoMutt to interact with those emails, reply, compose, add/remove tags, etc.
- msmtp for relaying our replies and compositions to our mail provider
Yes, it's a lot. Yes, it's time-consuming to set up. Yes, it's worth it (in my opinion).
OfflineIMAP
As I said above, IMAP is limiting; we need to use some other method of downloading our emails. There's an awesome piece of software called OfflineIMAP which is built for exactly this purpose. Its configuration can be rather daunting if you have as many accounts as I do (17) but it's not terrible.
General
[general]
metadata = ~/.offlineimap
accounts = use_exa
maxsyncaccounts = 1
ui = basic
ignore-readonly = no
pythonfile = ~/.offlineimap.py
socktimeout = 60
fsync = true
The first big option is accounts; it tells OfflineIMAP what to actually sync. What to put there will be defined further down but use_exa is just filler text. The example account is user@example.com and I shortened that to use_exa. maxsyncaccounts is also fairly important as it tells OfflineIMAP to only pull emails from one account at a time. This is certainly slower than multiple but it's also safer because we'll be running this in the background and don't want many OfflineIMAP processes executing concurrently and interfering with each other. pythonfile will be discussed later.
Account
[Account use_exa]
localrepository = use_exa-local
remoterepository = use_exa-remote
quick = 10
utf8foldernames = yes
postsynchook = notmuch new
In the Account block, localrepository and remoterepository tell OfflineIMAP where to look for your emails. use_exa-local is an arbitrary naming scheme I use to differentiate between the various local and remote accounts. It can easily be swapped with something else.
Repository
[Repository use_exa-local]
type = Maildir
localfolders = ~/mail/use_exa
sync_deletes = yes
[Repository use_exa-remote]
type = IMAP
remotehost = imap.example.com
starttls = yes
ssl = no
remoteport = 143
remoteuser = user@example.com
remotepasseval = get_pass("use_exa")
auth_mechanisms = GSSAPI, XOAUTH2, CRAM-MD5, PLAIN, LOGIN
maxconnections = 1
createfolders = True
sync_deletes = yes
The repository sections describe how the emails are stored or retrieved. In the local block, you'll notice that the type is Maildir. In this format, each email is given a unique filename and stored in a hierarchy of folders within your account. This is often how your emails are stored on your provider's mail server.
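As an illustration, after a sync the local copy of a single account might contain directories like these; the folder names are whatever your provider uses, while cur, new, and tmp are the standard Maildir subdirectories:
~/mail/use_exa/INBOX/cur/
~/mail/use_exa/INBOX/new/
~/mail/use_exa/INBOX/tmp/
~/mail/use_exa/Sent/cur/
~/mail/use_exa/Sent/new/
~/mail/use_exa/Sent/tmp/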
pythonfile is used here to authenticate with the remote server. This can be complicated and depends entirely on how you manage your passwords. I use KeePassXC and love it. When I set OfflineIMAP up, however, KeePassXC didn't yet have libsecret compatibility. This would have made setup significantly easier but, as it already just works™, I don't really see a reason to change it.
That new feature allows libsecret-based applications to query KeePassXC for your passwords or store them there on your behalf. CLI/TUI applications that need a secure mechanism for background authentication can use secret-tool lookup Title "TITLE_OF_PASSWORD" as the password command. See the pull request for more details. Because this wasn't a feature when I first set it up, I put my passwords in plaintext files and encrypted them with the GPG key stored on my YubiKey. As long as my key is plugged in, OfflineIMAP can authenticate and download all my emails just fine. The process for using a GPG key not stored on a hardware token is pretty much the same and I'll talk about that process instead.
These are the contents of my ~/.offlineimap.py.
#! /usr/bin/env python3
from os.path import expanduser
from subprocess import check_output
def get_pass(account):
    # Decrypt ~/.mail_pass/<account>.gpg and return the password without its trailing newline
    return check_output(["gpg", "-dq", expanduser(f"~/.mail_pass/{account}.gpg")]).decode().strip("\n")
This runs gpg -dq ~/.mail_pass/use_exa.gpg then strips the newline character before returning the password to OfflineIMAP. -d tells GPG that you're passing it a file you want decrypted and -q tells it not to give any output other than the file's contents. For a setup that works with this Python script, put your passwords in plaintext files with the account name as the file name (e.g. use_exa). You'll then encrypt each one with gpg -er <YOUR_KEY_ID> use_exa. Running gpg -dq use_exa.gpg should display your password. Repeat for every account and store the resulting files in ~/.mail_pass/.
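Putting those commands together, the per-account setup looks roughly like this; hunter2 and <YOUR_KEY_ID> are placeholders, and writing the password into the file with an editor instead of echo keeps it out of your shell history:
mkdir -p ~/.mail_pass
echo -n 'hunter2' > use_exa     # plaintext password, placeholder value
gpg -er <YOUR_KEY_ID> use_exa   # produces use_exa.gpg
gpg -dq use_exa.gpg             # should print the password back
mv use_exa.gpg ~/.mail_pass/
shred -u use_exa                # get rid of the unencrypted copy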
The other option, sync_deletes, is whether or not to delete remote emails that have been deleted locally. I enabled that because I want to have easy control over how much remote storage is used.
Here's the next block again so you don't have to scroll up:
[Repository use_exa-remote]
type = IMAP
remotehost = imap.example.com
starttls = yes
ssl = no
remoteport = 143
remoteuser = user@example.com
remotepasseval = get_pass("use_exa")
auth_mechanisms = GSSAPI, XOAUTH2, CRAM-MD5, PLAIN, LOGIN
maxconnections = 1
createfolders = True
sync_deletes = yes
This one's pretty self-explanatory. type, remotehost, starttls, ssl, and remoteport should all be somewhere in your provider's documentation. remoteuser is your email address and remotepasseval is the function that will return your password and allow OfflineIMAP to authenticate. You'll want to enter the name of your password file without the .gpg extension; the script takes care of adding that. Leave auth_mechanisms alone, and the same goes for maxconnections unless you know your provider won't rate limit you for opening multiple connections. sync_deletes is the same as in the previous block.
Copy those three blocks for as many accounts as you want emails downloaded from. I have 510 lines just for Account and Repository blocks due to the number of addresses I'm keeping track of.
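With the configuration in place, a one-off sync is a good way to make sure everything works; -o runs a single sync instead of looping and -a limits it to the named account:
offlineimap -o -a use_exa
Once that succeeds, OfflineIMAP can be left to run in the background however you prefer (cron, a systemd timer, etc.), and the postsynchook from the Account block will kick off notmuch new after every sync.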
notmuch
notmuch is a fast, global-search, and tag-based email system. This is what does all of our organisation as well as what provides the "virtual" mailboxes NeoMutt will display later on. Configuration is incredibly simple. This file goes in ~/.notmuch-config.
[database]
path=/home/user/mail/
[user]
name=Amolith
primary_email=user@example.com
[new]
tags=unread;new;
ignore=Trash;
[search]
exclude_tags=deleted;spam;
[maildir]
synchronize_flags=true
The first section is the path to where all of your archives are, the [user] section is where you list all of your accounts, [new] adds tags to mail notmuch hasn't indexed yet and skips indexing the Trash folder, and [search] ignores mail tagged with deleted or spam. The final section tells notmuch to add maildir flags which correspond with notmuch tags. These flags will be synced to the remote server the next time OfflineIMAP runs and things will be somewhat organised in your webmail interface.
After creating the configuration file, run notmuch new and wait for all of your mail to be indexed. This could take anywhere from a few seconds to the better part of an hour, depending on how many emails you have. After it's finished, you'll be able to run queries and see matching emails:
$ notmuch search from:user@example.com
thread:0000000000002e9d December 28 [1/1] Example User; Random subject that means nothing
This is not terribly useful in and of itself because you can't read it or reply to it or anything. That's where the Mail User Agent (MUA) comes in.
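That said, tags can also be added or removed straight from the command line, which comes in handy for scripting; a quick sketch using the thread ID from the search above and made-up tag names:
$ notmuch tag +todo -unread -- thread:0000000000002e9d
$ notmuch search tag:todo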
afew
afew is an initial tagging script for notmuch. After calling notmuch new, afew will add tags based on headers such as From:, To:, Subject:, etc. as well as handle killed threads and spam. The official quickstart guide is probably the best resource on getting started but I'll include a few tips here as well.
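For a rough idea of what that configuration looks like, here's a minimal sketch of ~/.config/afew/config; the built-in filters are real, but the custom filter, its query, and the tag names are made up for this example:
[SpamFilter]
[KilledThreadsFilter]
[ListMailsFilter]
[ArchiveSentMailsFilter]

# Hypothetical custom filter: tag anything from the professor
[Filter.1]
query = from:professor@example.com
tags = +school;+to-read
message = professor

# InboxFilter goes last so whatever remains lands in the inbox
[InboxFilter]
Running afew --tag --new after notmuch new (or tacking it onto the postsynchook) applies these rules to anything still tagged new.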
NeoMutt
msmtp
msmtp is what's known as a Mail Transfer Agent (MTA). You throw it an email and it will relay that to your mail provider's SMTP server so it can have the proper headers attached for authentication, be sent from the proper domain, etc. All the necessary security measures can be applied that prevent your email from going directly to spam or from being rejected outright.
msmtp's configuration is also fairly simple if a bit long, just like OfflineIMAP's.
# Set default values for all following accounts.
defaults
# Use the mail submission port 587 instead of the SMTP port 25.
port 587
# Always use TLS.
tls on
This section just sets the defaults. It uses port 587 (STARTTLS) for all SMTP servers unless otherwise specified and enables TLS.
account user@example.com
host smtp.example.com
from user@example.com
auth on
user user@example.com
passwordeval secret-tool lookup Title "user@example.com"
This section is where things get tedious. When passing an email to msmtp, it looks at the From: header and searches for a block with a matching from line. If it finds one, it will use those configuration options to relay the email. host is simply the SMTP server of your mail provider; sometimes this is mail.example.com, smtp.example.com, etc. I've already explained from, auth simply says that a username and password will have to be provided, user is that username, and passwordeval is a method to obtain the password.
When I got to configuring msmtp, KeePassXC had just released their libsecret integration and I wanted to try it. secret-tool is a command line tool used to store and retrieve passwords from whatever keyring you're using. I think KDE has kwallet and GNOME has gnome-keyring if you already have those set up and want to use them; the process should be quite similar regardless.
As mentioned above, secret-tool stores and retrieves passwords. For retrieval, it expects the command to look like this:
secret-tool lookup {attribute} {value} ...
I don't know what kwallet and gnome-keyring's attributes are but this can be used with KeePassXC by specifying the Title attribute. If the password to your email account is stored in KeePassXC with the address as the entry title, you can retrieve it by simply running…
secret-tool lookup Title "user@example.com"
If you have a different naming system, you'll have to experiment and try different things; I don't know what KeePassXC's other attributes are so I can't give other examples.
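For completeness, secret-tool can also store a secret, prompting for it on stdin; whether that entry ends up in KeePassXC or in another keyring depends entirely on which Secret Service provider is running, so treat this as a sketch rather than a guaranteed KeePassXC workflow:
secret-tool store --label="user@example.com" Title "user@example.com"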
You could also just use the same method I described in the Repository section! It will work perfectly fine here as well.
passwordeval gpg -dq ~/.mail_pass/use_exa.gpg
Now that the whole block is assembled, copy/paste/edit for as many accounts as you want to send email from.
Summary
TODO Pong fluffy when finished
TODO Making yourself overly available
Notes
Get rid of information that isn't important
Escalate the info that is
Set clear boundaries for when you are available
Enforce those with automatic DnD rules or use timers
With groups…
Specialisation is good and should be encouraged
All of the above points apply with coworkers as well
TODO Pong Jake when finished
TODO Setting LXC up for local "cloud" development
Education @Education
TODO Homeschooling
Music @Music
Pipe Smoking @Pipe__Smoking
Dungeons & Dragons @Dungeons__and__Dragons
Footnotes
Note that because I am not a C programmer, these reviews might not be entirely accurate and I wouldn't be able to catch the reviewer's error. I am relying on other community members to catch issues and comment on them; none of the reviews I link to have such comments so I'm assuming they are correct.
Link to the comment and the screenshot is the same as previous
I dislike Apple's operating system, their hardware, business model, privacy practices, and much of what they stand for as a company. Don't @ me.
E-R/W is a play on media commonly being labelled as R/W when you can read from it and write to it.
Taken from their support page about the reMarkable 2; search the page for operating system and it should show up.
It's not really a fountain pen even though that's what they call it; it's just pressure-sensitive.
There does seem to be a group of people interested in just such a thing: Challenges Building an Open-Source E Ink Laptop
In this situation, latency refers to how long it takes for "ink" to show up on the "page" after writing something.
Optical Character Recognition: the program looks at your handwriting and tries to turn it into text.