So I moved my Emacs to the cloud. Well, partially, at least (more on that later).

Everything started a few weeks ago when I was moving one of my servers from one cloud provider to another. I realized that the new setup was going to be much cheaper, so I started thinking about what else I could do.

One of my long-time ideas has been to eliminate client-side problems as much as possible, meaning things like hardware failures and power failures (I live in the countryside). As I've explained in many of my posts, I like to keep things as static and simple as possible; in my ideal world my computer system would be a big Lisp image that never dies. Always available, always the way I left it. So how is that related to cloud computing? Well, if I could run my Emacs in the cloud and access it thin-client style, there would be no worries about client-side problems. Well, besides network-related problems, obviously.

In the first iteration I got a bit ambitious and tried to implement an almost full thin client: everything, including web browsing, would run in the cloud on top of a full window manager. It became apparent almost immediately that this wasn't going to work. Modern web browsers are gargantuan monsters, and they just don't work over remote connections.

My remote connection technology of choice is X2Go, and I cannot run a remote browser at acceptable speed even on the local network! All other GUI applications, like GIMP or LibreOffice, run just fine, so there's something totally weird going on with the browsers. I guess it has something to do with the way they present the display: not simple GUI elements, but dynamically generated images. Or something like that. I don't know the exact details, but they are monsters.

So I dropped the full thin client idea and concentrated on Emacs only, over X2Go. Why X? Emacs runs fine in the console, but I use a lot of GUI features, like viewing images and PDF documents in Emacs buffers.

It turned out to work just perfectly! And since I'm running only a single application (as opposed to a full window manager), I could make things even simpler by running Emacs as an X2Go published application. Instead of starting a full Emacs every time, I start the Emacs server at boot and then spawn emacsclient frames, so all remote frames appear as normal Emacs frames on my local desktop. The clipboard works in both directions, and in general it behaves and feels like a local Emacs frame.

Setup

Installation was actually very simple on CentOS 8: just a matter of sudo dnf install x2goserver.
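(On a stock CentOS 8 install the x2goserver package comes from EPEL, so the repository may need enabling first with sudo dnf install epel-release.)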

After that, I added emacsclient as a published application in /etc/x2go/applications/emacsclient.desktop:

[Desktop Entry]
Name=Emacsclient
GenericName=Text Editor
Exec=emacsclient -c
Icon=emacs
Type=Application
Terminal=false
Categories=Utility;TextEditor;X-Red-Hat-Base;
StartupWMClass=Emacs
X-Desktop-File-Install-Version=0.23
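
The other half is the Emacs server itself, started at boot. A minimal sketch of a systemd user unit for that (the unit name and paths here are illustrative, and newer Emacs releases ship a similar emacs.service of their own):

~/.config/systemd/user/emacs.service:

[Unit]
Description=Emacs daemon

[Service]
Type=forking
ExecStart=/usr/bin/emacs --daemon
ExecStop=/usr/bin/emacsclient --eval "(kill-emacs)"
Restart=on-failure

[Install]
WantedBy=default.target

Enabled so that it comes up at boot even without an active login session:

systemctl --user enable --now emacs.service
loginctl enable-linger "$USER"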

What next?

So what am I actually doing with my shiny and beautiful cloudified Emacs? Some examples:

  • Email (mu4e). It’s nice to be able to check it any time, anywhere.
  • Instant messaging. As above, plus I never lose anything because it runs 24/7.

In general there’s really nothing I cannot do remotely with Emacs, but I don’t want to rush into things.

Some security concerns

Basically, I need to consider very carefully what data goes into the cloud Emacs. Email and instant messaging are things I cannot fully control anyway, so I’m not too concerned about having them in the cloud image, provided that access to the system is strictly limited.

Access is allowed only from my ISP’s known networks, and only to port 22 (SSH). The SSH daemon is configured to allow public key authentication only. I also had to review access to the cloud provider’s console very carefully.
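
On the sshd side that boils down to a few directives; a minimal sketch of the relevant /etc/ssh/sshd_config lines:

PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no

The source network restriction can live in the firewall, e.g. with a firewalld rich rule (the address range below is a placeholder):

firewall-cmd --permanent --remove-service=ssh
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="198.51.100.0/24" service name="ssh" accept'
firewall-cmd --reload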

Another concern was my PGP keys.

I felt very uneasy about leaving my normal keyring there, so at first I simply dropped the idea of using my private PGP keyring in the cloud Emacs. This escalated further: I ended up refactoring my GnuPG setup on my local machine, which turned out to be so problematic that I basically had to generate new keys. Maybe I’ll write more about that in a later blog post, but shortly put: the master key is now properly configured and backed up, and the subkeys are used with a hardware security token (a YubiKey).
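
For the curious, the rough shape of that setup in standard GnuPG commands (key IDs and filenames are placeholders):

gpg --expert --full-generate-key                             # certify-only master key
gpg --edit-key KEYID                                         # addkey: sign/encrypt/auth subkeys
gpg --export-secret-keys --armor KEYID > master-backup.asc   # offline backup first!
gpg --edit-key KEYID                                         # then: key 1 ... keytocard, per subkey

The keytocard step moves the private subkey onto the YubiKey, leaving only a stub on disk.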

So at first I just created a temporary keyring for protecting things like ~/.authinfo and for making backups.

Later I learned that the gpg-agent can also be forwarded to remote systems, so now I can access my email with my real PGP keyring just as I would locally. This is extremely useful (and exciting!) when you use hardware tokens like the YubiKey. It feels kind of magical to sign email in the cloud Emacs and see the locally attached YubiKey blink for confirmation.
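
The forwarding itself is plain OpenSSH Unix socket forwarding. A minimal sketch, assuming the usual socket locations and a made-up host alias (gpgconf --list-dirs prints the real paths on both ends):

~/.ssh/config on the local machine:

Host cloud-emacs
    RemoteForward /run/user/1000/gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra

/etc/ssh/sshd_config on the server, so stale sockets get replaced on reconnect:

StreamLocalBindUnlink yes

The public keys still have to be imported on the remote side, e.g. gpg --export user@example.org | ssh cloud-emacs gpg --import.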

Which leads me to yet another security concern.

I have programmed my YubiKey to require physical touch confirmation for all OpenPGP transactions. This means I cannot use GnuPG for things like automatic decryption of ~/.authinfo.gpg with offlineimap. Or to put it another way: my current PGP setup requires me to personally initiate and confirm every OpenPGP transaction, and periodic, automated IMAP checks cannot fit into that scheme.
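
That policy is set on the key itself with the ykman tool; something along these lines (the subcommand layout has shifted between ykman versions):

ykman openpgp keys set-touch sig on
ykman openpgp keys set-touch dec on
ykman openpgp keys set-touch aut on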

So I just brutally changed this process to use a clear text ~/.authinfo. At first that sounds like a bad idea, but this is a single-user system, and if someone hacks in, I have a much bigger problem than losing my temporary Gmail or Facebook device credentials. I can revoke those credentials at any time, and probably should do so periodically anyway. Besides, my Postfix setup exposes the Gmail credentials anyway: a malicious actor could just run strings /etc/postfix/sasl_password.db.
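
For reference, the clear-text version is just netrc-style lines, kept readable only by me (the credentials below are placeholders):

chmod 600 ~/.authinfo

machine imap.gmail.com login someone@gmail.com password app-specific-password port 993
machine smtp.gmail.com login someone@gmail.com password app-specific-password port 587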

To put it short: if someone gets access to the system, the game is over. So I just need to make sure that access is as limited as possible and that there’s nothing critical on it.