Android 5.0 thoughts

Having upgraded my tablet and phone to Android 5.0 — they’re both Nexus devices, so it was a fairly simple process to get the upgrade without waiting — here are my thoughts.

Many of the improvements in the new version of Android are related to security. I don’t know whether this is driven by Apple’s (often erroneously) perceived leadership in this area, by the ongoing revelations about surveillance and law enforcement misbehavior, or by some combination of the two, but it’s welcome all the same.

Screen pinning is a new security feature which lets you lock the phone to a single app screen, so you can hand it to someone to look at without their being able to go look at your contacts, calendar, and other info. To unpin the screen, you can require your security code. This is good — my insurance company offers phone-based insurance info in an app, but before screen pinning there was no way I was going to hand my phone to a police officer at a traffic stop.

Smart Lock is another useful security feature. It lets you set up the phone to stay unlocked when within range of a particular NFC or Bluetooth device. In my case, I can set it up with my Fitbit — suddenly I don’t have to deal with PIN codes to use my phone, yet if I misplace the phone it’ll be securely locked.

There’s also a new guest mode, for when someone wants to borrow your phone. It sets up a clean restricted environment where they can check their e-mail or send a message, then either you or they can wipe it clean.

The lock screen now shows notifications. If you’re concerned about the security implications, you can choose to have it redact anything considered personal, such as the content of text messages.

Other than that, it’s mostly a cosmetic upgrade. A lot of the improvements will be rolled out to phones that don’t get Android 5.0, because they’re improvements to Google’s apps.

I love the new calendar. In portrait mode, you get an agenda timeline by default; flip the phone on its side and it switches to week grid mode. You can also switch views with the action bar, of course, but the designers have clearly thought carefully about which views make the most sense for which screens.

The new contacts app makes better use of screen space. It also brings back the “join” option. I know a lot of people who use multiple e-mail addresses, and since Google and Twitter tend to helpfully add contacts to your address book, in some cases I had three entries for the same person.

I’m less convinced by the new Gmail. I’m a bit of a traditionalist when it comes to mail, and I don’t believe in using my inbox as a to-do list, so it seems like Google’s general direction with e-mail isn’t what I’m looking for.

As for overall graphical style, I definitely like the new look. As others have said, Google is getting good at design much faster than Apple seems to be able to learn how to provide useful cloud services.

Playing Domino without a POODLE

If you run any kind of Internet server, you’ve hopefully heard about the POODLE vulnerability in SSL 3.

If you run a Domino server, you need to worry about this, because Firefox plans to turn off SSLv3 support in its next release in a couple of weeks and to remove the code entirely in the release after that; Chrome will follow soon after. SSLv3 is the only secure transport Domino supports out of the box, so that could leave you with no HTTPS support at all.

Don’t panic, though. IBM’s software developers have beavered away and produced interim fixes adding TLS 1.0 support to Domino. Another option is to run Domino behind IBM HTTP Server.

I decided against those approaches, for the following reasons:

Regarding IBM HTTP Server: it’s only officially supported on Windows, and my servers are Linux. You can obtain a copy of IBM HTTP Server as part of WebSphere, but that’s a lot of software to download and install just for a web server, particularly when it might not work and isn’t officially supported.

As far as the interim patches go, while TLS 1.0 support solves the immediate problem, it’s far from ideal: the current version of TLS is TLS 1.2, and TLS 1.0 is vulnerable to the BEAST attack, which leaves it not much more secure than SSLv3.

So instead, I decided to put Apache in front of Domino, acting as a reverse proxy — much as you’d usually put a web server in front of WebSphere or any other J2EE server, or a Ruby on Rails server. Apache handles all the secure connection details, via TLS 1.2, and then forwards the request via plain HTTP to localhost, to the Domino server which is running on a different port. Some “magic” HTTP headers are used to tell Domino where the original HTTPS request came from, so from the point of view of a Domino application, everything looks exactly as if the request had gone straight to Domino.

The process of setting all this up wasn’t too hard, but it required assembling information from a variety of sources, and experimenting inside a VM until I was sure I could do it without significant downtime. So, I thought I’d write up my final process.

If you prefer, Jesse Gallagher has had success using nginx as a reverse proxy for Domino.

You might also want to look at Darren Duke’s prebuilt Ubuntu VM, set up to proxy Domino.

Step 1 is to install Apache and OpenSSL. That’ll depend on your OS. For RHEL or CentOS, it’s a matter of yum install httpd mod_ssl openssl.

Theoretically you can use the new release of kyrtool to pull out your existing SSL key from Domino’s keyring and import it into Apache. I took the easy way out and generated a whole new key.

So, step 2 is to generate an RSA key pair for Apache:

openssl genrsa -aes128 4096 > myhostname.key

Next, you want to generate a Certificate Signing Request (CSR) for the public key portion of your new key:

openssl req -utf8 -new -key myhostname.key -out myhostname.csr 

Now you send the CSR file to your favorite SSL certificate vendor. If they give you a choice, request SHA-256 as the hash algorithm; SHA-1 is insecure, and browsers will start rejecting it some time next year.

In the meantime, you can generate a self-signed certificate for testing:

openssl x509 -req -days 365 -in myhostname.csr -signkey myhostname.key -out myhostname.crt

That’s all one command, on a single line.

Generally for RHEL or CentOS, Apache expects to find its SSL key files in the following locations:

/etc/pki/tls/private/localhost.key # Private key
/etc/pki/tls/certs/localhost.crt   # Signed public key
/etc/pki/tls/certs/ca-bundle.crt   # Certificate chain

If you want to start Apache without needing the password you set when you created your new private key, you can create a copy of the key file with the encryption and password removed:

openssl rsa -in myhostname.key -out myhostname-decrypted.key

Make sure the decrypted file is in /etc/pki/tls/private and not readable by anyone other than root.
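Putting those pieces together, the deployment looks something like this (run as root; the myhostname filenames are placeholders for your own, and the destinations follow the RHEL/CentOS layout listed above):

```shell
# Put the key material where Apache expects it (RHEL/CentOS layout).
cp myhostname-decrypted.key /etc/pki/tls/private/localhost.key
cp myhostname.crt /etc/pki/tls/certs/localhost.crt

# Lock the private key down: owned by root, readable by nobody else.
chown root:root /etc/pki/tls/private/localhost.key
chmod 600 /etc/pki/tls/private/localhost.key
```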

Step 3 is to make some Domino changes.

In the Server Document, Ports… tab, Internet Ports… sub-tab, set the HTTP port to be something other than 80. I picked 1080, but you could use pretty much any port, as it’ll be invisible to end users. While you’re there, turn off SSL as well, as we won’t be using Domino’s SSL any more.

A brief word about firewalls at this point: I’m assuming that all unknown ports, including whatever you pick, are firewalled off by default. In my case, port 1080 is blocked completely from end users. I’m also assuming that your server is allowed to talk to itself as localhost, on any port it likes.

Next, you’ll want to issue the commands

set config HTTPAllowDecodedUrlPercent=1
set config HTTPEnableConnectorHeaders=1

to the Domino console.
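Behind the scenes, set config simply writes these settings to the server’s notes.ini, so if you prefer you can add the two lines there directly instead:

```ini
HTTPAllowDecodedUrlPercent=1
HTTPEnableConnectorHeaders=1
```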

Step 4 is to configure Apache to act as reverse proxy for whatever port you just picked. There are a lot of instructions out there for how to do it, but I found that most of them were overly complex. All it really needs is:

ProxyRequests Off
ProxyPreserveHost On
AllowEncodedSlashes On

# Bounce port 80 to 443 (HTTPS all the things!)
RewriteEngine on
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NE,NC,R=301,L]

# Preserve remote IP info via special WebSphere variables
SetEnvIf REMOTE_ADDR (.*) temp_remote_addr=$1
RequestHeader set $WSRA %{temp_remote_addr}e
RequestHeader set $WSRH %{temp_remote_addr}e
# Tell Domino we're always on SSL
RequestHeader set $WSIS true

# Proxy to Domino
ProxyPass / http://127.0.0.1:1080/
ProxyPassReverse / http://127.0.0.1:1080/

For RHEL/CentOS, stick that info in /etc/httpd/conf.d/domino.conf. If you’re interested in the mysterious $WS headers, Vincent Kong has more info.

Step 5 is very important. If your server is running with SELinux enabled in enforcing mode — which mine all are — you need to allow Apache to initiate TCP/IP connections.

/usr/sbin/setsebool -P httpd_can_network_connect 1

This command can take a few seconds to complete. The -P means it should be persistent across reboots.

Step 6 is to update the default Apache SSL config to harden it and prevent use of old, broken encryption algorithms. Mozilla has a useful guide for this; for RHEL/CentOS, the config is in /etc/httpd/conf.d/ssl.conf.
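As a sketch of what the hardened settings look like, here are the relevant mod_ssl directives; cipher lists go stale quickly, so generate the actual SSLCipherSuite value from Mozilla’s guide rather than copying this one:

```apache
# TLS only; POODLE kills SSLv3, and SSLv2 has been broken for years
SSLProtocol all -SSLv2 -SSLv3
# Prefer the server's cipher order over the client's
SSLHonorCipherOrder on
# Example cipher list only; generate a current one from Mozilla's guide
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:!aNULL:!eNULL:!MD5:!RC4
```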

Now for the moment of excitement. Go to the Domino console and tell http restart. When it has done so, you should find that it’s now listening on port 1080 (or whatever you chose) instead of port 80. You can verify this with

lsof -i :1080

Assuming that worked, run /etc/init.d/httpd start. If you did everything right, you’ll have just switched your Domino server over to being fronted by Apache, with only a few seconds of downtime.

You should now be able to go to your server’s web URL. If you go via HTTP, you should be immediately bounced to HTTPS. You should see your Domino server’s web site output exactly as if Apache wasn’t there. However, if you check the security settings in your browser, you should find that you’re connecting via TLS 1.2!

The final step 7 is to install the certificate you eventually get from your SSL cert provider. That’s simplicity itself — just replace the .crt file you generated in /etc/pki/tls/certs with the signed one, and apachectl restart.

If you use iNotes, there are some extra pieces of Apache config needed; there’s a developerWorks article about that.

In the wake of shellshock

So, shellshock. It’s big. I think it’s bigger than heartbleed, because the bug has been in the code for 22 years, so there are an awful lot of systems out there with a vulnerable shell installed and nobody maintaining them properly.

One misconception I’ve seen posted across the web is that you’re not in trouble if you don’t use bash as your shell, or that you’re safe if you have dash as /bin/sh. Sadly, that’s not true. Plenty of programs invoke bash behind the scenes no matter what your personal shell is — for example, on Ubuntu the zcat utility is a shell script, and so are the launcher scripts for Firefox and Chrome.

As for exploits, basically any program which sets environment variables based on user input and uses the shell to execute something can become a vector. There’s a working exploit for Linux DHCP clients, so you can get pwned simply by connecting to a WiFi network. There are at least two worms spreading via the web, where you can pwn servers just by setting your web browser’s user agent to something cunning. There’s an IRC worm active, and an exploit via e-mail has been demonstrated for some server-side software. SSH restricted mode is affected, becoming SSH totally unrestricted mode. And that’s just the stuff I know about.

At work, a colleague just asked what needs to be upgraded. My answer: every Linux system on the planet. Prioritize ones which accept data from outside your organization (no matter how they do it). You’ll eventually want to patch everything running a Linux distribution, because the bug is also perfect for crafting local root exploits.

Yes, it really is that bad. Even Windows systems with Cygwin are affected. Many embedded systems use busybox rather than a full shell, so (for example) your router might be safe, but I advise that you check: See if /bin/bash exists and whether /bin/bash --version reports that it really is bash. One small silver lining is that Android phones aren’t affected unless the user has chosen to install a bash shell, so we aren’t looking at a mobile phone worm apocalypse yet.
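If you want to check a machine by hand, the widely circulated one-liner for the original bug (CVE-2014-6271) does the trick. A vulnerable bash prints “vulnerable” before the test line; a patched one prints only the test line:

```shell
# Export a crafted function definition, then launch a new bash.
# Vulnerable versions execute the trailing command while importing it.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```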

Oh, and if you patched your Linux system yesterday, patch again, because the first patch didn’t completely solve the problem, and there was a second patch released this morning.

Apple, meanwhile, still doesn’t have a patch for their old forked version of bash. Instead, they have a statement that the issue is not a problem for Mac users, so I’m sure hackers are developing OS X exploits right now.

Some people have been warning about the dangers of bash scripting for years. I’ve been bash-averse for decades; in fact, I never write shell scripts unless I absolutely have to because shell in general has so many pitfalls compared to (say) Ruby.
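To give one tiny example of the kind of pitfall I mean (nothing to do with shellshock itself), unquoted variable expansion silently splits on whitespace:

```shell
file="monthly report.txt"

# Unquoted: the shell word-splits, so printf gets two arguments.
printf '[%s]\n' $file      # prints [monthly] then [report.txt]

# Quoted: one argument, as intended.
printf '[%s]\n' "$file"    # prints [monthly report.txt]
```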

Of course, I’ve still had to deal with the fallout from shellshock, but I must admit I’m chuckling about the fact that purely by chance, I decided last week was the week I would finally get to grips with using Ansible to push software updates to all the servers I look after.

Another bash basher is David Jones, who has now written about why bash is bad, and provides some tips on turning your bash scripts into standard shell scripts.

If you really must continue to use bash, use unofficial bash strict mode.
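For reference, strict mode is just a short preamble at the top of each script:

```shell
#!/bin/bash
# Exit on any error (-e), treat unset variables as errors (-u),
# and make a pipeline fail if any stage of it fails (pipefail).
set -euo pipefail
# Word-split only on newlines and tabs, not on spaces.
IFS=$'\n\t'
```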