How to Tidy Up Your Digital Life

The title here refers to the popular Marie Kondo books and Netflix series, all based on the concept of “tidying up”:

Our goal is to help more people tidy their spaces by choosing joy, and we are committed to developing the simplest and most effective tools to help you get there.

Even without having read the books, at my house we’ve been taking this approach: discarding, recycling, or donating clothes, books, and other household goods. Getting to a clean, organized drawer or closet is a great reward, and we’re using that to keep us motivated as we work our way around the house.

However, there’s a big pit stop for all the digital devices: they end up in the garage, each waiting for something before it can be donated:

  • For computers, back up the hard drive, then wipe it.
  • For phones, make sure all the photos are copied off, then do a full erase; but maybe keep the phone around in case the new one breaks.
  • For older electronics and games, save them for nostalgia, or in case someone else could use them.

You can guess the result here: everything has piled up in the garage for years.

Collection of used computers

I have a similar “collection” going with content stored across many different online services. These too need tidying and consolidation (though without the extra step of hardware to deal with).

Taken together, this is my to-do list to tidy up my digital life and get back to some level of normalcy. I’ll report back with a follow-up post on how well I’ve done!

Hardware

Here’s all the old hardware I can think of so far, with a rough plan for what to do:

  • Synology NAS: buy a couple more drives to upgrade capacity and look at cloud backup options.
  • Old laptops & desktop PCs: the oldest computers are probably 20 years old at this point. Boot each one up, back up any useful data (mainly digital photos/videos) to the Synology NAS, wipe the drives, and then recycle.
  • PC parts & cables: recycle all the PC parts; for cables that are still relevant, keep just a couple and recycle the rest.
  • Cell phones and MP3 players: I think we still have almost every cell phone anyone in the family ever owned (along with a matching giant box filled with phone charging cables). Some of these may be interesting to save as classics, but the rest need to be wiped and recycled.
  • Multiple boxes full of backup drives, CDs and DVDs: review and consolidate anything still relevant (again, mainly looking for digital photos) to the Synology.
  • Video games: we have Xbox, Xbox 360, PlayStation 2, Nintendo Wii and GameCube systems, accessories and games, and a bunch of Windows PC games. Likely keep the Nintendo system (which the kids still like to play) but donate the rest (hopefully to The MADE).

Data / Content

  • Passwords: these live in 1Password and Google Docs, plus probably a few on paper in my desk. Need to get all of these captured in 1Password only, and probably do a round of password updates, especially for the older ones.
  • Cloud storage: I’m using way too many services, including iCloud, Dropbox, Box, OneDrive, and Google Drive (2 accounts). This is a good example of free (in cost) actually being non-free (in time), with everything spread out and each account close to its limits. The plan is to consolidate down to one paid service, plus likely Google Drive.
  • CrashPlan backups: this is currently costing $10/computer/month and I’m up to 10 devices now, some of which are dead computers. The plan is to check for any files needed first, then reduce the number of computers on the account while I look for better alternatives.
  • Notes: these are scattered across Evernote, text files in Dropbox, RTF files on my Mac, Simplenote, Google Keep, Google Docs (2 accounts), the Mac To-Do app, and Trello boards. I’m not sure of the plan here, but I need to reduce to just a couple of places (probably Evernote + text files in Dropbox + Google Docs).
  • Source code for personal projects: this is spread across Dropbox, my Mac, GitHub, Box, and my Pair web host. The plan is to move “real” projects onto GitHub (some possibly marked as private), with the rest of the scratch or test code organized under Dropbox.
  • YouTube videos: I have 2 separate personal accounts, neither of which has many videos, but my older account has the more popular screencasts. The likely plan is to delete any obsolete or private videos, keep the useful ones live, and create anything new on the newer Google account (cantoni.org).
  • Photos: Many are on Flickr (which was sold and is changing) but the bulk are on backup disks or in my Mac Photos library. I need to get a better workflow without relying on Mac Photos and get everything organized in one place on the Synology.
  • Bookmarks: I previously used Delicious and XMarks sync, both of which are gone now. I do have a few backups from Chrome and Firefox, although they are not in sync with each other. Need to come up with a better sync solution here, ideally across browsers.
  • Personal files on work laptop and vice versa: Get these sorted out and separated.
  • Web domain names, hosts, and content: I’ve already started cleaning this up a bit by letting domain names expire and creating my own static sites for parked domains. Still to do: move WordPress to a new location and clean up/shut down my old shared hosting account on Pair.

Now that I’ve put my to-do list out here, it’s time to get started! Stay tuned for updates…

Photo credit: ademrudin on Flickr

How to Use GitHub Pages for Parked Domains

ifmodified.com screenshot

For domain names I’ve reserved but haven’t done anything with yet, I like to have them parked with my own simple landing page rather than one of those ad-filled “parked domain” pages hosted by the registrar. Previously I had set mine up on a DigitalOcean VM. This wasn’t too difficult (and had the side benefit of forcing me to brush up on my Apache HTTP Server skills), but I’ve switched to a much easier method: hosting for free via GitHub Pages. This is free not only in cost but also in time, because it’s very simple to set up and run.

GitHub Pages

First of all, this was a good excuse to finally learn more about GitHub Pages and its support for building static sites with Jekyll. In essence, GitHub will automatically create and publish an HTML website from your repo source files. Both public and private repos are supported; GitHub will remind you that all generated sites are public, though, so be careful if publishing private repos this way.

For anyone wanting to learn more, I suggest going through the Jekyll Quickstart first (it boils down to just a few commands; see the sketch after the list below) and then the specific notes for running Jekyll on GitHub Pages. After coming up to speed on the basics, I tried publishing things a few different ways and came to a few conclusions:

  • Pages makes it very easy to publish automatically (no need for any separate deployment tools)
  • You can use either full Jekyll, or fall back to writing your own HTML (which Jekyll will simply publish as-is)
  • Troubleshooting Jekyll build issues is difficult if not impossible
  • With an NPM module it’s possible to run the Pages build locally, which is very helpful for debugging
  • The GitHub documentation for Pages is pretty confusing
  • Pages would be a great solution for anything simple, but not for anything more complex or critical
  • SSL is included, and certificate renewals are automatic (thanks to Let’s Encrypt)
  • Custom domains are also supported (no charge)
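Here’s that quickstart in a nutshell (this assumes Ruby and RubyGems are already installed; my-site is a placeholder name):

# from the Jekyll quickstart (Ruby and RubyGems assumed installed)
gem install bundler jekyll
jekyll new my-site
cd my-site
bundle exec jekyll serve    # preview locally at http://localhost:4000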

In the end I decided to go with a very plain HTML solution without anything Jekyll specific. Read on to see the steps I followed.

Setting up Parked Domains on GitHub Pages

These are the steps I followed for setting up my parked domain name pages on GitHub Pages. The example I’ll use here is my domain ifmodified.com.

  1. For your domain name (whether it’s new or existing), lower the DNS time to live (TTL) to be very short (an hour or less). If you had an existing TTL which was longer, you can do this step the day before to give it time to propagate.
  2. On GitHub, create a new public repo and choose the option to include a starter README. I’m using the domain name as the repo name, so my repo is at bcantoni/ifmodified.com.
  3. Note that private repos work too, but since the site is going to be public anyways, it seems to make sense for the repo to be public.
  4. On your local system, do a git clone to bring the repo down.
  5. Add your web content (even if just a simple index.html).
  6. Commit and push your changes. (A consolidated command sketch for these repo steps follows this list.)
  7. On the GitHub repo settings, enable GitHub Pages and choose the option to publish from the root of the master branch. (Other options here include using the /docs path in the master branch, or the dedicated gh-pages branch.) I also like to turn off wikis, issues, and projects, since none of those will be needed.
  8. Your site should now be available on youruserid.github.io/yourdomainname.com.
  9. Next you’ll set up your custom domain, starting at your DNS provider. Add a round-robin pool of A records pointing to the four GitHub Pages IP addresses: 185.199.108.153, 185.199.109.153, 185.199.110.153, 185.199.111.153.
  10. Wait a bit for this change to propagate, then check the DNS entry from your system with the command-line tool dig (dig yourdomainname.com). If dig does not respond with the new expected IP addresses, wait and try again later.
  11. Once the DNS lookup is working, return to the GitHub Pages settings and enter your domain as the custom domain name and save it.
  12. After a moment, GitHub should say your site is now available under yourdomainname.com.
  13. Finally you can enable Enforce HTTPS. After several minutes, this will be configured on the GitHub side and you can confirm by visiting your custom domain.
  14. If you have any trouble, remove and then re-add the custom domain name. That step will trigger the logic on the GitHub side to reconfigure everything.
  15. Once everything is confirmed working, return to your DNS provider and set a normal TTL once again (typically 24 hours or more).
  16. Whenever you need to make site content changes, just push them to GitHub and your results will automatically go live right away.
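Pulled together, here’s roughly what the repo and DNS-check side of those steps looks like, using my ifmodified.com repo as the example (the page contents below are just a placeholder):

# steps 4-6: clone the repo, add a minimal page, commit and push
git clone https://github.com/bcantoni/ifmodified.com.git
cd ifmodified.com

cat > index.html <<'EOF'
<!DOCTYPE html>
<html lang="en">
<head><meta charset="utf-8"><title>ifmodified.com</title></head>
<body><h1>ifmodified.com</h1><p>Nothing to see here (yet).</p></body>
</html>
EOF

git add index.html
git commit -m "Add parked domain landing page"
git push origin master

# step 10: after adding the four A records at your DNS provider,
# confirm they have propagated before setting the custom domain
dig +short ifmodified.com
# expect the four GitHub Pages IPs listed above (order may vary)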

The UI might change a bit over time, but as of this writing, here’s what the GitHub Pages settings section looks like when done:

Screenshot of GitHub Pages settings

Finally, the results: these are my currently parked domains, all using this technique:

For completeness, I also used the same method to hang a really simple landing page for Gorepoint (this one uses a minimal design built with Materialize).

Photo credit: chuttersnap on Unsplash

How to Fix Dell Monitors With Mac Laptops

The Problem

I’m lucky to have a work-provided MacBook Pro as my primary system, along with a pretty nice Dell UltraSharp 30-inch monitor at the office. One of the things I’ve always struggled with in this combination is really poor resolution when the Dell is connected. This problem with Dell monitors isn’t quite the “fuzzy font” problem you’ll see if you search around (that’s mostly referring to font smoothing or anti-aliasing adjustments which macOS can apply). Instead, it’s more like the screen is running at a resolution much lower than its native one.

This weekend I finally upgraded my system to macOS Mojave (10.14), and this morning in the office my favorite display bug had returned once again. These are my notes for fixing the issue, with links to the original sources.

The Solution

The canonical article I found is on the EmbDev.net forums, where the problem and solution are summarized (paraphrased here) as:

Many Dell monitors (e.g., U2713H, U2713HM, …) look really bad when connected to a Mac (OS X 10.8.2) via DisplayPort, as if some sharpening or contrast enhancement was applied. Others have reported the same problem. The reason is that the DisplayPort uses YCbCr colors instead of RGB to drive the display, which limits the range of colors and apparently causes the display to apply some undesired post processing. The problem can be solved by overriding the EDID data of the display in order to tell OS X that the display only supports RGB.

The fix is to create a display EDID override profile to force macOS to use RGB mode. You can read through the forum post for details, and grab the patch-edid.rb Ruby script, which makes it pretty easy.

The Steps

These are the steps to run that patch script and get the display override file into the right place on macOS Mojave. This should only be done by advanced users who know what they’re doing, because it involves temporarily disabling Apple’s System Integrity Protection (SIP). A condensed terminal sketch follows the numbered steps.

Steps:

  1. Restart the Mac in Recovery Mode (hold down Cmd-R once the Apple logo appears)
  2. Open a terminal window and disable SIP: csrutil disable
  3. Restart the Mac again and let it boot normally
  4. Download the patch-edid.rb script
  5. Run the script with Ruby: ruby patch-edid.rb
  6. Confirm that you have a local folder DisplayVendorID-10ac with a file DisplayProductID-xxxx
  7. Copy the folder DisplayVendorID-10ac and its contents over to /System/Library/Displays/Contents/Resources/Overrides/ (sudo will be required)
  8. Restart the Mac in Recovery Mode once again
  9. Open a terminal window and re-enable SIP: csrutil enable
  10. Restart the Mac again and let it boot normally
  11. Open the Display Preferences for the external Dell monitor and under the Color tab select the “forced RGB mode (EDID override)” option
  12. One more system restart may be needed for the changes to take effect
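Condensed into terminal form, the core of that sequence is just a few commands (the DisplayProductID file name will differ per monitor, so treat it as an example):

# steps 1-2: from the Recovery Mode terminal, disable SIP
csrutil disable

# steps 4-7: after rebooting normally, run the patch script and
# copy the generated override folder into place
ruby patch-edid.rb
ls DisplayVendorID-10ac/    # should contain a DisplayProductID-xxxx file
sudo cp -r DisplayVendorID-10ac /System/Library/Displays/Contents/Resources/Overrides/

# steps 8-9: from the Recovery Mode terminal again, re-enable SIP
csrutil enable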

HTTP Request Inspectors

Someone wrote in to let me know that an older project of mine, which provides a simple web service echo test, was referencing some out-of-date projects. My HTTP test service doesn’t use any of those other services, but I’ve updated the blog post description to point to some new options for comparison purposes.

The original hosted version of RequestBin (at requestb.in) is no longer live. It was run by Runscope at the time, and the source code is still up on GitHub (Runscope/requestbin). You can see their status note:

We have discontinued the publicly hosted version of RequestBin due to ongoing abuse that made it very difficult to keep the site up reliably. Please see instructions below for setting up your own self-hosted instance.

Here’s the commit with the awesome (but sad) commit message of :( when they turned it off. Some good comments there from fans of the service.

The good news is that with the source code still available, people can host the service themselves (Heroku and Docker instructions are included), and the adventurous can run public instances the same way.
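I haven’t self-hosted it myself, but based on the repo the Docker route presumably looks something like this (the compose file and port are assumptions on my part; check the project README for the actual steps):

# hypothetical sketch for self-hosting RequestBin with Docker
# (assumes the repo ships a docker-compose.yml; check its README)
git clone https://github.com/Runscope/requestbin.git
cd requestbin
docker-compose up
# then browse to http://localhost:8000 (the port is an assumption)

Here are a few I found with some quick searching (but have not personally used):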

Converting HTML to Markdown using Pandoc

Markdown is a great plain text format for a lot of applications and is often used to convert to HTML (for example on my WordPress blog here). There are also some good use cases for the opposite: converting from HTML into Markdown. I recently had such a case, converting some older blog posts from raw HTML into Markdown, and found that Pandoc made it really easy.

What’s Pandoc

Pandoc is an open-source utility for converting between a number of common (and rare) document types, for example plain text, HTML, Markdown, MS Word, LaTeX, wiki, and so on. The output formats list is really extensive, and people can write their own “filters” to handle other formats as well, or to customize the existing ones to their exact needs. The project tagline sums it up nicely:

If you need to convert files from one markup format into another, pandoc is your swiss-army knife.

Screenshot of Pandoc website showing all the supported file formats
The Pandoc website lists all of the supported file types it can convert between
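A basic conversion is a one-liner. For example, here’s a hypothetical command converting a local Markdown file to HTML (the file names are placeholders):

# convert Markdown to HTML (file names are placeholders)
pandoc --from markdown --to html note.md -o note.html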

My Use Case

My particular use case was to convert about a dozen really old blog posts from this website. I wrote these back in the early days when I managed this site in CityDesk and later migrated to MovableType. Google Search Console alerted me to some crawler errors, which turned out to be caused by raw PHP file content being served instead of real HTML.

My approach for cleaning this up was as follows:

  1. Convert the original HTML articles into Markdown format
  2. Do some manual cleanup editing and double-check that links are still valid
  3. Drop the Markdown into the appropriate Posts within WordPress
  4. Modify my existing .htaccess files to do permanent (301) redirects for all of the old URLs (a redirect sketch follows this list)
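For step 4, each redirect is a single line in the .htaccess file, using Apache’s Redirect directive. Here’s a hypothetical example (the paths are made up for illustration):

# append a permanent (301) redirect to .htaccess (paths are hypothetical)
echo 'Redirect permanent /old-article.php /2019/01/old-article/' >> .htaccess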

Examples

Simple HTML Example

With Pandoc installed, you can try a simple test, pulling down the installation instructions page:

curl --silent https://pandoc.org/installing.html | pandoc --from html --to markdown_strict -o installing.md

To see the result, consider this HTML snippet from installing.html:

<h2 id="compiling-from-source">Compiling from source</h2>
<p>If for some reason a binary package is not available for your platform, or if you want to hack on pandoc or use a non-released version, you can install from source.</p>
<h3 id="getting-the-pandoc-source-code">Getting the pandoc source code</h3>
<p>Source tarballs can be found at <a href="https://hackage.haskell.org/package/pandoc" class="uri">https://hackage.haskell.org/package/pandoc</a>. For example, to fetch the source for version 1.17.0.3:</p>
<pre><code>wget https://hackage.haskell.org/package/pandoc-1.17.0.3/pandoc-1.17.0.3.tar.gz
tar xvzf pandoc-1.17.0.3.tar.gz
cd pandoc-1.17.0.3</code></pre>

We can see the resulting Markdown turned out very well:

## Compiling from source

If for some reason a binary package is not available for your platform, or if you want to hack on pandoc or use a non-released version, you can install from source.

### Getting the pandoc source code

Source tarballs can be found at <a href="https://hackage.haskell.org/package/pandoc" class="uri">https://hackage.haskell.org/package/pandoc</a>. For example, to fetch the source for version 1.17.0.3:

    wget https://hackage.haskell.org/package/pandoc-1.17.0.3/pandoc-1.17.0.3.tar.gz
    tar xvzf pandoc-1.17.0.3.tar.gz
    cd pandoc-1.17.0.3

My Blog Post Conversions

For my dozen old HTML articles, the straight conversion ended up being a bit noisy, especially with some old CMS template boilerplate around the content which was no longer needed. To handle that, I used a little bit of sed to strip the boilerplate before conversion:

#!/bin/bash
# usage: pass one HTML file; the converted result is written to <file>.md
echo "converting $1"
cat "$1" | sed '1,/<div class="asset-header">/d' | sed '/<div class="asset-footer">/,/<\/html>/d' | pandoc --wrap=none --from html --to markdown_strict > "$1.md"

(The two sed commands above clean up the HTML source in two passes: the first removes everything from the top of the file through the line containing <div class="asset-header">, which is where the blog post content started; the second removes everything from <div class="asset-footer"> through the closing </html> at the end of the file.)
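If the script is saved as convert.sh (just a placeholder name for this example), running it over the old posts looks like:

# hypothetical usage, with convert.sh as a placeholder script name
chmod +x convert.sh
./convert.sh old-article.html               # writes old-article.html.md
for f in *.html; do ./convert.sh "$f"; done # or convert a whole directory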

After that, I just needed to do some minor editing cleanups on the Markdown files before bringing them into WordPress. Success!

Further Reading

There are a few good online converters you can try; keep in mind some of these are limited in the number of characters they can handle:

To learn more and go deeper on Pandoc, I recommend going through their excellent user’s guide.

And finally, a big recommendation for Dillinger, a great online tool for editing Markdown text with live HTML rendering. I use it for writing these blog articles as well, before moving them into WordPress.