All posts by Brian Cantoni

Over 1 Million Tweets Delivered by Tweetfave

My Tweetfave side project has just passed a major milestone – over 1 million Tweets delivered!

McDonalds Sign

Tweetfave is a simple service that emails you all the tweets you’ve marked as favorites (or, in today’s terminology, “liked”). I launched it publicly over 6 years ago in May 2013 and it’s been running along quietly ever since.

The chart below shows the cumulative total number of tweets delivered through the service. At the end of August 2019 we just passed the 1 million mark (on pace for about 1.1 million by the end of the year):

Tweetfave growth chart showing 1 million tweets after 6 years

The amount of usage is impressive given the small number of users. Overall, almost 300 people have signed up and tried Tweetfave; 110 users are currently still active. The numbers are definitely skewed towards a few power users: the top 20 users account for almost 80% of the total usage. In fact, despite being “customer #1”, I’m only the 18th most active user on the system :)

So, what are some lessons learned along the way?

Building it was straightforward with PHP and libraries for everything. It wasn’t necessarily easy (in particular learning the Twitter API), but all the building blocks came together pretty smoothly. Everything I needed had PHP libraries, including the Twitter API, Mustache templates, composing/sending email, and the MySQL database. I wrote up more about all the software and services used back in 2013.
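For a sense of how those pieces might fit together, here’s a minimal sketch of a favorites-to-email flow in PHP. The specific libraries (abraham/twitteroauth, mustache/mustache, PHPMailer) and all of the names and credentials below are assumptions for illustration only, not the actual Tweetfave code:

    <?php
    // Hypothetical sketch only: library choices, names, and credentials are
    // assumptions for illustration, not the actual Tweetfave implementation.
    require 'vendor/autoload.php';

    use Abraham\TwitterOAuth\TwitterOAuth;
    use PHPMailer\PHPMailer\PHPMailer;

    // Placeholder credentials (in practice these would come from config/database)
    $consumerKey = 'xxx'; $consumerSecret = 'xxx';
    $accessToken = 'xxx'; $accessSecret = 'xxx';
    $userEmail   = 'user@example.com';

    // 1. Fetch the user's recent favorites ("likes") from the Twitter API
    $twitter   = new TwitterOAuth($consumerKey, $consumerSecret, $accessToken, $accessSecret);
    $favorites = $twitter->get('favorites/list', ['count' => 50, 'tweet_mode' => 'extended']);

    // 2. Render the email body from a Mustache template
    $mustache = new Mustache_Engine();
    $body     = $mustache->render(file_get_contents('digest.mustache'), ['tweets' => $favorites]);

    // 3. Send the digest over SMTP (provider-specific settings)
    $mail = new PHPMailer(true);
    $mail->isSMTP();
    $mail->Host     = 'smtp.example.com';
    $mail->SMTPAuth = true;
    $mail->Username = 'smtp-user';
    $mail->Password = getenv('SMTP_PASSWORD');
    $mail->Port     = 587;
    $mail->setFrom('digest@example.com', 'Tweetfave');
    $mail->addAddress($userEmail);
    $mail->Subject  = 'Your favorited tweets';
    $mail->msgHTML($body);
    $mail->send();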

Left alone, it just kept working. For the most part the service kept running along. There were a few outages along the way caused by Twitter and AWS S3, and a few updates were needed to keep up with Twitter API changes, for example when support for longer tweets rolled out in 2016. One weakness is that there’s no monitoring to speak of, but since I still use the system a lot, I eventually notice when things break.

Minimal marketing results in minimal usage. At the time I launched Tweetfave I pinged people I knew who might like it and got a few takers. I would also tweet about it occasionally, and there were a few referrals from other users. For the most part I think people stumble across it, try it out, and some stick around. I imagine a real focus on marketing could drive more users; maybe some day in the future :)

Original feature set was pretty complete. If I were bolder I might call the original launch my Minimum Viable Product, which in some sense it was. I’ve really only made a few feature tweaks over time, the biggest of which was adding RSS feeds in 2015.

Email delivery can be tricky. One area I didn’t have much experience with was delivering this much email. For the majority of the time I used SendGrid, which was great. (I used their tech support multiple times, which is impressive considering I was always on the free plan.) Recently I switched to Mailgun because I continued to have delivery problems to Hotmail/Outlook accounts; those are all working now. I’m still small enough to be on the free tier; recently it’s been running about 3,000 emails/month with about a 32% open rate, which seems pretty good. Also, switching email providers when your interface is just SMTP is very easy.
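To show how small that switch really is, here’s a sketch of the only part that changes; everything else about sending stays the same (credentials are placeholders, the hostnames are the providers’ documented SMTP endpoints):

    <?php
    // Illustration only: switching providers is just a change of SMTP settings;
    // the sending code itself stays the same. Credentials are placeholders.
    $smtp = [
        // 'host' => 'smtp.sendgrid.net',  // old provider (SendGrid)
        'host'     => 'smtp.mailgun.org',  // new provider (Mailgun)
        'port'     => 587,
        'username' => 'postmaster@mg.example.com',
        'password' => getenv('SMTP_PASSWORD'),
    ];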

Infrastructure needs some updates. The service is running on a very old, unsupported PHP (5.5.9) and an unsupported Ubuntu (14.04). I also need to move the MySQL data from my shared web host account over to Digital Ocean (the service and website are already there). Once MySQL is updated, I need to revisit some database issues with tweets that contain Unicode characters. There isn’t a lot of test code, which makes me nervous about making big changes, so perhaps as part of this I’ll finally add some better tests.
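If the Unicode problem is the usual one (MySQL’s 3-byte utf8 charset rejecting emoji and other 4-byte characters), the likely fix is converting to utf8mb4, roughly like this (an assumption on my part; the database and table names are just examples):

    <?php
    // Likely fix (an assumption): MySQL's 3-byte "utf8" charset can't store
    // emoji or other 4-byte characters, but utf8mb4 can. The database and
    // table names here are examples only.
    $dbUser = 'tweetfave'; $dbPass = getenv('DB_PASSWORD');  // placeholders
    $pdo = new PDO('mysql:host=localhost;dbname=tweetfave;charset=utf8mb4', $dbUser, $dbPass);
    $pdo->exec('ALTER TABLE tweets CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci');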

Still useful as a bookmarking/read it later service. I originally created Tweetfave to fit my model of reading Twitter, and it’s still effective for me today. Like many “read it later” services, I don’t always go back and read everything, but it’s very handy to have it all there in my inbox. Email is powerful for these types of services and I’ve got a few more related ideas that I’d like to build.

If you made it this far, give Tweetfave a try and let me know what you think (brian AT cantoni.org)!

How to Tidy Up Your Digital Life

The title here refers to the popular Marie Kondo books and Netflix series all based on the concept of “tidying up”:

Our goal is to help more people tidy their spaces by choosing joy, and we are committed to developing the simplest and most effective tools to help you get there.

Even without reading the books, at my house we’ve been taking this approach and discarding, recycling, or donating clothes, books, and other household goods. Getting to a clean, organized drawer or closet is a great reward, and we’re using that to keep us motivated as we work our way around the house.

However, there’s a big pit stop for all the digital devices. They end up in the garage waiting for something before they can be donated:

  • For computers, back up the hard drive, then wipe it.
  • For phones, make sure all the photos are copied off, then do a full erase; but maybe keep the phone around in case the new one breaks or something.
  • For older electronics and games, save them for nostalgia or in case someone else could use them.

You can guess the result here: everything has piled up in the garage for years.

Collection of used computers

I have a similar “collection” going in digital form: content stored across far too many different online services. These too need tidying up and consolidation (but without the extra step of hardware to deal with).

Taken together, this is my to-do list to tidy up my digital life and get back to some level of normalcy. I’ll report back with a follow-up post to see how well I’ve done!

Hardware

Here’s all the old hardware I can think of so far, with a rough plan for what to do:

  • Synology NAS: buy a couple more drives to upgrade capacity and look at cloud backup options.
  • Old laptops & desktop PCs: the oldest computers are probably 20 years old at this point; boot up and then back up any useful data (mainly digital photos/videos) to my Synology NAS station– waiting for data to be backed up, drives wiped, and then recycled.
  • PC parts & cables: recycle all the PC parts; for cables which are still relevant just keep a couple and recycle the rest.
  • Cell phones and MP3 players: I think we still have almost every cell phone anyone in the family ever owned (along with a matching giant box filled with phone charging cables). Some of these may be interesting to save as classics, but the rest need to be wiped and recycled.
  • Multiple boxes full of backup drives, CDs and DVDs: review and consolidate anything still relevant (again, mainly looking for digital photos) to the Synology.
  • Video games: we have Xbox, Xbox 360, PlayStation 2, Nintendo Wii and GameCube systems, accessories and games, and a bunch of Windows PC games. Likely keep the Nintendo system (which the kids still like to play) but donate the rest (hopefully to The MADE).

Data / Content

  • Passwords: 1Password and Google Docs, plus probably a few on paper in my desk. I need to get all of these captured in 1Password alone, and probably do a round of password updates, especially for the older ones.
  • Cloud storage: I’m using way too many services, including iCloud, Dropbox, Box, OneDrive, and Google Drive (2 accounts). This is a good example of free (cost) actually being non-free (time) with everything spread out and so close to the respective limits. The plan is to consolidate down to one paid service plus likely Google Drive.
  • CrashPlan backups: this is currently costing $10/computer/month and I’m up to 10 devices now, some of which are dead computers. The plan is to check for any files needed first, then reduce the number of computers on the account while I look for better alternatives.
  • Notes: these are scattered across Evernote, text files in Dropbox, RTF files on my Mac, Simplenote, Google Keep, Google Docs (2 accounts), the Mac To-Do app, and Trello boards. I’m not sure of the plan here, but I need to reduce to just a couple of places (probably Evernote + text files in Dropbox + Google Docs).
  • Source code for personal projects: this is spread across Dropbox, my Mac, GitHub, Box, and my Pair webhost. The plan is to move “real” projects onto GitHub (some possibly marked as private) and organize the rest of the scratch or test code under Dropbox.
  • YouTube videos: I have 2 separate personal accounts, neither of which has many videos, but my older account has the more popular screencasts. The likely plan is to delete any obsolete or private videos, keep the useful live ones, and for anything new create them on the newer Google account (cantoni.org).
  • Photos: Many are on Flickr (which was sold and is changing) but the bulk are on backup disks or in my Mac Photos library. I need to get a better workflow without relying on Mac Photos and get everything organized in one place on the Synology.
  • Bookmarks: I previously used Delicious and XMarks sync, both of which are gone now. I do have a few backups from Chrome and Firefox, although they are not in sync with each other. Need to come up with a better sync solution here, ideally across browsers.
  • Personal files on work laptop and vice versa: Get these sorted out and separated.
  • Web domain names, hosts, and content: I’ve already started cleaning this up a bit by letting domain names expire and creating my own static sites for parked domains. Still to do: move WordPress to a new location and clean up/shut down my old shared hosting account on Pair.

Now that I’ve put my to-do list out here, it’s time to get started! Stay tuned for updates…

Photo credit: ademrudin on Flickr

How to Use GitHub Pages for Parked Domains

ifmodified.com screenshot

For domain names I’ve reserved but haven’t done anything with yet, I like to have them parked with my own simple landing page rather than one of those ad-filled “parked domain” pages hosted by the registrar. Previously I had set mine up on a Digital Ocean VM. This wasn’t too difficult (and had the side benefit of forcing me to brush up on my Apache HTTP Server skills), but I’ve since switched to a much easier method: hosting for free via GitHub Pages. It’s free not only in terms of cost but also in terms of time, because it’s very simple to set up and run.

GitHub Pages

First of all, this was a good excuse to finally learn more about GitHub Pages and its support for building static sites with Jekyll. In essence, GitHub will automatically create and publish an HTML website from your repo source files. Both public and private repos are supported; GitHub will remind you that all generated sites are public though, so be careful if publishing private repos this way.

For anyone wanting to learn more, I suggest going through the Jekyll Quickstart first and then the specific notes for running Jekyll on GitHub Pages. After coming up to speed on the basics, I tried publishing things a few different ways and came to a few conclusions:

  • Pages makes it very easy to publish automatically (no need for any separate deployment tools)
  • You can use either full Jekyll, or fall back to writing your own HTML (which Jekyll will simply publish as-is)
  • Troubleshooting Jekyll build issues is difficult if not impossible
  • With an NPM module it’s possible to run the Pages build locally which is very helpful for debugging
  • The GitHub documentation for Pages is pretty confusing
  • Pages would be a great solution for anything simple, but not for anything more complex or critical
  • SSL is included, and certificate renewals are automatic (thanks to Let’s Encrypt)
  • Custom domains are also supported (no charge)

In the end I decided to go with a very plain HTML solution without anything Jekyll specific. Read on to see the steps I followed.

Setting up Parked Domains on GitHub Pages

These are the steps I followed for setting up my parked domain name pages on GitHub Pages. The example I’ll use here is my domain ifmodified.com.

  1. For your domain name (whether it’s new or existing), lower the DNS time to live (TTL) to be very short (an hour or less). If you had an existing TTL which was longer, you can do this step the day before to give it time to propagate.
  2. On GitHub, create a new public repo and choose the option to include a starter README. I’m using the domain name as the repo name, so my repo is at bcantoni/ifmodified.com.
  3. Note that private repos work too, but since the site is going to be public anyways, it seems to make sense for the repo to be public.
  4. On your local system, do a git clone to bring the repo down.
  5. Add your web content (even if just a simple index.html).
  6. Commit and push.
  7. On the GitHub repo settings, enable GitHub Pages and choose the option to publish from root of the master branch. (Other options here include using the /docs path in the master branch, or the dedicated gh-pages branch.) I also like to turn off wikis, issues and projects since none of those will be needed.
  8. Your site should now be available on youruserid.github.io/yourdomainname.com.
  9. Next you’ll set up your custom domain, starting at your DNS provider. Add a round-robin pool of A records with the 4 GitHub Pages IP addresses: 185.199.108.153, 185.199.109.153, 185.199.110.153, 185.199.111.153.
  10. Wait a bit for this change to propagate, then check the DNS entry on your system with the command-line tool dig: dig yourdomainname.com. If dig does not respond with the expected new IP addresses, wait and try again later. (A scripted version of this check is sketched after this list.)
  11. Once the DNS lookup is working, return to the GitHub Pages settings and enter your domain as the custom domain name and save it.
  12. After a moment, GitHub should say your site is now available under yourdomainname.com.
  13. Finally you can enable Enforce HTTPS. After several minutes, this will be configured on the GitHub side and you can confirm by visiting your custom domain.
  14. If you have any trouble, remove and then re-add the custom domain name. That step will trigger the logic on the GitHub side to reconfigure everything.
  15. Once everything is confirmed working, return to your DNS provider and set a normal TTL once again (typically 24 hours or more).
  16. Whenever you need to make site content changes, just push them to GitHub and your results will automatically go live right away.
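For anyone who prefers to script the DNS check from step 10, here’s a rough PHP equivalent (the domain name is just my example):

    <?php
    // Scripted version of the dig check in step 10: confirm the domain's A
    // records point at the four GitHub Pages IPs listed above.
    $githubPool = ['185.199.108.153', '185.199.109.153', '185.199.110.153', '185.199.111.153'];
    foreach (dns_get_record('ifmodified.com', DNS_A) as $record) {
        $status = in_array($record['ip'], $githubPool, true) ? 'OK' : 'unexpected';
        echo "{$record['ip']} ({$status})\n";
    }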

The UI might change a bit over time, but as of this writing here’s what the GitHub Pages settings section looks like when done:

Screenshot of GitHub Pages settings

Finally, the results: these are my currently parked domains, all of which are using this technique:

For completeness, I also used the same method to put up a really simple landing page for Gorepoint (this one uses a minimal design built with Materialize).

Photo credit: chuttersnap on Unsplash

How to Fix Dell Monitors With Mac Laptops

The Problem

I’m lucky to have a work-provided MacBook Pro as my primary system, along with a pretty nice Dell UltraSharp 30-inch monitor at the office. One of the things I’ve always struggled with in this combination is really poor resolution when the Dell is connected. This problem with Dell monitors isn’t quite the “fuzzy font” problem you’ll see if you search around (that mostly refers to the font smoothing or anti-aliasing adjustments macOS can apply). Instead, it’s more like the screen is running at a resolution much lower than its native one.

This weekend I finally upgraded my system to macOS Mojave (10.14) and this morning in the office my favorite display bug had once again returned. These are my notes for fixing the issue with links to the original sources.

The Solution

The canonical article I found is on the EmbDev.net forums, where the problem and solution are summarized (paraphrased here) as:

Many Dell monitors (e.g., U2713H, U2713HM, …) look really bad when connected to a Mac (OS X 10.8.2) via DisplayPort, as if some sharpening or contrast enhancement was applied. Others have reported the same problem. The reason is that the DisplayPort uses YCbCr colors instead of RGB to drive the display, which limits the range of colors and apparently causes the display to apply some undesired post processing. The problem can be solved by overriding the EDID data of the display in order to tell OS X that the display only supports RGB.

The fix is to create a display EDID profile to force macOS to use RGB mode. You can read through the forum post for details, and check out the patch-edid Ruby script, which makes this pretty easy.

The Steps

These are the steps to run that patch script and get the display override file into the right place on macOS Mojave. This should only be done by advanced users who know what they’re doing, because it involves temporarily disabling Apple’s System Integrity Protection (SIP).

Steps:

  1. Restart the Mac in Recovery Mode (hold down Cmd-R once the Apple logo appears)
  2. Open a terminal window and disable SIP: csrutil disable
  3. Restart the Mac again and let it boot normally
  4. Download the patch-edid.rb script
  5. Run the script with Ruby: ruby patch-edid.rb
  6. Confirm that you have a local folder DisplayVendorID-10ac with a file DisplayProductID-xxxx
  7. Copy the folder DisplayVendorID-10ac and its contents over to /System/Library/Displays/Contents/Resources/Overrides/ (sudo will be required)
  8. Restart the Mac in Recovery Mode once again
  9. Open a terminal window and re-enable SIP: csrutil enable
  10. Restart the Mac again and let it boot normally
  11. Open the Display Preferences for the external Dell monitor and under the Color tab select the “forced RGB mode (EDID override)” option
  12. One more system restart may be needed for the changes to take effect

HTTP Request Inspectors

Someone wrote in to let me know that an older project of mine, which provides a simple webservice echo test, was referencing some out-of-date projects. My HTTP test service doesn’t use any of those other services, but I’ve updated the blog post description to point to some newer options for comparison purposes.
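For anyone curious, an echo test like this boils down to reflecting the request back to the caller; here’s a minimal sketch (not my actual implementation):

    <?php
    // Minimal echo-endpoint sketch (not the actual service): return the
    // request method, URI, headers, query string, and raw body as JSON.
    header('Content-Type: application/json');
    echo json_encode([
        'method'  => $_SERVER['REQUEST_METHOD'],
        'uri'     => $_SERVER['REQUEST_URI'],
        'headers' => function_exists('getallheaders') ? getallheaders() : [],
        'query'   => $_GET,
        'body'    => file_get_contents('php://input'),
    ], JSON_PRETTY_PRINT);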

The original hosted version of RequestBin (at requestb.in) is no longer live. It was run by Runscope at the time, and the source code is still up on GitHub (Runscope/requestbin). You can see their status note:

We have discontinued the publicly hosted version of RequestBin due to ongoing abuse that made it very difficult to keep the site up reliably. Please see instructions below for setting up your own self-hosted instance.

Here’s the commit with the awesome (but sad) commit message of :( when they turned it off. Some good comments there from fans of the service.

The good news is that with the source code still available, people can run the service themselves (Heroku and Docker instructions are included), and the adventurous can run live public instances. Here are a few I found with some quick searching (but have not personally used):