URL Parsing

By Aaron O. Ellis

I have a URL parsing problem.

How do we get a computer to recognize a URL in a block of text? Having a method to do this allows bare links such as google.com to be made clickable.

This automatic link generation has become such a feature of mainstream social media sites that its absence is notably jarring to the user. But despite this being a critical UX component, it often goes wrong. For instance, on my Android text messenger:

[Image: URL creation on Android]

So how can we parse URLs? And how does it go wrong?

One way to perform this parsing is through a regular expression, which can match a given pattern of characters. Unfortunately, we can't guarantee that the most distinguishing component of a URL, the scheme (e.g. http:// or https://), will be present. And since we can't expect subdomains to be included either, at minimum our desired regular expression is no more than a few whitelisted characters joined by a period.
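That minimal idea can be sketched in a few lines of Python. The pattern below is purely illustrative (it is not the expression discussed later): an optional scheme, then runs of URL-safe characters joined by periods.

```python
import re

# Deliberately minimal, illustrative pattern: an optional scheme,
# then one or more dot-joined labels of URL-safe characters.
URL_RE = re.compile(
    r"(?:https?://)?"          # the scheme may be absent
    r"[a-zA-Z0-9-]+"           # first domain label
    r"(?:\.[a-zA-Z0-9-]+)+"    # one or more additional .label parts
)

text = "Search on google.com or https://t.co for short links"
print(URL_RE.findall(text))
# → ['google.com', 'https://t.co']
```

Note how loose this is: any dot-joined run of characters qualifies, which is exactly the trade-off the rest of this post wrestles with.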

Honoring the first rule of development (has someone else already written this code?), we found a Stack Overflow response with a pretty good expression, provided you don't require IP addresses or internationalization:


Even with those caveats, this expression fails on some important URLs. It places multiple restrictions on the size of the domain name (e.g. twitter) and the top level domain, a.k.a. TLD (e.g. .com). It would miss t.co and entangled.ventures, along with any other single character domain name or TLD over six characters.

Instead of imposing character limits, other attempts at parsing URLs have simply whitelisted top-level domains, such as Gruber's:


But this expression would miss any link with an emerging TLD, such as one of Google’s 2015 April Fools’ pranks: com.google.
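The whitelist approach can be sketched as follows. The handful of TLDs here is an illustrative sample of my own, nothing like Gruber's actual pattern, but it reproduces the failure mode: any TLD not on the list is invisible.

```python
import re

# Illustrative whitelist sketch: only candidates ending in one of a
# small, hand-picked set of TLDs are treated as links.
WHITELIST_RE = re.compile(
    r"(?:https?://)?"
    r"[a-zA-Z0-9-]+"
    r"(?:\.[a-zA-Z0-9-]+)*"
    r"\.(?:com|net|org|edu|gov)\b"   # the whitelist itself
)

print(WHITELIST_RE.search("visit twitter.com") is not None)  # found
print(WHITELIST_RE.search("visit com.google") is not None)   # missed: .google is not on the list
```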

And since the list of new TLDs continues to grow - there are currently over 900 - any whitelist would quickly grow stale or unwieldy. And returning to our non-whitelisted regular expression, we'd even have to watch for new TLDs that push the character length limitations, such as .international and .cancerresearch.

And if we resign ourselves to create a link for any words joined with a ., we’re going to generate a number of false positives for files (e.g. bower.json or report.xlsx).
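A quick illustrative check makes the false-positive problem concrete: a pattern that accepts any words joined by a period treats filenames exactly like domains.

```python
import re

# A lax "words joined by a period" pattern: it cannot tell a
# filename from a domain name.
LAX_RE = re.compile(r"[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)+")

for text in ("see bower.json", "open report.xlsx", "visit t.co"):
    print(LAX_RE.search(text).group())
# → bower.json
# → report.xlsx
# → t.co
```

All three match, but only the last one is a link anyone would want generated.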

So normally this is where the developer gives up, tells the designer to create a new <input> field, and calls it an early day.

Or, you could keep digging that hole, and build infrastructure that would perform the ultimate test of whether a loosely matched sequence of characters is a URL or not: send an HTTP request at it.
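A minimal sketch of that last-resort test, assuming Python's standard library and a hypothetical helper name. A real implementation would need rate limiting, caching, and much more careful error handling before you pointed it at user input.

```python
import urllib.request
import urllib.error

def probably_a_url(candidate: str, timeout: float = 3.0) -> bool:
    """Last-resort check: does anything actually answer at this address?

    Illustrative sketch only; the name and behavior are assumptions,
    not code from this post.
    """
    if "://" not in candidate:
        candidate = "http://" + candidate  # assume a default scheme
    request = urllib.request.Request(candidate, method="HEAD")
    try:
        urllib.request.urlopen(request, timeout=timeout)
        return True
    except (urllib.error.URLError, ValueError):
        return False

# A .invalid domain is reserved (RFC 2606) and will never resolve.
print(probably_a_url("definitely-not-real.invalid"))
```

Of course, this trades the false positives of regex matching for network latency, request volume, and a new set of failure modes.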

But that’s another story…