
SEO for Software Companies

This is a rough outline of my verbal remarks while giving a presentation at the Software Industry Conference.  Regular readers of my blog will notice it has a lot of overlap with previous posts on the topic, but I thought posting it would save presentation watchers from having to take copious notes on URLs and expand the reach of the presentation to people who couldn’t attend this year.

Brief Biography

My name is Patrick McKenzie.  For the last six years I have been working in Japan, primarily as a software engineer.  Along the way, I started a small software company in my spare time, at about five hours a week, which recently allowed me to quit my day job.  Roughly half my sales and three quarters of my profits come as a result of organic SEO, and most of the remainder comes from AdWords.  If you need to know about AdWords, talk to Dave Collins, who is also attending.  This presentation is about fairly advanced tactics — if you need beginner-friendly SEO advice, I recommend reading SEOMoz’s blog or taking a look at SEOBook.

Bingo Card Creator makes bingo cards for elementary schoolteachers.  This lets them, for example, teach a unit on chemistry and then, as a fun review game, call out the names of compounds like “ozone” and have students search out the chemical symbols on their bingo cards.  Students enjoy the game because it is more fun than drilling.  Teachers enjoy the game because it scales to any number of students and slides easily into the schedule.  However, making the cards by hand is a bit of a pain, so they go searching on the Internet for cards which already exist and find my website.  I try to sell them a program which automates card creation.

SEO can be a powerful tool for finding more prospects for your business and increasing sales.

SEO In A Nutshell

People treat SEO like it is black magic, but at the core it is very simple: Content + Links = You Win.

Content: Fundamentally, users searching are looking for keywords, and Google wants to send searchers to content which is responsive to the intent of the searcher.  Overwhelmingly, this means sending them to content directly responsive to the keywords.  This is particularly true on the long tail, meaning queries which are not near the top of the query frequency distribution.  Many more people search for “credit cards” than for “How do I make a blueberry pie?”

For the most popular queries, the page that ranks will likely not be laser-targeted on “credit cards”.  However, for the long tail, a page that is laser-targeted will tend to win if it exists.  The reason is that Google thinks that your wording carries subtle clues of your intent, so it should generally be respected.  Someone looking for “How to make a blueberry pie?” isn’t necessarily as sophisticated a cook as someone who searches for “blueberry pie recipe” — they might not even be looking with the intention of making a blueberry pie, but rather out of curiosity as to how it is made, and so a recipe does not directly answer their intent.

Links: With billions of pages on the Internet, there needs to be a way to sift the wheat from the chaff and determine who wins out of multiple close pages.  The strongest signal for this is how trusted a site and how trusted a page is, and this is overwhelmingly measured by links.  A link from a trusted page to another page says “I trust this other page”, and the aggregate graph shows you which pages are most trusted on the Internet.  Note that trust is used as a proxy for quality because it is almost impossible to measure quality directly.

It is important to mention that links to one bit of content on your site help all other content — perhaps not as much as the linked content, but still substantially.  Wikipedia’s article on dolphins doesn’t necessarily have thousands of links pointing to it, but over their millions of articles, like the History of the Ottoman Empire, they have accumulated sufficient trust that a new page on Wikipedia is assumed to be much better than a new page on a hobbyist’s blog.  Note that because Wikipedia ranks for nearly everything, they tend to accumulate new citations whenever people are looking for someone to cite.  This causes a virtuous cycle (for Wikipedia, anyway): winners win.  You’ll see this over and over in SEO.

Despite this equation looking additive, SEO very rarely shows linear benefits.  Benefits compound multiplicatively or exponentially.  Sadly, many companies try to develop their SEO in a linear fashion: writing all content by hand, searching out links one at a time, etc.  We’ll present a few techniques to do it more efficiently.

The Biggest Single Problem

The biggest single problem with software companies’ SEO is that they treat their websites like second-class citizens.  The product gets total focus from a team of talented engineers for a year, and then the website is whipped up at 2 AM on release day and never touched again.  You have to treat your website as if it were a shipping software product of your company.

It needs:

  • testing
  • design
  • strategic thought into feature set
  • continuous improvement
  • for loops

“For loops?”  Yes, for loops.  You’d never do calculations by counting on your fingers when you have a computer available to do them for you.  Hand-writing all content in Notepad is essentially the same.  Content should be separated from presentation — via templates and the like — so that you can reuse both the content and the presentational elements.  Code reuse: not just for software.
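To make that concrete, here is a minimal sketch in Ruby and ERB (the file names, topics, and descriptions are invented for illustration): one template, one loop, and structured content in place of hand-edited HTML.

  require "erb"

  # One presentational template, reused for every topic page.
  TEMPLATE = ERB.new(<<-HTML)
    <title><%= topic %> Bingo Cards | Bingo Card Creator</title>
    <h1><%= topic %> Bingo Cards</h1>
    <p><%= description %></p>
  HTML

  # The content lives in data, not in hand-edited HTML files.
  pages = [
    { topic: "American Revolution", description: "A fun review game for your history unit." },
    { topic: "Famous Volcanoes",    description: "A fun review game for your geology unit." },
  ]

  pages.each do |page|
    topic, description = page[:topic], page[:description]
    File.write("#{topic.downcase.tr(' ', '-')}.html", TEMPLATE.result(binding))
  end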

Scalable Content Generation

Does anyone have a thought about how large a website’s optimal size should be?  10 pages?  A hundred pages?  No, in the current environment, the best size for a website is “as large as it possibly can be”, because of how this helps you exploit the long tail.  As long as you have a well-designed site architecture and sufficient trust, every marginal topic you cover on your website generates marginal traffic.  And if you can outsource or automate this such that the marginal cost of creating a piece of content is less than the marginal revenue received from it, it makes sense to blow your website up.

This is especially powerful if you can make creation of content purely a “Pay money and it happens” endeavor, which lets you treat SEO like a channel like PPC: pour in money, watch sales, laugh all the way to the bank.  The difference is that you get to keep your SEO gains forever rather than having to rebuy them on every click like PPC.  This is extraordinarily powerful if you do it right.  Here’s how:

Use a CMS

The first thing you need to enable scalable content generation is a CMS.  People need to be able to create additional content for the website without hand-editing it.  WordPress is an excellent first choice, but you can get very, very creative with custom CMSes for content types which are specific to your business if you have web development expertise.

Note that “content” isn’t necessarily just blog posts.  It is anything your customers derive value from, anything which solves problems for them.  That could be digitizing your documentation, or answering common questions in your niche (“How do I…” is a very common query pattern), or taking large complex data sets and explaining their elements individually in a comprehensible fashion.  Also note that it isn’t strictly text: you can do images and even video in a scalable fashion these days.

For example, using Flickr Creative Commons search, you can tap millions of talented photographers for free to get photos, so illustrating thousands of pages is as simple as searching, copying, and crediting the photographer.  You can use GraphicsMagick or ImageMagick to create or annotate images algorithmically.  You can use graphing libraries to create beautiful graphs from boring CSV files — more on that later.
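If you want a sketch of what “algorithmically” can look like, here is one using the mini_magick gem (a Ruby wrapper around ImageMagick); the file names and caption are invented for illustration.

  require "mini_magick"

  image = MiniMagick::Image.open("flickr-photo.jpg")
  image.combine_options do |cmd|
    cmd.gravity "South"        # anchor the caption to the bottom edge
    cmd.pointsize 28
    cmd.fill "white"
    cmd.draw "text 0,20 'Photo: Jane Example, Flickr (CC BY)'"
  end
  image.write("american-revolution-header.jpg")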

The reason to use a CMS is that it makes content easy to create and edit, so you’ll do more of it.  Additionally, by eliminating the dependency on the webmaster, you can have non-technical employees or freelancers create content for you.  This is key to achieving scale.  You can also automate on-page SEO — proper title tags, interlinking, etc — so that content creators don’t have to worry about it themselves.

Outsource Writing

You are expensive.  English majors are cheap.  Especially in the current down economy, stay-at-home moms, young graduates, the recently unemployed, and many other very talented folks are willing to write for cheap, particularly from home.  This lets you push the marginal cost of creating a new page to $10 ~ $15 or lower.  As long as you can monetize that page at a multiple of that, you’ll do very well for yourself.  Demand Media is absolutely killing it with this model.

Finding and managing writers is difficult.  If you use freelancers and find good ones, hold onto them for dear life, since training and management are your main costs.  Standardize instructions to freelancers and find folks who you can rely on to exercise independent thinking.

You can also get content created as a service, using TextBroker.  Think of the content on your website as a pyramid: you have a few pages handwritten by domain experts with quality off the charts, and then a base of the pyramid which is acceptable but perhaps not awe-inspiring.  At the 4-star quality level, you can get content in virtually infinite quantity at 2.2 cents per word.  You can either have someone copy/paste this into your website or do a bit of work with their API and automate the whole process.

You can use software to increase the quality of outsourced content.  For example, putting a picture on a page automatically makes it better, and you can automate that process so your editors can quickly do it for all pages.  You can remix common page elements — calls to action, etc — which are polished to a mirror shine, with the outsourced content.  You can also mix content from multiple sources to multiply its effectiveness: if you have 3 user segments and 3 features they really value, that might be 9 pages.  (If you use 2 features per page, that is 18.  As you can see, the math gets very compelling very quickly.)
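One way to read that arithmetic, sketched in Ruby (the segment and feature names are invented):

  segments = ["teachers", "parents", "ESL tutors"]
  features = ["custom word lists", "printable PDFs", "calling sheets"]

  segments.product(features).size                       # => 9 single-feature pages
  segments.product(features.combination(2).to_a).size   # => 9 more two-feature pages, 18 in all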

Milk It

Now that you’re set up to do content at scale, you can focus on doing it well.  The best content is:

Modular: You can use it in multiple places on the website.  You paid good money for it.  If you can use it in two places, the cost just declined by half.

Evergreen: The best possible value for an expiration date is “never”.  Chasing the news means your content gets stale and stops providing value for the business.  Answering the common, recurring, eternal problems and aspirations of your market segment means content written this year will never go out of style.  That lets you treat content as durable capital.  Also, because it tends to pick up links over time, it will get increasing traffic over time.

The first piece of content I made for my website took me two hours to write.  It made $100 the first month.  Not bad, but why only get paid once?  It has gone on to make me thousands over the years, and it will never go out of style.

Competitively Defensible: One of the tough things about blog posts is that any idiot can get a blog up as easily as you can.  Ideally, you want to focus on content which other people can’t conveniently duplicate.  OKCupid’s blog posts about dating data are a superb example of this: they use data that only they have, and they’ve made themselves synonymous with the category.  No wonder they’re in the top 3 for “online dating”.  Proprietary data, technical processes which are hard to duplicate, and other similar barriers establish a moat around your SEO advantage.

Process-oriented: If something works, you want to be able to exploit it in a repeatable fashion.  Novelty is an excellent motivational factor and you can’t lose it, but novelty that can be repeated is a wonderful thing to have.  You also want to have a defined step where you see what worked and what didn’t, so that you can improve your efforts as you go on.

Tracking: Track what works!  Do more of that!  Install Google Analytics or similar to see what keywords people use to reach your site.  Keywords (or AdWords data) are great sources of future improvements.  Track conversions based on landing page, and create more content based on the content which is really winning.  If content should be winning but isn’t, figure out why for later iterations — maybe it needs more external or internal promotion, a different slant, a different target market, etc.

Case Study

Getting into the heads of my teachers for a moment — a key step — most teachers have a lesson planned out and need an activity to slot into it.  For example, they know they have a lesson about the American Revolution coming up.  Some of them, who already like bingo, are going to look for American Revolution bingo cards.  If my site ranked for that, that would be an opportunity to tell them that they could use software to create not just American Revolution activities but bingo for any lesson if they just bought my software.

So I made a CMS which, given a list of words and some explanatory text, would create a downloadable set of 8 bingo cards (great for parents, less great for teachers) on that topic, make a page to pitch that download in, and put an ad for Bingo Card Creator on the page.  Note how I’m using this content to upsell the user into more of a relationship with me: signing up for a trial, giving me their email address, maybe eventually buying the software.
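The card-generation half of that CMS is not much code.  Here is a minimal sketch (not the actual Bingo Card Creator implementation; the method names are invented) which turns a word list of at least 24 words into randomized 5x5 cards:

  def bingo_card(words)
    cells = words.sample(24)          # 24 random words from the list
    cells.insert(12, "FREE SPACE")    # center square
    cells.each_slice(5).to_a          # a 5x5 grid, one row per slice
  end

  def cards_for(words, count = 8)
    count.times.map { bingo_card(words) }
  end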

I have a teacher in New Mexico who produces the words and descriptions for me.  The pages end up looking like this for the American Revolution.  She produces 30 activities a month for $100, and I approve them and they go live instantly.  This has been going on for a few years.  In the last year, I’ve started doing end-to-end conversion tracking, so I can attribute sales directly to the initial activity people started with.

This really works.  Some of the activities, like Summer bingo cards or Baby Shower Bingo cards, have resulted in thousands of dollars in sales in the last year.  $3.50 in investment, thousands in returns.  And there is a long tail of results:

This graph shows the 132 of the 900 activities which generated a sale in the last year.  You can see that there is a long tail of activities which each generated exactly one sale — a hundred of them, in fact.  Sure, you might not think that Famous Volcano bingo cards would be that popular, but I’ll pay $3.50 to get a $30 sale as often as the opportunity is offered.  These will also continue producing value in the coming years, as they already have over the last several: note that roughly half of the activities which produced a sale in the last 12 months were written in 2007 or 2008.

This took only a week or two to code up, and it now takes 5 minutes a month to send my check and a thank-you note to the freelancer.  I’ve paid her about $3,000 over the last few years to write content.  In the last year alone, it has generated well over $20,000 in sales.  If you do things this efficiently, SEO becomes a channel like PPC — put in a quarter, get out a dollar, redeploy the profits to increase growth.

Any software company can create content like this, with a bit of strategic thinking, some engineering deployed, and outsourced content creation.  Try it — you can do an experiment for just a few hundred dollars.  If it works, invest more.  (Aaron Wall says that one of the big problems is that people do not exploit things that work.  If you’ve got it, flaunt it — until it stops working.)

Linkbait

Linkbait is content created to solicit links to your website.  It works by exploiting the psychology of users: people show things to their friends because they agree with them strongly, or because they hate them.  They create links because doing so creates value for them, not value for you — it increases their social status, it flatters their view of the world, it solves their problems.

All people are not equal on the Internet: twenty-something technologists in San Francisco create hundreds of times more links per year than retired teachers in Florida.  All else being equal, it makes sense to create more of your linkbait targeted at heavy-linking groups.  They’ve been labeled the Linkerati by SEOMoz, and I recommend the entire series of posts on them highly.

Software developers have some unique, effective ways to create linkbait.  For example:

Open Source Software

OSS developers and users are generally in very link-rich demographics.  OSS which solves problems for businesses tends to pick up links from, e.g., consultants deploying it — they will cite your website to justify their billing rate.  That is a huge win for you.  There are also numerous blogs which cover practically everything which happens in OSS.

OSS is fairly difficult to duplicate as linkbait, because software development is hard.  (Don’t worry about people copying it — you’ll be the canonical source, and the canonical source for OSS tends to win the citation link.  Make sure the canonical home is on your site rather than on GitHub, etc.)

OSS in new fields in software — for example, Rails development the last few years — has landgrab economics.  The first semi-decent OSS in a particular category tends to win a lot of the links and mindshare.  So get cracking!  And keep your eyes open for new opportunities, particularly for bits of infrastructural code which you were going to write for your business needs anyhow.

Case Study: A/Bingo

I’m extraordinarily interested in A/B testing, and wanted to do more of it on my site.  At the time, there was no good A/B testing option for Rails developers.  So I wrote one.  It went on to become one of the major options for A/B testing in Rails, and was covered on the official Rails blog, Ajaxian, and many other fairly authoritative places on the Internet.  It is probably the most effective source of links per unit effort I’ve ever had.

Some tactical notes:

  • Put it on your website.  You did the work, get the credit for it.
  • Invest in a logo — you can get them done very cheaply at 99designs.  Pretty things are trusted more.
  • Spend time “selling” the OSS software.  Documentation, presentation of benefits, etc.
  • OSS doesn’t have to be a huge project like Apache.  You can do projects in 1 day or 1 week which people will happily use.  (Remember, pick things which solve problems.)

Conclusion

I’m always willing to speak to people about this.  Feel free to email me (patrick@ this domain).

Speaking at Software Industry Conference

I’m currently in Dallas at the Software Industry Conference, where I’ll be giving a presentation about SEO strategies on Saturday.  In the meanwhile, if you’re at the conference or feel like coming out to the Hyatt Regency, feel free to get in touch with me.

As you have probably guessed, I’ll be posting the presentation and some textual elaboration on it right after I finish delivering it.  (I don’t know if video will be available this time.)

Running Apache On A Memory-Constrained VPS

Yesterday about a hundred thousand people visited this blog due to my post on names, and the server it was on died several fiery deaths. This has been a persistent issue for me in dealing with Apache (the site dies nearly every time I get Reddited — with only about 10,000 visitors each time, which shouldn’t be a big number on the Internet), but no amount of enabling WordPress cache plugins, tweaking my Apache settings, upgrading the VPS’ RAM, or Googling led me to a solution.

However, necessity is the mother of invention, and I finally figured out what was up yesterday. The culprit: KeepAlive.

Setting up and tearing down HTTP connections is expensive for both servers and clients, so Apache keeps connections open for a configurable amount of time after it has finished a request.  This is an extraordinarily sensible default, since the vast majority of HTTP requests will be followed by another HTTP request — fetch dynamically generated HTML, then start fetching linked static assets like stylesheets and images, etc.  For example, of the 43 requests it takes to load a typical page here, 42 are not the last request within a 3-second window.  It is a huge throughput win.  However, if you’re running a memory-constrained VPS and get hit by a huge wave of traffic, KeepAlive will kill you.

When I started getting hit by the wave yesterday, I had 512MB of RAM and a cap (ServerLimit = MaxClients) of 20 worker processes to deal with them.  Each worker was capable of processing a request in a fifth of a second, because everything was cached.  This implies that my throughput should have been close to 20 * 5 * 60 = 6,000 satisfied clients a minute, enough to withstand even a mighty slashdotting.  (That is a bit of an overestimation, since there were also static assets being requested with each hit, but to fix an earlier Reddit attack I had manually hacked the heck out of my WordPress theme to load static assets from Bingo Card Creator’s Nginx, because there seems to be no power on Earth or under it that can take Nginx down.)

However, I had KeepAlive on, set to three seconds.  This meant that for every 250ms of a worker streaming cached content to a client, it spent 3 seconds sucking its thumb waiting for that client to come back and ask for something else.  In the meantime, other clients were stacking up like planes over O’Hare.  The first twenty clients get in and, from the perspective of every other client, the site totally dies for three seconds.  Then the next twenty clients get served, and the site continues to be dead for everybody else.  Cycle, rinse, repeat.  The worst part was that people were joining the queue faster than their clients were either being served or timing out, so it was essentially a denial of service attack caused by the default settings.  The throughput of the server went from about 6,000 requests per minute to about 380 requests per minute.  380 is, well, not quite enough.
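To make the arithmetic explicit, here is a back-of-the-envelope sketch using the fifth-of-a-second service time from above (the 250ms figure gives roughly the same answer):

  \text{throughput} \approx \frac{\text{MaxClients}}{t_{\text{serve}} + \text{KeepAliveTimeout}} = \frac{20}{0.2\,\text{s} + 3\,\text{s}} \approx 6\ \text{clients per second} \approx 375\ \text{per minute}

  \text{with KeepAlive off:} \quad \frac{20}{0.2\,\text{s}} = 100\ \text{clients per second} = 6{,}000\ \text{per minute}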

Thus the solution: turning KeepAlive off.  This caused CPU usage to spike quite a bit, but since the caching plugin was working, it immediately alleviated all of the user-visible problems.  Bingo, done.
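For reference, this is a one-line change in the Apache configuration (on Ubuntu the relevant file is /etc/apache2/apache2.conf; the location varies by distribution):

  KeepAlive Off
  # Or, if you insist on keeping it, at least shrink the window:
  # KeepAliveTimeout 1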

Since I tried about a dozen things prior to hitting on this, I thought I’d quickly write them down in case you are an unlucky sod Googling for Apache settings for your VPS, possibly Ubuntu Apache settings, or that sort of thing:

  • Increase VPS RAM: Not really worth doing unless you’re on 256MB.  Apache should be able to handle the load with 20 processes.
  • Am I using pre-fork Apache or the worker MPM? If you’re on Ubuntu, you’re probably using pre-fork Apache.  Worker MPM settings will be totally ignored.  You can check this by running apache2 -l.  (This is chosen at compile time and can’t be altered via the config files, so if — like me — you just apt-get your way around getting common programs installed, you’re likely stuck.)
  • What should my pre-fork settings be then?

Assuming 512 MB of RAM and you are only running Apache and MySQL on the box:

<IfModule mpm_prefork_module>
StartServers          2
MinSpareServers       2
MaxSpareServers      5
ServerLimit          20
MaxClients           20
MaxRequestsPerChild  10000
</IfModule>
You can bump ServerLimit and MaxClients to 48 or so if you have 1GB of RAM.  Note that this assumes you’re using a fairly typical WordPress installation, and that you’ve tried to optimize Apache’s memory usage.  If you see your VPS swapping, move those numbers down (and restart Apache) until you see it stop swapping.  Apache being inaccessible is bad, but swapping might slow your server down badly enough to kill even your SSH connection, and then you’ll have to reboot and pray you can get in fast enough to tweak settings before it happens again.
  • How do I tweak Apache’s memory usage? Turn off modules you don’t need.  Go to /etc/apache2/mods-enabled.  Take note of how many things there are that you’re not using.  Run sudo a2dismod (name of module) for them, then restart Apache.  This literally halved my per-process memory consumption last night, which let me run twice as many processes.  (That still won’t help you if KeepAlive is on, but it could majorly increase responsiveness if you’ve eliminated that bottleneck.)  Good choices for disabling are, probably, everything that starts with dav, everything that starts with auth (unless you’re securing wp-admin at the server layer — in that case, enable only the module you need for that), and userdir.
  • What cache to use? WordPress Super Cache.  Installs quickly (follow the directions to the letter, especially regarding permissions), works great.  Don’t try to survive a Slashdotting without it.
  • Any other tips?  Serve static files through Nginx.  Find a Rails developer to explain it to you if you haven’t done it before — it is easier than you’d think and will take major load off your server (Apache only serves like 3 requests of the 43 required to load a typical page on my site — and two of those are due to a plugin that I can’t be bothered to patch).
  • My server is slammed and I can’t get into the WordPress admin to enable the caching plugin I just installed:  Make sure Apache’s KeepAlive is off.  Then change the access directives for the blog’s directory in your Apache configuration to

<Directory /var/www/blog-directory-getting-slammed-goes-here>
    Options FollowSymLinks
    AllowOverride All
    Order deny,allow
    Deny from all
    Allow from <your IP address goes here>
</Directory>

This will have Apache simply deny requests from clients other than yourself (although, if KeepAlive is on, Apache will still keep each denied client’s connection open so that it can deny their next request promptly, which won’t do you a lick of good — don’t use KeepAlive).  That should let you get into the WordPress admin to enable and test caching.  After doing so, you can switch to Allow from all and then test to see if your site is now surviving.

Sidenote: If you can possibly help it, I recommend Nginx over Apache.  I use Apache because a couple of years ago it was not simple to use Nginx with PHP.  This is no longer the case.  The default settings (or whatever  you’ve copied from the My First Rails Nginx Configuration you just Googled) are much more forgiving than Apache’s defaults.  It is extraordinarily difficult to kill Nginx unless you set out to do so.  Apache.conf, on the other hand, is a whole mess of black magic with subtle interactions that will kill you under plausible deployment scenarios, and the official documentation has copious explanations of What the settings do and almost nothing regarding Why or How you should configure them.

Hopefully, this will save you, brave Googling blog owner from the future, from having to figure this out by trial and error while your server is down.  Godspeed.

Falsehoods Programmers Believe About Names

[This post has been translated into Japanese by one of our readers: 和訳もあります。]

John Graham-Cumming wrote an article today complaining about how a computer system he was working with described his last name as having invalid characters.  It of course does not, because anything someone tells you is their name is — by definition — an appropriate identifier for them.  John was understandably vexed about this situation, and he has every right to be, because names are central to our identities, virtually by definition.

I have lived in Japan for several years, programming in a professional capacity, and I have broken many systems by the simple expedient of being introduced into them.  (Most people call me Patrick McKenzie, but I’ll acknowledge as correct any of six different “full” names, and many systems I deal with will accept precisely none of them.)  Similarly, I’ve worked with Big Freaking Enterprises which, by dint of doing business globally, have theoretically designed their systems to allow all names to work in them.  I have never seen a computer system which handles names properly and doubt one exists, anywhere.

So, as a public service, I’m going to list assumptions your systems probably make about names.  All of these assumptions are wrong.  Try to make fewer of them next time you write a system which touches names.

  1. People have exactly one canonical full name.
  2. People have exactly one full name which they go by.
  3. People have, at this point in time, exactly one canonical full name.
  4. People have, at this point in time, one full name which they go by.
  5. People have exactly N names, for any value of N.
  6. People’s names fit within a certain defined amount of space.
  7. People’s names do not change.
  8. People’s names change, but only at a certain enumerated set of events.
  9. People’s names are written in ASCII.
  10. People’s names are written in any single character set.
  11. People’s names are all mapped in Unicode code points.
  12. People’s names are case sensitive.
  13. People’s names are case insensitive.
  14. People’s names sometimes have prefixes or suffixes, but you can safely ignore those.
  15. People’s names do not contain numbers.
  16. People’s names are not written in ALL CAPS.
  17. People’s names are not written in all lower case letters.
  18. People’s names have an order to them.  Picking any ordering scheme will automatically result in consistent ordering among all systems, as long as both use the same ordering scheme for the same name.
  19. People’s first names and last names are, by necessity, different.
  20. People have last names, family names, or anything else which is shared by folks recognized as their relatives.
  21. People’s names are globally unique.
  22. People’s names are almost globally unique.
  23. Alright alright but surely people’s names are diverse enough such that no million people share the same name.
  24. My system will never have to deal with names from China.
  25. Or Japan.
  26. Or Korea.
  27. Or Ireland, the United Kingdom, the United States, Spain, Mexico, Brazil, Peru, Russia, Sweden, Botswana, South Africa, Trinidad, Haiti, France, or the Klingon Empire, all of which have “weird” naming schemes in common use.
  28. That Klingon Empire thing was a joke, right?
  29. Confound your cultural relativism!  People in my society, at least, agree on one commonly accepted standard for names.
  30. There exists an algorithm which transforms names and can be reversed losslessly.  (Yes, yes, you can do it if your algorithm returns the input.  You get a gold star.)
  31. I can safely assume that this dictionary of bad words contains no people’s names in it.
  32. People’s names are assigned at birth.
  33. OK, maybe not at birth, but at least pretty close to birth.
  34. Alright, alright, within a year or so of birth.
  35. Five years?
  36. You’re kidding me, right?
  37. Two different systems containing data about the same person will use the same name for that person.
  38. Two different data entry operators, given a person’s name, will by necessity enter bitwise equivalent strings on any single system, if the system is well-designed.
  39. People whose names break my system are weird outliers.  They should have had solid, acceptable names, like 田中太郎.
  40. People have names.

This list is by no means exhaustive.  If you need examples of real names which disprove any of the above commonly held misconceptions, I will happily introduce you to several.  Feel free to add other misconceptions in the comments, and refer people to this post the next time they suggest a genius idea like a database table with a first_name and last_name column.
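If you want something concrete to hand to that person, here is a minimal sketch of the alternative (my illustration, not a canonical answer; even this assumes that falsehood #40 at least occasionally holds):

  class CreatePeople < ActiveRecord::Migration
    def self.up
      create_table :people do |t|
        t.text :full_name       # free-form, Unicode, may change, may repeat, may be blank
        t.text :preferred_name  # what they asked to be called, if that differs
        t.timestamps
      end
    end

    def self.down
      drop_table :people
    end
  end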

Detecting Bots with Javascript for Better A/B Test Results

I am a big believer in not spending time creating features until you know customers actually need them.  The same goes for OSS projects: there is no point in overly complicating things until “customers” tell you they need to be a little more complicated.  (Helpfully, here some customers are actually capable of helping themselves… well, OK, it is theoretically possible at any rate.)

Some months ago, one of my “customers” for A/Bingo (my OSS Rails A/B testing library) told me that it needed to exclude bots from the counts.  At the time, all of my A/B tests were behind signup screens, so essentially no bots were executing them.  I considered the matter, and thought “Well, since bots aren’t intelligent enough to skew A/B test results, they’ll be distributed evenly over all the items being tested, and since A/B tests measure for difference in conversion rates rather than measuring absolute conversion rates, that should come out in the wash.”  I told him that.  He was less than happy about that answer, so I gave him my stock answer for folks who disagree with me on OSS design directions: it is MIT licensed, so you can fork it and code the feature yourself.  If you are too busy to code it, that is fine, I am available for consulting.

This issue has come up a few times, but nobody was sufficiently motivated about it to pay my consulting fee (I love when the market gives me exactly what I want), so I put it out of my mind.  However, I’ve recently been doing a spate of run-of-site A/B tests with the conversion being a purchase, and here the bots really are killers.

For example, let’s say that in the status quo I get about 2k visits a day and 5 sales, which are not atypical numbers for summer.  To discriminate between that and a conversion rate 25% higher, I’d need about 56k visits, or a month of data, to hit the 95% confidence interval.  Great.  The only problem is that A/Bingo doesn’t record 2k visits a day.  It records closer to 8k visits a day, because my site gets slammed by bots quite frequently.  This decreases my measured conversion rate from .25% to .0625%.  (If these numbers sound low, keep in mind that we’re in the offseason for my market, and that my site ranks for all manner of longtail search terms due to the amount of content I put out.  Many of my visitors are not really prospects.)

Does This Matter?

I still think that, theoretically speaking, since bots aren’t intelligent enough to convert at different rates over the alternatives, the A/B testing confidence math works out pretty much identically.  Here’s the formula for Z statistic which I use for testing:
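(The formula appeared as an image in the original post.  Reconstructed here, assuming the standard unpooled two-proportion z-test that the surrounding text describes, with cr and n subscripted for the two alternatives A and B:)

  z = \frac{cr_A - cr_B}{\sqrt{\dfrac{cr_A (1 - cr_A)}{n_A} + \dfrac{cr_B (1 - cr_B)}{n_B}}}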

The CR stands for Conversion Rate and n stands for sample size, for the two alternatives used.  If we increase the sample sizes by some constant factor X, we would expect the equation to turn into:
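(Also reconstructed from the now-missing image: each n becomes X n, and each measured cr becomes cr / X, because the same number of conversions is spread over X times as many recorded visitors:)

  z' = \frac{\dfrac{cr_A - cr_B}{X}}{\sqrt{\dfrac{\dfrac{cr_A}{X}\left(1 - \dfrac{cr_A}{X}\right)}{X n_A} + \dfrac{\dfrac{cr_B}{X}\left(1 - \dfrac{cr_B}{X}\right)}{X n_B}}}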

We can factor out 1/X from the numerator and bring it to the denominator (by inverting it).  Yay, grade school.

Now, by the magic of high school algebra:
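(Reconstructed intermediate step: each term under the square root carries a factor of 1/X^2, which comes out of the root as 1/X:)

  z' = \frac{\dfrac{1}{X}\left(cr_A - cr_B\right)}{\dfrac{1}{X}\sqrt{\dfrac{cr_A \left(1 - \dfrac{cr_A}{X}\right)}{n_A} + \dfrac{cr_B \left(1 - \dfrac{cr_B}{X}\right)}{n_B}}}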

If I screw this up the math team is *so* disowning me:
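(Reconstructed final form, after the 1/X factors cancel:)

  z' = \frac{cr_A - cr_B}{\sqrt{\dfrac{cr_A \left(1 - \dfrac{cr_A}{X}\right)}{n_A} + \dfrac{cr_B \left(1 - \dfrac{cr_B}{X}\right)}{n_B}}}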

Now, if you look carefully at that, it is not the same equation as we started with.  How did it change?  Well, the complement of the conversion rate, the (1 – cr) term, became (1 – cr/X), which is closer to 1 than it was previously.  (You can verify this by taking the limit as X approaches infinity.)  Getting closer to 1 means the terms under the square root in the denominator get bigger, which means the denominator as a whole gets modestly bigger, which means the Z score gets modestly smaller, which could possibly hurt the calculation we’re making.

So, assuming I worked my algebra right here, the intuitive answer that I have been giving people for months is wrong: bots do bork statistical significance testing, by artificially depressing z scores and thus turning statistically significant results into null results at the margin.

So what can we do about it?

The Naive Approach

You might think you can catch most bots with a simple User-Agent check.  I thought that, too.  As it turns out, that is catastrophically wrong, at least for the bot population that I deal with.  (Note that since keyword searches would suggest that my site is in the gambling industry, I get a lot of unwanted attention from scrapers.)  It barely got rid of half of the bots.

The More Robust Approach

One way we could try restricting bots is with a CAPTCHA, but it is a very bad idea to force all users to prove that they are human just so that you can A/B test them.  We need something totally automated that is difficult for bots to do.

Happily, there is an answer for that: arbitrary Javascript execution.  While Googlebot (+) and a (very) few other cutting edge bots can execute Javascript, doing it at web scale is very resource intensive, and it also requires substantially more skill from the bot-maker than scripting wget or your HTTP library of choice.

+ What, you didn’t know that Googlebot could execute Javascript?  You need to make more friends with technically inclined SEOs.  They do both full evaluation (i.e. executing all of the Javascript on a page, just like a browser would) and partial evaluation by heuristics (i.e. grepping through the code and making guesses without actually executing it).  You can verify full evaluation by taking the method discussed in this blog post and tweaking it a little bit to use GETs rather than POSTs, then waiting for Googlebot to show up in your access logs for the forbidden URL.  (Seeing the heuristic approach is easier — put a URL in syntactically live but logically dead code in Javascript, and watch it get crawled.)

To maximize the number of bots we catch (and hopefully restrict the ones that slip through to Googlebot, which almost always correctly reports its user agent), we’re going to require the agent to perform three tasks:

  1. Add two random numbers together.  (Easy if you have JS.)
  • Execute an AJAX request via Prototype or jQuery.  (Loading those libraries is, hah, “fairly challenging” to do without actually evaluating them.)
  3. Execute a POST.  (Googlebot should not POST.  It will do all sorts of things for GETs, though, including guessing query parameters that will likely let it crawl more of your site.  A topic for another day.)

This is fairly little code.  Here is the Prototype example:


  // Pick two small random integers and POST a, b, and their sum back to the server.
  var a=Math.floor(Math.random()*11);
  var b=Math.floor(Math.random()*11);
  // Ajax.Request defaults to method: 'post', so this is the POST we want.
  var x=new Ajax.Request('/some-url', {parameters:{a: a, b: b, c: a+b}});

and in jQuery:


  // Same idea with jQuery: POST the two numbers and their sum.
  var a=Math.floor(Math.random()*11);
  var b=Math.floor(Math.random()*11);
  var x=jQuery.post('/some-url', {a: a, b: b, c: a+b});

Now, server side, we take the parameters a, b, and c, and we see if they form a valid triplet.  If so, we conclude they are human.  If not, we continue to assume that they’re probably a bot.
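Server-side, the check is only a few lines.  This is a minimal sketch, not A/Bingo’s actual implementation; the controller name, route, and session key are invented for illustration:

  class HumanityController < ApplicationController
    # Receives the AJAX POST from the snippets above.
    def prove
      a, b, c = params[:a].to_i, params[:b].to_i, params[:c].to_i
      # A valid triplet is our (weak, but sufficient) proof of Javascript execution.
      session[:known_human] = true if a + b == c
      head :ok
    end
  end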

Note that I could have been a bit harsher on the maybe-bot and given them a problem which trusts them less: for example, calculate the MD5 of a value that I randomly picked and stuffed in the session, so that I could reject bots which hypothetically tried to replay previous answers, or bots hand-coded to “knock” on a=0, b=0, c=0 prior to accessing the rest of my site.  However, I’m really not that picky: this isn’t to keep a dedicated adversary out, it is to distinguish the overwhelming majority of bots from humans. (Besides, nobody gains from screwing up my A/B tests, so I don’t expect there to be dedicated adversaries. This isn’t a security feature.)

You might have noticed that I assume humans can run Javascript.  (My site breaks early and often without it.)  While it is not specifically designed that Richard Stallman and folks running NoScript can’t influence my future development directions, I am not overwrought with grief at that coincidence.

Tying It Together

So now we can detect who can and who cannot execute Javascript, but there is one more little detail: we potentially learn about your ability to execute Javascript after you’ve started an A/B test.  For example, it is quite possible (likely, in fact) that the first page you load has an A/B test in it somewhere, and that the AJAX call from that page which registers your humanness arrives after we have already counted (or not counted) your participation in the A/B test.

This has a really simple fix.  A/Bingo already tracks which tests you’ve previously participated in, to avoid double-counting.  In “discriminate against bots” mode, it tracks your participation (and conversions) but does not add them to the totals immediately unless you’ve previously proven yourself to be a human.  When you’re first marked as a human, it takes a look at the tests you’ve previously participated in (prior to turning human), and scores your participation for them after the fact.  Your subsequent tests will be scored immediately, because you’re now known to be human.
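Sketched in code (an illustration of the idea rather than the real A/Bingo internals; the helper and session key names are invented):

  def score_participation(test, choice)
    if session[:known_human]
      record_participation(test, choice)            # counts toward the totals immediately
    else
      (session[:pending] ||= []) << [test, choice]  # remember it, but do not count it yet
    end
  end

  def mark_as_human
    session[:known_human] = true
    # Score everything this visitor did before proving themselves human.
    (session[:pending] || []).each { |test, choice| record_participation(test, choice) }
    session[:pending] = nil
  end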

Folks who are interested in seeing the specifics of the ballet between the Javascript and server-side implementation can, of course, peruse the code at their leisure by git-ing it from the official site.  If you couldn’t care less about implementation details but want your A/B tests to be bot-proof ASAP, see the last entry in the FAQ for how to turn this on.

Other Applications

You could potentially use this in a variety of contexts:

1) With a little work, it is a no-interaction-required CAPTCHA for blog commenting and similar applications. Let all users, known-human and otherwise, immediately see their comments posted, but delay public posting of the comments until you have received the proof of Javascript execution from that user. (You’ll want to use slightly trickier Javascript, probably requiring state on your server as well.) Note that this will mean your site will be forever without the light of Richard Stallman’s comments.

2) Do user discrimination passively all the time. When your server hits high load, turn off “expensive” features for users who are not yet known to be human. This will stop performance issues caused by rogue bots gone wild, and also give you quite a bit of leeway at peak load, since bots are the majority of user agents. (I suppose you could block bots entirely during high load.)

3) Block bots from destructive actions, though you should be doing that anyway (by putting destructive actions behind a POST and authentication if there is any negative consequence to the destruction).

Interviewed by Andrew Warner On Entrepreneurship [Video]

The interview I mentioned earlier got rescheduled due to technical difficulties, but it is now up on Mixergy’s site.  You can see it here.

Topics include:

  • Why would teachers want to play bingo anyhow?
  • How did you pull this off while full-time employed?
  • What is it like being a Japanese salaryman?
  • What is the next product?  (Spoiler: Not telling you yet, come back in May.)
  • How did you get traction early at the start?
  • How do you make your processes more reliable to maximize the effectiveness of your time?

I’m pretty happy with how it came out, although given that it was about 2 in the morning when I recorded it due to time zone differences, sometimes my ability to speak in coherent sentences leaves a bit to be desired.  If you have any questions, feel free to comment here or there.

Peldi from Balsamiq Interviewed For An Hour

Peldi from Balsamiq, who is hugely inspiring to the rest of the uISV community and myself, was interviewed for over an hour earlier this week on Mixergy.  Go watch it.  Everything he says about customer service, building remarkable products, early marketing (his post on the subject contains some of the best advice I’ve ever read), and competition just knocks it out of the park.

For folks here who have been reading me for a while but do not know about Mixergy yet: Andrew Warner does interviews with successful Internet business folks.  Most of them are inspiring, and many have killer, actionable tips that you can use in your businesses.  (I particularly like the one with the Wufoo guys, Peldi’s, and this one by Hiten Shah of Kissmetrics and, earlier, CrazyEgg, which I’ve mentioned a time or three here.)

Andrew interviewed me earlier, too.  The interview and transcript will be up one of these days, after the editors have made me sound intelligible.  (It is amazing what you can do with computers!)

Data Driven Software Design Presentation (plus bonus interview)

Last week I went down to Osaka to give a presentation to the Design Matters group at the Apple Store.  I originally prepared a very geeky software-centric dive into the magic of using statistics to improve your software, but I was informed that the audience wouldn’t be as geeky as I had expected, so with great help from Andreas and company I retooled the presentation into something less technical and more interesting on the same topic.  I don’t believe it was videotaped, but you can see my presentation and notes on Data-driven Software Design below:

Data-Driven Software Design

(Incidentally, that Slideshare widget is great SEO, now isn’t it?  I’m leaving their links attached out of sheer amusement.)

After the presentation, I met with some folks from MessaLiberty, one of the most impressive companies I’ve seen in Japan.  They do lots of WordPress/website consulting and are coming out with a recommendation engine product one of these days — all with a team of about seven young engineers working sane hours.  Ah, there is hope for the future yet.

Anyhow, they asked if they could interview me for their video blog.  You can see the interview in English and, in the near future (after they get done editing it), in Japanese.  Topics include a brief overview of the above presentation, when you should start A/B testing versus when to redirect your efforts elsewhere, and my advice for getting a job in Japan (spoiler: learn Japanese).

Quick Start For Rails on Windows Seven

Today I killed a few hours getting my Rails environment working on my brand new shiny 64 bit Windows Seven laptop.  These instructions should also work with Windows Vista.  I’m assuming you’re a fairly experienced Rails developer and just ended up in dependency purgatory like I did for the last few hours.

1.  Grab the MySQL developer version for your architecture (32 bit or 64 bit as appropriate) here.

2.  Grab Ruby here.  I used the 1.8.6 RC2 installer for my 64 bit architecture.

3.  Add C:\Ruby\bin to your path.  You can do this on Windows by opening the Start Menu, right clicking My Computer, clicking Properties, clicking Advanced / System Settings, and then adding it to the end of the PATH variable on the lower of the two dialogs.  Apologies for inexact setting names, my computer is Japanese so I’m working from memory.

4.  Verify that your path includes C:\Ruby\bin by opening a new command line and executing “path”.

5.  Good to go?  OK, execute:

gem install --no-rdoc --no-ri rails
gem install mysql

You’ll get all manner of errors on that MySQL installation. That is OK.

6. Here’s the magic: copy libmySQL.dll from here to C:\Ruby\bin . If you do not do this, you will get ugly errors on Rails startup about not being able to load mysql_api.so.

You should now be able to successfully work with Rails as you have been previously, even from your Windows machine, and you will amaze your Mac-wielding friends.

Getting Interviewed By Andrew Warner at Mixergy

Andrew Warner of Mixergy will be interviewing me at 11 AM Pacific tomorrow, which is something like 14 hours from the timestamp on this post.  If it is between 11 AM and noon Pacific, you can catch the live interview and participate in a chatroom.  I’m told the main theme for the interview will be a business biography, so my regular readers are likely going to hear a lot of things you already know (“It makes bingo cards!  Wow, fancy that.”), but Andrew has a way of wheedling secrets out of people so I’m sure you’ll still enjoy it.

If you have any subject you’d particularly like to hear about, please post it in the comments and I’ll tell Andrew to ask about it.