It sure can, especially if a lot of testing is done. According to a recent Wired article on A/B testing, in 2011 Google ran over 7,000 A/B (or multivariate) tests to optimize its products. Google started conducting A/B tests on its search algorithm in 2000 and has been addicted ever since. Amazon, eBay, and Netflix are addicted as well. Big surprise? Not really.
This raises an interesting question: how often are we the guinea pigs whose behavior feeds the results of an online experiment? I feel important. The Wired article sums it up nicely:
Today, A/B is ubiquitous, and one of the strange consequences of that ubiquity is that the way we think about the web has become increasingly outdated. We talk about the Google homepage or the Amazon checkout screen, but it’s now more accurate to say that you visited a Google homepage, an Amazon checkout screen. What percentage of Google users are getting some kind of “experimental” page or results when they initiate a search? Google employees I spoke with wouldn’t give a precise answer—”decent,” chuckles Scott Huffman, who oversees testing on Google Search.
Obviously, these companies (Google, Amazon, eBay, Netflix) are conversion machines. They have very specific and measurable goals and tons of traffic volume to gauge how well those goals are being achieved.
They're doing a fantastic job of evolving their sites, probably because they have plenty of resources dedicated to test optimization, along with user experience and design specialists guiding the sites as they morph over time. They're doing such a good job that you don't even know you're being subjected to a test, and as the sites evolve, the changes all seem logical and consistent.
Testing needs to be balanced with design strategy. Most sites started with a design concept for a reason. Ideally that concept was based on user research or on the performance of previous designs. Testing ensures continuous performance improvements and insights, but because tests are most often run on small, isolated sections of a website, user experience professionals need to be included in the evolution of the site to ensure consistency across the full design. If they're not part of this process, it can create design islands (can I say that?) within a site that disrupt the user experience. Suppose a green background wins the test on product page A, a red background wins on product page B, and a blue background wins on product page C; despite those discrete winning results, it might not be best to ship a different color style to each individual product page (see the sketch below).
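To make the design-island risk concrete, here's a minimal, hypothetical sketch in Python. All page names, colors, and numbers are invented for illustration; it simply runs a standard two-proportion z-test on each page's isolated A/B results. Each page can legitimately declare a different color the statistically significant winner, even though shipping three different backgrounds would fragment the overall design.

```python
# Hypothetical example: three product pages, each with its own isolated A/B test
# of a background color. All figures below are made up for illustration.
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic comparing conversion rates of control (A) and challenger (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# (page, challenger color, control (conversions, visitors), challenger (conversions, visitors))
tests = [
    ("product page A", "green", (400, 10_000), (460, 10_000)),
    ("product page B", "red",   (380, 10_000), (450, 10_000)),
    ("product page C", "blue",  (410, 10_000), (470, 10_000)),
]

for page, color, (c_a, n_a), (c_b, n_b) in tests:
    z = two_proportion_z(c_a, n_a, c_b, n_b)
    winner = color if z > 1.96 else "control"  # ~95% confidence threshold
    print(f"{page}: z = {z:.2f}, winner = {winner}")
```

Run in isolation, every page reports a significant winner, and each winner is a different color. The statistics are fine; the design question of whether the site should actually look like that is a separate call, which is exactly where the UX and design folks come in.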
So keep on testing, a lot, but be sure to consult your user experience folks and designers to prevent design islands.