In 2008, an inversion took place, and, as with most revolutions, it happened quietly. Online searches ceased being an expression of intent — a directive, an instruction, a query. Search became something larger — a query back to the user, a nudge, a prompt.
With the flip of a switch, Google engineer Kevin Gibbs took an opt-in beta feature of the web search giant into the mainstream, altering the way that we use search forever, tempting us to follow new and unexpected paths, and potentially rewiring our brains in the process.
Today’s Insiders will explore the evolution of algorithmic prompts, morbid curiosity, and how Commerce was at the center of it all.
The Power of Suggestion
Google wasn’t the first to roll out a text input suggestion tool. But this product differed from those provided by Yahoo and MSN. The popular web portals of the day delivered typeahead suggestions of other successful user searches. They would prompt you based on other user inputs that resulted in affirmative clicks through to the desired source material. Google Suggest (hereafter, Suggest) took it a step further by prompting the user based on the target content’s contextual meaning.
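The Yahoo/MSN-style approach can be sketched in a few lines: rank typeahead candidates purely by how often past users clicked through on them. (The query log, counts, and function names below are invented for illustration — a toy model, not anyone's production algorithm.)

```python
from collections import Counter

# Hypothetical log of (query, clicked_through) pairs
query_log = [
    ("what time is it", True),
    ("what time is it", True),
    ("what are these strawberries", True),
    ("what time is it", False),
    ("what is love", True),
]

# Count only the searches that ended in an affirmative click-through
clicks = Counter(q for q, clicked in query_log if clicked)

def suggest(prefix, k=3):
    """Return up to k past queries starting with `prefix`,
    most-clicked first -- the pre-Suggest heuristic."""
    matches = [(q, n) for q, n in clicks.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda pair: -pair[1])][:k]

print(suggest("what"))  # "what time is it" ranks first: most click-throughs
```

Suggest's innovation was to go beyond this popularity contest and weigh the contextual meaning and authority of the target content — which, as we'll see, is exactly where things went sideways.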
On the surface, it seems like Suggest was opening a noble discourse with the average Google user. Discourse is a series of questions, answers, and ponderances. “We think we understand what you may be looking for — are any of these related searches relevant?” But Suggest was not a conversation. It was an interruption, sometimes laced with the non sequitur, the offbeat, or the outrageous.
We’ve written about doom-scrolling and its link to algorithmic post-hypnotic suggestion in these pages in the past. Suggest’s launch day sent an unsuspecting public down a rabbit hole. In the early days of Suggest, engineers watched blogs like Digg and Reddit for unexpected results. Some results became lore for those of us old enough to retell the oral history of the Internet.
One such Suggest result stands out as being indicative of the product’s shortcomings in its early launch period. The query input “what” or “what are” often produced results like those shown below, to hilarious effect. The suggestion “what are these strawberries doing on my nipples i [sic] need them for the fruit salad” is the third result.
The Suggest algorithm was finely tuned in the beta years to take into account the authority of the source. As expected, published works ranked highly in the algorithm, as did products for sale. Titles on Amazon.com tended to litter results with offbeat and unexpected suggestions. Vanessa Feltz’s paperback title What are These Strawberries Doing on my Nipples?… I Need Them for the Fruit Salad! was one such title.
Feltz was a mainstay personality on the BBC’s early morning shows and other British programming throughout the 1990s, and What are these Strawberries… made her a published author in 1994. Strawberries was a 256-page collection of offbeat, sometimes satirical, sex advice from the would-be advice columnist for SHE Magazine.
The design of an algorithm doesn’t often take into account the maximalist and absurdist tendencies used to attract attention. At least, it didn’t in 2008. A book by a published author and television presenter, available for sale at the “world’s largest bookstore,” seems like a perfectly reasonable response for a tool like Suggest.
As it happens, Commerce was the authority that drove the relevancy.
Before the Internet, you had to go looking for trouble. Signals of interest were deliberate actions: driving into the parking lot, entering the building, pulling out your wallet, spending cash. The Web turned it upside down: trouble came looking for you.
Starting in 2008, perfectly innocuous searches became a starting point for inspiration and discovery of content of a more curious, more illicit nature, spreading like wildfire throughout the internet. The interruption of the algorithm doesn’t just provide the kindling; it provides the spark, too.
The Medium is the Message
A popular quip as of late from Future Commerce co-founder Brian Lange is “the medium is the message!”, evoking the media futurist and theorist Marshall McLuhan.
McLuhan envisioned a world where “the media” wasn’t just a collection of passive channels or published works. Media was thought incarnate; the ability to conjure ideation in its consumer.
From “Is Google Making Us Stupid?” in The Atlantic, published in the July/August 2008 issue:
And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. — Nicholas Carr
One could argue that Google didn’t just improve our discourse with others by making passive suggestions in reaction to a query; it altered the way that we think. It changed the way we store information and that changes the way we are inspired or delighted in our everyday use of search — not just on Google, but everywhere.
Suggest’s Strawberries flub didn’t just direct traffic to Amazon or cause a few people to search out raunchier content for a day on the Net; it spawned a subgenre of content creation. Hilarious book reviews for Strawberries began to appear.
I had hoped this would have advise for handling situations where one finds strawberries on various parts of their anatomy. I've had strawberries on my buttocks for some time now and don't know what to do. Unfortunately this book focuses solely on the nipples. Hopefully the author will pen a followup.
Within days the search was patched, and Suggest no longer provided Strawberries or any of Feltz’s other work as a related query for the “what are these” prompt. Where the algorithm had nudged us, the reaction of the internet caused engineers to nudge the algorithm back.
But the damage had been done. Derivative works and internet memes exploded, and users began hunting for comedic (and sometimes dystopic) examples of Suggest. Tumblr blogs collected example screenshots, and fascinating results began to emerge along racial and socioeconomic lines.
As venture capitalist Ben Casnocha noted on his blog at the time, grammatical distinctions seemed to yield disturbing results. A query for “is it wrong to…” would yield the prompts:
- Is it wrong to… sleep with your cousin
- Is it wrong to… cheat
- Is it wrong to… question god
Whereas the query “is it ethical to…” would yield the prompts:
- Is it ethical to… sell customer information
- Is it ethical to… conduct research on animals when it causes pain and discomfort
- Is it ethical to… eat meat
Michael Agger wrote in a 2009 piece for Slate that there were contrasts between “smart” searches and “dumb” searches. The use of the numeral 2 in place of the word “to” would return prompts like “how 2 get weed” or “how 2 get pregnant”. Contrast this with college-level grammar:
People who start their search “how one might” are more likely to search “how one might discover a new piece of music” or “how one might account for the rise of andrew jackson in 1828.”
Suggest didn’t just introduce us to new rabbit holes. It revealed to us our digital tribes.
Queries, Prompts, and Anti-Search Patterns
Fast-forward 14 years, and today suggestive algorithms are the norm. Our use of digital products depends on the prompts they return to us. Our interactions with the algorithm no longer require a click-through; in fact, they don’t require any affirmative action on our part whatsoever.
Backscroll, dwell time, number of repeats of a video, pausing, zooming; these micro-interactions are the new intent-harvesting that platforms like Instagram and TikTok use to gauge our interest. It’s our modern-day nudge back at the data set that tells us what we will interact with. Mind you, not what we like, but what will garner a reaction.
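The shift from click-throughs to passive signals can be made concrete with a toy scoring sketch. (The signal names and weights below are invented for illustration; real platforms use far more signals and learned, not hand-set, weights.)

```python
def engagement_score(signals):
    """Combine passive micro-interactions into one score.
    Note that none of these signals require a 'like'."""
    weights = {
        "dwell_seconds": 0.5,   # time spent lingering on the item
        "replays": 2.0,         # repeat views of a video
        "paused": 1.0,          # paused to look closer
        "zoomed": 1.5,          # pinched in on the image
        "backscrolled": 1.0,    # scrolled back up to see it again
    }
    return sum(weights[k] * signals.get(k, 0) for k in weights)

# Two hypothetical posts: one skimmed past, one morbidly lingered on
skimmed     = {"dwell_seconds": 2}
lingered_on = {"dwell_seconds": 9, "replays": 2, "paused": 1}

print(engagement_score(skimmed))      # 1.0
print(engagement_score(lingered_on))  # 9.5
```

The post you paused on in disgust outscores the one you breezed past and even liked — which is precisely how morbid curiosity gets weaponized.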
TikTok’s reputation as a teeny-bopper, booty-dancing platform is propagated by the fact that this genre of content has high engagement. If you’re like Willem, you probably didn’t realize that you don’t have to “like it” to tell the algorithm that you have interacted with it. Our morbid curiosity is weaponized against us.
Today, algorithms aren’t updated en masse quarterly, or in named-release cycles. They’re tailored by recent interactions, within milliseconds. Carole Cadwalladr wrote for The Guardian in 2016: “I typed: a-r-e. And then j-e-w-s… It offered me a choice of potential questions it thought I might want to ask: ‘are jews a race?’, ‘are jews white?’, ‘are jews christians?’, and finally, ‘are jews evil?’”
Soon after, “are women” produced a similar result. But we’re no longer in algorithmic-prompt territory; we’re in a filter bubble. Google Featured Snippets lend “authoritative sources” legitimacy on a given search prompt, no matter how vile the results may be.
Post-2016, we’re more aware of social media’s effect on our points of view, and especially aware of our media diet’s effect on our tastes. In our 2022 Visions Report, we surveyed 1,000 consumers, asking them if they hide their behavior from the algorithm. The results were surprising: 43% of our study participants have changed their digital behaviors to hide from the algorithm.
Instead of hiding from the algorithm, some are fighting back. Most platforms, Twitter and TikTok included, have included feedback mechanisms to allow users to “train” the algorithm. “I don’t like this” or “I’m not interested in this” features allow a product owner to take those signals into account.
But it doesn’t always work, especially with paid content. To counter-program the algorithm, many users are intentionally delivering negative intents. These behaviors can be as benign as muting or blocking a user or keyword. But some are more hostile, reporting a disliked post as “unsafe content”, for instance. Some users take the time to use the search bar as an “anti-search” bar. One anonymous TikTok user told me “I typed I hate Fabletics into the search bar enough times that it finally stopped suggesting the brand to me.”
While expressing hatred as an anti-search pattern may yield short-term results, I fear for the long-term behaviors this creates. Our short-form content media diet is already fickle, but the swift turn to plaintext expressions of hatred is worrisome, especially when adjacent positive searches yield similar results. Rather than telling the algorithm what you hate, why not tell it what you like?
This training period makes us far more fearful of leaving a platform once we have groomed it to our liking. This is a new form of lock-in: there is no data export ability, no preference map, and no migration or portability.
Worse still, there is no such thing as algorithmic feature parity, because our ever-changing tastes are the feature. This worry, which we have termed “algorithm anxiety”, elevates the platform from a pet that we train into a god that we fear. As we wrote in Insiders #108: The Idolatry of the Algorithm:
What we gain in [a positive] experience, we trade-off in a background static of worry and paranoia about ruining some perceived gain or progress. A squandered time investment made in "perfecting" the engine. When the engine works in our favor, it feels like divine blessing. When it works against us, it feels like punishment.
The inversion that took place in 2008 caused a butterfly effect of inspiration and causation that we’re still reckoning with today. As McLuhan predicted, the medium is the message. Part of our lived experience is in video and in text, delivered by digital networks, created by billions of your peers. Everything in your feed is an outrage, an earworm, because your fellow humans have already sorted it for you, filtering out the boring and the mundane; leaving only the scintillating and the morbid for you to follow.
Today we don’t need a Tumblr post or a Digg.com frontpage story to go viral for us to fight back. Our means of contending with it is to train it, or to unplug from it.
When you wake from the algorithmic dreamstate, you may find yourself wondering “just what are these strawberries doing on my nipples?”