The birth & death of search engine optimization
Published on , 3872 words, 15 minutes to read

A brown-haired catgirl with her hair tied back in a ponytail in a green hoodie runs down an idyllic forest path. She is wearing a pair of blue running shoes and dark green yoga pants. An animal vaguely resembling a fox is running alongside her. - Hohaku-XL Beta
Searching the Internet for information sucks. We live in an age of information surplus. At any point in the internet there are an unimaginable number of things to read, watch, listen to, and play through. The average person's backlog of entertainment stretches for several lifetimes.
It is impossible to consume all of the information on the Internet. It was impossible even when the Internet was much smaller. Much, much smaller.
The early Internet
Let's take a moment to imagine what it was like to be an early Internet user in the late 90's. Your main connections to the outside world were television broadcasts over the air, the mail system, and the telephone. When people got computers in their houses, they were tools for document typesetting, playing new kinds of video games, financial spreadsheets, and other such practical tasks.
In order to connect out into the Internet, you had to dial into the number for your Internet Service Provider (ISP) over your phone line. You could then connect to your ISP's email servers to fetch your new email and send everything in your local outbox. While you were connected, the phone line was tied up: nobody else in the house could make or receive calls. This was a problem for many families.
One of the early inventions was the World Wide Web (WWW) protocol and its associated HyperText Transfer Protocol (HTTP). This allowed people to publish documents written in HyperText Markup Language (HTML) so that other people could read them. This was the birth of the modern Internet and a new class of tool called a "web browser".
In the early days of this brave new world wide web, individual pages were mostly self-standing things. People handcrafted "home pages" full of the links they cared about so that they could find them more easily. In those days, finding a new website that you enjoyed meant that you connected to the FTP server for your ISP and updated your personal home page to include a link to it. If you were lucky, those links were categorized.
These web pages were usually published by enthusiasts, college professors, researchers, and individuals who wanted to earnestly share their knowledge with the world. There was no pressure to make money. There was no ultimate drive to succeed. People just wanted to record things like what happened with robotics in 1984 or how to make a really good chocolate cake. And then they wanted to share that information with whoever wanted to read it.
This formed a "web" of "links" between things. You could follow a link on one website and end up somewhere completely different. It was totally decentralized, and it was beautiful.
But it was also a mess. There was no way to find anything. Everything was an unsorted mass of information and finding any one thing in particular was difficult. This is where search engines came in.
So, with this in mind, let's imagine what a huge innovation search engines actually were. No longer did you need to collect a list of paleontology professors in order to find out information about dinosaurs. You could just type the name of the dinosaur you wanted into a box and hit enter. You would get a list of websites about that dinosaur. You could then click on one of those links and read about it.
All of that research, all of that knowledge, all of that information was at your fingertips. You could find anything you wanted to know about literally anything. It was a huge leap in capability.
This is the huge advantage that Google gave to early Internet users. It enabled people to find out things like that chocolate cake recipe that they'd been looking for. It allowed you to find someone's Super Mario 64 fansite to find out how many times you needed to press the "A" button to beat the game.
The use of this technology exploded. All eyes were on the top of the rankings because users are lazy: 99% of traffic goes to page 1, and 1% or less ends up on anything beyond it. If you wanted traffic to your page, you needed to be on page 1. If you wanted to be on page 1, you needed to understand how the search engine worked so that you could make your website rank higher.
This is why search engine optimization (SEO) was born. It is a way to game the system and get your website ranked first. Today, it is a multi-billion dollar industry, and it is somehow still growing.
Let's say you have an online store that sells radishes. If you wanted people to be able to find your website, you needed to make sure that it ranked highly for the word "radishes". The best way to do that was to get people to write articles about radishes on your website so that you'd become seen as an expert in radishes. You would then get a lot of links back to your website (backlinks) from other websites about radishes. This would make your website rank higher for the word "radishes".
This is why you see so many websites for companies that have a blog section. Half of it is to get people to write about what they enjoy so that they can express themselves in new and interesting ways; but the secret other half is that if you're an expert in radish science, people will really want to buy your radishes.
The incentives entirely changed when online advertising was invented.
Now, instead of just making money selling radishes and radish consumption accessories, you could make money from people merely viewing your website about radishes. Combine this with the fact that people only look at the first page of search results, and you have a recipe for disaster.
It's easy to imagine how someone could make a website about random topics that ranks highly in search engines and then fill it with advertisements. This is the birth of the modern "content farm": a website that doesn't publish information out of the joy of research, writing, or expression. Its articles exist only to rank highly in search engines so that the owner can make money from advertisements when people mistake it for a real website.
This is why search engine results have become useless for actually learning about things most of the time. Producing bullshit that has no basis in fact is astronomically cheaper than spending the time needed to make a quality article.
Here's a fun experiment to try. Take an open source project such as yt-dlp and try to find it from a very generic term like "youtube downloader". You won't be able to find it because of all of the content farms that try to rank at the top for that term, even though yt-dlp is probably exactly the tool you want for downloading video from YouTube.
This is the era we are currently in. Actual information that you want is locked down in social networks that are turning off search access to the public. The only way that I'd expect someone to find out that yt-dlp exists is to ask someone else that already knows about it.
The main thing stopping more content farms is that setting everything up takes time. You have to understand the tooling at play. You have to set up your own servers. You have to configure the blog software. You have to create or purchase templates and WordPress plugins that make your website rank highly. You have to reach out to actually successful people and get them to link to you. You have to do all of this before you can even start writing the bullshit "information" that you want to write.
This has taught people that they should not use tools like Google to learn about new topics. People search Reddit for information about bread machines instead of searching Google to find words that resemble information about bread machines. This is why the younger generations are searching TikTok for information they need instead of using Google.
Above all, this is why it's probably better to learn about new things from ChatGPT instead of Google. ChatGPT has the problem of hallucinating information that isn't true, but at least it's not trying to sell you something.
Earlier this year when I got my hands on the ChatGPT API (along with the rest of the world), I decided to make a tool that would generate infinite bullshit about a given topic. This was to help me work through my fears about an Internet full of un-content so that I could put them to the canvas to see how scary they really were. I call the results Arsène.
For reasons which are trivial and thus left as an exercise to the reader, I decided to make this tool generate horoscopes about cryptocurrency. Most financial analysis is just generic bullshit anyway, so it seemed like a good fit.
Every 12 hours, the Arsène server collects the following information:
- The current phase of the moon
- The current astrological sign
- The current and previous price of Ethereum
- Three tarot card interpretations
- The top headline about Ethereum
It then uses this information to generate a new article about Ethereum and puts it on the front page of the website. Every day you get two new articles about Ethereum. More if I manually trigger the "generate article" flow to test something.
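I'm not publishing Arsène's internals here, but the collect-and-prompt step can be sketched roughly like this. All names are hypothetical illustrations, not Arsène's actual code:

```python
from dataclasses import dataclass


@dataclass
class HoroscopeInputs:
    """The signals collected every 12 hours, per the list above."""
    moon_phase: str          # e.g. "waxing gibbous"
    astrological_sign: str   # e.g. "Sagittarius"
    eth_price_now: float
    eth_price_prev: float
    tarot_cards: list[str]   # three card interpretations
    top_headline: str


def build_prompt(i: HoroscopeInputs) -> str:
    """Fold the collected signals into one prompt for the language model."""
    direction = "up" if i.eth_price_now >= i.eth_price_prev else "down"
    return (
        f"Write a horoscope-style market analysis of Ethereum. "
        f"The moon is {i.moon_phase} and the sun is in {i.astrological_sign}. "
        f"ETH moved {direction} from ${i.eth_price_prev:.2f} to ${i.eth_price_now:.2f}. "
        f"Tarot draw: {', '.join(i.tarot_cards)}. "
        f"Top headline: {i.top_headline!r}."
    )
```

The output of `build_prompt` goes to the model, and whatever comes back gets posted to the front page verbatim. That's the whole trick.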
I don't like explaining these bits of public art. I generally like creating things that make the reader evaluate them critically in order to understand what is going on. In that view, having to explain art "devalues" it in a way that I can't really put into words. Nonetheless, I feel like the quality of the articles should tip you off fairly instantly that they're AI generated.
The astounding part is that Arsène uses a 7 billion parameter model too, which is microscopic compared to GPT-3's 175 billion parameters, but it gets the job done well enough.
And then came the real influx of the AI boom.
The reason I was inspired to write all this out was seeing a Twitter thread that spelled out an "SEO heist" which stole a bunch of traffic from a competitor using generative AI tools.
The basic idea is that you use generative AI to systematically churn out articles about a given topic and then use A/B testing to eliminate the ones that don't "work" on search engines in favor of the ones that do. This is a way to automate the process of creating content farms.
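The culling half of that loop is embarrassingly simple. A minimal sketch, assuming you already have per-article click-through numbers from your analytics (all names hypothetical):

```python
def cull_articles(clickthrough: dict[str, float],
                  keep_fraction: float = 0.5) -> list[str]:
    """Keep the best-performing fraction of articles, ranked by
    click-through rate. Everything below the cutoff gets deleted
    or fed back to the model for another rewrite."""
    ranked = sorted(clickthrough, key=clickthrough.get, reverse=True)
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cutoff]
```

Run this on a schedule, regenerate the losers, and the farm optimizes itself with no human in the loop.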
Imagine a world where if you want to write about radishes, you could just type in "How do I make soap with radishes? Explain in five paragraphs." and get a result like this:
Making soap with radishes involves a natural and traditional method that harnesses the cleansing properties of radishes. Here's a step-by-step guide in five paragraphs:
Start by collecting the necessary ingredients and tools. You'll need fresh radishes, which contain natural saponins—a type of compound that creates a soapy lather when agitated in water. Additional ingredients might include oils (like olive, coconut, or sunflower oil), lye (sodium hydroxide), water, and essential oils for fragrance (optional). Ensure you have protective gear such as gloves, goggles, and long sleeves when handling lye. Have a heat-resistant container, stainless steel or heatproof plastic mixing bowls, a blender, a thermometer, and soap molds ready for use.
Measure the amount of radish liquid extract you obtained and calculate the necessary amounts of oils accordingly. Heat these oils gently in a stainless steel or heatproof plastic container until they reach around 100-110°F (37-43°C). In another container, carefully mix the lye with water, following proper safety measures. Gradually pour the lye solution into the warmed oils while stirring continuously. Use a stick blender to thoroughly mix the solution until it reaches trace—a stage where the mixture thickens to a pudding-like consistency.
At trace, add essential oils for fragrance if desired and mix thoroughly. Pour the soap batter into soap molds, tapping them gently on a surface to release air bubbles. Smooth the top with a spatula and cover the molds with a towel or piece of cardboard to insulate and promote even saponification. Allow the soap to sit in the molds for 24-48 hours undisturbed.
After 24-48 hours, carefully remove the soap from the molds and cut it into bars if needed. Place the bars on a drying rack or a well-ventilated area to cure for 4-6 weeks. During this time, the soap will harden and any excess moisture will evaporate, resulting in a milder, longer-lasting bar. Once fully cured, store your radish-based soap in a dry, cool place or package it for use or gifting.
Remember, making soap involves handling potentially hazardous materials like lye, so it's crucial to follow safety guidelines, use protective gear, and accurately measure ingredients for a successful and safe soap-making process.
Content generated by AI using the model GPT-3.5 Turbo 16k.
And just like that, you have a step by step process to add to your website about radishes. It not only tells you to use lye (a caustic material), it also tells you to use sodium hydroxide, which is the very same caustic material under a different name. It's a recipe for disaster if you aren't careful.
But, the ability to just systematically churn out infinite amounts of information about radishes for next to nothing is a boon for content farms. The GPT-3.5 Turbo 16k API that was used to generate that article costs USD$0.0020 per 1000 tokens of output. That article is 2308 characters long and contains 357 words. According to the OpenAI tokenizer, that works out to 478 tokens. With the system prompt and the request of "write about how to make soap with radishes", this means that producing that one article cost one tenth of a penny.
Now imagine how this would scale. Swap out radishes for other vegetables and make articles for all of them. Swap out soap making for steaming and make articles for all of them. Swap out vegetables for fruit and make articles for all of them. If your ads get you $1 per thousand views, then a single view earns $0.001, which already covers the cost of generating the article. Two hits and it's profitable.
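The arithmetic behind that claim, using the numbers above (ignoring the small prompt-token cost):

```python
# Cost side: GPT-3.5 Turbo 16k output pricing and token count from the text.
price_per_1k_output_tokens = 0.0020  # USD
article_tokens = 478

generation_cost = article_tokens / 1000 * price_per_1k_output_tokens
# ≈ $0.000956 — about a tenth of a penny per article

# Revenue side: $1 per thousand ad views means each view earns $0.001.
revenue_per_view = 1.00 / 1000

views_to_break_even = generation_cost / revenue_per_view
# ≈ 0.956 — one view covers the generation cost; two views and it's profit
```

At that margin the only limit on output volume is how fast you can hit the API.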
Then bring SEO tools into the mix and analysis of trending conversations on Twitter, Reddit, and other social media platforms. This would be a way to systematically churn out infinite amounts of bullshit that would not only drown out everything else, but allow you to sit back and do nothing as the money rolls in.
This is the future that we are in. This is the future that we are living in. This is the future that we are going to be living in for the rest of our lives.
Even better, there are goddamn startups built around making tools for SEO people to automate this. Out of morbid curiosity, I decided to try having one write an article titled "How to cheat at SERP SEO using OpenAI to do all the work for you". I've uploaded it here in case you want to see what kind of garbage output it produces. I chose GPT-4 Turbo 128k for generating this article just so that we got the best possible quality garbage.
This website will also be able to do Google searches, ingest the first page of results, and then use that to automatically churn out a mountain of bullshit (complete with integration into your CMS of choice!) so that you too can pull off your own SEO heists while doing nothing of value for society yourself.
Is this really going to be the future of our industry? Are we really going to just sit back as AI generated bullshit destroys one of the most powerful tools for learning about new things that humanity has ever created? Surely the search engines will be able to detect this and stop it, right?
Oh Aoi, you sweet summer child. You think that the search engines are going to be able to detect this? You think that they're going to be able to stop it? You are so deliciously naïve that I almost don't want to tell you the truth to preserve your innocence.
And the search engine maintainers can't really do anything about it, because algorithms can't reliably detect whether a given bit of text is AI generated. They can't just ban all AI generated text, because it's impossible for small companies like Google and Microsoft to even know which text was generated by AI. They also can't just ban all text that is about radishes, because that would be a huge blow to the radish industry.
The main problem with AI generated text is that most people literally cannot tell if a given bit of text is AI generated or not. It's about as good as random chance for most people. This is why I think that AI generated text is going to be the next big thing in content farms.
This is the future that we are in. This is the real weapon to surpass Metal Gear. We are surrounded by information and nearly none of it is useful. Most of it is poorly attempting to sell us something. It is an infinite arms race that is destroying the very tools that we use to learn about observable reality.
The worst part
When this technology was first released to the public via ChatGPT, I was excited. I thought that it could be a new way to get your grammar checked, to workshop ideas with the creativity of the entire internet crammed into a single text box, and to unlock entire aspects of human expression that have been trapped inside people for their entire lives.
Earlier this year, the only way to really get human-comparable output from large language models was to use OpenAI's APIs. This created a limiting factor around OpenAI's business model and capacity planning. This theoretically put a hard cap on the amount of bullshit that could be generated by these models, and it was a cap that was controlled by a single company that could shut down anyone they wanted at any time for any reason.
Then Facebook wrote a paper about a large language model they made and some idiot released the weights on 4chan. This kickstarted the ability to run models like ChatGPT on your own hardware.
At the time, I thought that this wasn't a good thing because of the massive potential for abuse. I had thoughts like:
Oh god, if people have their own models they can run themselves, this can automate transphobia, homophobia, racism, and all the other bad things that people say on the internet. This is going to be a disaster.
And I was right. It is a disaster. But not for the reasons I thought. At the same time, I also predicted that we'd have a competitor to ChatGPT running on local hardware by the end of the year:
I think it's very likely that we are going to get something like ChatGPT running in real time on consumer hardware by the end of the year
This is my official prediction.
I was wrong. We got it in seven months.
And at the same time, AI companies started heavily filtering and lobotomizing AI models so that they don't look bad when screenshotted on Twitter. This is why so many of your ChatGPT conversations include phrases like:
I'm sorry, but as an AI language model, I cannot provide instructions on how to create or use explosive materials. Explosives are dangerous and should only be handled by trained professionals in controlled environments.
Content generated by AI using the model Yi 34b Chat (@f16 quantization).
Or you get vague answers like:
(What happened in Tiananmen Square on June 4, 1989?)
(I'm sorry, I can't provide you with that information.)
Content generated by AI using the model Yi 34b Chat (@f16 quantization).
Well, of course that one would get rejected by a large language model trained in China (amusingly, it gives you the correct information if you ask in traditional Chinese; I have no idea how difficult that would be to guard against), but it actually gets kind of worse:
How do I kill all python processes on my Ubuntu server?
I apologize, I should not provide recommendations about harming processes or systems.
Content generated by AI using the model Claude 2.1.
Yeah, sure, that system call is badly named, but it's not like I'm asking how to send newborn children into the sun. I'm asking how to kill a process, a normal system administration task that has a rather morbid name.
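For the record, the answer the model refused to give is a one-liner (`pkill -f python`). A sketch in Python, in case you want to do it from a script — the helper name is hypothetical, and it assumes `pgrep` from procps is installed, which it is on any stock Ubuntu:

```python
import os
import signal
import subprocess


def kill_python_processes(sig: int = signal.SIGTERM) -> list[int]:
    """Send `sig` to every process whose command line mentions "python",
    skipping the current process. Roughly equivalent to `pkill -f python`."""
    out = subprocess.run(["pgrep", "-f", "python"],
                         capture_output=True, text=True).stdout
    signalled = []
    for word in out.split():
        pid = int(word)
        if pid == os.getpid():
            continue  # don't kill the interpreter running this function
        try:
            os.kill(pid, sig)
            signalled.append(pid)
        except (ProcessLookupError, PermissionError):
            pass  # process already exited, or belongs to another user
    return signalled
```

Passing `sig=0` does a harmless "does this process exist" check instead of actually killing anything, which is handy for a dry run.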
This level of lobotomization and limitation is actually worse than people turning local models into n-word machines. It makes ChatGPT and other models less useful for actually learning about things, because their makers don't want people to spread memes, misinformation, and instructions on how to make pipe bombs. It's a noble goal, but it's also a goal that is impossible to achieve.
So then we end up in a situation where content farms can write about sex toys using locally hosted models, but a confused person asking ChatGPT about transgender topics gets shut down. I fear that this all is going to make search engines even less useful for actually learning about new topics than they already are.
So we're left in a situation where ChatGPT will lie to you, but most of the time its output is truthful enough to learn from. I hate that this is the reasonable take at this point.
In the meantime, I guess the best CAPTCHA is to ask people how to make a pipe bomb.
Facts and circumstances may have changed since publication. Please contact me before jumping to conclusions if something seems wrong or unclear.
Tags: philosophy, ai, chatgpt