
Exploring The Flat Routing Strategy In Websites

In modern web applications, many factors need to be considered – for users, crawlers, and developers. As a developer, when organizing routing for my website, I often use an approach called structural routing.

This is where I might have a URL structure like /users/${userId}/posts/${postId}. As you can see, this provides clear information about the structure of the data that will be displayed to the user.

However, there’s been an increasingly noticeable trend called flat routing. It’s not exactly new, but I’ve been working with it a lot lately – and that’s why you’re reading this article.

Flat routing is the opposite of structural routing – it focuses on making URLs as short as possible, and it comes with some interesting benefits. For example, with flat routing we might end up with a URL like /resource/${idOfResource}/, where idOfResource could be a combined ID storing the metadata needed to identify both the user and the post we want to load.
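
To make that idea concrete, here is a minimal sketch of how such a combined ID could be encoded and decoded. The base64url helpers and the `${userId}:${postId}` format below are purely illustrative assumptions, not how any particular framework does it:

// Purely illustrative: pack two identifiers into a single URL-safe token
// (using Node's Buffer) and unpack them when the route is resolved.
const encodeResourceId = (userId: string, postId: string): string =>
  Buffer.from(`${userId}:${postId}`).toString(`base64url`);

const decodeResourceId = (idOfResource: string) => {
  const [userId, postId] = Buffer.from(idOfResource, `base64url`)
    .toString(`utf8`)
    .split(`:`);
  return { userId, postId };
};

// /resource/dXNlci0xOnBvc3QtOTk/ resolves back to user-1 and post-99.
console.log(decodeResourceId(encodeResourceId(`user-1`, `post-99`)));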

This particular example might seem overly complex, but there are scenarios where flat routing is quite beneficial. Think of websites focused on managing a single type of content, such as blogs or companies selling a specific product, like cars.

But why? It’s much better for the user (you’ll read an interesting case later) to access /bmwx5/ or /tesla3/ instead of /car/tesla/3/. Before debating whether this makes sense or not, I encourage you to read this article in full, as I’ve done some interesting research.

Intrigued? I hope so! Today, we’ll dive into the fascinating world of flat routing strategies and explore its benefits and trade-offs. At the end, I’ll share some real-world use cases where it might be useful.

The Case Of Indexing

Some time ago, I created my company/community website – GreenOn Software. It was my first attempt at building a platform for content creation – a single hub that would be social media agnostic, where I could share my content, courses, and articles.

Scaling it up proved to be quite challenging, which led me to create 4markdown.com. One of the problems I encountered during the first iteration of the blog was creating an intuitive and well-structured routing system.

I divided articles, courses, and other content into separate parent routes:

  1. /articles/${name-of-article}/
  2. /courses/${course-name}/${number-of-lesson}/${chapter-name}/${name-of-article-under-lesson}

As you can see, the second URL is incredibly long. While it maintains a structure, is that structure really important for the user? It’s more of an approach that makes life easier for the developer – me.

I quickly noticed a trend: the courses on greenonsoftware.com received the least amount of organic traffic – the kind that comes from Google search.

Small Number Of Views And Clicks

This trend persisted. Even for hot topics covered within the long and cumbersome course URLs, the pages had very low traffic. Over a longer period – say, one year – getting only 1 click on a page is quite alarming.

Of course, one could argue that the issue might not lie with the URL structure. It could be due to other factors – poor article quality, insufficient keywords, and countless other variables (only Google truly knows how search indexing works).

So, I ran a small experiment. I published an article on greenonsoftware.com, and then published the same article on 4markdown.com with its grammar refined by ChatGPT (but the same keywords).

I manually requested indexing for both articles in Google Search Console, to make sure the pages would become visible in Google search at “almost the same time” – still, nobody knows how these indexing algorithms work.

After waiting 3 months, I checked the results, and they were quite surprising. The version on 4markdown.com, with its flattened URL, had significantly more views and clicks. Here are the stats:

| Website             | Article Title          | URL                                           | Months | Views | Clicks |
| ------------------- | ---------------------- | --------------------------------------------- | ------ | ----- | ------ |
| 4markdown.com       | Crafting useFetch hook | /crafting-use-fetch-hook/                     | 3      | 2053  | 252    |
| greenonsoftware.com | Creating useFetch hook | /courses/hooks/1/api/creating-use-fetch-hook/ | 3      | 1501  | 102    |

I adjusted the article’s grammar to reduce the risk of the two versions being treated as duplicate content, which could lower their indexing or block it entirely.

Of course, this experiment was quite limited, but the difference in views and clicks is significant. It suggests that flattened URLs are worth considering from the perspective of website traffic.

I’m not an SEO specialist, so I’ll point you to this interesting thread, where SEO folks debate whose assumptions are better. Still, none of them can say definitively whether the effect is real, because measuring the impact of a URL is really hard.

To be precise, measuring the impact of URLs is nearly impossible. You can’t publish the same URLs and compare the results, or even do it at different times – there are too many factors influencing the number of views and clicks. This is why any conclusions could result in a false negative or false positive. We’d probably need to ask someone from Google to get a definitive answer.

However, it’s quite noticeable that many content creation platforms like medium.com, among others, use this strategy – so there must be something to it.

The User Perspective

Whenever someone sends me a long URL, I’m immediately concerned that it’s a phishing attack. That’s just my perspective, and maybe you share it, but long URLs are really hard to read.

If we analyze the URL I used for the article inside the course – /courses/hooks/1/api/creating-use-fetch-hook/ – it’s quickly apparent that it won’t even fit properly in the browser’s URL bar (especially on mobile and tablets).

Maybe I’m being a little overdramatic, but a clean, concise URL simply looks better than a long, ugly one. Maybe only developers care about this – I tried to find some research or studies on the subject, and the internet is pretty empty.

If you know of any, please DM me!

Some websites add their own presentation layer on top of pasted URLs, while others simply display them along with metadata read from the Open Graph Protocol. This makes it hard to predict how and where your URL will appear, which is why readability is another factor to consider when deciding whether to keep your URLs clean. A well-structured URL might even encourage more clicks, simply because it looks nicer and more trustworthy.


Discord Displays Raw URLs


LinkedIn Does The URL Shadowing

I was curious about people’s opinions, so I created a poll on my LinkedIn, and the result was quite interesting.


The results show that it’s not just me who finds long URLs suspicious. Short URLs that clearly provide information about the resource inside seem to be more trustworthy to a lot of people.

Challenges Of Flat Routing

We’ve pointed out two positive aspects of flat routing – SEO (though it’s still just an assumption) and readability combined with trustworthiness. However, there are also challenges.

The biggest challenge is ensuring that we don’t break the application. Imagine a situation where you allow users to create documents and publish them as static URLs based on the document’s name. Here’s an example:

  1. A user creates a document titled “Working with Next 14+”
  2. The app generates a separate page under the URL “working-with-next-14”
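
A minimal sketch of how such a slug could be generated from the title – the slugify helper below is a simplified assumption, not the exact code behind any particular app:

// Simplified slug generation: lowercase the title, drop anything that is not
// a letter, digit, space, or dash, and join the remaining words with dashes.
const slugify = (title: string): string =>
  title
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, ``)
    .trim()
    .split(/\s+/)
    .join(`-`);

console.log(slugify(`Working with Next 14+`)); // "working-with-next-14"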


This case is simple and straightforward, but what about the following situation, where someone decides to be clever:

  1. A user creates a document named “Articles”
  2. The page is created under the URL “/articles/”
  3. An exception is thrown because “/articles/” already exists in your routing system (you’re displaying a list of articles there)


There are several ways to prevent this problem. One solution is to add validation rules that force content creators to include a minimum number of words when creating content. For example, users must create articles with at least 3 separate words in the title.

The application routes themselves are limited to a maximum of 2 words. This way, user-generated URLs always contain 3 or more words, so they can never collide with application routes. Here’s an example:

  1. The user creates a document titled “Articles.”
  2. The app throws an error – “Minimum number of words is 3.”
  3. The user changes the name to “My article title.”
  4. The new page is dynamically created under “my-article-title.”
  5. The application page that displays the list of articles remains safe and under “/articles/”.


This approach prevents most issues, but maintaining such validation across different types of content that users might create introduces a significant risk of unpredictable behavior. I would say this is a critical part of the application that will require constant monitoring.

const MIN_WORDS_CONTENT = 3; // user documents: at least 3 words in the title
const MAX_WORDS_URL = 2; // application routes: at most 2 words

function countWords(text: string): number {
  return text.trim().split(/\s+/).length;
}

// Rejects document titles that are too short to be safely flat-routed.
function validateContentCreation(documentTitle: string): void {
  const wordCount = countWords(documentTitle);

  if (wordCount < MIN_WORDS_CONTENT) {
    throw new Error(
      `Minimum number of words is ${MIN_WORDS_CONTENT}. Provided: ${wordCount}`
    );
  }

  console.log(`Document "${documentTitle}" passed validation.`);
}

// Guards application route names – they must stay within MAX_WORDS_URL words,
// so they can never collide with user-generated URLs (3+ words).
function validateApplicationRoute(routeName: string): void {
  const wordCount = countWords(routeName);

  if (wordCount > MAX_WORDS_URL) {
    throw new Error(
      `Application routes may have at most ${MAX_WORDS_URL} words. Provided: ${wordCount}`
    );
  }
}

// Builds the slug from the full title – no truncation, so the word count
// (and with it the collision guarantee) is preserved.
function generateUrl(documentTitle: string): string {
  const url = documentTitle.trim().split(/\s+/).join(`-`).toLowerCase();
  console.log(`Generated URL: ${url}`);
  return url;
}

try {
  validateContentCreation(`Articles`);
} catch (error) {
  console.error((error as Error).message); // "Minimum number of words is 3. Provided: 1"
}

try {
  const validTitle = `My Awesome Article`;
  validateContentCreation(validTitle);
  generateUrl(validTitle); // "my-awesome-article"
} catch (error) {
  console.error((error as Error).message);
}

What if someone forgets that rule and URL shadowing slips in anyway? You need to detect it, so a custom fail-fast check may be useful.

if (hasDuplications) throw Error(`Hey you donkey, your user shadowed the application URL!`);
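
To make that one-liner work, something has to compute hasDuplications. Here’s a hedged sketch of what that could look like – the two arrays below are hypothetical inputs, standing in for the router configuration and the stored user-generated slugs:

// Hypothetical inputs – in a real app these would come from the router
// configuration and from the table storing user-generated slugs.
const applicationRoutes = [`articles`, `settings`, `dashboard`];
const userSlugs = [`my-article-title`, `articles`];

// Fail fast when any user slug shadows an application route.
const hasDuplications = userSlugs.some((slug) =>
  applicationRoutes.includes(slug),
);

if (hasDuplications) throw Error(`Hey you donkey, your user shadowed the application URL!`);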


Another solution I’ve seen, which feels more natural, is creating a blacklist that stores the names of application routes that are restricted. So, if someone creates a resource, and the URL for that resource matches an existing application route, the system throws an error and blocks the creation of the resource.

const restrictedRoutes = [`dashboard`, `settings`, `admin`];

// Blocks resource creation when its URL matches a restricted application route.
function createResource(resourceUrl: string): void {
  if (restrictedRoutes.includes(resourceUrl)) {
    throw new Error(
      `Resource creation failed: The URL '${resourceUrl}' is restricted.`
    );
  }

  console.log(`Resource '${resourceUrl}' created successfully.`);
}

try {
  createResource(`dashboard`); // throws – "dashboard" is an application route
} catch (error) {
  console.error((error as Error).message);
}

try {
  createResource(`blog-post`); // passes – not on the blacklist
} catch (error) {
  console.error((error as Error).message);
}

However, this feels like building a backend for the frontend – the blacklist duplicates the frontend’s routing knowledge and requires additional maintenance every time a new route is added to the application. The validation solution (at least for me) seems much better, and it’s what I’m implementing on the website you’re currently reading.

It’s easy to see where this leads. Imagine a situation where you have various types of content – posts, articles, mindmaps, and others. Storing or creating all of them would require fetching and checking for duplicates across different entities.

To avoid degrading your API performance, you would need to keep the names of all created content in one place, so validation doesn’t have to traverse entire tables. Instead of searching through individual tables for posts, articles, mindmaps, and more, you’d search a single “content” table that stores the type, ID, and URL of each created piece of content. This speeds up validation but introduces some duplication in the database.
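
As a rough sketch of that idea – the ContentEntry shape and the in-memory array below are assumptions, standing in for a single, indexed “content” table:

// One flat "content" table: every publishable entity, regardless of type,
// registers its URL here, so uniqueness can be checked with a single lookup.
type ContentType = `post` | `article` | `mindmap`;

interface ContentEntry {
  type: ContentType;
  id: string;
  url: string;
}

const contentTable: ContentEntry[] = [
  { type: `article`, id: `a-1`, url: `my-article-title` },
  { type: `mindmap`, id: `m-7`, url: `react-hooks-map` },
];

// A single scan (or, in a real database, an indexed query) instead of
// traversing the posts, articles, and mindmaps tables separately.
const isUrlTaken = (url: string): boolean =>
  contentTable.some((entry) => entry.url === url);

console.log(isUrlTaken(`my-article-title`)); // true
console.log(isUrlTaken(`fresh-new-title`)); // false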

Duplicating data in a database – a technique called denormalization – is used in specific cases to improve the performance of searches or other processes. It’s a trade-off – trading storage space for runtime performance and faster API processing.

Another solution that provides flat routing capability is designing the database in a way that accounts for the behavior of the URLs you want to achieve. However, this approach has its limitations. You can’t always predict if a more important feature will arise in the future, and trading a nice URL structure for a performance bottleneck is not a good trade-off.

Pros And Cons Of Flat Routing

So, as you saw, the challenges are quite complex. Let’s try to summarize the pros and cons based on what we already know.

Pros

  1. Readability
    – Clean, concise URLs are easier to understand and navigate
  2. SEO Benefits
    – Shorter URLs can potentially improve search engine rankings (though this is still an assumption)
  3. Trustworthiness
    – Users are more likely to trust simple, transparent URLs. So, in theory, this could lead to better traffic.

Cons

  1. Risk of URL Shadowing
    – User-created content could conflict with predefined application routes
  2. Validation Complexity
    – Additional validation is required to ensure content titles don’t overlap with application URLs, increasing complexity
  3. Scalability Issues
    – As the amount and variety of content grow, flat routing may introduce limitations, such as performance bottlenecks
  4. Data Duplication
    – To maintain performance, you may need to introduce denormalization in the database, which can lead to complexity in database management
  5. Future Flexibility
    – Prioritizing a clean URL structure now could lead to limitations or trade-offs when more important features arise in the future

Blurry Spots

There are some benefits or consequences in the codebase that depend on the complexity of the content types. For example, if you have a single type of content, the code for reading and validating will be quite easy to maintain. If it stays that way, you’ll have a clean and simple codebase – even much simpler than in typical structural routing.

The problem arises when you increase the number of content types to flat-route. In that case, the code becomes much more complex. That’s why adding things like “simplified code” or “more readable code” to the list of pros for flat routing is debatable. I wanted to highlight this because it all depends on the purpose of the website.

When To Use Flat Routing?

If you have an easy-to-predict website structure, it’s worth using flat routing. By “easy to predict,” I mean a situation where the amount of content created by users isn’t huge, and there is only one type of content.

Additionally, if you’ve considered flat routing from the start and designed the database structure and API logic for it, implementation becomes much easier.

It’s like working on a chat app – you choose a lightweight database with built-in event handling so real-time communication comes almost for free. The same idea applies here: use the right tools for specific problems and ensure proper design.

Here are some situations where it might be worth applying flat routing:

  1. In blogs focused on article creation by a small group of content creators.
  2. When you plan for flat routing from the start, and the database structure is prepared for it.
  3. When there isn’t a large variety of different content types being created.
  4. When you prioritize readability and trustworthiness, potentially leading to a higher number of views and clicks.

Summary

Flat routing is definitely an interesting approach worth considering. Like everything, it has its own set of positives and negatives, and nobody should choose a solution without weighing them properly.

In my case, moving from structural URLs to flat ones was beneficial. The 4markdown.com website is not complex enough to run into the problems described in the “Challenges” section of this article.

Additionally, I’m planning to add new types of content to this page, such as mindmaps, posts, flashcards, and others. This could be problematic if my database structure weren’t prepared for it – but I knew it was coming, so the entire content creation process was designed to handle it while still maintaining the flat routing structure.

Now you should have a good sense of when flattened routes are useful and when they aren’t. I enjoy software development because of the constant trade-offs. The skill of picking the right one feels like a game, where long-term planning and strategic choices are key to making an application work for all users and requirements.