Building a Serverless Stack Overflow (Question and Answer Platform) for Students Learning at Home

2021-10-14


Imagine a world where every occupation had the type of power a tool like Stack Overflow has bestowed upon software engineers. Surgeons could repeatedly look up the difference between slicing and splicing, and mechanics could crowdsource the best way to remove a transmission from a Buick. The internet is full of information on almost anything you want to know; for students, however, finding answers to specific questions, explained at the right grade level, is a challenge. Kids learning at home under quarantine, without ready access to their teacher, would greatly benefit from a community like Stack Overflow. So I decided to take a crack at building it, and I'm going to show you how I went about architecting the application.

Building Stack Overflow today is far easier than it was in 2008. With the rise of serverless technologies we now have ways to launch applications faster, with less code, less setup, and that can scale to millions of users as needed. The setup I used for StudyVue cost zero dollars to launch and will only start to incur a cost if usage increases. The best part is if your application goes viral, these serverless setups can scale up to handle the load and scale back down again with no effort on your part. Without further ado let’s get started.

Product Definition

First I wanted to make sure to have the core product features squared away. I was not going to try to replicate all of the features of Stack Overflow, but I still wanted a minimum viable version that gives students and teachers access to the most valuable pieces: a way to ask questions, receive multiple answers, and validate or invalidate those answers with a simple, binary voting system.

I also wanted to be cognizant of the fact that the target audience would be school-aged students. Therefore, being careful with personally identifiable information is a must, and, knowing how kids can be, there was going to have to be a way for users to flag abusive content. For this project I decided the best way to deal with personal information is to not ask for it in the first place. A simple login that only requires an email address was an important feature. Email seems to be universal across generations, so this will be a consistent way for students, teachers, and parents to verify their identity.

So the core feature list I went for was:

  1. Users can verify identity using their email with no other personal information required.
  2. Users can post a question.
  3. Users can post an answer.
  4. Users can vote on answers no more than once.
  5. Users can easily search for questions already posted.
  6. Users can report an abusive question or answer.
  7. Anyone can browse questions and answers.

I also took into consideration a few other requirements. The most important is that these pages can be indexed by search engines. As such, server-side rendering of the question pages in particular was going to be necessary. Although Google claims it does render and crawl client-side rendered content, it has been my experience that if you want to be indexed and rank well with Google, server-side rendering (SSR) or pre-rendering via static site generation (SSG) is a requirement. In this case, since the data is dynamic and ever-changing, pre-rendering isn't an option, so I would need to make sure the public-facing pages used SSR. Another nice feature of Next.js is that all of our markup is still written in JSX and our pages are still just React components. These are served as static markup and then hydrated client-side with interactivity. You are still free to render elements client-side that do not need to be indexed as well. Next.js supports all three major use cases, SSR, pre-rendering, and client-side rendering, out of the box.


The Stack

When evaluating the feature set there were a few things I wanted. I wanted to use React for the frontend and a serverless setup for my API. I would need server-side rendering for most of the application, a cloud-hosted database, and a way to handle search. I also wanted to consider how to deploy the app easily to keep this as simple and painless as possible.

Right now the most robust framework that supports server-side rendered content for React is Next.js. I personally like Next.js for a few reasons. It integrates easily with Vercel (formerly Zeit) for serverless deployment, it supports server-side rendering of our UI and API routes that are deployed as lambdas to Vercel, and it supports TypeScript out of the box. Since this is a side project we are looking to develop quickly, I find TypeScript helps me write safer code without compromising my development speed.

For a database I chose FaunaDB. FaunaDB is a cloud-hosted, NoSQL database that is easy to set up and can scale to millions of users. It has pay-as-you-scale pricing, so you won't incur any costs at startup. FaunaDB was easy to play around with in their web UI and I could model out my data before I ever wrote a single line of code. No need to run local copies of the database, deal with running migrations, or worry about crashing the whole thing with a bad command. FaunaDB also has user authentication and permissions features baked in, so I can save some time building the authentication without bringing in another vendor.

Last, we are going to need search to be as robust as possible. The last thing users want is to be stuck with exact text matches or to have to type questions in a specific way to return results. Search is messy in the wild and users expect even small apps to be able to deal with that. Algolia is the perfect solution for this. They bring the robustness of Google-style search to your datasets with little overhead. They also have a React component library that can drop right into the frontend.


Initial Setup

Next.js + Vercel

A project with Next.js and Vercel can be ready to go and deployed in a few minutes by following the Vercel docs. One of the nice things about Vercel is they have a powerful CLI that you can run locally that closely mimics the production environment. I like to think of it as something like Docker for serverless apps. Setting up Vercel locally is simple; however, finding your way around their docs after the name change from Zeit can be a challenge.

Once you set up the Vercel CLI to run your application locally, you can further hook your Vercel project up to GitHub to create staging URLs for every git branch you have, and have any merges into master automatically deploy to production. This way you are set up for rapid and safe iteration post launch without having to set up pipelines or containers and the like. I like to get this all squared away at the start of the project since you will need to start storing secrets and environment variables right away when setting up FaunaDB.

I personally enable TypeScript right away when working on a Next.js project. With Next.js this is pre-configured to work out of the box, and FaunaDB also has type definitions published, so it's a great combination. I find strong types help me avoid silly errors as well as help me remember my data types and key names while I'm writing code. It can also be incrementally adopted. You don't need to start off in strict mode right away. You can get a feel for it and gradually work your way up to a complete, strongly typed codebase. I have left the type definitions in my examples here so you can see how this looks, but I may have stripped out some of the more defensive error handling for greater readability.
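
Enabling it is just a matter of adding a tsconfig.json and the TypeScript dev dependencies. Roughly, the steps look like the following; the partial tsconfig shown here is only to highlight the strict flag, which is the knob for that incremental adoption (Next.js fills in the rest of the config for you the first time you run the dev server):

`$ npm install --save-dev typescript @types/react @types/node`

// tsconfig.json (partial) -- start loose, then flip strict on once the codebase is ready
{
  "compilerOptions": {
    "strict": false
  }
}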

Setting Up the Database

I want to walk through the initial set up of FaunaDB inside of a Next.js app to be able to read and write to the database. I think that setting up environment variables with Next.js can be somewhat tricky so here’s a quick rundown of what I did.

You’ll want to first install the FaunaDB package from npm. Now head over to the FaunaDB console, go to the SECURITY tab and create a new API key. You’ll want to assign this key a role of Server since we just want this to work on this specific database.
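
For reference, installing the client library is just:

`$ npm install faunadb`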

[Image: creating a new API key with the Server role in the FaunaDB SECURITY tab]

We want to copy this key now since this is the last time you will see it. We can now add this to our codebase, which requires that you add this info to four different files to work properly. First, you will want to put this in your .env and .env.build files.

// .env and .env.build files

FAUNADB_SECRET_KEY = '<YOUR_SECRET_KEY>'

Next, we want to add this to our Vercel environment. This can be done with the following command:

`$ now secrets add studyvue_db_key <YOUR_SECRET_KEY>`

This saves your key into Vercel and will be available when you deploy your app. We can now add this key to our now.json and our next.config.js files.

// now.json
{
  "version": 2,
  "build": {
    "env": {
      "FAUNADB_SECRET_KEY": "@studyvue_db_key"
    }
  },
  "builds": [{ "src": "next.config.js", "use": "@now/next" }]
}
// next.config.js
module.exports = {
  target: 'serverless',
  env: {
    FAUNADB_SECRET_KEY: process.env.FAUNADB_SECRET_KEY,
  }
}

Note how in our now.json file we reference the Vercel secret prefixed by the @ symbol. We namespace the key since right now Vercel keeps all of your secrets available to all applications. If you launch other apps or sites on Vercel you will likely want to prefix these secrets with the app name. After that, we can utilize the standard process.env.FAUNADB_SECRET_KEY throughout the application.
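
With the secret in place, we can create the FaunaDB client that the rest of the examples will use. A minimal sketch, assuming a shared module (the lib/fauna.ts location is just my choice here):

// lib/fauna.ts -- shared client and query builder
import faunadb from 'faunadb';

export const q = faunadb.query;

export const client = new faunadb.Client({
  secret: process.env.FAUNADB_SECRET_KEY as string,
});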

Now we can head back over to the FaunaDB console and begin modelling out our data.

Modeling Our Data

One of the best things about FaunaDB is how easy it is to set up your database. When I started out I just created an account and created all of my collections and indexes right in the GUI they provide. I'll give a brief walkthrough of what that process was like to show how easy it is.

After you create your account you are taken right to the FaunaDB console, where you can start by clicking NEW DATABASE in the top left-hand corner. I'll start by calling this StudyVue and leave the "Pre-populate with demo data" option unchecked.

[Image: new database configuration in the FaunaDB console]

Once you create your database you are brought to the main dashboard for that database. You can already see that FaunaDB offers a lot of options like child databases and multi-tenancy, GraphQL, and functions. For this project, I just needed to deal with three things: collections, indexes, and security.

Collections

Collections are similar to tables in a traditional SQL database. If you are familiar with MongoDB, this is the same concept. We know from our product description that we need five collections.

  1. Users
  2. Questions
  3. Answers
  4. Votes
  5. Abuse Reports

Creating these is simple: just go into the COLLECTIONS tab and click NEW COLLECTION. Here is an example of creating the users collection:

[Image: creating the users collection]

You'll notice two additional fields. One is History Days, which is how long FaunaDB will retain the history of documents within the collection. I left this set to 30 days for all my collections since I don't need to retain history forever. The TTL option is useful if you want to remove documents that have not been updated after a certain period of time. I didn't need that for my collections either, but again it's good to take note that it is available. Click save, and your new collection is ready to go. I then created the other four collections the same way with the same options. That's it: no schemas, no migration files, no commands, and you have a database.


Another thing you will notice is that I decided to store votes as their own collection. It is common when working with NoSQL databases to get into the habit of storing these votes on the answer document itself. I tend to always struggle with the decision to store data on the related document in one-to-many relationships or to make a new collection.

In general, I like to avoid nesting too much data in a single document, especially when that data could relate back to other collections, for example, a vote belonging to both a user and an answer. It can become unwieldy over time to manage this from within another document. With a relational approach, if we ever need to reference another document we just add an index and we have it. We may want to show a user all their upvoted or downvoted answers, or have an undo-vote feature. Keeping votes in their own collection thus offers a bit more flexibility long term in the face of not knowing exactly where you will go. Another advantage is that the relational model is less costly to update. For instance, removing a vote from an array of votes requires us to store the complete array again, whereas with the relational model we are just removing a single item from an index. While it may be easier to just store things nested in the same document, you'll typically want to take the time to have more flexible, normalized models.
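
To make that concrete, here is roughly the shape of a document in the votes collection, referencing both the user and the answer it belongs to. The field names are mine and the ref ids are placeholders; the exact shape is illustrative:

// a document in the votes collection (illustrative shape)
{
  data: {
    user: Ref(Collection("users"), "<user_ref_id>"),
    answer: Ref(Collection("answers"), "<answer_ref_id>"),
    direction: "up" // or "down"
  }
}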

Indexes

Indexes are what you use to query the data in your collections. Creating indexes requires you to think about the relationships between your collections and how you want to be able to query and manipulate that data. Don't worry if you are unsure of every possible index at this moment. One of the advantages of FaunaDB is that indexes and models are flexible and can be added at any time.

I started with the obvious relations first and later on was able to add additional indexes as the product evolved. For example, right away I knew that I was going to want to be able to display all questions either on the homepage or on a page that houses a list of all the questions asked. This would allow users, and most importantly search engine crawlers, to easily find newly created questions.

To create an index go into the INDEXES tab and click NEW INDEX. Here you can select which collection you want this index to work with, in this case, questions, and the name of the index, which I will call all_questions.

[Image: creating the all_questions index]

I also knew I was going to need to fetch a question by its ref id. This can be done easily without creating an index. However, I needed to be able to fetch all of the answers related to a question. So I have an index called answers_by_question_id that will allow me to perform a join between these two collections. In this case, I want the Source Collection to be answers and I want to populate the Terms field with the data attribute I will need to be able to query by, which is data.question. The question attribute will be what I am going to use to store the ref to the question that a particular answer is associated with.

[Image: answers_by_question_id index configuration]

I also know I am going to want to be able to fetch votes that are tied to a specific answer. I can now make an index called votes_by_answer that pulls from the votes collection and use data.answer to represent the attribute we want to be able to look up on.

[Image: votes_by_answer index configuration]

Setting up more indexes follows the same process. For collections where you only want to allow one entity with the same attributes to exist, such as users that should have a unique email address, we can make sure that only unique email addresses are allowed by checking the unique field. As you can see, we effectively model our entire database within the dashboard and are now ready to use this in the code base.
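
Everything the dashboard does can also be expressed as a query, which is handy if you later want your indexes in version control. As a sketch, the answers_by_question_id index above would look roughly like this in FQL (covered next); the unique checkbox from the dashboard maps to a unique: true field in the same parameter object:

// equivalent FQL for the index configured in the dashboard
q.CreateIndex({
  name: 'answers_by_question_id',
  source: q.Collection('answers'),
  terms: [{ field: ['data', 'question'] }],
})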

What is FQL?

FaunaDB has two ways to query the database. One is the more familiar GraphQL and the other is something called FQL. FQL is Fauna's proprietary query language. It's what is called an embedded domain-specific language (DSL), which is a powerful way to compose queries in the host languages Fauna supports. We can use it to create composable functions and helpers throughout our codebase. For instance, here is a function I made to create a user document.

export function createUserDocument(data: FaunaUserData) {
 return q.Create(q.Collection('users'), data);
}

We can take this a step further by utilizing a functional programming technique called function composition. If you look at the FQL above, you can see that FQL is just composed of functions that take other functions as arguments. Let's take a bit more of an advanced example.

Let's say we wanted to retrieve all questions from the all_questions index. The FQL looks like this:

const questions = await client.query(
  q.Map(
    q.Paginate(
      q.Match(
        q.Index('all_questions')
      )
    ),
    ref => q.Get(ref)
  )
)

We can see functional composition at work here, where Map() takes two arguments that are functions. If we focus on the first argument we see a chain of unary functions, which are just functions that take one argument: the Paginate() function takes the Match() function, which takes the Index() function. Without going into too much detail about functional programming, these types of unary function chains are ripe for functional composition. In this case I used the ramda library to compose more general, powerful helpers. So taking our above example and using ramda's compose helper we can create a function getAllByIndex().

import { compose } from 'ramda';
export const getAllByIndex = compose(q.Paginate, q.Match, q.Index);

We read the compose function's arguments as being executed from right to left. So getAllByIndex() takes our index name as a string and passes it into Index(), the output of which goes into Match(), the output of which goes into Paginate(). We can now use this to clean up our questions FQL query.

const questions = await client.query(
  q.Map(
    getAllByIndex('all_questions'),
    ref => q.Get(ref)
  )
)

We can continue to use this technique to create more helpers for common operations, like the below helper I created to get a collection's document by ref id.

export const getCollectionDocumentById = compose(q.Get, q.Ref);

While it was a little hard to get used to at first, the power and readability of FQL, coupled with functional composition, made it a worthwhile investment over GraphQL in my opinion.

Authenticating Users

When it came to user management, I wanted a way to verify that users are real people and I wanted a way to make sure we had a user’s email so that we could eventually build notifications for when their questions had new answers. I also wanted to make sure it was as simple as possible to create an account and move forward. I didn’t want to interfere with the spontaneity of wanting to ask or answer a question. One thing I personally hate is having to create new passwords for every new service I sign up for. I loved the idea of creating a magic link type login where the user submits their email and they click on a link that logs them into the app. This type of login has a major pitfall for mobile users that we will discuss in just a bit but let's begin modeling this out with FaunaDB’s internal authentication.

FaunaDB's internal authentication allows you to pass in an email and a credentials object with a password key. That password is then stored as an encrypted digest in the database, and we receive back a token that can be used to authenticate that user. The tokens do not expire unless the user logs out, but the same token is never issued twice. We can use this system to create our magic login.

The Login

First, whether a user is logging in for the first time or returning to the site, we want to make sure there is a single login pathway. To do this we can query the database first to see if that user's email exists already. If it does not exist, we'll create a new user and assign a randomized password. If the user does exist, we will update the user with a new randomized password. In both cases, we are going to receive back an authentication token we can then use to persist the login of that user.

In order to do this, we'll need a new index in order to fetch users by email. We can go ahead and call this users_by_email and this time check off the unique option so that no emails can be submitted to the collection twice.

Here's an example of how we can build this logic inside of our API. Notice that for our FQL query we use the Paginate() method instead of Get(). Get() throws an error when no results are found, whereas we want to detect when there are no results and go on to create a new user.

let user: FaunaUser | undefined = undefined;
const password = uuidv4();
const { email } = JSON.parse(req.body);
// use paginate to fetch single user since q.Get throws error obj when none found
const existingUser: FaunaUsers | undefined = await client?.query(
  q.Map(
    q.Paginate(
      q.Match(
        q.Index('users_by_email'),
        email
      )
    ),
    ref => q.Get(ref)
  )
);

if (existingUser?.data.length === 0) {
  // create new user with generated password
  user = await client?.query(createUserDocument({
    data: {
      email
    },
    credentials: {
      password
    }
  }));
} else {
  // update existing user with generated password
  user = await client?.query(
    q.Update(
      existingUser?.data[0].ref,
      {
        credentials: {
          password
        }
      }
    )
  );
}

Passing the Token

We still want the user to click a link in the email. We can send the entire token in the email link as a part of the URL to complete the authentication; however, I'd like to be a bit more secure than this. Sending the entire token means that it is likely going to sit forever in plain text in a user's inbox. While we aren't handling payment or personal information, there is still the potential for someone to accidentally share the link or forward the wrong message, exposing a valid token. To be extra secure, we really want to ensure that this link only works for a short duration of time, and that it only works in the device and browser the user used to generate it.

We can use HTTP-only cookies to help us with this. We can first take a section from the start of the token, let's say 18 characters, and then take the rest of the token and send it back in a temporary cookie that will be removed from the browser after 15 minutes. The section at the start of the token we can send in our email. This way the link will only work for as long as the cookie is persisted in the browser. It will not work if anyone else clicks on it since they do not have the other segment. After the two pieces are put back together by our API, we can send back a new HTTP-only cookie as a header with a thirty-day expiration to keep the user logged in.

Here we can log in the user we created and split the returned token into the piece we are going to email, and the piece we are going to store in the browser.

// login user with new password
const loggedInUser: { secret: string } | undefined = await client?.query(
  q.Login(
    getUserByEmail(email),
    { password }
  )
);
// setup cookies
const emailToken = loggedInUser?.secret?.substring(0, 18);
const browserToken = loggedInUser?.secret?.substring(18);

// email link and set your http cookie...
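
Setting the temporary cookie is the other half of that. Here is a sketch using the cookie package, assuming the fifteen-minute expiry described above (the cookie name is just a placeholder):

import { serialize } from 'cookie';

// temporary HTTP-only cookie holding the browser half of the token
res.setHeader('Set-Cookie', serialize('tempToken', String(browserToken), {
  httpOnly: true,
  secure: process.env.NODE_ENV === 'production',
  sameSite: 'lax',
  maxAge: 60 * 15, // 15 minutes
  path: '/',
}));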

Just to put our minds at ease, let's consider how easy it would be to brute force the other half of the token. FaunaDB tokens are 51 characters long, meaning the other half of our token contains 33 alphanumeric characters including dashes and underscores. That's 64 possible characters, so the total number of combinations is 64^33, or roughly 4×10^59. So the short answer is, brute-forcing just a piece of this token would take quite a long time. If this were a banking application or we were taking payments from people, we'd possibly want to use an encryption scheme for the tokens and use a temporary, expiring token for the login before issuing the real long-term token. This is something that Fauna's built-in TTL options on a collection item would be useful for. For the purposes of this app, breaking the token in two will work just fine.

Creating the API

To build out these features securely we are going to utilize API routes with Next.js. You are now seeing one of the advantages of the Next and Vercel combination. While we are technically deploying this as a serverless app, we can manage our API and our client in a single monorepo.

For small projects that you are maintaining yourself this is incredibly powerful as you no longer need to sync your deployment of client-side and API features. As the project grows, your test suites can run on the entire application and when we add FaunaDB to the mix we don’t have to worry about running migrations post-deploy. This gives you the scalability of microservices in practice but without the added overhead of maintaining multiple codebases and deployments.

To set up an API, simply create an api directory inside of the pages directory, and you can then build out your API using file-system routing. So if we create a login.ts file, we can now make requests to /api/login.

Here is an example login route where we can handle a GET or POST request that will be deployed as a serverless function:

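Since the full handler is specific to the app, here is a minimal sketch of the shape it takes; the Fauna and email logic from the sections above slots into the commented spots, and the response shapes are illustrative:

// pages/api/login.ts -- a sketch of the handler's shape
import { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  try {
    if (req.method === 'GET') {
      // ...check the token stored in the HTTP-only cookie against Fauna
      return res.status(200).json({ isLoggedIn: false, userId: null });
    }

    if (req.method === 'POST') {
      // ...create or update the user with a random password (see the FQL above),
      // then email out the first half of the returned token
      return res.status(200).json({ emailSent: true });
    }

    return res.status(405).end();
  } catch (error) {
    return res.status(500).json({ error: 'Something went wrong' });
  }
}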

In this case, we can use a GET request to verify if a given token is valid and use a POST to log in a user and send the authentication email.

Sending the Auth Email

To send the emails with the passwords, I used nodemailer and Mailgun. I won't go into setting up Mailgun here since you could use another provider like SendGrid, but I will mention that it is important to be careful about sending your email inside of a callback instead of awaiting it with async/await or promises. If you return out of a serverless function before receiving a success message from the email server, the serverless function instance shuts down without waiting for the email send call to resolve.
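
In practice that just means awaiting the send before responding. A sketch, assuming Mailgun is used over SMTP (the env variable names and message fields are placeholders; email and emailToken come from the login code above):

import nodemailer from 'nodemailer';

// Mailgun (or any provider) can be used over SMTP; credentials are placeholders
const transporter = nodemailer.createTransport({
  host: 'smtp.mailgun.org',
  port: 587,
  auth: {
    user: process.env.MAILGUN_SMTP_USER,
    pass: process.env.MAILGUN_SMTP_PASSWORD,
  },
});

// awaiting the send keeps the serverless function alive until the email is accepted
await transporter.sendMail({
  from: 'StudyVue <no-reply@example.com>',
  to: email,
  subject: 'Your login code',
  text: `Your login code is: ${emailToken}`,
});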

The Mobile Pitfall

When I first created and launched this app I built the magic link system and it was great on desktop. I thought it was incredibly seamless until I handed it off to my friends who primarily opened it on mobile phones or inside of a Facebook or Twitter browser. I'll give you the benefit of hindsight here and let you know that magic links are an awful experience on mobile devices.

Mobile devices, iOS specifically in this case, do not allow users to set a different default browser. Therefore many users would generate a link in the browser they like using (like Google Chrome) only to open the link in their default browser (Safari) through their preferred email application. Since our authentication system requires using the same browser and device to maintain security, nobody could log in with our magic links. On top of that, if users were using the browser inside of a social application like Facebook, there was no way to open the link inside the Facebook browser. I decided on a different UX to account for this: instead, I would email a section of the token to be copied and pasted into a password-style input field. This had the added advantage of allowing the user to stay in the same browser tab while they authenticated, and it works well inside of all browsers, even those inside of social applications that have their own internal browser windows.

Architecting the API

Now that we have a way to authenticate users, we can let them submit a question and save it to the database. To do that we're going to create two things. First, we'll create a page for asking a question; second, we'll make an API route with a cloud function that can receive a POST request and save the data to our database. This has the advantage of allowing us to authenticate users in our API and ensuring they can't manipulate our queries.

FaunaDB also has ways that you can safely do this on the client-side; however, I chose to only access the database from inside the API. Personally, I like the added security that working with our database through an API can provide. This also allows for some more freedom down the line should we incorporate other external services for things like monitoring, email notifications, caching, or even bringing in data from another database. I find having a server environment to unite these services allows for better performance tuning and security than trying to do it all in the browser. You are also not tied to JavaScript; should you want to change the API to a more performant language like Go, which is supported by FaunaDB and Vercel, you are free to do so.

We can expand our API by creating a questions directory inside of the api directory with an index.ts file. This will be our main endpoint for creating questions. The endpoint can now be accessed at /api/questions; we'll use this endpoint to POST new questions and to GET the list of all questions. We are also going to need a way to fetch a single question by its id. We'll create a new endpoint by creating a [qid].ts file in the same questions directory. This allows us to call /api/questions/:qid with a dynamic question id as the last part of the URL.
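
The resulting file-system routing looks roughly like this:

pages/
  api/
    login.ts          -> /api/login
    questions/
      index.ts        -> /api/questions
      [qid].ts        -> /api/questions/:qid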

API Routes vs. getServerSideProps()

In Next.js you have two parts to your server-side processes. You have your API directory, which contains your serverless functions that always execute on the backend. In my app I used these to fetch the raw data we need from the database.

Here’s an example of our /api/questions/:qid route, where we fetch our question, the answers with a reference to it, and all the votes with references to that answer. We then return that data in the response.

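A sketch reconstructing that route, leaning on the shared client and the FQL helpers shown just below (the import path and the loose typing are illustrative):

// pages/api/questions/[qid].ts -- a sketch
import { NextApiRequest, NextApiResponse } from 'next';
// client, q, and the helpers are assumed to live in a shared lib/fauna module
import { client, q, getQuestionById, questionRef } from '../../../lib/fauna';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  try {
    const qid = String(req.query.qid);

    // the question document itself
    const question = await client.query(getQuestionById(qid));

    // every answer referencing this question, via the answers_by_question_id index
    const answers = await client.query<{ data: any[] }>(
      q.Map(
        q.Paginate(q.Match(q.Index('answers_by_question_id'), questionRef(qid))),
        ref => q.Get(ref)
      )
    );

    // votes are matched against the votes_by_answer index, one query per answer
    const votePages = await Promise.all(
      answers.data.map(answer =>
        client.query<{ data: any[] }>(
          q.Map(
            q.Paginate(q.Match(q.Index('votes_by_answer'), answer.ref)),
            ref => q.Get(ref)
          )
        )
      )
    );
    const votes = votePages.flatMap(page => page.data);

    return res.status(200).json({ question, answers, votes });
  } catch (error) {
    return res.status(500).json({ error: 'Oops! Something went wrong...' });
  }
}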

You can see some of my helpers like questionRef() and getQuestionById(), which are more good examples of using FQL to help make your code more readable and reusable, all without a complex abstraction or ORM.

export const getCollectionDocumentById = compose(q.Get, q.Ref);

export function getQuestionById(id: string) {
 return getCollectionDocumentById(q.Collection('questions'), id);
}

export function questionRef(id: string | string[]): faunadb.Expr {
 return q.Ref(q.Collection('questions'), id);
}

The other part of our Next.js app that executes on a server is actually within our /pages/questions/[qid].tsx file that represents a page component in our app. Next.js allows you to export a function called getServerSideProps() that fetches the data necessary to render your page server-side before serving it. This is where I prefer to do any map reduces, sorting, or aggregating of the data itself. You can choose to do this in your API routes as well but I like to keep a separation of concerns here, where my API routes simply return the necessary data from the database and any aggregation needed for rendering and display is done in my getServerSideProps() functions.

export const getServerSideProps: GetServerSideProps = async ({req, params}) =>  {
 try {
   const host = req?.headers.host;
   const res = await fetch(`https://${host}/api/questions/${params?.qid}`)
   const resJson: QuestionResponse = await res.json()

   const { question, answers, votes } = resJson;

   return {
     props: {
       question,
       answers: mergeAndSortAnswersAndVotes(answers, votes)
     }
   }
 } catch (error) {
   throw new Error('Oops! Something went wrong...');
 }
};
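
The mergeAndSortAnswersAndVotes() call is where that aggregation happens. A hypothetical sketch of what such a helper might look like, assuming the vote shape shown earlier (the real types and tie-breaking rules may differ):

// tally each answer's votes and sort answers by score, highest first
interface VoteDoc { data: { answer: { id: string }; direction: 'up' | 'down' } }
interface AnswerDoc { ref: { id: string }; data: object }

export function mergeAndSortAnswersAndVotes(answers: AnswerDoc[], votes: VoteDoc[]) {
  const scored = answers.map(answer => {
    const score = votes
      .filter(vote => vote.data.answer.id === answer.ref.id)
      .reduce((total, vote) => (vote.data.direction === 'up' ? total + 1 : total - 1), 0);
    return { ...answer, score };
  });

  return scored.sort((a, b) => b.score - a.score);
}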

I went on to use a similar setup for creating the other endpoints, with the API routes fetching data from Fauna and the data processing done on the backend of our pages. The other added advantage of this is that the data processing used for display may not be necessary for other things we may need these endpoints for, like sending out notifications to users when a question is answered. In a sense we are doing a serverless take on the classic MVVM pattern, where our model sits in the API folder and our view models are our getServerSideProps functions. This just showcases how, even though we have a single repository with Next.js for code management, we can easily maintain separate domains for our services and renderings. We can also just as easily change this if need be in the future.

[Image: a serverless take on the MVVM pattern]

The Frontend

For this prototype I wanted to keep the frontend as simple as possible. Next.js already comes set up to use React out of the box, but what about our styles? I personally love Tachyons, which is a lightweight atomic CSS framework not unlike Tailwind, just considerably lighter weight. While Tailwind is more configurable, Tachyons is far easier to memorize, so I find myself just adding the classes without thinking or referring back to the documentation.

[Image: the StudyVue frontend]

For any custom CSS I have to write, or any styles that require dynamic variables, I like to use the styled-jsx that Next.js comes with out of the box. Typically with this setup I write very few styles or modifications myself. In this case I was also designing as I coded, so I just stuck to the Tachyons defaults, which are good for this project.

Here’s a quick look at the Header component:

<header className="Header flex items-center justify-between w-100 pb3 bb">
     <Link href="/">
       <a className="Header__logoLink db mv2 pa0 black link b">
         <img className="Header__logo db" alt="studyvue logo" src="/logo.svg" />
       </a>
     </Link>
     <nav className="Header__nav flex items-center">
       {userInfo.isLoggedIn && (
         <Link href="/me">
           <a className="Header__logoutLink db black f5 link dark-blue dim">
             <span className="di dn-ns pr2">Me</span><span className="dn di-ns pr3">My Stuff</span>
           </a>
         </Link>
       )}
       <Link href="/questions/ask">
         <a className="Header__askQuestionLink db ph3 pv2 ml2 ml3-ns white bg-blue hover-bg-green f5 link">
           Ask <span className="dn di-ns">a Question</span>
         </a>
       </Link>
     </nav>
     <style jsx>{`
       .Header__logo {
         width: 12rem;
       }

       @media screen and (min-width: 60em) {
         .Header__logo {
           width: 16rem;
         }
       }
     `}</style>
   </header>

At this point, you may also notice that I am adding my own class names as well, like Header and Header__logo. This is a bit of a take on the classic BEM CSS methodology, which I have modified for use with React to be Component, Element, Modifier instead. The component name prefixes all class names used in that component, followed by two underscores, followed by the name of the element itself. Right now, I'm not managing a lot of styles; however, call me old school, but I still like to be able to comb the DOM in my developer tools and know exactly what I am looking at. So while most of these class names do not have style attached to them right now, I love the meaning it conveys as I develop, so I've made a habit of adhering to this. It's also nice when the time comes to write end-to-end tests to be able to query any element easily.

User Context

All of the forms and UI elements inside of the application follow very standard React architectural methods so I won’t go into those in detail here. One thing that I think is worth talking about in the context of Next.js is how to have a global context to our app that lets us know if a user is logged in and what their user id is.

At this point, we have already set up our app to use an HTTP-only cookie that will be passed on every request back to our API. The notable exception to this is our getServerSideProps function. This will receive the cookie, but in order to use it to fetch data from our API we would have to forward that cookie along. In this case, we don't have to worry about this because all of our data is public-facing. Therefore any calls to fetch questions, answers, and votes can just use our standard server token from the API. Where we do need to pass the user token is any time we POST data to the database, when we want to have a page that shows a user's asked questions, and when changing layouts based on a user's logged-in status. In all of the above cases, we can make those calls from the client directly to our API, so the saved token is passed along by default in cookies every time.

What we don't want is to see a re-render on every page load as we update our header to reflect whether the user is logged in or not. The ideal scenario is that when the user opens up the app, we check if the token saved to cookies is valid and then update our global context with a boolean value isLoggedIn and the userId from our database. I've opted not to pass the email back to the frontend under any circumstances to provide some additional protection of the only PII we do store in the database.

In Next.js this is done by creating an _app.tsx file in the pages directory. This is a wrapper component in which we can use React's useEffect() hook to run the check once when the application loads; the context then holds that value until the browser is refreshed again. By using Next's Link components to navigate, the DOM is updated only where needed and our user context persists as our users navigate the application. You could do this user check during server-side rendering as well; however, I found keeping these user functions client-side resulted in less code in my getServerSideProps functions, since we don't need to check for the presence of a token and forward that cookie along to the API.

Here is an example of my _app.tsx file:

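The file itself is short; a sketch of its shape (the exact context type and the response shape of the GET /api/login check are illustrative):

// pages/_app.tsx -- a sketch
import { AppProps } from 'next/app';
import { createContext, useEffect, useState } from 'react';

interface UserInfo {
  isLoggedIn: boolean;
  userId: string | null;
}

export const UserContext = createContext<{
  userInfo: UserInfo;
  setUserInfo: (info: UserInfo) => void;
}>({ userInfo: { isLoggedIn: false, userId: null }, setUserInfo: () => {} });

export default function App({ Component, pageProps }: AppProps) {
  const [userInfo, setUserInfo] = useState<UserInfo>({ isLoggedIn: false, userId: null });

  useEffect(() => {
    // run once on load: the GET /api/login route checks the HTTP-only cookie
    // and tells us whether the saved token is still valid
    fetch('/api/login')
      .then(res => res.json())
      .then(({ isLoggedIn, userId }) => setUserInfo({ isLoggedIn, userId }))
      .catch(() => setUserInfo({ isLoggedIn: false, userId: null }));
  }, []);

  return (
    <UserContext.Provider value={{ userInfo, setUserInfo }}>
      <Component {...pageProps} />
    </UserContext.Provider>
  );
}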

Above you can see how the UserContext wraps the entire app and provides a method to update this from within the app via the setUserInfo() method. We can use this at the various login points in the application to update the context without refreshing the page after a new login. This allows for many points of login throughout the application and does not force users to go to a /login or /create-account route in order to participate. This, in conjunction with our easy two-step authentication, keeps the user in the experience at the place where they decided to log in, without forcing them to find their way back to the question or answer forms.
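
Implementing Search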

So in order for our product to be effective we need to have robust search. Ideally the search will be able to handle returning results in the event of misspellings and be able to query on the question title as well as the additional description of the question. FaunaDB does have search features built into it for exact text search, but building out the kind of robustness we want would be quite a bit of overhead. Thankfully Algolia is a product designed to deal with this exact issue.

Setting up Algolia, like FaunaDB, can be done entirely through their GUI. You create what are called indices, which are just going to be copies of your FaunaDB objects. In this case, I only want to create an index for the questions since this is what users need to be able to search on. In the future I could see a world where we also add the top-voted answers to the search so we can get even richer results, but for now all that is needed on day one is indexing of the questions.

The way I do this is, upon successfully saving our question to FaunaDB in our API, I follow that up with a POST of a flattened copy of that object to Algolia. It's important to only pass the fields you want to be able to search on to Algolia, as well as the ref id of the question. The ref id is what we are going to use to link to the actual question in our app at the route /questions/:qid. By doing this users can now search question titles and their descriptions, and the results returned by Algolia can easily be used to link to the actual question page.

Here is an example of that flow inside the API:

const postQuestion: FaunaQuestion | undefined = await userClient?.query(
  createQuestionDocument(formattedBody)
)

try {
  const algoliaClient = algoliasearch('<your_algolia_id>', process.env.ALGOLIA_SECRET);
  const questionsIndex = algoliaClient.initIndex('prod_QUESTIONS');

  const refId = await userClient?.query(q.Select(['ref', 'id'], postQuestion));

  const indexableQuestionObj = {
    objectID: refId,
    question: postQuestion.data.question,
    description: postQuestion.data.description,
  }

  await questionsIndex?.saveObject(indexableQuestionObj)
} catch (error) {
  // log the error (and ship it off to an error tracking service) but don't block the response
  console.error('Error indexing question with algolia: ', error);
}

return res.status(200).json(postQuestion);

The key thing to note here is that I didn't want any failure to index a question with Algolia to interrupt the user experience. Here we simply wrap that up in a try…catch block, and in our catch, where I am logging the error, we can send it off to our error logging software like Sentry, LogRocket, or Honeybadger. This will let us manually correct the issue if need be, but all that happens in a failure is the question won't come up in search results. In that case, we don't want users to try to double-save the question since we'd end up with it in FaunaDB twice. In the future, we can create a system to retry adding failures to Algolia asynchronously outside the user flow to make this more robust, but either way, we want users to be able to move on as long as the data makes it to FaunaDB, our source of truth.

Algolia on the Client

Now that Algolia has saved us time on the building of search, we can use Algolia to save us some time building the actual search bar. Algolia has React components ready to go for us that can just be dropped into our app and styled with some CSS to match our theme.

We can just install the react-instantsearch-dom package from npm, and we'll use the same Algolia search package that we used in our API on the client to fetch our results.

I will admit actually finding a code sample that showcased how this worked was a bit tough so here’s my approach. I made a component called SearchBar that wrapped up the Algolia InstantSearch and SearchBox components. I also defined a component called Hit that will represent the list item of a hit and showcase our data the way we want it to.

Here’s an example:

const searchClient = algoliasearch(
 '<YOUR_ALGOLIA_ID>',
 '<YOUR_ALGOLIA_KEY>'
);

const Hit = ({ hit: {
 question, 
 hashtags,
 objectID
}}: Hit) => {
 return (
   <div className="Hit pv3 bt b--silver">
   <Link href="/questions/[qid]" as={`/questions/${objectID}`}>
     <a className="Hit__question db f5 link dark-blue dim">
       <span>{question}</span>
     </a>
   </Link>
   </div>
 );
}

const Search = () => (
 <div className="Search">
   <InstantSearch
     indexName="prod_QUESTIONS"
     searchClient={searchClient}
   >
     <SearchBox translations={{
       placeholder: "Search questions or hashtags..."
     }} />
     <Hits hitComponent={Hit} />
   </InstantSearch>
   <style jsx global>{`
     .ais-SearchBox-form {
       position: relative;
       display: block;
     }

     .ais-SearchBox-input {
       position: relative;
       display: block;
       width: 100%;
       padding: 1rem 2rem;
       border: 1px solid #999;
       border-radius: 0;
       background-color: #fff;
     }

     .ais-SearchBox-submit,
     .ais-SearchBox-reset {
       position: absolute;
       top: 50%;
       transform: translateY(-50%);
       height: 1rem;
       appearance: none;
       border: none;
       background: none;
     }

     .ais-SearchBox-submitIcon,
     .ais-SearchBox-resetIcon {
       width: 1rem;
       height: 1rem;
     }

     .ais-SearchBox-submit {
       left: 0.2rem;
     }

     .ais-SearchBox-reset {
       right: 0.2rem;
     }

     .ais-Hits-list {
       padding: 0;
       list-style: none;
     }
   `}</style>
 </div>
);

As you can see, I just used a Next.js styled-jsx block with a global scope to style the classes inside of the Algolia components.

[Image: the styled Algolia search components]

And there you have it, professional-grade search and an easy to use component ready to go in under an hour.

Deployment

At this point deployment is as simple as typing `now` into the command line. One thing about using Vercel is that our deployment pipeline is effectively done for us before we even start writing the app. Rather than deploy directly from the terminal, I set up their GitHub integration, which does two things.

  1. Any merges into master are automatically deployed to production.
  2. Any new branches deploy an instance of our app with those changes. These effectively become our QA branches.

Now, if you have any test suites to run in your deployment pipeline, you will need another tool to run tests before deploy. In this case I am OK running tests manually for a while, as this is just a prototype, and being cautious about merging into master. The nice thing is I can spin up QA branches and have people try out new changes and updates before sending them off to the public release.

In Conclusion

All in all, the construction of the entire application took a few hours over about three weekends with the above approach. I have a performant, scalable prototype to test my idea out with that I can grow into. I have found that combining Next.js with Vercel makes microservices less painful by abstracting the difficulties in managing multiple services into simple configuration files. Infrastructure as code is empowering for solo developers running on limited time.

FaunaDB was also an excellent choice as I got the flexibility of a NoSQL database, but was also able to model out a normalized data model with ease. FQL was an easter egg whose power I didn't realize until I started actually working with it. I know I've just scratched the surface on how we can leverage this to optimize the various queries we need to make.

Depending upon how this experiment goes the future for this application can go in many directions. I think the key benefit to this type of architecture is that it's humble enough to not be too opinionated, flexible enough to allow for pivots, and with enough structure to not get sucked down wormholes of configuration and build steps. That’s all most developers can ask for, the ability to work efficiently on the business problem at hand.

Please take a look at the project here, and ask or answer some questions if you feel inclined!