Chapter 10: Building a Concurrent Application
We've now covered all the major areas that JavaScript has to offer in terms of concurrency. We've seen the browser and how the JavaScript interpreter fits into this environment. We've looked at the few language mechanisms that assist with writing concurrent code, and we've learned how to write concurrent JavaScript in the back-end. In this chapter, we're going to try to put this all together by building a simple chat application.
It's worth noting upfront that this isn't a basic rehash of individual topics covered in earlier chapters; that would serve no real purpose. Instead, we're going to focus on the concurrency decisions that we have to make during the initial implementation of the app, adapting ideas from earlier in this book wherever appropriate. It's the design of the concurrency semantics we put to use in our code that matters, much more than the particular mechanisms used to implement them.
We'll start with a brief foray into the pre-implementation activities. Then, we'll look at the more detailed requirements of the application that we're building. Finally, we'll walk through the actual implementation, which is divided into two parts: the front-end and the back-end.
Getting started
Looking at examples with code snippets is a good avenue for introducing a given topic. This is more or less what we've done so far throughout this book while going through concurrency in JavaScript. In the first chapter, we introduced a few concurrency principles. We should parallelize our code to take advantage of concurrent hardware. We should synchronize concurrent actions unobtrusively. We should conserve the CPU and memory by deferring computations and allocations wherever possible. Throughout the chapters, we've seen how these principles apply to different areas of JavaScript concurrency. They're also applicable in the first stages of development, before we have an application at all, as well as later on, when we're fixing an existing one.
We'll start this section with another look at the idea that concurrency is the default mode. When concurrency is the default, everything is concurrent. We'll go over, once again, why this is such an important system trait. Then, we'll look at whether or not the same principles apply to applications that already exist. Lastly, we'll look at the types of applications we might be building, and how they influence our approach to concurrency.
Concurrency first
As we're well aware by now, concurrency is difficult. No matter how we dress it up or how solid our abstractions are, it's simply counter-intuitive to how our brains work. This makes writing sound concurrent code seem impossible, doesn't it? It definitely isn't. As with any difficult problem, the right approach is almost always a variation of divide and conquer. In the case of JavaScript concurrency, we want to divide the problem into a few really small, easy-to-solve problems. An easy way to do this is to heavily scrutinize potential concurrency issues before we actually sit down to write any code.
For example, let's say we work under the assumption that we're likely to encounter concurrency issues frequently, all throughout our code. This would mean that we'd have to spend a lot of time doing upfront concurrency design. Things like generators and promises make sense from the early stages of development, and they get us closer to our end goal. But other ideas, like functional programming, map/reduce, and web workers, solve larger concurrency problems. Do we really want to spend a lot of design time on issues like these, which we have yet to actually experience in our application?
The other approach is to spend less time on upfront concurrency design. This is not to say that we ignore concurrency; that would defeat the whole premise of this book. Rather, we work under the assumption that we don't yet have any concurrency issues, but that there's a strong possibility we will have them later on. Put differently, we continue to write code that's concurrent by default, without investing in solutions to concurrency problems that don't exist yet. The principles we've used throughout this book, again, help us solve the important problems first.
For instance, we want to parallelize our code where we can get the most out of multiple CPUs on the system. Thinking about this principle forces the question: do we really care about leveraging eight CPUs for something that's easily handled by one? With little effort, we can build our application in such a way that we don't end up paralyzing ourselves by bikeshedding on concurrency issues that aren't real. Think about how to facilitate concurrency in the early stages of development. Ask of any given implementation: does it make future concurrency issues difficult to deal with, and is there a better approach? Later in the chapter, our demo application will aim to implement code in this fashion.
Retrofitting concurrency
Given that it's ill-advised to spend much time upfront thinking about concurrency issues, how do we go about fixing these issues once they happen? In some circumstances, the issues can be serious problems that render the interface unusable. For example, if we try to process a large amount of data, we could crash the browser tab by trying to allocate too much memory, or the UI could simply freeze. These are tough problems that require immediate attention, and they often don't come with the luxury of time.
The other circumstance that we're likely to find ourselves in is less-critical cases, where a concurrent implementation could objectively improve the user experience, but the application isn't going to fail if we don't fix it right away. For example, let's say that our application makes three API calls on the initial page load. Each call waits for the previous call to complete. But, it turns out that there's no actual dependency between the calls; they don't require response data from each other. Fixing these calls so that they all happen in parallel is relatively low-risk and improves the load time, possibly by more than a second.
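As a concrete sketch of such a fix (the fetch and render function names here are hypothetical), the change amounts to replacing a sequential then() chain with Promise.all():

// Before, each call needlessly waited for the one before it.
// After, the three independent calls run in parallel, and the
// total wait time becomes the slowest call rather than the sum.
Promise.all([fetchUser(), fetchSettings(), fetchNotifications()])
    .then(([user, settings, notifications]) => {
        render(user, settings, notifications);
    });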
The ultimate deciding factor on how easy or difficult these changes are to retrofit into our application depends on how the application was written. As mentioned in the preceding section, we don't want to spend a lot of time thinking about concurrency problems that don't exist. Instead, our initial focus should be on facilitating concurrency by default. So, when these circumstances arise, and we need to implement a concurrent solution that solves a tangible problem, it's not so difficult. We're already thinking concurrently because that's the way the code was written.
We're just as likely to find ourselves fixing an application that paid no mind to concurrency. These are trickier to handle when the fix calls for a concurrent solution; we'll often find that we need to refactor a lot of code just to fix something basic. This gets tough when we're under the gun time-wise but, generally speaking, it can be a good thing. If a legacy application starts getting refactored for better concurrency facilitation one piece at a time, then we're better off. Each refactoring makes the next concurrency issue easier to fix, and it promotes a good style of coding: concurrency by default.
Application types
One thing we can and should pay close attention to during the initial phases of implementation is the type of application we're building. There's no generic approach to writing code that facilitates concurrency. The reason for this is that every application is concurrent in its own unique way. There's obviously some overlap between concurrency scenarios, but in general, it's a good bet that our application is going to require its own special treatment.
For example, does it make sense to devote a lot of time and effort to designing abstractions around web workers if our application performs hardly any CPU-intensive work? Likewise, it wouldn't make sense to think about making API responses promised values if our application hardly makes any web requests at all. Finally, do we really want to think about inter-process communication design in our Node components if we don't have a high request/connectivity rate?
The trick isn't to ignore these lower-priority items, because as soon as we ignore some dimension of concurrency in our application, next week is when everything changes, and we'll be completely unprepared to handle the situation. Instead of completely ignoring these dimensions of our application in a concurrency context, we need to optimize for the common case. The most effective way to do this is to think deeply about the nature of our application. By doing this, we can easily spot the best candidate problems to work on in our code as far as concurrency is concerned.
Requirements
Now it's time to turn our attention to actually building a concurrent application. In this section, we'll go through a brief overview of the chat application that we're going to build, starting with the overall goal of the application. Then, we'll break down the other requirements into the "API" and the "UI". We'll dive into some code momentarily, don't worry.
The overall goal
First things first, why yet another chat application? Well, for two reasons. First, it's not a real application; we're not building it for the sake of reinventing the wheel. We're building it to learn about concurrent JavaScript in the context of an application. Second, a chat application has a lot of moving parts that help demonstrate the concurrency mechanisms we've learned about in this book. That being said, it will be a very simple chat application; we only have so much space in a chapter.
The chat concept that we'll implement is the same as with most other familiar chat applications out there. There's the chat itself, labeled with a topic, and there are the users and messages within. We'll implement these and not much else. Even the UI itself will be a stripped-down version of a typical chat window. Again, this is an effort to keep the code samples down to what's pertinent in a concurrency context.
To further simplify things, we won't actually persist the chats to disk; we'll just hold everything in memory. This way, we can keep our focus on other concurrency issues in the app, and it's easy to run without setting up storage or worrying about disk space. We'll also skip the other common chat features, such as typing notifications, emoji, and so on. They're just not relevant to what we're trying to learn here. Even with all these features removed, we'll see how involved concurrency design and implementation can get; larger projects are all the more challenging.
Finally, instead of using authentication, this chat app will serve a more transient usage scenario, where users want to spin up a quick chat that doesn't require registration. The chat creator starts a new chat, which generates a unique URL that can be shared with participants.
The API
The API for our chat app will be implemented using a simple Node HTTP server. It doesn't use any web frameworks, only a couple of small libraries. There's no reason for this other than that the application is simple enough that a framework wouldn't enhance the examples in this chapter in any way. In the real world, by all means, use a Node web framework that simplifies your code; the lessons from this book, including this chapter, are still applicable.
The responses will be JSON strings of our chat data. Only the most basic API endpoints that are fundamental to the application will be implemented. Here's what we need in terms of API endpoints:
- Create a new chat
- Join an existing chat
- Post a new message to an existing chat
- Fetch an existing chat
Pretty simple, right? It's deceptively simple. Since the API has no filtering capabilities, filtering needs to be handled in the front-end. This is on purpose; an API that's missing features is common, and a concurrent solution in the front-end is the likely outcome. We'll revisit this topic when we start building the UI.
The NodeJS code implemented for this sample application also includes handlers for serving static files. This is a convenience measure more than a reflection of what should happen in production. It's more important to be able to easily run this application and play around with it than to replicate how static files are served in a production environment.
The UI
The user interface of our chat application will consist of a single HTML file and some accompanying JavaScript code. There are three pages within the HTML document—just simple div elements, and they are as follows:
- Create chat: user provides a topic and their name.
- Join chat: user provides their name and is redirected to the chat.
- View chat: user can view chat messages and send new messages.
The role of these pages is fairly self-explanatory. The most complex page is view chat, and even this isn't too bad. It displays a list of all messages sent from any participant, including ourselves, along with the list of users. We'll have to implement a polling mechanism to keep the content of this page synchronized with chat data. Style-wise, we're not doing much beyond some very basic layout and font adjustments.
Lastly, since chats are transient and ad hoc in nature, users are likely to create and join them frequently. It'd be nice if we didn't have to enter our user name every time we create or join a chat, so we'll add functionality that keeps the name of the user in browser local storage.
Alright, time to write some code, ready?
Building the API
We'll begin the implementation with the NodeJS back-end. This is where we'll build the necessary API endpoints. We don't necessarily have to start with building the back-end first. In fact, a lot of the time, the UI design drives the API design. Different development shops have different approaches; we're doing the back-end first for no particular reason.
We'll start by implementing the basic HTTP serving and request routing mechanisms. Then, we'll look at using coroutines as handler functions. We'll wrap up the section with a look at how each of our handler functions are implemented.
The HTTP server and routing
We're not going to use anything more than the core http Node module for handling HTTP requests. In a real application, where we're more likely to use a web framework that takes care of a lot of boilerplate code for us, we would probably have a router component at our disposal. Our requirements are very similar to what we'd find in these routers, so we'll just roll our own here for the sake of simplicity.
We'll use the commander library to parse command-line options; doing this by hand is less straightforward than it sounds. The library is tiny, and introducing it early on in our project means it's easy to add new configuration options to our server as it grows.
The job of our main module is to launch the HTTP server and set up a handler function that does the routing. The routes themselves are a static mapping of regular expression to handler function. As we can see, the handler functions are stored in a separate module. So let's take a look at our main program now:
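Here's a minimal sketch of what this module looks like; the exact route expressions, handler names, and default port are assumptions of this walkthrough:

// app.js: a minimal sketch of the main module.
const http = require('http');
const commander = require('commander');
const handlers = require('./handlers');

commander
    .option('-p, --port <port>', 'The port to listen on', 8081)
    .parse(process.argv);

// A static mapping of route regular expressions to handler
// functions. Captured groups, such as the chat ID, are passed
// along to the handler.
const routes = [
    [/^\/api\/chat\/(.+)\/message$/i, handlers.sendMessage],
    [/^\/api\/chat\/(.+)\/join$/i, handlers.joinChat],
    [/^\/api\/chat$/i, handlers.createChat],
    [/^\/api\/chat\/(.+)$/i, handlers.loadChat],
    [/.*/, handlers.staticFile]
];

http.createServer((req, res) => {
    // Iterate over the routes until one matches the request URL,
    // then hand the request off to that handler.
    for (const [regex, handler] of routes) {
        const match = regex.exec(req.url);
        if (match) {
            return handler(req, res, ...match.slice(1));
        }
    }
    // Nothing matched; respond with a 404.
    res.statusCode = 404;
    res.end();
}).listen(commander.port);

console.log(`listening at http://localhost:${commander.port}`);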
This is the extent of our handler routing mechanism. We have all our routes defined in the routes variable, and as our application changes over time, this is where the route changes happen. We can also see that getting options from the command line using commander is pretty straightforward. Adding new options here is easy.
The request handler function that we've given to our HTTP server will probably never need to change, because it doesn't actually fulfill any requests. All it does is iterate over the routes until the route regular expression matches the request URL. When this happens, the request is handed off to the handler function. So, let's turn our attention to the actual handler implementation.
Coroutines as handlers
As we saw in earlier chapters of this book, it doesn't take much to introduce callback hell in our front-end JavaScript code. This is where promises come in handy, because they allow us to encapsulate nasty synchronization semantics. The result is clean and readable code in our components, where we try to implement product features. Do we have the same problem with Node HTTP request handlers?
In simpler handlers, no, we don't face this challenge. All we have to do is look at the request, figure out what to do about it, do it, and then update the response before sending it. In more complex scenarios, we have to do all kinds of asynchronous activities within our request handler before we're able to respond. In other words, callback hell is inevitable if we're not careful. For example, our handler might reach out to other web services for some data, it could issue a database query, or it could write to disk. In all these cases, we need to execute callbacks when the asynchronous action completes; otherwise, we'd never finish our responses.
In Chapter 9, Advanced NodeJS Concurrency: Coroutines with Co, we looked at implementing coroutines in Node using the Co library. What if we could do something similar with our request handler functions? That is, make them coroutines instead of plain callable functions.
The ultimate goal is for the values we get from these services to behave like simple variables in our code. They don't have to come from services, however; they could be the result of any asynchronous action. For example, our chat application needs to parse form data that's posted from the UI. It's going to use the formidable library to do this, which is an asynchronous action. The parsed form fields are passed to a callback function. Let's wrap this action in a promise, and see what it looks like:
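A minimal sketch of this wrapper:

// This function wraps formidable's callback-style parse() call
// in a promise.
const formidable = require('formidable');

function formFields(req) {
    return new Promise((resolve, reject) => {
        new formidable.IncomingForm().parse(req, (err, fields) => {
            // Reject the promise if parsing fails; otherwise,
            // resolve it with the parsed form fields.
            if (err) {
                reject(err);
            } else {
                resolve(fields);
            }
        });
    });
}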
When we want form fields, we have a promise to work with, which is good. But now, we need to use the function in the context of a coroutine. Let's walk through each of our request handlers, and see how to use the formFields() function to treat the promised value as a synchronous value.
The create chat handler
The create chat handler is responsible for creating a new chat. It expects a topic and a user. It's going to use the formFields() function to parse the form data that's posted to this handler. After it stores the new chat in the global chats object (remember, this application stores everything in memory), the handler responds with the chat data as a JSON string. Let's take a look at the handler code:
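Here's a sketch of the handler, with the in-memory chats store and two tiny helpers defined inline to stand in for the utilities that ship with the book's companion code:

const co = require('co');

// The in-memory chat store, plus simplistic stand-ins for the
// id() and timestamp() utilities mentioned below.
const chats = {};
const id = () => Math.random().toString(36).slice(2);
const timestamp = () => Date.now();

exports.createChat = co.wrap(function* (req, res) {
    if (req.method !== 'POST') {
        res.statusCode = 405;
        return res.end();
    }

    // Yielding the promise suspends the handler until the
    // parsed form fields are resolved.
    const fields = yield formFields(req);
    const chatId = id();

    // Store the new chat in memory.
    chats[chatId] = {
        topic: fields.topic,
        timestamp: timestamp(),
        users: [{ name: fields.user, timestamp: timestamp() }],
        messages: []
    };

    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify(chats[chatId]));
});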
We can see that the createChat() function is exported from this module, because it's used by our router in the main application module. We can also see that the handler function is a generator, and it's wrapped with co.wrap(). This is because we want it to be a coroutine instead of a regular function. The call to formFields() illustrates the ideas that we covered in the previous section. Notice that we yield the promise, and we get the resolved value in return. The handler is suspended while this happens, and this is of key importance because it's how we're able to keep our code clean and free of excessive callbacks.
There are a few utility functions used by each of our handlers. These functions aren't covered here in the interest of page space. However, they're in the code that ships with this book, and they're documented in the comments.
The join chat handler
The join chat handler is how a user is able to join a chat created by another user. The user first needs the URL of the chat shared with them. Then, they can provide their name and post to this endpoint, which has the chat ID encoded as part of the URL. The job of this handler is to push the new user onto the users array of the chat. Let's take a look at the handler code now:
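A sketch, building on the same chats store and helpers as the create chat handler:

exports.joinChat = co.wrap(function* (req, res, chatId) {
    if (req.method !== 'POST') {
        res.statusCode = 405;
        return res.end();
    }

    const fields = yield formFields(req);
    const chat = chats[chatId];

    if (!chat) {
        res.statusCode = 404;
        return res.end();
    }

    // Push the new user onto the chat's users array, and bump
    // the chat's timestamp so that pollers pick up the change.
    chat.users.push({ name: fields.user, timestamp: timestamp() });
    chat.timestamp = timestamp();

    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify(chat));
});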
We can probably notice many similarities between this handler and the create chat handler. We check for the correct HTTP method, return a JSON response, and wrap the handler function as a coroutine so that we can parse the form in a way that completely avoids callback functions. The main difference is that we update an existing chat, instead of creating a new one.
The code where we push the new user object to the users array would be considered storing the chat. In a real application, this would mean writing the data to disk somehow—likely a call to a database library. This would mean making an asynchronous request. Luckily, we can follow the same technique used with our form parsing—have it return a promise and leverage the coroutine that's already in place.
The load chat handler
The job of the load chat handler is exactly what it sounds like—load the given chat using an ID found in the URL and respond with the JSON string of this chat. Here's the code to do this:
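This sketch follows the same conventions as the handlers above:

// Note: a plain function, not a coroutine; nothing here is
// asynchronous.
exports.loadChat = function (req, res, chatId) {
    const chat = chats[chatId];

    if (!chat) {
        res.statusCode = 404;
        return res.end();
    }

    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify(chat));
};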
There's no co.wrap() call for this function, nor a generator. This is because neither is needed. It's not that it's harmful to have this function be a generator that's wrapped as a coroutine; it's just wasteful.
This is actually an example of us, the developers, making a conscious decision to avoid concurrency where it isn't justified. This might change down the road with this handler, and if it does, we'll have work to do. However, the trade-off is that we have less code for now, and it runs faster. It's also beneficial to others who read it: the function doesn't look asynchronous, and it shouldn't be treated as such.
The send message handler
The last major API endpoint that we need to implement is send message. This is how any user in a given chat is able to post a message that's available for all other chat participants to consume. This is similar to the join chat handler, except we're pushing a new message object onto the messages array. Let's take a look at the handler code; this pattern should start to look familiar by now:
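A sketch of this handler:

exports.sendMessage = co.wrap(function* (req, res, chatId) {
    if (req.method !== 'POST') {
        res.statusCode = 405;
        return res.end();
    }

    const fields = yield formFields(req);
    const chat = chats[chatId];

    if (!chat) {
        res.statusCode = 404;
        return res.end();
    }

    // Push the new message onto the chat's messages array.
    chat.messages.push({
        user: fields.user,
        message: fields.message,
        timestamp: timestamp()
    });
    chat.timestamp = timestamp();

    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify(chat));
});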
The same idea applies as when joining a chat. Modifying the chat object would likely be an asynchronous action in a real application, and our coroutine handler pattern is already set up for us to make this change when the time is right. That's the key with these coroutine handlers: they make adding new asynchronous actions to handlers easy instead of overwhelmingly difficult.
Static handlers
The last group of handlers that make up our chat application are the static content handlers. These have the job of serving static files from the file system to the browser, such as the index.html document and our JavaScript source. Typically, this is handled outside of the Node application, but we'll include them here because there are times when it's just easier to go batteries-included:
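A sketch of one such handler, with the static directory layout and the extension-to-type map assumed:

const fs = require('fs');
const path = require('path');

// A minimal extension-to-type map; enough for our file types.
const mimeTypes = {
    '.html': 'text/html',
    '.js': 'application/javascript',
    '.css': 'text/css'
};

exports.staticFile = function (req, res) {
    // Page URLs like "/chat/abc123" have no extension, so they
    // get the index.html document; everything else is treated as
    // a real file under an assumed "static" directory.
    const name = path.extname(req.url) ? req.url : 'index.html';
    const file = path.join(__dirname, 'static', name);

    fs.stat(file, (err) => {
        if (err) {
            res.statusCode = 404;
            return res.end();
        }

        res.setHeader('Content-Type',
            mimeTypes[path.extname(file)] || 'text/plain');

        // Piping a read stream keeps memory usage flat, even for
        // larger files.
        fs.createReadStream(file).pipe(res);
    });
};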
Building the UI
We now have an API to target; it's time to start building the user interface for our chat. We'll start by thinking about talking to the API that we've just built, then implementing that piece. Next, we'll build the actual HTML we need to render the three pages used by this application. From here, we'll move on to perhaps the most challenging part of the front-end: building the DOM event handlers and manipulators. Finally, we'll see if we can enhance the responsiveness of the application by throwing a web worker into the mix.
Talking to the API
The API communication paths in our UI are inherently concurrent—they send and receive data over a network connection. Therefore, it's in the best interest of our application architecture that we take time to hide the synchronization mechanisms from the rest of the system as best as we can. To communicate with our API, we'll use instances of the XMLHttpRequest class. However, as we've seen in earlier chapters of this book, this class can lead us toward callback hell.
The solution, as we know, is to use a promise to support a consistent interface to all our API data. This doesn't mean we need to abstract the XMLHttpRequest class over and over again. We create a simple utility function that handles the concurrency encapsulation for us, and then we create several smaller functions that are specific to a corresponding API endpoint.
This approach to talking with asynchronous API endpoints scales well, because adding new capabilities involves simply adding a small function. All the synchronization semantics are encapsulated within one api() function. Let's take a look at the code now:
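A sketch of the api() function; the option names (method, url, data) are assumptions:

function api(options) {
    return new Promise((resolve, reject) => {
        const req = new XMLHttpRequest();

        // Resolve the promise with the parsed JSON response.
        req.addEventListener('load', () => {
            resolve(JSON.parse(req.responseText));
        });

        // Reject it on network failure, so callers can react.
        req.addEventListener('error', () => {
            reject(req.statusText || 'network error');
        });

        req.open(options.method || 'GET', options.url);

        if (options.data) {
            req.setRequestHeader('Content-Type',
                'application/x-www-form-urlencoded');
        }

        req.send(options.data);
    });
}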
This function is pretty easy to use and supports all our API usage scenarios. The smaller API functions that we'll implement shortly can simply return the promise that's returned by this api() function. There's no need to do anything fancier than this.
However, there's another thing we'll want to consider here. If we recall from the requirements of this application, the API doesn't have any filtering capabilities. This is a problem for the UI, because we don't want to re-render the entire chat object every time we fetch it. Messages can be posted frequently, and if we re-render a lot of messages, there's a good chance that the screen will flicker as we recreate the DOM elements. So, we obviously need to filter the chat messages and users in the browser; but where should this happen?
Let's think about this in the context of concurrency. Say we decide to perform the filtering in a component that directly manipulates the DOM. This is good in one sense, because several independent components could then use the same data while filtering it differently. However, it's difficult to make any kind of adjustment for concurrency when the data transformations sit this close to the DOM. Our application doesn't actually need that flexibility; there's only one component that renders filtered data. But it might well benefit from concurrency.
There is another approach, where the API functionality that we implement performs the filtering. With this approach, the API functions are isolated enough from the DOM. We can introduce concurrency later on if we want. Let's look at some specific API functions now in addition to a filtering mechanism we can attach to the given API calls as needed:
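The following sketch shows filterChat() along with a few of the specific API functions; the endpoint URLs match the routing sketch from the back-end section:

// The most recent chat timestamp that we've seen.
let timestamp = 0;

function filterChat(chat) {
    Object.assign(chat, {
        // Keep only the users and messages that are newer than
        // anything we've already rendered.
        users: chat.users.filter(
            (user) => user.timestamp > timestamp),
        messages: chat.messages.filter(
            (message) => message.timestamp > timestamp)
    });

    // Remember where we left off, so that duplicate values
    // aren't returned on the next call.
    timestamp = chat.timestamp;
    return chat;
}

function getChat(chatId) {
    return api({ url: `/api/chat/${chatId}` }).then(filterChat);
}

function joinChat(chatId, user) {
    return api({
        method: 'POST',
        url: `/api/chat/${chatId}/join`,
        data: `user=${encodeURIComponent(user)}`
    }).then(filterChat);
}

function sendMessage(chatId, user, message) {
    return api({
        method: 'POST',
        url: `/api/chat/${chatId}/message`,
        data: `user=${encodeURIComponent(user)}` +
            `&message=${encodeURIComponent(message)}`
    }).then(filterChat);
}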
The filterChat() function is straightforward enough. It just modifies the given chat object to include only new users and messages. New messages are those that have a timestamp greater than the timestamp variable used here. After the filtering is done, timestamp is updated based on the chat's timestamp property. This could be the same value if nothing has changed, but if something has changed, this value is updated so that duplicate values aren't returned.
We can see that in our specific API functions, the filterChat() function is passed to the promise as a resolver. So we do retain a level of flexibility here. For example, if a different component needs to filter the chat differently, we can introduce a new function that uses the same approach, and add a different promise resolver function that filters accordingly.
Implementing the HTML
Our UI needs some HTML in order to render. The chat application is simple enough to get away with just a single HTML page. We can organize the DOM structure into three div elements, each of which represents our page. The elements on each page are simple in themselves, because there aren't many moving parts at this stage in development. Our first priority is functionality—building features that work. At the same time, we should be thinking about concurrency design. These items are definitely more pertinent to building a resilient architecture than thinking about, say, widgets and virtual DOM rendering libraries. These are important considerations, but they're also easier to work around than a faulty concurrency design.
Let's take a look at the HTML source used with our UI. There are a few CSS styles defined for these elements. However, they're trivial and aren't covered here. For example, the hide class is used to toggle the visibility of a given page. By default, everything is hidden. It's up to our event handlers to handle the display of these elements—we'll cover these next:
<div id="create" class="hide">
<h1>Create Chat</h1>
<p>
<label for="topic">Topic:</label>
<input name="topic" id="topic" autofocus />
</p>
<p>
<label for="create-user">Your Name:</label>
<input name="create-user" id="create-user" />
</p>
<button>Create</button>
</div>
<div id="join" class="hide">
<h1>Join Chat</h1>
<p>
<label for="join-user">Your Name:</label>
<input name="join-user" id="join-user" autofocus />
</p>
<button>Join</button>
</div>
<div id="view" class="hide">
<h1></h1>
<div>
<div>
<ul id="messages"></ul>
<input placeholder="message" autofocus />
</div>
<ul id="users"></ul>
</div>
</div>
DOM events and manipulation
We now have some API communication mechanisms and DOM elements in place. Let's turn our attention to the event handlers of our application, and how they interact with the DOM. The most involved DOM manipulation activity for us to tackle is drawing the chat. That is, displaying messages and users participating in the chat. Let's start here. We'll implement a drawChat() function because it's likely going to be used in more than one place:
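A sketch of drawChat(), with the element lookups matching the ids in the HTML above and the exact markup for messages and users assumed:

function drawChat(chat) {
    // Show the chat view and set its topic heading.
    const view = document.getElementById('view');
    view.classList.remove('hide');
    view.querySelector('h1').textContent = chat.topic;

    const messages = document.getElementById('messages');
    const users = document.getElementById('users');

    // Everything we're given is assumed to be new, so it's
    // simply appended to the DOM.
    for (const message of chat.messages) {
        const item = document.createElement('li');
        item.textContent = `${message.user}: ${message.message}`;
        messages.appendChild(item);
    }

    for (const user of chat.users) {
        const item = document.createElement('li');
        item.textContent = user.name;
        users.appendChild(item);
    }

    // Return the chat so this function can act as a promise
    // resolver in the middle of a then() chain.
    return chat;
}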
There are two important things to note about the drawChat() function. First, there's no chat filtering done here. It assumes that any message and user are new, and it simply appends them to the DOM. Second, we actually return the chat object after we've rendered the DOM. This may seem unnecessary at first, but we're actually going to use this function as a promise resolver. This means that if we want to add more resolvers to the then() chain, we have to pass the data along by returning it.
Let's take a look at the load event to highlight the previous point. After the chat has been rendered, we need to perform some more work. To do this, we can just chain the next function with another then() call:
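A sketch of the load handler; the URL parsing, the localStorage key, and the startPolling() function are assumptions here, and api.postMessage() is explained shortly:

window.addEventListener('load', () => {
    // The chat ID, if any, is encoded in the URL path; for
    // example, "/chat/abc123".
    const [, , chatId] = location.pathname.split('/');

    if (!chatId) {
        return;
    }

    api.postMessage({ action: 'getChat', chatId })
        .then(drawChat)
        .then((chat) => {
            // Make sure the current user is actually part of
            // the chat, joining it first if necessary.
            const user = localStorage.getItem('user');
            const joined = chat.users.some((u) => u.name === user);
            const ready = joined ?
                Promise.resolve(chat) :
                api.postMessage({ action: 'joinChat', chatId, user });

            // Once we've joined, start polling for updates.
            return ready.then(() => startPolling(chatId));
        });
});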
This handler is called when the page first loads, and it first needs to check if there's a chat to load based on the current URL. If there is, then we make an API call to load the chat using drawChat() as the resolver. But, we also need to perform some additional functionality, and this is added to the next then() resolver in the chain. Its job is to make sure the user is actually part of the chat, and for this, it needs the chat we just loaded from the API, which is passed along from drawChat(). After we make further API calls to add the user to the chat, if necessary, we start the polling mechanism. This is how we keep the UI up-to-date with new messages and new users joining the chat:
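A sketch of the polling mechanism; the interval length is an assumption:

function startPolling(chatId) {
    // Fetch the chat every few seconds; filtering happens in
    // the worker, and drawChat() appends whatever is new.
    setInterval(() => {
        api.postMessage({ action: 'getChat', chatId })
            .then(drawChat);
    }, 3000);
}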
You may have noticed that we're using a strange call that looks almost like a web worker call: api.postMessage(). That's because it is a web worker, and this is what we'll cover next.
In the interest of space, we're leaving out three other DOM event handlers related to creating chats, joining chats, and sending messages. There's nothing different about them in terms of concurrency compared to the load handler that we just covered.
Adding an API worker
Earlier, when we were implementing the API communication functions, we decided that having filtering components coupled with the API rather than the DOM made more sense from a concurrency perspective. It's now time to benefit from this decision and encapsulate our API code within a web worker. The main reason we want to do this is because the filterChat() function has the potential to lock up responsiveness. In other words, for larger chats, this would take longer to complete, and text inputs would stop responding to user input. For instance, there's no reason to prevent a user from sending a message while we try to render the updated list of messages.
First, we need to extend the worker API to have postMessage() return a promise. This is just as we did in Chapter 7, Abstracting Concurrency: Extending postMessage(). Take a look at the following code:
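A sketch of that extension; the message id bookkeeping is one way to match worker responses back to pending promises:

const api = new Worker('ui-api.js');
const resolvers = {};
const rejectors = {};
let nextId = 0;

// Have postMessage() return a promise. Each message carries a
// unique id, so the worker's response can be matched back to
// the promise that's waiting on it.
api.postMessage = function (data) {
    return new Promise((resolve, reject) => {
        const id = ++nextId;
        resolvers[id] = resolve;
        rejectors[id] = reject;
        Worker.prototype.postMessage.call(api,
            Object.assign({ id }, data));
    });
};

api.addEventListener('message', (e) => {
    const { id, error, value } = e.data;

    if (error) {
        // The call failed in the worker; reject the waiting
        // promise so failures don't silently disappear.
        rejectors[id](error);
    } else {
        resolvers[id](value);
    }

    delete resolvers[id];
    delete rejectors[id];
});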
There's one minor detail that we didn't cover in Chapter 7, Abstracting Concurrency: Extending postMessage(); the technique of rejecting promises. For example, if the API call for some reason fails, we have to make sure that the promise in the main thread that's waiting on the worker is rejected; otherwise, strange bugs will start popping up.
Now, we need to make an addition to our ui-api.js module, where all our API functions are defined, to accommodate the fact that it's now running inside a web worker. We just need to add the following event handler:
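A sketch of the handler, assuming the API functions shown earlier are defined in this module:

addEventListener('message', (e) => {
    const { id, action } = e.data;

    // Map the action onto the corresponding API function.
    const actions = {
        getChat: () => getChat(e.data.chatId),
        joinChat: () => joinChat(e.data.chatId, e.data.user),
        sendMessage: () => sendMessage(e.data.chatId,
            e.data.user, e.data.message)
    };

    // Post the result back to the main thread, echoing the id
    // so the right promise is settled; errors travel back too.
    actions[action]().then(
        (value) => postMessage({ id, value }),
        (error) => postMessage({ id, error: String(error) })
    );
});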
This message event handler is how we're able to communicate with the main thread. The action property is how we're able to determine which API endpoint to call. So now, whenever we perform any expensive filtering on our chat messages, it's in a separate thread.
Another consequence of introducing this worker is that it encapsulates the API functionality into a cohesive whole. The API web worker component can now be thought of as a smaller application within the larger UI as a whole.
Additions and improvements
And that's the extent of our coverage of the chat application's development. We didn't walk through every bit of code, which is why the code is made available as a companion to this book; you can look through it in its entirety. The preceding sections have looked at the application through the lens of writing concurrent JavaScript code. We didn't utilize every last example from the chapters before this one; doing so would have defeated the whole purpose, which is to use concurrency to fix issues that lead to a suboptimal user experience, not for its own sake.
The focus of the chat application example was the facilitation of concurrency. This means making it possible to implement concurrent code when there's a need to do so as opposed to implementing concurrent code for the sake of it. The latter doesn't make our application any better than it is right now, nor does it leave us in a better position to fix concurrency issues that happen later on.
We'll wrap up the chapter with a few areas that might be worth considering for our chat application. You, the reader, are encouraged to work with the chat application code and see whether any of the points that follow are applicable. How would you go about supporting them? Would we need to alter our design? The point is that concurrency design in our JavaScript applications isn't a one-time occurrence; it's an ever-evolving design task that changes alongside our application.
Clustering the API
In Chapter 9, Advanced NodeJS Concurrency: Abstracting process pools, you were introduced to the cluster module in NodeJS. This transparently scales the request-handling ability of our HTTP servers. The module works by forking the node process into several child processes. Since each of them is its own process, each has its own event loop. Furthermore, no additional communication synchronization code is required.
It wouldn't take much effort on our behalf to add these clustering capabilities to our app.js module. But here's the question: at what point do we decide that clustering is worthwhile? Do we wait until we actually have performance issues, or do we just have it turned on automatically? These things are difficult to know in advance. The reality is that it depends on how CPU-intensive our request handlers get, and those changes usually come about as a result of new features being added to the software.
Will our chat app ever need clustering? Perhaps, someday. But right now, there's really no work being performed by the handlers. This can always change. Maybe we could go ahead and implement the clustering capabilities, but also add an option that lets us turn them off.
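For instance, here's a sketch of what that might look like in app.js, assuming a hypothetical --cluster command-line option:

const cluster = require('cluster');
const os = require('os');

if (commander.cluster && cluster.isMaster) {
    // Fork one worker per CPU; each worker gets its own process
    // and its own event loop.
    os.cpus().forEach(() => cluster.fork());
} else {
    // startServer() stands for the http.createServer() logic
    // shown earlier. Note that with chats held in process
    // memory, the workers wouldn't share data; real clustering
    // would first need shared storage.
    startServer();
}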
Cleaning up chats
Our chat application doesn't have any persistent storage; it holds all the chat data in memory. This is fine for our particular use case, because it's meant for users that want to spin up a transient chat so that they can share a link with people and not have to go through a registration process. The problem here is that after the chat is no longer being used, its data still occupies memory. Eventually, this will be fatal to our Node process.
What if we decided to implement a cleanup service, whose job would be to periodically iterate over the chat data and delete any chats that haven't been modified in a given amount of time? This would keep only active chats in memory.
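A sketch of such a service, with the interval and expiry window as assumed values:

// Every ten minutes, delete any chat that hasn't been modified
// within the expiry window.
const EXPIRY = 60 * 60 * 1000;

setInterval(() => {
    const now = Date.now();

    Object.keys(chats).forEach((chatId) => {
        if (now - chats[chatId].timestamp > EXPIRY) {
            // The chat has gone stale; reclaim its memory.
            delete chats[chatId];
        }
    });
}, 10 * 60 * 1000);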
Asynchronous entry points
We made the early decision to use coroutines for most of our request handlers. The only asynchronous action used by these handlers is the form-parsing behavior. However, the likelihood of this remaining the only asynchronous action in any given handler is small. Especially as our application grows, we're going to start depending on more core NodeJS functionality, which means we're going to want to wrap more asynchronous callback-style code in promises. We'll probably start depending on external services too, either our own or third-party software.
Can we take our asynchronous architecture a step further and provide entry points into these handlers for those that wish to extend the system? For example, if the request is a create-chat request, send requests to any before-create-chat extensions that have been provided. Something like this is quite the undertaking and is error prone. But for larger systems that have many moving parts, all of them being asynchronous, it's best to look at standardizing on asynchronous entry points into the system.
Who's typing?
Something we left out of our chat application is the typing state for a given user. This is the mechanism that informs all other members of the chat that a particular user is typing a message and is present on just about every modern chat system.
What would it take for us to implement such a feature, given our current design? Is the polling mechanism enough to deal with such a constantly-changing state? Would the data model have to change much, and would such a change bring about problems with our request handlers?
Leaving chats
Another feature missing from our chat application is removing users that are no longer participating in the chat. For example, does it really make sense for other chat participants to see users in the chat that aren't really there? Would listening to an unload event and implementing a new leave-chat API endpoint suffice, or is there a lot more to it than this?
Polling timeouts
The chat application that we've just built does little to no error handling. One case in particular that's worth fixing is killing the polling mechanism when it times out. By this, we mean preventing the client from endlessly repeating failed request attempts. Let's say the server is down, or the handler is simply failing because of a bug that was introduced; do we want the poller to just spin indefinitely? We don't, and there's something we can do about it.
For example, we would need to cancel the interval that was set up with the call to setInterval() when polling started. Likewise, we would need a means of tracking the number of successive failed attempts, so that we know when to shut the poller off.
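Here's a sketch of how the startPolling() function from earlier might be hardened along these lines; the failure threshold is arbitrary:

function startPolling(chatId) {
    let failures = 0;

    const poller = setInterval(() => {
        api.postMessage({ action: 'getChat', chatId })
            .then((chat) => {
                // A successful poll resets the failure count.
                failures = 0;
                drawChat(chat);
            })
            .catch(() => {
                // Too many successive failures; assume the
                // server is unreachable and shut the poller off.
                if (++failures >= 5) {
                    clearInterval(poller);
                }
            });
    }, 3000);
}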