How would this work?
I admit I'm still a novice at web architectures, having spent most of my time on mobile and now Windows clients, but I have a few ideas.
The paper Ralph linked to was my first exposure to CSP, and it's rather interesting. Basically, you organize your program into a pipeline, with each stage in a separate thread so that the stages can operate independently. This is the same technique a processor core uses to exploit parallelism without actually being parallel. As long as you can keep all of the stages active, you can increase your throughput without having to find a way to speed up the task itself.
What does this do for the web? Well, consider a request broken down into stages such as the following (there's a sketch in code after the list):
- Receiving the request
- Authorizing the user
- Looking up the data
- Rendering the template
- Returning the response
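To make that concrete, here's a minimal sketch of such a pipeline in Go, whose channels are a direct descendant of CSP. The request fields and the work done in each stage are made up for illustration; the point is only that each stage runs in its own thread of control, so a different request can be sitting in every stage at the same time.

```go
package main

import "fmt"

// request carries a hypothetical web request through the pipeline;
// the fields and the per-stage work are illustrative only.
type request struct {
	id       int
	user     string
	data     string
	rendered string
}

// stage wraps one step of the pipeline as a goroutine that reads from in,
// applies fn, and passes the request along, so every stage can be working
// on a different request at the same time.
func stage(in <-chan request, fn func(*request)) <-chan request {
	out := make(chan request)
	go func() {
		defer close(out)
		for r := range in {
			fn(&r)
			out <- r
		}
	}()
	return out
}

func main() {
	// Stage 1: receive the requests.
	received := make(chan request)
	go func() {
		defer close(received)
		for i := 1; i <= 3; i++ {
			received <- request{id: i}
		}
	}()

	// Stages 2-4: authorize the user, look up the data, render the template.
	authorized := stage(received, func(r *request) { r.user = "alice" })
	loaded := stage(authorized, func(r *request) { r.data = "profile for " + r.user })
	rendered := stage(loaded, func(r *request) { r.rendered = "<html>" + r.data + "</html>" })

	// Stage 5: return the responses.
	for r := range rendered {
		fmt.Printf("request %d -> %s\n", r.id, r.rendered)
	}
}
```

With five stages, at most five requests can be in flight at once, which is where the limit in the next paragraph comes from.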
The question I'm still grappling with is: what if you want to scale beyond five concurrent requests (one request in flight per stage)? Assuming you can't expand the pipeline into more stages, I can see either partitioning the application so that different services are handled by different pipelines, or replicating the pipeline. The second option eliminates the benefit of CSP by creating multiple processes that access the data in step three concurrently. The first option, on the other hand, can only be applied to completely independent services, so it doesn't really help us scale the original service.
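Here's a rough sketch of what I mean by replicating the pipeline, with each replica collapsed into a single goroutine and a plain map standing in for the data from step three; the wiring is my own guess, not something from the paper. The thing to notice is that the replicas now share the store, so we're back to locking around data access, which is exactly what the single pipeline let us avoid.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Several identical "pipelines" pull requests from one shared channel.
	// Each replica here is collapsed into a single goroutine for brevity.
	requests := make(chan int)
	store := map[int]string{1: "a", 2: "b", 3: "c"} // the data from step three
	var mu sync.Mutex                               // needed once replicas share the store
	var wg sync.WaitGroup

	for replica := 0; replica < 4; replica++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for req := range requests {
				mu.Lock() // concurrent access the single pipeline never needed
				data := store[req]
				mu.Unlock()
				fmt.Printf("replica %d served request %d with %q\n", id, req, data)
			}
		}(replica)
	}

	for i := 1; i <= 3; i++ {
		requests <- i
	}
	close(requests)
	wg.Wait()
}
```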
So while CSP looks like it will help spread an application across the cores of a single machine, it seems we still need concurrent access to the data if we're going to scale a web application horizontally.
We know that storing objects in flat files can work for a web application, and we know that providing a versioning mechanism can allow concurrent access to data. Perhaps we could simply serialize each object to disk on a file system that can be safely shared across machines? We would, of course, have to organize the files in a way that doesn't overwhelm the file system with too many files, or with files that are too large.
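As a rough sketch of what that might look like, here each object is serialized to JSON and hashed into two levels of subdirectories so that no single directory accumulates too many files. The layout, the names, and the Version field (a placeholder for the versioning mechanism) are all my own assumptions, not anything prescribed above.

```go
package main

import (
	"crypto/sha1"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// object is a stand-in for whatever the application would persist.
type object struct {
	ID      string `json:"id"`
	Version int    `json:"version"` // placeholder for the versioning mechanism
	Body    string `json:"body"`
}

// pathFor spreads objects across two levels of hashed subdirectories so no
// single directory collects too many files.
func pathFor(root, id string) string {
	hash := sha1.Sum([]byte(id))
	sum := fmt.Sprintf("%x", hash[:])
	return filepath.Join(root, sum[0:2], sum[2:4], id+".json")
}

// save serializes the object to its own file on the (possibly shared) file system.
func save(root string, o object) error {
	p := pathFor(root, o.ID)
	if err := os.MkdirAll(filepath.Dir(p), 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(o, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(p, data, 0o644)
}

func main() {
	o := object{ID: "user-42", Version: 1, Body: "example payload"}
	if err := save("data", o); err != nil {
		fmt.Println("save failed:", err)
		return
	}
	fmt.Println("wrote", pathFor("data", o.ID))
}
```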