Slick

This is a project which plays with a mid-level NIO server framework sitting on top of XNIO. It started as a playground for Quasar fibers but has now evolved into a more all-round framework: a simpler API on top of XNIO.

At the moment this is pre-alpha, so don't expect too much.

Building

There's no official release, so if you want to play you need to build it first. You need Gradle and Java 8 to do that.

hg clone https://bitbucket.org/fungrim/slick
cd slick
gradle install

Concepts

A channel in Slick is a simple alternative to an NIO channel. Channels are created by Slick when someone connects, and Slick is responsible for setting up the channel listeners.

The channels issue read events when the underlying system has something to read. Or not: you may also get read events when there is nothing to read. So you need to watch out; if a read immediately returns 0 it means you should stop, because there is nothing there.

You can write to a channel at any time, but the write may be asynchronous; you'll have to add listeners for completion or errors.
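
To make that read contract concrete, here is a minimal sketch. The listener shape and the names (SlickChannel, ReadListener, setReadListener) are assumptions made for illustration only, not the actual Slick API; the point is the loop that stops as soon as a read returns 0:

import java.nio.ByteBuffer;

// hypothetical shapes, for illustration only; the real Slick channel API may differ
interface SlickChannel {
    int read(ByteBuffer dst); // assumed to return 0 when there is nothing to read
    void setReadListener(ReadListener listener);
}

interface ReadListener {
    void onReadable(SlickChannel channel);
}

class ReadLoopExample {
    static void install(SlickChannel channel) {
        channel.setReadListener(ch -> {
            ByteBuffer buf = ByteBuffer.allocate(512);
            // a read event does not guarantee data: keep reading until a read returns 0
            while (ch.read(buf) > 0) {
                buf.flip();
                // ... hand the bytes onwards here ...
                buf.clear();
            }
            // a read returned 0: stop and wait for the next read event
        });
    }
}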

The mid-level building blocks consist of faucets and pipes. With the XNIO channel to the left and the application code to the right, they line up like this for a server:

 upstream ->
------------------------------------------
| xnio | faucet | pipe* | server handler |
------------------------------------------
                             <- downstream
  • A faucet sits between the XNIO channel and the pipe and is responsible for reading bytes into something the pipe can consume; typically it chunks the byte stream into packets for the pipes to consume.
  • A pipe is a bi-directional event handler. It handles one or two types of events: one upstream towards the application code, and one downstream towards the network. Pipes are always symmetrical, i.e. they have the same types for input and output in both directions.

A pipe effectively works like a filter and data transformer on top of the bytes chunked by the faucet. There is one faucet per pipeline, and one or more pipes.
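
The pipe API itself is not shown in this document. Purely as an illustration, and assuming a hypothetical Pipe interface with one upstream and one downstream callback, a pipe that decodes fixed-size byte frames into strings and encodes them back might look roughly like this:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// hypothetical interface, for illustration only; the real Slick pipe API may differ
interface Pipe<N, A> {
    A upstream(N fromNetwork);       // towards the application code
    N downstream(A fromApplication); // towards the network
}

// symmetrical: ByteBuffer on the network side, String on the application side
class StringCodecPipe implements Pipe<ByteBuffer, String> {

    @Override
    public String upstream(ByteBuffer frame) {
        return StandardCharsets.UTF_8.decode(frame).toString();
    }

    @Override
    public ByteBuffer downstream(String message) {
        return StandardCharsets.UTF_8.encode(message);
    }
}

A server handler sitting upstream of such a pipe would then see strings rather than raw byte buffers.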

The server handler is where the application logic starts; it is mirrored client side by a client:

 downstream ->
----------------------------------
| client | pipe* | faucet | xnio |
----------------------------------
                      <- upstream

Each channel has a separate copy of the entire pipeline, so no synchronization is needed.

Server Example

Here's how to write a simple echo server, which takes a byte array of fixed length and writes it back to the caller.

First the logic. We're going to use a faucet that reads the byte chunks we're interested in and passes them through to the pipeline. As a convenience, ready-made faucets for fixed and variable frame sizes are already provided. In this example we're going to use a frame size of 8, i.e. the number of bytes sent with each request:

// fixed frame size of 8
FixedFrameFaucet framer = new FixedFrameFaucet(8);

This faucet will read bytes from XNIO, and when it has collected 8 of them it will pass them on to the first pipe. From this it follows that the first pipe has byte buffers as its downstream input and output. Now, in our example we're not going to use any pipes at all; we'll go straight to the server handler:

public class EchoServerHandler extends ServerHandler<ByteBuffer> {

    @Override
    public void handle(ByteBuffer message) {
        output.offer(message);  
    }
}

The 'output' is the downstream sink which will eventually send the bytes to the client. This is an asynchronous operation; if you're interested in the result, you can listen to a future:

IoFuture<IoStatus> future = output.offer(message);  
future.setListener((status, value) -> {
    System.out.println("Done sending! Status: "+ status.getState());
});

Of course, you can also wait for the future:

IoFuture<IoStatus> future = output.offer(message);  
IoStatus status = future.get();
System.out.println("Done sending! Status: "+ status.getState());

We need to set up a complete pipeline for each channel that connects to the server; this way we avoid any synchronization problems along the way. We'll use the server builder:

InetSocketAddress bindAddress = new InetSocketAddress("localhost", PORT);
Server server = ServerBuilder.newBuilder(bindAddress)
                        .connect(() -> new FixedFrameFaucet(8))
                        .toHandler(() -> new EchoServerHandler()).build();

The above will create a server that runs on a cached thread pool. We can customize this to run directly on the IO threads, as we're not doing any blocking:

InetSocketAddress bindAddress = new InetSocketAddress("localhost", PORT);
Server server = ServerBuilder.newBuilder(bindAddress)
                        .connect(() -> new FixedFrameFaucet(8))
                        .toHandler(() -> new EchoServerHandler())
                        .withThreadManager().set(new DirectThreadManager())
                        .build();

// start server
server.start();

Client Example

Here's a simple client to the echo server above:

InetSocketAddress serverAddress = new InetSocketAddress("localhost", PORT);
Client<ByteBuffer> client = ClientBuilder.newBuilder(serverAddress)
                                .connect(new FixedFrameFaucet(8))
                                .to((msg) -> System.out.println("Server response: " + msg))
                                .build();

// connect and wait for connection
// to finish
client.connect().get();
client.send(ByteBuffer.wrap(new byte[] { 0, 1, 2, 3, 4, 5, 6, 7 }));
// etc...