hmbdc
simplify-high-performance-messaging-programming
hmbdc::app::BlockingContext< MessageTuples > Class Template Reference

A BlockingContext is like a medium that facilitates the communication for the Clients that it is holding. Each Client is powered by a single OS thread. A Client needs to be started once and only once in a single BlockingContext before any message sending happens - typically in the initialization stage in main(); the behavior is undefined otherwise. More...

#include <BlockingContext.hpp>

Classes

struct  can_handle
 
struct  createEntry
 
struct  deliverAll
 
struct  MCGen
 
struct  MCGen< std::tuple< Messages...> >
 
struct  setupConduit
 
struct  setupConduit< MsgConduits, DeliverPred, std::tuple< M, Messages...> >
 
struct  setupConduit< MsgConduits, void *, std::tuple< M, Messages...> >
 
struct  Transport
 
struct  TransportEntry
 

Public Member Functions

template<typename U = Requests>
 BlockingContext (typename std::enable_if< std::tuple_size< U >::value==0, void >::type *=nullptr)
 trivial ctor
 
template<typename Client , typename DeliverPred = deliverAll>
void start (Client &c, size_t capacity=1024, size_t maxItemSize=max_size_in_tuple< typename Client::Interests >::value, uint64_t cpuAffinity=0, time::Duration maxBlockingTime=time::Duration::seconds(1), DeliverPred &&pred=DeliverPred())
 start a client within the Context. The client is powered by a single OS thread. More...
 
template<typename LoadSharingClientPtrIt , typename DeliverPred = deliverAll>
void start (LoadSharingClientPtrIt begin, LoadSharingClientPtrIt end, size_t capacity=1024, size_t maxItemSize=max_size_in_tuple< typename std::decay< decltype(**LoadSharingClientPtrIt())>::type::Interests >::value, uint64_t cpuAffinity=0, time::Duration maxBlockingTime=time::Duration::seconds(1), DeliverPred &&pred=DeliverPred())
 start a group of Clients within the Context that collectively process messages in a load-sharing manner. Each Client is powered by a single OS thread. More...
 
void stop ()
 stop the message dispatching - asynchronously More...
 
void join ()
 wait until all threads of the Context exit More...
 
template<MessageC Message>
std::enable_if< can_handle< Message >::match, void >::type send (Message &&m)
 send a message to the BlockingContext to dispatch More...
 
template<MessageC Message, typename... Args>
std::enable_if< can_handle< Message >::match, void >::type sendInPlace (Args &&...args)
 send a message by directly constructing it in the receiving message queue More...
 
template<MessageC Message>
std::enable_if< can_handle< Message >::match, bool >::type trySend (Message &&m, time::Duration timeout=time::Duration::seconds(0))
 best effort to send a message via the BlockingContext More...
 
template<MessageC Message, typename... Args>
std::enable_if< can_handle< Message >::match, bool >::type trySendInPlace (Args &&...args)
 best effort to send a message by directly constructing it in the receiving message queue More...
 
template<MessageForwardIterC ForwardIt>
std::enable_if< can_handle< decltype(*(ForwardIt()))>::match, void >::type send (ForwardIt begin, size_t n)
 send a range of messages via the BlockingContext More...
 
template<MessageC M, typename ReplyHandler >
bool requestReply (REQUEST< M > &&request, ReplyHandler &replyHandler, time::Duration timeout=time::Duration::seconds(0xffffffff))
 part of the synchronous request-reply interface. The caller is blocked until the replies to this request are heard and processed. Just like regular Messages, the REQUEST is sent out to the Clients in this Context that are interested in it. More...
 
template<ReplyC ReplyT>
void sendReply (ReplyT &&reply)
 part of the synchronous request-reply interface. Send a constructed REPLY to the Context; it will only reach the original requesting Client. ALL REPLYs need to be sent out using this interface to avoid undefined behavior. More...
 

Detailed Description

template<MessageTupleC... MessageTuples>
class hmbdc::app::BlockingContext< MessageTuples >

A BlockingContext is like a medium that facilitates the communication for the Clients that it is holding. Each Client is powered by a single OS thread. A Client needs to be started once and only once in a single BlockingContext before any message sending happens - typically in the initialization stage in main(); the behavior is undefined otherwise.

A Client running in such a BlockingContext utilizes the OS's blocking mechanism and takes less CPU time. The Client's response time scales better when the number of Clients greatly exceeds the available CPUs in the system and the effective message rate per Client tends to be low.

Template Parameters
MessageTuples - std::tuple types capturing the Messages that the Context is supposed to deliver. Messages that are not listed here are silently dropped to ensure loose coupling between senders and receivers
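
A minimal end-to-end sketch of the above. Not everything here comes from this page: the hasTag<> base, the handleMessageCb callback, and the tag value are assumptions drawn from typical hmbdc Client code, MyMsg and MyClient are hypothetical placeholders, and the include path may differ per installation.

#include <BlockingContext.hpp>  // header name as listed on this page; other hmbdc
                                // headers (Client, Message) are needed but not shown
#include <tuple>

struct MyMsg : hmbdc::app::hasTag<1001> {   // hypothetical Message type
    int payload = 0;
};

struct MyClient : hmbdc::app::Client<MyClient, MyMsg> { // hypothetical Client
    void handleMessageCb(MyMsg const& m) { /* consume m.payload */ }
};

int main() {
    // only Message types listed in the tuples are deliverable; others are dropped
    hmbdc::app::BlockingContext<std::tuple<MyMsg>> ctx;
    MyClient client;
    ctx.start(client);   // start once and only once, before any sending
    ctx.send(MyMsg{});   // dispatched to Clients interested in MyMsg
    ctx.stop();          // stop dispatching, asynchronously (real code runs much longer)
    ctx.join();          // wait until all Context threads exit
}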

Member Function Documentation

template<MessageTupleC... MessageTuples>
void hmbdc::app::BlockingContext< MessageTuples >::join ( )
inline

wait until all threads of the Context exit

blocking call

template<MessageTupleC... MessageTuples>
template<MessageC M, typename ReplyHandler >
bool hmbdc::app::BlockingContext< MessageTuples >::requestReply ( REQUEST< M > &&  request,
ReplyHandler &  replyHandler,
time::Duration  timeout = time::Duration::seconds(0xffffffff) 
)
inline

part of the synchronous request-reply interface. The caller is blocked until the replies to this request are heard and processed. Just like regular Messages, the REQUEST is sent out to the Clients in this Context that are interested in it.

Template Parameters
M - the Message type wrapped in the REQUEST<> template
ReplyHandler - the type deciding how to handle the replies. Example of a handler expecting a two-step reply to a requestReply call:
struct ClientAlsoHandlesReplies
: Client<ClientAlsoHandlesReplies, REPLY<Reply0>, REPLY<Reply1>, OtherMsg> {
    void handleReplyCb(Reply0 const&) {
        // got step 1 reply
    }
    void handleReplyCb(Reply1 const&) {
        // got step 2 reply
        throw hmbdc::ExitCode(0); // all replies received, now
                                  // throw to unblock the requestReply call
    }
};
Parameters
request - the REQUEST sent out
replyHandler - the callback logic to handle replies
timeout - timeout value
Returns
true if replies are received and handled, false if timed out
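
A hedged usage sketch wiring the handler above into requestReply. Request0 is a hypothetical Message type, ctx is a BlockingContext whose tuples include the request and reply types, and how the REQUEST payload is constructed is not detailed on this page.

ClientAlsoHandlesReplies replyHandler;
// blocks until handleReplyCb throws hmbdc::ExitCode (all replies in) or the timeout expires
bool ok = ctx.requestReply(REQUEST<Request0>{/* request payload, construction not shown here */}
    , replyHandler
    , hmbdc::time::Duration::seconds(5));
if (!ok) { /* timed out before all replies were handled */ }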
template<MessageTupleC... MessageTuples>
template<MessageC Message>
std::enable_if<can_handle<Message>::match, void>::type hmbdc::app::BlockingContext< MessageTuples >::send ( Message &&  m)
inline

send a message to the BlockingContext to dispatch

only the Clients that handle the Message will get it. This function is thread-safe, which means you can call it anywhere in the code

Template Parameters
Message - the Message type
Parameters
m - the message
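
A short usage fragment, reusing the hypothetical MyMsg and ctx from the earlier sketch; send takes a forwarding reference, so both lvalues and temporaries work:

MyMsg m;
m.payload = 42;
ctx.send(m);   // reaches only the Clients whose Interests include MyMsg;
               // safe to call from any thread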
template<MessageTupleC... MessageTuples>
template<MessageForwardIterC ForwardIt>
std::enable_if<can_handle<decltype(*(ForwardIt()))>::match, void>::type hmbdc::app::BlockingContext< MessageTuples >::send ( ForwardIt  begin,
size_t  n 
)
inline

send a range of messages via the BlockingContext

  • only checking with DeliverPred using the first message

only the Clients that handle the Message will get it, of course. This function is thread-safe, which means you can call it anywhere in the code

Parameters
begin - a forward iterator pointing at the start of the range
n - length of the range
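
A hedged fragment sending a small batch, again with the hypothetical MyMsg and ctx (requires <vector>):

std::vector<MyMsg> batch(16);            // 16 default-constructed messages
ctx.send(batch.begin(), batch.size());   // DeliverPred is consulted for the first message only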
template<MessageTupleC... MessageTuples>
template<MessageC Message, typename... Args>
std::enable_if<can_handle<Message>::match, void>::type hmbdc::app::BlockingContext< MessageTuples >::sendInPlace ( Args &&...  args)
inline

send a message by directly constructing it in the receiving message queue

since the message only exists after being delivered, the DeliverPred is not called to decide if it should be delivered

Parameters
args - ctor args
Template Parameters
Message - the Message type
Args - ctor arg types
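
A hedged fragment with the hypothetical MyMsg: the message is constructed directly inside each receiving Client's queue, so no caller-side MyMsg instance is created first:

ctx.sendInPlace<MyMsg>();   // forwards ctor args (none for the default-constructible MyMsg)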
template<MessageTupleC... MessageTuples>
template<ReplyC ReplyT>
void hmbdc::app::BlockingContext< MessageTuples >::sendReply ( ReplyT &&  reply)
inline

part of the synchronous request-reply interface. Send a constructed REPLY to the Context; it will only reach the original requesting Client. ALL REPLYs need to be sent out using this interface to avoid undefined behavior.

Template Parameters
Message - the type wrapped in the REPLY<> template
Parameters
reply - the rvalue REPLY (previously constructed from a REQUEST) to be sent out
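
A hedged fragment of the replying side; how the REPLY is constructed from the incoming REQUEST is not shown on this page, so that step stays a placeholder (requires <utility> for std::move):

// 'reply' is a REPLY<Reply0> previously constructed from the received REQUEST
ctx.sendReply(std::move(reply));   // routed back to the original requesting Client only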
template<MessageTupleC... MessageTuples>
template<typename Client , typename DeliverPred = deliverAll>
void hmbdc::app::BlockingContext< MessageTuples >::start ( Client &  c,
size_t  capacity = 1024,
size_t  maxItemSize = max_size_in_tuple<typename Client::Interests>::value,
uint64_t  cpuAffinity = 0,
time::Duration  maxBlockingTime = time::Duration::seconds(1),
DeliverPred &&  pred = DeliverPred() 
)
inline

start a client within the Context. The client is powered by a single OS thread.

it is OK if a Client blocks; as long as its own buffer is not full, it doesn't affect other Clients' ability to receive Messages

Template Parameters
Client - actual Client type
DeliverPred - a condition functor deciding if a message should be delivered to a Client; it provides filtering before the Message type filtering. It improves performance in that it can potentially reduce how often the thread is unblocked

Usage example: the following Pred verifies that srcId matches for Response, and lets all other Message types deliver

struct Pred {
    Pred(uint16_t srcId)
    : srcId(srcId){}

    bool operator()(Response const& resp) {
        return resp.srcId == srcId;
    }
    template <typename M>
    bool operator()(M const&) {return true;}

    uint16_t srcId;
};

Parameters
c - the Client
capacity - the maximum number of messages this Client can buffer
maxItemSize - the max size of a message: when the Client is interested in JustBytes, this value MUST be big enough to hold the max size message to avoid truncation
cpuAffinity - cpu affinity mask for this Client's thread
maxBlockingTime - it is recommended to limit the duration a Client blocks due to no messages to handle, so it can respond to things like the Context being stopped, or generate heartbeats if applicable
pred - see DeliverPred documentation
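
A hedged sketch wiring the Pred above into start(). Assumptions not on this page: responseClient is a hypothetical Client whose Interests include Response, Response is its largest Interest, mySrcId is this requester's id, and ctx is a BlockingContext whose tuples include Response.

ctx.start(responseClient
    , 1024                                // capacity (the default shown above)
    , sizeof(Response)                    // maxItemSize; must cover the largest Interest
    , 0x01                                // cpuAffinity mask: pin the thread to CPU 0
    , hmbdc::time::Duration::seconds(1)   // maxBlockingTime (the default shown above)
    , Pred{mySrcId});                     // only matching Responses reach responseClient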
template<MessageTupleC... MessageTuples>
template<typename LoadSharingClientPtrIt , typename DeliverPred = deliverAll>
void hmbdc::app::BlockingContext< MessageTuples >::start ( LoadSharingClientPtrIt  begin,
LoadSharingClientPtrIt  end,
size_t  capacity = 1024,
size_t  maxItemSize = max_size_in_tuple< typename std::decay<decltype(**LoadSharingClientPtrIt())>::type::Interests >::value,
uint64_t  cpuAffinity = 0,
time::Duration  maxBlockingTime = time::Duration::seconds(1),
DeliverPred &&  pred = DeliverPred() 
)
inline

start a group of Clients within the Context that collectively process messages in a load-sharing manner. Each Client is powered by a single OS thread.

it is OK if a Client blocks; as long as its own buffer is not full, it doesn't affect other Clients' ability to receive Messages

Template Parameters
LoadSharingClientPtrIt - iterator to a Client pointer
DeliverPred - a condition functor deciding if a message should be delivered to these Clients; it provides filtering before the Message type filtering. It improves performance in that it can potentially reduce how often the threads are unblocked. An exception is Clients that are interested in JustBytes (basically every message): this filter does not apply to them - no filtering for these Clients
Parameters
begin - an iterator; **begin should produce a Client& for the first Client
end - an end iterator; [begin, end) is the range of Clients
capacity - the maximum number of messages each Client can buffer
maxItemSize - the max size of a message: when the Clients are interested in JustBytes, this value MUST be big enough to hold the max size message to avoid truncation
cpuAffinity - cpu affinity mask for these Clients' threads
maxBlockingTime - it is recommended to limit the duration a Client blocks due to no messages to handle, so it can respond to things like the Context being stopped, or generate heartbeats if applicable
pred - see DeliverPred documentation
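
A hedged sketch: three hypothetical Worker Clients share one message stream instead of each seeing every message; ctx is assumed to deliver the Workers' Interests (requires <memory> and <vector>):

std::vector<std::unique_ptr<Worker>> workers;
for (int i = 0; i < 3; ++i) workers.push_back(std::make_unique<Worker>());
// dereferencing a unique_ptr iterator twice (**begin) yields a Worker&, as required above
ctx.start(workers.begin(), workers.end());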
template<MessageTupleC... MessageTuples>
void hmbdc::app::BlockingContext< MessageTuples >::stop ( )
inline

stop the message dispatching - asynchronously

asynchronously means it is not guaranteed that message dispatching stops immediately after this non-blocking call

template<MessageTupleC... MessageTuples>
template<MessageC Message>
std::enable_if<can_handle<Message>::match, bool>::type hmbdc::app::BlockingContext< MessageTuples >::trySend ( Message &&  m,
time::Duration  timeout = time::Duration::seconds(0) 
)
inline

best effort to send a message via the BlockingContext

this method is not recommended if more than one recipient accepts this message, since there is no guarantee each one will receive it once and only once. This call does not block - it returns false when delivery doesn't reach all intended recipients. This method is thread-safe, which means you can call it anywhere in the code

Parameters
m - message
timeout - return false if it cannot deliver within the specified time
Returns
true if sent successfully to every intended receiver
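
A hedged fragment with the hypothetical MyMsg and ctx as before; with the default zero timeout the call returns immediately:

if (!ctx.trySend(MyMsg{})) {
    // delivery did not reach every intended receiver - drop, retry, or log here
}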
template<MessageTupleC... MessageTuples>
template<MessageC Message, typename... Args>
std::enable_if<can_handle<Message>::match, bool>::type hmbdc::app::BlockingContext< MessageTuples >::trySendInPlace ( Args &&...  args)
inline

best effort to send a message by directly constructing it in the receiving message queue

since the message only exists after being delivered, the DeliverPred is not called to decide if it should be delivered. This method is not recommended if more than one recipient accepts this message, since it is hard to ensure each message is delivered once and only once. This call does not block - it returns false when delivery doesn't reach ALL intended recipients. This method is thread-safe, which means you can call it anywhere in the code

Returns
true if sent successfully to every intended receiver
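
A hedged fragment, analogous to sendInPlace but best-effort (hypothetical MyMsg and ctx as before):

if (!ctx.trySendInPlace<MyMsg>()) {
    // at least one intended receiver's queue could not take the message
}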

The documentation for this class was generated from the following file: