Remove globals from the nfer algorithm
Issue #51
resolved
Right now, there are exactly two globals that are used in the nfer monitoring algorithm. Both relate to so-called "performance optimizations":
- timestamp opt_window_size – this defines the size of the window in which nfer compares intervals. The default value NO_WINDOW disables windowing.
- bool opt_full – this disables the minimality restriction when it is true.
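For context, the current declarations presumably look something like the sketch below; only the names, types, and the NO_WINDOW default come from the description above, the initializers and file-scope placement are assumed.

/* Assumed current state: file-scope globals read by the monitoring algorithm. */
timestamp opt_window_size = NO_WINDOW; /* windowing disabled by default (assumed initializer) */
bool opt_full = false;                 /* minimality restriction enabled by default (assumed initializer) */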
There are several reasons why these should not be globals:
- They are actually dependent on the specification and it should be possible to have more than one specification in memory
- Using globals violates a number of coding standards
- Globals are gross (kidding… mostly)
Fixing this should not be hard! All I need to do is move the values into a config struct that could theoretically be extended to support more configuration. The parts of the algorithm that need these values should then take a pointer to such a struct as a parameter and refer to it, and the calling code just needs to set one up.
typedef struct _nfer_config {
    timestamp opt_window_size; /* window in which intervals are compared; NO_WINDOW disables windowing */
    bool      opt_full;        /* when true, disables the minimality restriction */
} nfer_config;
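As a rough sketch of how the pieces would fit together: only nfer_config, its fields, and NO_WINDOW come from above; the function names and other parameters below are hypothetical, and timestamp/bool/NO_WINDOW are assumed to come from nfer's existing headers.

/* Sketch only: the algorithm takes a pointer to the config instead of reading globals. */
void run_nfer(/* ... existing parameters ... */ nfer_config *config) {
    if (config->opt_window_size != NO_WINDOW) {
        /* restrict interval comparisons to the configured window */
    }
    if (!config->opt_full) {
        /* enforce the minimality restriction */
    }
}

/* Sketch only: the calling code sets up the struct once and passes it down. */
void example_caller(void) {
    nfer_config config;
    config.opt_window_size = NO_WINDOW; /* keep the current default */
    config.opt_full = false;
    run_nfer(&config);
}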
Comments (2)
- reporter: This should be pretty quick. Started branch specopts
- reporter - changed status to resolved: Easy and done