- ditch the in-process dispatcher; unnecessary complexity.
- Sweep and bootstrap param ranges
- Revise the swappable evaluator function protocol to pass all state in - agents, firms and params (perhaps the World object?)
- move wage attribute into being a relational property between worker and firm
- move stats out of World
- key is very unstable - seems to vary with code changes that are not param changes. WTF?
- Implement firm death?
- Implement firm replacement?
- Groysberg's data is based on partial information?
- Actual data parameters:
- 20000 workers, 400 firms, ca. 30 timesteps (annual rankings)
- More plausible schemes for culture vectors might be that there is an unknown "correct" mean culture vector, which firms may try to approximate. This has the potential for many local optima.
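One way to see the local-optima risk in that scheme: a minimal sketch, assuming a hypothetical multimodal payoff over distance to the unknown target vector and greedy hill-climbing firms (the ripple term and all names here are illustrative, not part of the model):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4
target = rng.normal(size=DIM)  # the unknown "correct" mean culture vector

def payoff(culture):
    """Hypothetical multimodal payoff: peaks at the target, but with
    cosine ripples that create many local optima along the way."""
    d = np.linalg.norm(culture - target)
    return -d + 0.5 * np.cos(4 * d)

def hill_climb(culture, steps=200, step_size=0.1):
    """A firm probes a random perturbation of its culture vector and
    keeps it only if payoff improves (greedy, so it can get stuck)."""
    for _ in range(steps):
        candidate = culture + rng.normal(scale=step_size, size=DIM)
        if payoff(candidate) > payoff(culture):
            culture = candidate
    return culture

start = rng.normal(size=DIM)
end = hill_climb(start)
```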
- record aggregate per-firm culture vector value in logs
- Base effectiveness can be based on a weighted geometric mean of aggregate culture vector and mean skill. (will need a more sophisticated version of marginal utility of a single firm)
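A minimal sketch of that base-effectiveness formula, assuming both inputs are positive scalars (the weight parameters are placeholders, not settled values):

```python
def base_effectiveness(agg_culture, mean_skill, w_culture=0.5, w_skill=0.5):
    """Weighted geometric mean of a firm's aggregate culture-vector
    value and its workers' mean skill. Both inputs must be positive
    for the geometric mean to be defined."""
    assert agg_culture > 0 and mean_skill > 0
    return agg_culture ** w_culture * mean_skill ** w_skill
```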
The simplest worker valuation function I can think of is for firms to learn a ratio between public valuation and hiring valuation (plus optional noise?) and adjust their bids accordingly.
or we just get them to learn the mean value of given agents based on their hires thus far, with some finite memory
- Can we get a variance estimate this way too?
- General function approximation problem here, categorical var (firm) plus continuous (profitability)
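The finite-memory scheme above could look something like this sketch; `HireValueEstimator` and the window size are invented names, and a sample variance falls out of the same window, which answers the variance question:

```python
from collections import defaultdict, deque

class HireValueEstimator:
    """Per-firm running estimate of worker value from its own hires,
    over a finite memory window (a deque with maxlen)."""
    def __init__(self, memory=20):
        self.observations = defaultdict(lambda: deque(maxlen=memory))

    def record(self, worker_id, realised_value):
        self.observations[worker_id].append(realised_value)

    def mean(self, worker_id):
        obs = self.observations[worker_id]
        return sum(obs) / len(obs) if obs else None

    def variance(self, worker_id):
        """Sample variance over the same window; None until 2 obs."""
        obs = self.observations[worker_id]
        if len(obs) < 2:
            return None
        m = sum(obs) / len(obs)
        return sum((x - m) ** 2 for x in obs) / (len(obs) - 1)
```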
Can we, should we, get them to value workers for themselves versus everyone else and bid accordingly, i.e. strategically?
No firm-private information at the moment. Should this be an open-kimono sim? How do we get asymmetrical judgements then?
more complex schemes are alarming - we know the dimensionality of the vectors the firms try to estimate, and the number of data points available. If firms know the true form of the functions they can behave optimally, but actually doing the inference would be a massive exercise in CPU wasting.
- although we have the luxury of knowing the data, so we could just use some convergence theorem to give the firms the true data plus a noise term. (notwithstanding that a "true" variation is dimensionally very large)
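That shortcut might be sketched as follows, assuming the firm's estimate tightens at the usual 1/sqrt(n) parametric rate (`base_sd` is an invented tuning knob, not a derived quantity):

```python
import random

def perceived_value(true_value, n_observations, base_sd=1.0, rng=random):
    """Stand-in for actual inference: hand the firm the true value
    plus Gaussian noise whose sd shrinks like 1/sqrt(n), mimicking a
    consistent estimator converging at the parametric rate."""
    sd = base_sd / (n_observations ** 0.5)
    return true_value + rng.gauss(0.0, sd)
```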
- but if we assume that the form is unknown to the firms, what the hell do they do?
- Shalizi-style dynamics of Bayesian updating? (i.e. replicator equation)
- raw info-theory one - we know how many bits there are to learn in each agent, so we can bound the convergence rate of the estimator if we know the model structure and it is mere parameter estimation. Can we bound the estimation accuracy by considering approximation theory?
real valuations would include the strategic component of the desirability of a worker to others (though a random bid in the tenable range is all I have done for now, and even that gets fiddly when the valuations are all different)
real valuations should be decision-theoretic, including a utility function
would it be plausible to have bids made in ignorance of current worker wages? probably not realistic.
implement risk aversion? (CRRA could lead to exponential growth)
- use ruffus to get repeat calcs under control