Issue #42 resolved

RealtimeEnvironment self.real_start set too early

Anonymous created an issue

When creating a simpy.rt.RealtimeEnvironment object, the self.real_start property is set upon initialization. This results in a huge sim_delta time in the first call to RealtimeEnvironment.step(), which in turn causes the delay value to come back negative, finally resulting in either a thrown exception (if self.strict == True) or a sped-up environment run until the runtime clock catches up with the realtime clock (if self.strict == False).
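
For illustration, a minimal sketch of how this can show up (the sleep() just stands in for any slow setup work between creating the environment and calling run()):

import time

from simpy.rt import RealtimeEnvironment

def clock(env):
    while True:
        yield env.timeout(1)

env = RealtimeEnvironment(factor=1, strict=True)  # real_start is recorded here
env.process(clock(env))

time.sleep(5)  # slow initialization: the wallclock runs ahead of real_start

env.run(until=3)  # the first step sees the wallclock far ahead and raises RuntimeError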

I wrote a quick fix for myself (I needed a realtime clock simulator for a project) which seems to work fine. Basically, just set self.real_start in an overridden run() method, so that the realtime clock and the runtime clock both start at approximately the same time. Thanks for your work!

from time import time

def run(self, *args, **kwargs):
    # Reset real_start right before stepping so both clocks start together.
    self.real_start = time()
    super(RealtimeEnvironment, self).run(*args, **kwargs)

Comments (9)

  1. Stefan Scherfke

    As far as I can see, this is the intended behavior. Maybe not the desired behavior in your case, but intended. ;-)

    The real-time clock starts running as soon as you create the RtEnv. It also keeps running if you don’t call run() or step() for a while, so that the time of the RtEnv always stays in sync with the actual wallclock time.

    Assume the following: you have a process that creates timeout events with a delay of 1 each, and it needs 30ms to do this. at is the actual (wallclock) time passed, rt is the real time from the environment’s point of view, st is the simulation time.

    The behavior if we set real_start only in RtEnv.__init__():

    1. at = rt = 0ms – RealtimeEnvironment()
    2. at = rt = 30ms – env.run(until=1)
    3. Next event is at st=1, so sleep for 970ms
    4. at = rt = 1000ms – process event
    5. at = rt = 1030ms
    6. at = rt = 1040ms – call env.run(until=2) # Assuming we need 10ms between run() calls
    7. Next event is at st=2, so sleep for 960ms
    8. at = rt = 2000ms – process event
    9. at = rt = 2030ms

    If we reset real_start (and env_start!) in run():

    1. at = rt = 0ms – RealtimeEnvironment()
    2. at = 30ms, rt = 0ms – env.run(until=1)
    3. Next event is at st=1, so sleep for 1000ms
    4. at = 1030ms, rt = 1000ms – process event
    5. at = 1060ms, rt = 1030ms
    6. at = 1070ms, rt = 0ms – call env.run(until=2) # Assuming we need 10ms between run() calls
    7. Next event is at st=2, so sleep for 1000ms
    8. at = 2070ms, rt = 1000ms – process event
    9. at = 2100ms, rt = 1030ms

    Now your code would have needed 2100ms to execute, while the actual simulation would have needed 2060ms. The problem here is that the second call to run() couldn’t take into account the time that was needed to process the events from the first run() call – we waited for 1000ms instead of 970ms.
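
    A small sketch of that effect (assuming simpy.rt from SimPy 3; the sleep() stands in for the ~10ms of work between the two run() calls):

    from time import sleep, time

    from simpy.rt import RealtimeEnvironment

    def proc(env):
        while True:
            yield env.timeout(1)

    env = RealtimeEnvironment(factor=1, strict=False)
    env.process(proc(env))

    start = time()
    env.run(until=1)
    sleep(0.01)  # work between the two run() calls
    env.run(until=2)
    print('wallclock:', time() - start)  # stays close to 2s because real_start is fixed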

    Imho, the behavior of the current version is more "right" and could only become a problem if you write your rtsim in a Python shell where you let quite a lot of time pass between creating the environment and your first run() call.

    If you really want to reset real_start in every run() call, you also need to reset self.env_start to the current simulation time, or you’ll have problems when you call run() multiple times.

  2. Stefan Scherfke

    Something like this should do it for you:

    from time import time

    from simpy.rt import RealtimeEnvironment

    class ResettingRealtimeEnvironment(RealtimeEnvironment):
        def run(self, *args, **kwargs):
            # Re-align simulation start and real-time start before stepping.
            self.env_start = self.now
            self.real_start = time()
            super().run(*args, **kwargs)
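
    With that class, a slow setup phase between creating the environment and calling run() no longer trips the strict check, for example:

    from time import sleep

    def tick(env):
        while True:
            yield env.timeout(1)

    env = ResettingRealtimeEnvironment(factor=1, strict=True)
    env.process(tick(env))
    sleep(2)          # slow initialization between creation and run()
    env.run(until=1)  # clocks are re-synced at the start of run()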
    
  3. luensdorf

    I agree with Stefan, but also acknowledge that there's a problem if you've got a time-consuming simulation initialization phase. The environment is often created very early, since it has to be passed into many objects, so the initialization time of those objects is also taken into account.

    If you only want to ignore the initialization time, it should be safe to just set env.real_start to time() before your first call to step() or run():

    from time import time

    from simpy.rt import RealtimeEnvironment

    env = RealtimeEnvironment()

    # ... Initialization stuff.

    env.real_start = time()  # ignore the time spent on initialization
    env.run()
    
  4. luensdorf

    I can think of two ways to solve this more elegantly:

    The first one is close to the one above: just provide a method to conveniently synchronize both clocks again:

    from simpy.rt import RealtimeEnvironment

    env = RealtimeEnvironment()

    # ... Initialization stuff.

    env.sync()  # proposed method: re-align real_start (and env_start) with now
    env.run()
    

    Secondly, based on the idea of anonymous and assuming that run() is usually called only once, we could add a sync flag to it:

    RealtimeEnvironment.run(until=None, sync=True)

    If sync is True, run() will synchronize the clocks just before processing events.

    What do you think? Maybe even combine both ideas?
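
    A rough sketch of how such a combination could look (hypothetical names, just to illustrate the idea):

    from time import time

    from simpy.rt import RealtimeEnvironment

    class SyncingRealtimeEnvironment(RealtimeEnvironment):
        def sync(self):
            # Re-align the real-time reference with the current simulation time.
            self.env_start = self.now
            self.real_start = time()

        def run(self, until=None, sync=True):
            if sync:
                self.sync()
            return super().run(until=until)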

  5. luensdorf

    run() needs some way to synchronize. This synchronization can be exposed to the user as a sync() method, in case they work with step() instead of run().
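
    For example, a step()-based loop could then look like this (using the hypothetical SyncingRealtimeEnvironment sketched above; EmptySchedule is SimPy 3's signal that no more events are scheduled):

    from simpy.core import EmptySchedule

    env = SyncingRealtimeEnvironment()

    # ... Initialization stuff.

    env.sync()  # align both clocks right before driving the simulation
    try:
        while True:
            env.step()  # process one event at a time, paced by the real-time clock
    except EmptySchedule:
        pass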
