Issue #12 new

setup_node should build virtualenv remotely

Solomon Hykes
created an issue

setup_node builds a virtualenv locally, then pushes it to the node. That means

Applications with large dependencies take forever to transfer

Identical architecture and binary format are required on client and server. I can't push from Mac OS X, for example.

It would make much more sense to copy the application, THEN build the virtualenv on the node.
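A minimal sketch of the proposed flow, written as a dry run that only prints the commands it would execute. The host and paths are placeholders, not real silver options:

```shell
# Hypothetical copy-then-build flow: sync only the application sources,
# then create the virtualenv on the node itself so compiled packages
# match the node's architecture. HOST and APP are illustrative.
HOST=node.example.com
APP=/var/www/myapp

deploy() {
    # 1. Sync sources only -- exclude the local virtualenv's bin/ and lib/.
    echo "rsync -az --exclude=bin --exclude=lib ./ $HOST:$APP/"
    # 2. Build the virtualenv remotely and install dependencies there.
    echo "ssh $HOST 'virtualenv $APP/env && $APP/env/bin/pip install -r $APP/requirements.txt'"
}

deploy
```

Because the build happens on the node, only source files cross the wire, which also addresses the transfer-time complaint for apps with large dependencies.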

Comments (5)

  1. ericmoritz

    +1 to this. I've gotten bitten by this as well. My local machine is 32-bit but my server is 64-bit, so Python never works when it's copied up to the server.

  2. Ian Bicking repo owner

    Well, one point of accuracy: it would be one virtualenv per app, i.e., built during silver update.

    The files that get sync'd over from the virtualenv are essentially cruft; they are not really intentional. Any binary packages should be installed globally on your development system (e.g., using Homebrew) and installed remotely by using the packages setting in app.ini (finding the equivalent Ubuntu package for your library).

    I have extended silver run in a recent commit, so you can run any script, e.g., silver run LOCATION src/myapp/manage.py.

    It is unfortunately hard to get pip to install into separate locations for packages that should be transferred and packages that shouldn't, because of how distutils options work (or don't work). So it would be ideal if you could install packages in one location for transfer, and another for dev-local use.
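For reference, the packages setting mentioned above might look something like this in app.ini. This is only a sketch: the section name and the package names are illustrative, so check your own app.ini against the silverlining documentation:

```ini
[production]
app_name = myapp
runner = src/myapp/web.wsgi
# Ubuntu packages to install on the server, instead of shipping
# compiled binaries from the development machine:
packages = python-lxml python-psycopg2
```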

  3. Nate Aune

    I'm running into this problem as well because I want to run celeryd on the server, which is invoked with the command:

    $ ./manage.py celeryd

    This results in an error that it can't find django.core. That makes sense: it's trying to use /usr/bin/python, which doesn't know anything about Django, since Django is not installed globally but only in the virtualenv. But if I activate the virtualenv and run manage.py with its python, it complains because the bin/python that got rsynced over is 32-bit and the Rackspace machine is 64-bit.

    So my question is, what's the best way to run long-running processes on the server that require all the environment variables to be set? If we use the silver run command, that would typically have to be invoked from our local machine, and not the server.

  4. Nate Aune

    It seems that one way to do this is with this command on the server:

    nohup /usr/local/share/silverlining/mgr-scripts/run-command.py some.domain.name NONE manage.py celeryd

    But this seems like a weird way to do it, rather than just running the ./manage.py command in the virtualenv.
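If the virtualenv on the server matches the server's architecture, a plainer approach is to point nohup at the app's own interpreter rather than going through run-command.py. This is only a sketch with made-up paths, not a documented silver workflow; it builds the command as a dry run:

```shell
# Hypothetical long-running-process launcher: use the app's own
# virtualenv python so django.core is importable, detach with nohup.
# All paths are illustrative.
APP=/var/www/myapp
CMD="nohup $APP/env/bin/python $APP/manage.py celeryd"
echo "$CMD"   # dry run; drop the echo to actually start the worker
```

Using the virtualenv's interpreter directly avoids activating the environment in a login shell, which is what trips over the 32-bit bin/python described above.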
