
Commits

fumanchu  committed 16ed577

Changed page WhatsNewIn21

  • Parent commits a6561ce
  • Branches default


Files changed (1)

File WhatsNewIn21.wiki

 }}}
 
 
- 5. Changes to [wiki:StaticContent21 static content]
- 6. New [wiki:ServerEnvironment "server.environment" modes]
- 7. New [wiki:FileUpload21 file upload behavior]
- 8. New [wiki:HTTPServers21 HTTP servers], and WSGI support
- 9. New [wiki:Profiler21 profiling] tool
- 10. New, reusable [wiki:TestSuite21 test suite]
+= Changes to static content =
+
+= New "server.environment" modes =
+
+= New file upload behavior =
+
+= New HTTP servers, and WSGI support =
+
+= New Profiler module =
+
+CherryPy 2.1 has a new profiler module. "Profiling" is a technique to improve your application by examining the time spent in each (Python) function. The CherryPy profiler module does three things:
+
+== 1. Collects profiling data for requests ==
+
+You can easily enable profiling per HTTP request in the config:
+
+{{{
+[global]
+profiling.on: True
+profiling.path: "/path/to/profile/dir"
+}}}
+
+When enabled, the profiler collects timing data. For each HTTP request, it creates a new file in the folder you specify. If you do not specify a path, profiling data is placed in cherrypy/lib/profile. The path you specify must be an absolute path. Each new request is given a filename with an incrementing number: the first file is named "cp_0001.prof", the second, "cp_0002.prof", and so on.
+
+Whenever you restart the CP process, the numbering will start again at 0001, and new profiling data will overwrite the old data. Be aware that CherryPy 2.1's autoreloader module may restart your process unexpectedly; set the config value "server.environment" to "production" to disable the autoreloader.
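+
+For example, combining the profiling settings above with the "production" environment mode should keep the autoreloader from restarting the process while you are collecting data:
+
+{{{
+[global]
+server.environment: "production"
+profiling.on: True
+profiling.path: "/path/to/profile/dir"
+}}}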
+
+It is also important to understand exactly which function calls are being examined. The profiling process must begin and end at some point; when you use the built-in profiling functionality, it does not include the time spent by the HTTP server in I/O, in request handling before control is passed to CherryPy, or in response handling after CherryPy has finished producing output. In addition, HTTP/1.1 requests which are served using generators (with the "yield" statement) will not collect timing information for those generator functions, because they are resolved outside of the CherryPy request, in the HTTP server or gateway.
+
+Finally, you should also be aware that the test suite included with CherryPy 2.1 collects profiling data by default. If you've never turned on the profiler, but find files in cherrypy/lib/profile, they may have been generated from a run of the test suite.
+
+== 2. Shows profiling data in your browser ==
+
+Once you have finished collecting profiling data, you ''may'' use the hotshot module, provided by Python, to examine and analyze it. However, that takes a bit of setup each time. For quick analysis, the CherryPy profiler module can format your data and show it to you in a browser. Simply execute cherrypy/lib/profiler.py as a script, and it will begin serving profiling data on port 8080 by default.
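+
+If you do want to inspect a data file by hand, a minimal session might look like the following sketch (the file name is hypothetical, and this assumes the .prof files can be read by Python's hotshot.stats module):
+
+{{{
+#!python
+# Manual analysis of one profile data file; the path below is made up.
+from hotshot import stats
+
+s = stats.load("/path/to/profile/dir/cp_0001.prof")
+s.sort_stats("cumulative")   # same ordering the browser view uses
+s.print_stats(20)            # show the 20 most expensive calls
+}}}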
+
+If you set a profiling.path when collecting data, you need to provide that same path to the profiler.py script as a command arg. For example, if you set profiling.path to "/var/www/myapp/profile", you should execute the profiler script as:
+
+{{{python profiler.py /var/www/myapp/profile 8000}}}
+
+As you can see, you may also specify a port other than 8080 in the second arg.
+
+Then, point your browser to http://localhost:8080 (or whichever port you chose; http://localhost:8000 for the example above). You'll be served a frameset, with a sidebar to navigate between profiling results (files), and a main frame to show you those results. Results are displayed in 'cumulative' order; that is, the functions which took the most time (including any calls ''they'' made) are displayed at the top of the list. The 'cum' column shows these times. The 'time' column shows the time each function took, ''without'' including the time any sub-functions took.
+
+== 3. Offers generic profiling services ==
+
+The profiler module is designed to be used in other parts of your application, should you desire it. In general, you should import the module and create a Profiler object as needed, then use the "run" method of that Profiler object. Example:
+
+{{{
+#!python
+import cherrypy
+from cherrypy.lib import profiler
+
+class Root:
+    p = profiler.Profiler("/path/to/profile/dir")
+    
+    def index(self):
+        # Profile the real handler; run() dumps timing data into the
+        # profile dir and (assuming it passes the return value through)
+        # hands the handler's output back to CherryPy.
+        return self.p.run(self._index)
+    index.exposed = True
+    
+    def _index(self):
+        return "Hello, world!"
+
+cherrypy.root = Root()
+}}}
+
+It's not limited to CherryPy page handlers! Use it for any process for which you want to show profiling results in your browser. Here's an example which tests multiple CP requests; the CP core is so fast that even the slow functions complete in less than a hundredth of a second, so this example aggregates 100 requests to produce more meaningful times:
+
+{{{
+#!python
+import cherrypy
+from cherrypy.lib import profiler
+
+
+class HelloWorld:
+    def index(self):
+        return "Hello world!"
+    index.exposed = True
+
+cherrypy.root = HelloWorld()
+conf = {'server.logToScreen': False,
+        'server.environment': 'production',
+        }
+cherrypy.config.update({'global': conf})
+cherrypy.server.start(initOnly=True)
+
+
+HOST = "127.0.0.1"
+PORT = 8000
+
+def run_requests():
+    for x in xrange(100):
+        cherrypy.server.request(HOST, HOST, "GET / HTTP/1.0",
+                                [("Host", "%s:%s" % (HOST, PORT))],
+                                "", "http")
+
+if __name__ == "__main__":
+    p = profiler.Profiler()
+    p.run(run_requests)
+}}}
+
+= New, reusable test suite =
+
+The test suite for !CherryPy 2.1 has been greatly improved. There are '''many''' new tests for both basic and advanced functionality. There are new command-line options to control which tests are run, and the output they produce. There are new debugging tools specifically for web page tests. Finally, many of the test suite components are reusable by your applications, so you can develop your own !CherryPy applications with all of these benefits.
+
+== Running CherryPy tests ==
+
+If you look in the cherrypy/test directory, you'll see a number of test files. You may certainly run any of those scripts on their own. However, the "test.py" script allows you to run any or all of them at once, and gives you some extra options.
+
+=== Which tests to run ===
+
+If you wish to run all of the tests in the suite, simply run test.py. If you wish to run a single test from the suite, provide the name of that test as a command-line argument. Multiple test names can be provided:
+
+{{{python cherrypy\test\test.py --test_core --test_baseurl_filter}}}
+
+=== Which servers to use ===
+
+By default, test.py uses CherryPy's builtin WSGI server to run the test suite. You may optionally run the test suite without an HTTP server ("--serverless"), with the older HTTP server ("--native"), a combination of any of the three servers, or all three with the "--all" argument. Example:
+
+{{{python cherrypy\test\test.py --serverless}}}
+
+In addition, the three server modes can all be run in HTTP/1.0 mode (the default), or in HTTP/1.1 mode by including the "--1.1" argument:
+
+{{{python cherrypy\test\test.py --all --1.1}}}
+
+=== Additional tools ===
+
+==== Code coverage ====
+!CherryPy 2.1 includes a code-coverage tool. To include this output when running the test suite, use the "--cover" argument. Note that you cannot run the profiler (see below) at the same time. To use the coverage tool, you need to download "coverage.py" (either Gareth Rees' [http://www.garethrees.org/2001/12/04/python-coverage/ original implementation] or Ned Batchelder's [http://www.nedbatchelder.com/code/modules/coverage.html enhanced version]), and place it in your PYTHONPATH.
+
+{{{python cherrypy\test\test.py --cover}}}
+
+When all tests have run, you'll get a report showing what percentage of the code was exercised, like this:
+
+{{{
+CODE COVERAGE (this might take a while).........................
+Total: 3663 Covered: 2447 Percent: 66%
+}}}
+
+Statistics are collected for all Python modules, but reported only for the cherrypy package, by default. If you're using the coverage tool with your own !CherryPy application, you'll want to report on your own package instead. Use the --basedir=path command-line argument:
+
+{{{python cherrypy\test\test.py --cover --basedir=myapp}}}
+
+If "path" is relative, it will be considered relative to the current working directory. 
+
+The coverage tool dumps its output into cherrypy/lib, and you can play with the data in an interactive session, if you like. A '''much''' easier way to see the results is to ask !CherryPy to serve the data to you in your browser! Run cherrypy/lib/covercp.py, and browse the complete data at localhost:8080. This works just as well for your own applications (see below).
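+
+For example, from a directory where the cherrypy package is importable:
+
+{{{python cherrypy\lib\covercp.py}}}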
+
+==== Profiling ====
+!CherryPy 2.1 includes a profiling tool. To include this output when running the test suite, use the "--profile" argument. Note that you cannot run the coverage tool (see above) at the same time.
+
+{{{python cherrypy\test\test.py --profile}}}
+
+The profiler will dump its output into cherrypy/lib, and you can play with the data in an interactive session, if you like. A '''much''' easier way to see the results is to ask !CherryPy to serve the data to you in your browser! Run cherrypy/lib/profiler.py, and browse the data at localhost:8080. This works just as well for your own applications (see below).
+
+== Debugging tools ==
+
+The !CherryPy test suite has a module named "webtest.py", which does two things.
+
+First, webtest helps when server errors occur. Web tests are hard to debug, because web test suites usually run in one process, sending HTTP requests to a separate server process. When errors occur on the server, they are often lost, because that information cannot be returned to the client. !CherryPy, however, runs both the client and server sides of a test in the same process. Therefore, when an error is encountered on the server side of a test, we can print a nice traceback.
+
+Second, once a response has been received by the client, it is tested against various assertions. Rather than simply fail the assertion and raise an error, webtest gives you an interactive prompt, with several options:
+
+{{{    Show: [B]ody [H]eaders [S]tatus [U]RL; [I]gnore, [R]aise, or sys.e[X]it >> }}}
+
+Select Body, Headers, or Status to show the content of the response. Large response bodies will be output in a Unix "more" style, 30 lines at a time (you can change this number via !WebCase.console_height). Hit "q" to stop scrolling the body.
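+
+For example, to page through larger chunks of the body (this assumes webtest is imported from the cherrypy.test package):
+
+{{{
+#!python
+from cherrypy.test import webtest
+
+# Show 60 lines per page instead of the default 30.
+webtest.WebCase.console_height = 60
+}}}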
+
+The "URL" option will show you which URL was requested most recently. Note that these webtest features are '''not''' threadsafe; you must run all such tests from a single thread (but that's normal).
+
+The "Raise" option will proceed normally, and the usual !AssertionError will be raised, stopping the test. If you want to continue the current test, choose "Ignore" and no error will be raised. Choose "eXit" to invoke sys.exit() (which will probably proceed to the next test anyway).
+
+== Reusing the test suite ==
+
+The webtest module doesn't reference !CherryPy in any way. Feel free to use it with other web frameworks, or whatever web testing needs you have.
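+
+For illustration, reusing !WebCase against some other running HTTP application might look roughly like this sketch (it assumes a server is already listening where webtest expects it, and that the getPage/assertStatus/assertInBody helpers behave as they do in !CherryPy's own tests):
+
+{{{
+#!python
+import unittest
+from cherrypy.test import webtest
+
+class ExternalAppTest(webtest.WebCase):
+    def test_index(self):
+        # Request the root page from the already-running server.
+        self.getPage("/")
+        self.assertStatus("200 OK")
+        self.assertInBody("Hello")   # made-up expected content
+
+if __name__ == "__main__":
+    unittest.main()
+}}}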
+
+=== Using the test tools with your own CherryPy applications ===
+
+Here is an example test script for one of my own applications, "Mission Control" (minus some database-handling code):
+
+{{{
+#!python
+import os, sys
+localDir = os.path.dirname(__file__)
+testConf = os.path.join(localDir, "test.conf")
+
+from cherrypy.test import test
+
+class MControlTestHarness(test.TestHarness):
+    
+    def _run_all_servers(self, conf):
+        # By importing here, we ensure imports occur after coverage has started.
+        from mcontrol.http import cp21
+        cp21.init()
+        
+        import endue
+        endue.login = lambda: "test"
+        test.TestHarness._run_all_servers(self, conf)
+
+if __name__ == '__main__':
+    # Place our current directory's parent (mcontrol/) at the beginning
+    # of sys.path, so that all imports are from our current directory.
+    curpath = os.path.normpath(os.path.join(os.getcwd(), localDir))
+    sys.path.insert(0, os.path.normpath(os.path.join(curpath, '../../')))
+    
+    testList = ["test_root",
+                "test_incident",
+                "test_vehicle",
+                "test_volunteer",
+                "test_missiontrip",
+                "test_project",
+                # test_materialorder should come after project,
+                # because it duplicates some calls (without asserting).
+                "test_materialorder",
+                # test_job should come after missiontrip and project,
+                # because it duplicates some calls (without asserting).
+                "test_job",
+                ]
+    MControlTestHarness(testList).run(testConf)
+}}}
+
+I'm reusing the TestHarness class from cherrypy/test/test.py, but with my own list of tests, and my own test config. I get all of the server options, plus coverage and profiling, for free.