Note: at the end of 2005 I switched over to GNU source-highlight and stopped wasting my time on my own parser. This is kept merely for reference.
My web authoring "philosophy" leads me to produce static pages of straight-up data. However, I have come to see the value of presentational markup, and am gradually revising this site. As a starting point, I've written Yet Another Python Formatter.
Rather than setting up a dynamic scripting environment, I'm using simple makefiles, like this:
fakeparse.py.html: fakeparse.py
	python fakeparse.py fakeparse.py > fakeparse.py.html
The code is still somewhat rough, but improvements will appear here. The regexp-based version has grown complex enough to be unwieldy (though it generates pretty, heavily linked output), so I've started on a "tokenize"-based version: partly to clean things up, partly to explore the tokenize package, and partly to make the identification of variables (and the handling of string quoting) more deterministic. The "tokenparse" version should also handle more of other people's code than the regexp-based one, which is tuned to my personal style.
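To give the flavor of the tokenize-based approach, here is a minimal sketch in modern Python. It is not the actual "tokenparse" code, and the CSS class names are invented for illustration: it walks the token stream, copies the untouched text between tokens verbatim, and wraps the interesting tokens in span elements:

# A minimal sketch of the tokenize-based idea, not the real "tokenparse"
# code; the CSS class names here are made up for illustration.
import html
import io
import keyword
import sys
import tokenize

CSS_CLASS = {
    tokenize.STRING: "str",
    tokenize.COMMENT: "cmt",
    tokenize.NUMBER: "num",
}

def highlight(source):
    """Return source as HTML with each interesting token wrapped in a span."""
    lines = source.splitlines(keepends=True)
    out = ["<pre>"]
    last = (1, 0)
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.ENDMARKER:
            break
        srow, scol = tok.start
        lrow, lcol = last
        # Copy the text between tokens (whitespace, line breaks) verbatim.
        if lrow == srow:
            gap = lines[lrow - 1][lcol:scol]
        else:
            gap = (lines[lrow - 1][lcol:]
                   + "".join(lines[lrow:srow - 1])
                   + lines[srow - 1][:scol])
        out.append(html.escape(gap))
        text = html.escape(tok.string)
        # Keywords arrive as plain NAME tokens, so check them separately.
        if tok.type == tokenize.NAME and keyword.iskeyword(tok.string):
            out.append('<span class="kw">%s</span>' % text)
        elif tok.type in CSS_CLASS:
            out.append('<span class="%s">%s</span>' % (CSS_CLASS[tok.type], text))
        else:
            out.append(text)
        last = tok.end
    out.append("</pre>\n")
    return "".join(out)

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        sys.stdout.write(highlight(f.read()))

Because every token arrives with its exact start and end position, nothing is ever guessed from context the way a regexp has to; that is what makes the string-quoting and variable-identification problems go away.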