bug 37020
viewvc 1.1.0-beta1 initial commit

Version 1.1.0 (released ??-???-????)

* add support for full content diffs (issue #153)
* make many more data dictionary items available to all views
* various rcsparse and tparse module fixes
* add daemon mode to standalone.py (issue #235)
* rework helper application configuration options (issues #229, #62)
* teach standalone.py to recognize Subversion repositories via -r option
* now interpret relative paths in "viewvc.conf" as relative to that file
* add 'purge' subcommand to cvsdbadmin and svndbadmin (issue #271)
* fix orphaned data bug in cvsdbadmin/svndbadmin rebuild (issue #271)
* add support for query by log message (issues #22, #121)
* fix bug parsing 'svn blame' output with too-long author names (issue #221)
* fix default standalone.py port to be within private IANA range (issue #234)
* add support for integration with GNU source-highlight (issue #285)
* add unified configury of allowed views
* add support for disabling the checkout view (now the default state)
* add support for ranges of revisions to svndbadmin (issue #224)
* make the query handling more forgiving of malformatted subdirs (issue #244)
* add support for per-root configuration overrides (issue #371)
* add support for optional email address mangling (issue #290)
* extensible path-based authorization subsystem (issue #268), supporting:
  - Subversion authz files (new)
  - regexp-based path hiding (for compat with 1.0.x)
  - file glob top-level directory hiding (for compat with 1.0.x)
* allow default file view to be "markup" (issue #305)
* add support for displaying file/directory properties (issue #39)
* pagination improvements
* add gzip output encoding support for template-driven pages
* fix cache control bugs (issue #259)
* add RSS feed URL generation for file history
* add support for remote creation of ViewVC checkins database
* add integration with Pygments for syntax highlighting
* preserve executability of Subversion files in tarballs (issue #233)
* add ability to set Subversion runtime config dir (issue #351, issue #339)
* show RSS/query links only for roots found in commits database (issue #357)
* recognize Subversion svn:mime-type property values (issue #364)
* hide CVS files when viewing tags/branches on which they don't exist
* add support for hiding errorful entries from the directory view (issue #105)

Version 1.0.7 (released 14-Oct-2008)

* fix regression in the 'as text' download view (issue #373)

Version 1.0.6 (released 16-Sep-2008)

* security fix: ignore arbitrary user-provided MIME types (issue #354)
* fix bug in regexp search filter when used with sticky tag (issue #346)
* fix bug in handling of certain 'co' output (issue #348)
* fix regexp search filter template bug
* fix annotate code syntax error
* fix mod_python import cycle (issue #369)

Version 1.0.5 (released 28-Feb-2008)

* security fix: omit commits of all-forbidden files from query results
* security fix: disallow direct URL navigation to hidden CVSROOT folder
* security fix: strip forbidden paths from revision view
* security fix: don't traverse log history thru forbidden locations
* security fix: honor forbiddenness via diff view path parameters
* new 'forbiddenre' regexp-based path authorization feature
* fix root name conflict resolution inconsistencies (issue #287)
* fix an oversight in the CVS 1.12.9 loginfo-handler support
* fix RSS feed content type to be more specific (issue #306)
* fix entity escaping problems in RSS feed data (issue #238)
* fix bug in tarball generation for remote Subversion repositories
* fix query interface file-count-limiting logic
* fix query results plus/minus count to ignore forbidden files
* fix blame error caused by 'svn' unable to create runtime config dir

Version 1.0.4 (released 10-Apr-2007)

* fix some markup bugs in query views (issue #266)
* fix loginfo-handler's support for CVS 1.12.9 (issues #151, #257)
* make viewvc-install able to run from an arbitrary location
* update viewvc-install's output for readability
* fix bug writing commits to non-MyISAM databases (issue #262)
* allow long paths in generated tarballs (issue #12)
* fix bug interpreting EZT substitute patterns
* fix broken markup view disablement
* fix broken directory view link generation in directory log view
* fix Windows-specific viewvc-install bugs
* fix broken query result links for Subversion deleted items (issue #296)
* fix some output XHTML validation buglets
* fix database query cache staleness problems (issue #180)

Version 1.0.3 (released 13-Oct-2006)

* fix bug in path shown for Subversion deleted-under-copy items (issue #265)
* security fix: declare charset for views to avoid IE UTF7 XSS attack

Version 1.0.2 (released 29-Sep-2006)

* minor documentation fixes
* fix Subversion annotate functionality on Windows (issue #18)
* fix annotate assertions on uncanonicalized #include paths (issue #208)
* make RSS URL method match the method used to generate it (issue #245)
* fix Subversion annotation to run non-interactively, preventing hangs
* fix bug in custom syntax highlighter fallback logic
* fix bug in PHP CGI hack to avoid force-cgi-redirect errors

Version 1.0.1 (released 20-Jul-2006)

* fix exception on log page when use_pagesize is enabled
* fix an XHTML validation bug in the footer template (issue #239)
* fix handling of single-component CVS revision numbers (issue #237)
* fix bug in download-as-text URL link generation (issue #241)
* fix query.cgi bug, missing 'rss_href' template data item (issue #249)
* no longer omit empty Subversion directories from tarballs (issue #250)
* use actual modification time for Subversion directories in tarballs

Version 1.0 (released 01-May-2006)

* add support for viewing Subversion repositories
* add support for running on MS Windows
* generate strict XHTML output
* add support for caching by sending "Last-Modified", "Expires",
  "ETag", and "Cache-Control" headers
* add support for Mod_Python on Apache 2.x and ASP on IIS
* several changes to standalone.py:
  - -h commandline option to specify hostname for non-local use
  - -r commandline option may be repeated to use more than one
    repository before actually installing ViewCVS
  - new GUI field to test paging
* add new, better-integrated query interface
* add integrated RSS feeds
* add new "root_as_url_component" option to embed root names as path
  components in ViewCVS URLs for a more natural URL scheme in ViewCVS
  configurations with multiple repositories
* add new "use_localtime" option to display local times instead of UTC times
* add new "root_parents" option to make it possible to add and remove
  repositories without modifying the ViewCVS configuration
* add new "template_dir" option to facilitate switching between sets
  of templates
* add new "sort_group_dirs" option to disable grouping of directories
  in directory listings
* add new "port" option to connect to a MySQL database on a nonstandard port
* make "default_root" option optional; when no root is specified, show
  a page listing all available repositories
* add "default_file_view" option to make it possible for relative
  links and image paths in checked-out HTML files to work without the
  need for special /*checkout*/ prefixes in URLs; deprecate the
  "checkout_magic" option and disable it by default
* add "limit_changes" option to limit the number of changed files shown
  per commit by default in query results and in the Subversion revision view
* hide CVS "Attic" directories and add a simple toggle for showing
  dead files in directory listings
* show Unified, Context, and Side-by-side diffs in HTML instead of in
  bare text pages
* make View/Download links work the same for all file types
* add links to tip of selected branch on log page
* allow use of "Highlight" program for colorizing
* enable enscript colorizing for more file types
* add sorting arrows for directory views
* get rid of popup windows for checkout links
* obfuscate email addresses in HTML output by encoding the @ symbol
  with an HTML character reference
* add paging capability
* improvements to templates:
  - add new template authoring guide
  - increase coverage; use templates to produce HTML for diff pages,
    markup pages, annotate pages, and error pages
  - move more common page elements into includes
  - add new template variables providing ViewCVS URLs for more links
    between related pages and less URL generation inside templates
* add new [define] EZT directive for assigning variables within templates
* add command line argument parsing to the install script to allow
  non-interactive installs
* add stricter parameter validation to lower the likelihood of
  cross-site scripting vulnerabilities
* add support for cvsweb's "mime_type=text/x-cvsweb-markup" URLs
* fix incompatibility with enscript 1.6.3
* fix bug in parsing FreeBSD rlog output
* work around rlog's assumption that all two-digit years in RCS files
  are relative to the year 1900
* change loginfo-handler to cope with spaces in filenames and support
  a simpler command line invocation from CVS
* make cvsdbadmin work properly when invoked on CVS subdirectory paths
  instead of top-level CVS root paths
* show diff error when comparing two binary files
* make regular expression search skip binary files
* make regular expression search skip nonversioned files in CVS
  directories instead of choking on them
* fix tarball generator so it doesn't include forbidden modules
* output "404 Not Found" errors instead of "403 Forbidden" errors so
  as not to reveal whether forbidden paths exist
* fix sorting bug in directory view
* reset log and directory page numbers when leaving those pages
* reset sort direction in directory listing when clicking new columns
* fix "Accept-Language" handling for Netscape 4.x browsers
* fix file descriptor leak in standalone server
* clean up zombie processes from running enscript
* fix MySQL "Too many connections" error in cvsdbadmin
* get rid of mxDateTime dependency for query database
* store query database times in UTC instead of local time
* fix daylight saving time bugs in various parts of the code

Version 0.9.4 (released 17-Aug-2005)

* security fix: omit forbidden/hidden modules from query results

Version 0.9.3 (released 17-May-2005)

* security fix: disallow bad "content-type" input [CAN-2004-1062]
* security fix: disallow bad "sortby" and "cvsroot" input [CAN-2002-0771]
* security fix: omit forbidden/hidden modules from tarballs [CAN-2004-0915]

Version 0.9.2 (released 15-Jan-2002)

* fix redirects to Attic for diffs
* fix diffs that have no changes (causing an infinite loop)

Version 0.9.1 (released 26-Dec-2001)

* fix a problem with some syntax in ndiff.py which isn't compatible
  with Python 1.5.2 (causing problems at install time)
* remove a debug statement left in the code which continued to append
  lines to /tmp/log

Version 0.9 (released 23-Dec-2001)

* create templates for the rest of the pages: markup pages, graphs,
  annotation, and diffs
* add multiple language support and dynamic selection based on the
  Accept-Language request header
* add support for key/value files to provide a way for user-defined
  variables within templates
* add optional regex searching for file contents
* add new templates for the navigation header and the footer
* EZT changes:
  - add formatting into print directives
  - add parameters to [include] directives
  - relax what can go in double quotes
  - [include] directives are now relative to the current template
  - throw an exception for unclosed blocks
* changes to standalone.py: add flag for regex search
* add more help pages
* change installer to optionally show diffs
* fix log.ezt and log_table.ezt to select "Side by Side" properly
* create dir_alternate.ezt for the flipped rev/name links
* various UI tweaks for the directory pages

Version 0.8 (released 10-Dec-2001)

* add EZT templating mechanism for generating output pages
* big update of the CVS commit database:
  - updated MySQL support
  - new CGI
  - better database caching
  - switch from old templates to new EZT templates (and integration
    of look-and-feel)
* optional usage of CvsGraph is now built in
* standalone server (for testing) is now provided
* shifted some options from viewcvs.conf to the templates
* the help at the top of the pages has been moved to separate help
  pages, so experienced users don't have to keep seeing it
* paths in viewcvs.conf don't require trailing slashes any more
* tweak the colorizing for Pascal and Fortran files
* fix file readability problem where the user had access via the
  group, but the process' group did not match that group
* some Daylight Saving Time fixes in the CVS commit database
* fix tarball generation (the file name) for the root dir
* change default human-readable-diff colors to "stoplight" metaphor
* web site and doc revamps
* fix the mime types on the download, view, etc. links
* improved error response when the cvs root is missing
* don't try to process vhosts if the config section is not present
* various bug fixes and UI tweaks

The following people have commit access to the ViewVC sources.
Note that this is not a full list of ViewVC's authors, however --
for that, you'd need to look over the log messages to see all the
patch contributors.

If you have a question or comment, it's probably best to mail
dev@viewvc.tigris.org, rather than mailing any of these people
directly.

   gstein     Greg Stein <gstein@lyra.org>
   jpaint     Jay Painter <???>
   akr        Tanaka Akira <???>
   timcera    Tim Cera <???>
   pefu       Peter Funk <???>
   lbruand    Lucas Bruand <???>
   cmpilato   C. Michael Pilato <cmpilato@collab.net>
   rey4       Russell Yanofsky <rey4@columbia.edu>
   mharig     Mark Harig <???>
   northeye   Takuo Kitame <???>
   jamesh     James Henstridge <???>
   maxb       Max Bowsher <maxb1@ukf.net>
   eh         Erik Hülsmann <e.huelsmann@gmx.net>
   mhagger    Michael Haggerty <mhagger@alum.mit.edu>

## Local Variables:
## coding:utf-8
## End:
## vim:encoding=utf8

CONTENTS
--------

TO THE IMPATIENT
SECURITY INFORMATION
INSTALLING VIEWVC
APACHE CONFIGURATION
UPGRADING VIEWVC
SQL CHECKIN DATABASE
ENABLING SYNTAX COLORATION
CVSGRAPH CONFIGURATION
IF YOU HAVE PROBLEMS...


TO THE IMPATIENT
----------------

Congratulations on getting this far. :-)

Required Software And Configuration Needed To Run ViewVC:

For CVS Support:

  * Python 1.5.2 or later
    (http://www.python.org/)
  * RCS, Revision Control System
    (http://www.cs.purdue.edu/homes/trinkle/RCS/)
  * GNU diff to replace diff implementations without the -u option
    (http://www.gnu.org/software/diffutils/diffutils.html)
  * read-only, physical access to a CVS repository
    (see http://www.cvshome.org/ for more information)

For Subversion Support:

  * Python 2.0 or later
    (http://www.python.org/)
  * Subversion, Version Control System, 1.3.1 or later
    (binary installation and Python bindings)
    (http://subversion.tigris.org/)

Optional:

  * a web server capable of running CGI programs
    (for example, Apache at http://httpd.apache.org/)
  * MySQL 3.22 and MySQLdb 0.9.0 or later to create a commit database
    (http://www.mysql.com/)
    (http://sourceforge.net/projects/mysql-python)
  * Pygments 0.9 or later, syntax highlighting engine
    (http://pygments.org)
  * CvsGraph 1.5.0 or later, graphical CVS revision tree generator
    (http://www.akhphd.au.dk/~bertho/cvsgraph/)

Quick sanity check:

If you just want to see what your repository looks like when seen
through ViewVC, type:

    $ bin/standalone.py -r /PATH/TO/REPOSITORY

This will start a tiny ViewVC server at http://localhost:49152/viewvc/,
to which you can connect with your browser.

Standard operation:

To start installing right away (on UNIX): type "./viewvc-install" in
the current directory and answer the prompts. When it finishes, edit
the file viewvc.conf in the installation directory to tell ViewVC the
paths to your CVS and Subversion repositories. Next, configure your
web server (in the way appropriate to that server) to run
<VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/viewvc.cgi. The section
`INSTALLING VIEWVC' below is still recommended reading.

SECURITY INFORMATION
--------------------

ViewVC provides a feature which allows version controlled content to
be served to web browsers just like static web server content. So, if
you have a directory full of interrelated HTML files that is housed in
your version control repository, ViewVC can serve those files as HTML.
You'll see in your web browser what you'd see if the files were part
of your website, with working references to stylesheets and images and
links to other pages.

It is important to realize, however, that as useful as that feature
is, there is some security risk in its use. Essentially, anyone with
commit access to the CVS or Subversion repositories served by ViewVC
has the ability to affect site content. If a discontented or ignorant
user commits malicious HTML to a version controlled file (perhaps
just by way of documenting examples of such), that malicious HTML is
effectively published and live on your ViewVC instance. Visitors
viewing those version controlled documents get the malicious code,
too, which might not be what the original author intended.

For this reason, ViewVC's "checkout" view is disabled by default. If
you wish to enable it, simply add "co" to the list of views enabled in
the allowed_views configuration option.

INSTALLING VIEWVC
-----------------

NOTE: Windows users can refer to windows/README for Windows-specific
installation instructions.

1) To get viewvc.cgi to work, make sure that you have Python installed
   and a web server which is capable of executing CGI scripts (either
   based on the .cgi extension, or by placing the script within a
   specific directory).

   Note that to browse CVS repositories, the viewvc.cgi script needs
   to have READ-ONLY, physical access to the repository (or a copy of
   it). Therefore, rsh/ssh or pserver access to the repository will
   not work. You also need to have the RCS utilities installed,
   specifically "rlog", "rcsdiff", and "co".

2) Installation is handled by the ./viewvc-install script. Run this
   script and you will be prompted for an installation root path.
   The default is /usr/local/viewvc-VERSION, where VERSION is the
   version of this ViewVC release. The installer sets the install
   path in some of the files, and ViewVC cannot be moved to a
   different path after the install.

   Note: while 'root' is usually required to create /usr/local/viewvc,
   ViewVC does not have to be installed as root, nor does it run as
   root. It is just as valid to place ViewVC in a home directory, too.

   Note: viewvc-install will create directories if needed. It will
   prompt before overwriting files that may have been modified (such
   as viewvc.conf), thus making it safe to install over the top of a
   previous installation. It will always overwrite program files,
   however.

3) Edit <VIEWVC_INSTALLATION_DIRECTORY>/viewvc.conf for your specific
   configuration. In particular, examine the following configuration
   options:

     cvs_roots (for CVS)
     svn_roots (for Subversion)
     root_parents (for CVS or Subversion)
     default_root
     root_as_url_component
     rcs_dir
     mime_types_file

   There are some other options that are usually nice to change. See
   viewvc.conf for more information. ViewVC provides a working default
   look. However, if you want to customize the look of ViewVC, edit
   the files in <VIEWVC_INSTALLATION_DIRECTORY>/templates. You need
   some knowledge of HTML to edit the templates.

4) The CGI programs are in <VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/.
   You can symlink to this directory from somewhere in your published
   HTTP server path if your web server is configured to follow
   symbolic links. You can also copy the installed
   <VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/*.cgi scripts after the
   install (unlike the other files in ViewVC, the scripts under bin/
   can be moved).

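As an illustration of the symlink approach, the commands below use
temporary stand-in directories so they can run anywhere; the real
paths would be your actual install directory (e.g.
/usr/local/viewvc-1.0) and your server's cgi-bin. This is a sketch,
not a required layout:

```shell
# Stand-ins for the install directory and the served cgi-bin;
# substitute your real paths in practice.
VIEWVC_DIR=$(mktemp -d)
CGI_BIN=$(mktemp -d)
mkdir -p "$VIEWVC_DIR/bin/cgi"
touch "$VIEWVC_DIR/bin/cgi/viewvc.cgi"

# Publish the script by symlinking it into the served directory.
ln -s "$VIEWVC_DIR/bin/cgi/viewvc.cgi" "$CGI_BIN/viewvc.cgi"
ls -l "$CGI_BIN/viewvc.cgi"
```

Remember that the symlink only works if the server is configured to
follow symbolic links (for Apache, Options FollowSymLinks on the
directory in question).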
   If you are using Apache, then see the section titled APACHE
   CONFIGURATION below.

   NOTE: for security reasons, it is not advisable to install ViewVC
   directly into your published HTTP directory tree (due to the MySQL
   passwords in viewvc.conf).

That's it for repository browsing. Instructions for getting the SQL
checkin database working are below.

APACHE CONFIGURATION
--------------------

1) Find out where the web server configuration file is kept. Typical
   locations are /etc/httpd/httpd.conf, /etc/httpd/conf/httpd.conf,
   and /etc/apache/httpd.conf. Depending on how Apache was installed,
   you may also look under /usr/local/etc or /etc/local. Use the
   vendor documentation or the find utility if in doubt.

Either METHOD A:

2) The ScriptAlias directive is very useful for pointing directly to
   the viewvc.cgi script. Simply insert a line containing

     ScriptAlias /viewvc <VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/viewvc.cgi

   into your httpd.conf file. Choose the location in httpd.conf where
   the other ScriptAlias lines reside. Some examples:

     ScriptAlias /viewvc /usr/local/viewvc-1.0/bin/cgi/viewvc.cgi
     ScriptAlias /query /usr/local/viewvc-1.0/bin/cgi/query.cgi

   Continue with step 3).

or alternatively METHOD B:

2) Copy the CGI scripts from
   <VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/*.cgi
   to the /cgi-bin/ directory configured in your httpd.conf file.

   Continue with step 3).

and then there's METHOD C:

2) Copy the CGI scripts from
   <VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/*.cgi
   to the directory of your choosing in the document root, adding the
   following Apache directives for the directory in httpd.conf or an
   .htaccess file:

     Options +ExecCGI
     AddHandler cgi-script .cgi

   (Note: for this to work, mod_cgi has to be loaded. And for the
   .htaccess file to be effective, "AllowOverride All" or
   "AllowOverride Options FileInfo" needs to have been specified for
   the directory.)

   Continue with step 3).

or, if you've got Mod_Python installed, you can use METHOD D:

2) Copy the Python scripts and .htaccess file from
   <VIEWVC_INSTALLATION_DIRECTORY>/bin/mod_python/
   to a directory being served by Apache.

   In httpd.conf, make sure that "AllowOverride All" or at least
   "AllowOverride FileInfo Options" is enabled for the directory you
   copied the files to.

   Note: if you are using Mod_Python under Apache 1.3, the tarball
   generation feature may not work because it uses multithreading.
   This works fine under Apache 2.

   Continue with step 3).

3) Restart Apache. The commands to do this vary. "httpd -k restart"
   and "apache -k restart" are two common variants. On Red Hat Linux
   it is done using the command "/sbin/service httpd restart", and on
   SuSE Linux it is done with "rcapache restart".

4) Optional: add access control.

   In your httpd.conf you can control access to certain modules by
   adding directives like this:

     <Location "<url to viewvc.cgi>/<modname_you_wish_to_access_ctl>">
       AllowOverride None
       AuthUserFile /path/to/passwd/file
       AuthName "Client Access"
       AuthType Basic
       require valid-user
     </Location>

   WARNING: if you enable the "checkout_magic" or "allow_tar" options,
   you will need to add additional location directives to prevent
   people from sneaking in with URLs like:

     http://<server_name>/viewvc/*checkout*/<module_name>
     http://<server_name>/viewvc/~checkout~/<module_name>
     http://<server_name>/viewvc/<module_name>.tar.gz?view=tar

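A sketch of one way to cover the *checkout* and ~checkout~ URL forms,
assuming the /viewvc prefix from METHOD A (adjust the regexp to your
actual ScriptAlias path and reuse your AuthUserFile from above):

```
<LocationMatch "^/viewvc/(\*checkout\*|~checkout~)/">
  AuthUserFile /path/to/passwd/file
  AuthName "Client Access"
  AuthType Basic
  require valid-user
</LocationMatch>
```

Note that LocationMatch matches only the URL path, not the query
string, so the ?view=tar form would need a separate block matching
something like "\.tar\.gz$" rather than the query parameter itself.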
5) Optional: protect your ViewVC instance from server-whacking
   webcrawlers.

   Because ViewVC is a web-based application in which each page
   contains various links to other pages and views, you can expect
   your server's performance to suffer if a webcrawler finds your
   ViewVC instance and begins traversing those links. We highly
   recommend that you add your ViewVC location to a site-wide
   robots.txt file. Visit the Wikipedia page for robots.txt
   (http://en.wikipedia.org/wiki/Robots.txt) for more information.

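For example, a site-wide robots.txt (served from the web server's
document root) that asks well-behaved crawlers to stay out of a ViewVC
instance published under /viewvc might look like this (the path prefix
is an example; use whatever prefix your ScriptAlias defines):

```
User-agent: *
Disallow: /viewvc/
```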
UPGRADING VIEWVC
----------------

Please read the file upgrading-howto.html in the docs/ subdirectory.

SQL CHECKIN DATABASE
--------------------

This feature is a clone of the Mozilla Project's Bonsai database. It
catalogs every commit in the CVS or Subversion repository into a SQL
database. In fact, the databases are 100% compatible.

Various queries can be performed on the database. After installing
ViewVC, there are some additional steps required to get the database
working.

1) You need MySQL and MySQLdb (a Python DBAPI 2.0 module) installed.

2) You need to create a MySQL user who has permission to create
   databases. Optionally, you can create a second user with read-only
   access to the database.

3) Run the <VIEWVC_INSTALLATION_DIRECTORY>/bin/make-database script.
   It will prompt you for your MySQL user, password, and the name of
   the database you want to create. The database name defaults to
   "ViewVC". This script creates the database and sets up the empty
   tables. If you run this on an existing ViewVC database, you will
   lose all your data!

4) Edit your <VIEWVC_INSTALLATION_DIRECTORY>/viewvc.conf file.
   There is a [cvsdb] section. You will need to set:

     enabled = 1        # Whether to enable query support in viewvc.cgi
     host =             # MySQL database server host
     port =             # MySQL database server port (default is 3306)
     database_name =    # name of database you created with make-database
     user =             # read/write database user
     passwd =           # password for read/write database user
     readonly_user =    # read-only database user
     readonly_passwd =  # password for the read-only user

   Note that it's pretty safe in this instance for your read-only user
   and your read-write user to be the same.

5) At this point, you need to tell your version control system(s) to
   publish their commit information to the database. This is done
   using utilities that ViewVC provides.

   To publish CVS commits into the database:

     Two programs are provided for updating the checkin database from
     a CVS repository, cvsdbadmin and loginfo-handler. They serve two
     different purposes. The cvsdbadmin program walks through your
     CVS repository and adds every commit in every file. This is
     commonly used for initializing the database from a repository
     which has been in use. The loginfo-handler script is executed by
     the CVS server's CVSROOT/loginfo system upon each commit. It
     makes real-time updates to the checkin database as commits are
     made to the repository.

     To build a database of all the commits in the CVS repository
     /home/cvs, invoke: "./cvsdbadmin rebuild /home/cvs". If you
     want to update the checkin database, invoke: "./cvsdbadmin
     update /home/cvs". The update mode checks to see if a commit is
     already in the database, and only adds it if it is absent.

     To get real-time updates, you'll want to check out the CVSROOT
     module from your CVS repository and edit CVSROOT/loginfo. For
     folks running CVS 1.12 or better, add this line:

       ALL <VIEWVC_INSTALLATION_DIRECTORY>/bin/loginfo-handler %p %{sVv}

     If you are running CVS 1.11 or earlier, you'll want a slightly
     different command line in CVSROOT/loginfo:

       ALL <VIEWVC_INSTALLATION_DIRECTORY>/bin/loginfo-handler %{sVv}

     If you have other scripts invoked by CVSROOT/loginfo, you will
     want to make sure to change any running under the "DEFAULT"
     keyword to "ALL" like the loginfo handler, and probably carefully
     read the execution rules for CVSROOT/loginfo in the CVS manual.

     If you are running the Unix port of CVS-NT, the handler script
     needs to know about it. CVS-NT delivers commit information to
     loginfo scripts differently than the way mainstream CVS does.
     Your command line should look like this:

       ALL <VIEWVC_INSTALLATION_DIRECTORY>/bin/loginfo-handler %{sVv} cvsnt

   To publish Subversion commits into the database:

     To build a database of all the commits in the Subversion
     repository /home/svn, invoke: "./svndbadmin rebuild /home/svn".
     If you want to update the checkin database, invoke:
     "./svndbadmin update /home/svn".

     To get real-time updates, you will need to add a post-commit
     hook (for the repository example above, the script should go in
     /home/svn/hooks/post-commit). The script should look something
     like this:

       #!/bin/sh
       REPOS="$1"
       REV="$2"
       <VIEWVC_INSTALLATION_DIRECTORY>/bin/svndbadmin update \
           "$REPOS" "$REV"

     If you allow revision property changes in your repository,
     create a post-revprop-change hook script which uses the same
     'svndbadmin update' command as the post-commit script, except
     with the addition of the --force option:

       #!/bin/sh
       REPOS="$1"
       REV="$2"
       <VIEWVC_INSTALLATION_DIRECTORY>/bin/svndbadmin update --force \
           "$REPOS" "$REV"

     This will make sure that the checkin database stays consistent
     when you change the svn:log, svn:author or svn:date revision
     properties.

|
||||
|
||||
You should be ready to go. Click one of the "Query revision history"
|
||||
links in ViewVC directory listings and give it a try.
|
||||
|
||||
|
||||
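The post-commit hook described above can be installed with a few shell commands. A minimal sketch; REPOS and VIEWVC_DIR are placeholders for your real repository root and ViewVC installation directory:

```shell
# Hedged sketch: install the post-commit hook described above.
# REPOS and VIEWVC_DIR are example placeholders, not fixed paths.
REPOS="${REPOS:-$(mktemp -d)/svnrepos}"       # e.g. /home/svn
VIEWVC_DIR="${VIEWVC_DIR:-/usr/local/viewvc}" # your ViewVC install dir

mkdir -p "$REPOS/hooks"
cat > "$REPOS/hooks/post-commit" <<EOF
#!/bin/sh
REPOS="\$1"
REV="\$2"
$VIEWVC_DIR/bin/svndbadmin update "\$REPOS" "\$REV"
EOF
chmod +x "$REPOS/hooks/post-commit"
```

Subversion invokes the hook with the repository path and the new revision number as its two arguments, which the script simply forwards to svndbadmin.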
ENABLING SYNTAX COLORATION
--------------------------

ViewVC uses Pygments (http://pygments.org) for syntax coloration.  You
need only install a suitable version of that module; if ViewVC finds
it in your Python module path, it will use it (unless you specifically
disable the feature by setting use_pygments = 0 in your viewvc.conf
file).

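Whether Pygments is visible to the Python interpreter ViewVC runs under can be checked with a quick probe like the following (a sketch; ViewVC performs an equivalent import internally at startup):

```python
def pygments_status():
    # Report whether the Pygments module is importable from this
    # interpreter, which is what ViewVC's own check amounts to.
    try:
        import pygments
        return "Pygments %s found; syntax coloration available" % pygments.__version__
    except ImportError:
        return "Pygments not found; install it or set use_pygments = 0"

print(pygments_status())
```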
CVSGRAPH CONFIGURATION
----------------------

CvsGraph is a program that can display a clickable, graphical tree
of files in a CVS repository.

WARNING: Under certain circumstances (many revisions of a file,
many branches, or both), CvsGraph can generate very large images.
Especially on thin clients, these images may crash the web browser.
There is currently no known way to avoid this behavior of CvsGraph,
so you have been warned!

Nevertheless, CvsGraph can be quite helpful on repositories with
a reasonable number of revisions and branches.

1) Install CvsGraph using your system's package manager, or by
   downloading it from the project home page.

2) Set the 'use_cvsgraph' option in viewvc.conf to 1.

3) You may also need to set the 'cvsgraph_path' option if the
   CvsGraph executable is not located on the system PATH.

4) There is a file <VIEWVC_INSTALLATION_DIRECTORY>/cvsgraph.conf that
   you may edit to set color and font characteristics.  See the
   cvsgraph.conf documentation.  No edits are required in
   cvsgraph.conf for operation with ViewVC.

SUBVERSION INTEGRATION
----------------------

Unlike the CVS integration, which simply wraps the RCS and CVS utility
programs, the Subversion integration requires additional Python
libraries.  To use ViewVC with Subversion, make sure you have both
Subversion itself and the Subversion Python bindings installed.  These
can be obtained through typical package distribution mechanisms, or
may be built from source.  (See the files 'INSTALL' and
'subversion/bindings/swig/INSTALL' in the Subversion source tree for
more details on how to build and install Subversion and its Python
bindings.)

Generally speaking, you'll know that your installation of the
Subversion bindings has been successful if you can import the
'svn.core' module from within your Python interpreter.  Here's an
example of doing so, which doubles as a quick way to check which
version of the Subversion Python bindings you have:

   % python
   Python 2.2.2 (#1, Oct 29 2002, 02:47:30)
   [GCC 2.96 20000731 (Red Hat Linux 7.2 2.96-108.7.2)] on linux2
   Type "help", "copyright", "credits" or "license" for more information.
   >>> from svn.core import *
   >>> "%s.%s.%s" % (SVN_VER_MAJOR, SVN_VER_MINOR, SVN_VER_PATCH)
   '1.3.1'
   >>>

Note that by default, Subversion installs its bindings in a location
that is not in Python's default module search path (for example, on
Linux systems the default is usually /usr/local/lib/svn-python).  You
need to remedy this, either by adding this path to Python's module
search path, or by relocating the bindings to some place in that
search path.

For example, you might want to create a .pth file in your Python
installation's site-packages area which tells Python where to find
additional modules (in this case, your Subversion Python bindings).
You would do this as follows (and as root):

   $ echo "/path/to/svn/bindings" > /path/to/python/site-packages/svn.pth

(Though, obviously, with the correct paths specified.)

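An alternative to the .pth file, and often convenient for per-user or CGI environments, is to extend the module search path via the PYTHONPATH environment variable. A sketch, using the common default bindings location mentioned above (adjust the path for your system):

```shell
# Prepend the Subversion bindings directory to Python's module search
# path; the directory shown is the common Linux default cited above.
export PYTHONPATH="/usr/local/lib/svn-python${PYTHONPATH:+:$PYTHONPATH}"
echo "PYTHONPATH is now: $PYTHONPATH"
```

For a CGI setup, the variable must be present in the web server's environment (e.g. via Apache's SetEnv directive), not just in an interactive shell.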
Configuration of Subversion repositories happens in much the same
way as with CVS repositories, only with the 'svn_roots' configuration
variable instead of the 'cvs_roots' one.

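For instance, a viewvc.conf serving one CVS and one Subversion repository might contain entries along these lines (the root names and paths are illustrative, not required values):

```ini
# Illustrative viewvc.conf fragment: each entry maps a root name to a
# repository path; names and paths here are examples only.
[general]
cvs_roots = cvs: /home/cvs
svn_roots = svn: /home/svn
```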
IF YOU HAVE PROBLEMS ...
------------------------

If nothing seems to work:

  * Check whether you can execute CGI scripts (Apache needs to have a
    ScriptAlias /cgi-bin or cgi-script handler defined).  Try to
    execute a simple CGI script that often comes with the distribution
    of the web server; locate the log files and try to find hints
    which explain the malfunction.

  * View the entries in the web server's error.log.

If ViewVC seems to work, but doesn't show the expected result (typical
error: you can't see any files):

  * Check whether the CGI script has read permission to your
    CVS repository.  The CGI script generally runs as the same user
    that the web server does, often user 'nobody' or 'httpd'.

  * Does ViewVC find your RCS utilities? (edit rcs_dir)

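The read-permission point above can be checked mechanically. A small sketch (the repository path is an example; run it as, or compare against, the web server's user):

```shell
# Return success iff the current user can enter and read the given
# directory -- the same access the CGI script needs on the repository.
check_readable() {
    [ -d "$1" ] && [ -r "$1" ] && [ -x "$1" ]
}

REPO="${REPO:-/tmp}"   # example path; use your repository root
if check_readable "$REPO"; then
    echo "readable: $REPO"
else
    echo "NOT readable: $REPO (check ownership and mode bits)"
fi
```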
If something else happens, or you can't get it to work:

  * Check the ViewVC home page:

      http://viewvc.org/

  * Review the ViewVC mailing list archive to see if somebody else had
    the same problem, and it was solved:

      http://viewvc.tigris.org/servlets/SummarizeList?listName=users

  * Check the ViewVC issue database to see if the problem you are
    seeing is the result of a known bug:

      http://viewvc.tigris.org/issues/query.cgi

  * Send mail to the ViewVC mailing list, users@viewvc.tigris.org.
    NOTE: make sure you provide an accurate description of the problem
    -- including the version of ViewVC you are using -- and any
    relevant tracebacks or error logs.

@ -0,0 +1,65 @@

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>ViewVC: License v1</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
</head>
<body>

<p>The following text constitutes the license agreement for the <a
href="http://www.viewvc.org/">ViewVC</a> software (formerly known
as ViewCVS).  It is an agreement between <a
href="http://www.viewvc.org/who.html#sec-viewcvs-group">The ViewCVS
Group</a> and the users of ViewVC.</p>

<blockquote>

<p><strong>Copyright © 1999-2008 The ViewCVS Group. All rights
reserved.</strong></p>

<p>By using ViewVC, you agree to the terms and conditions set forth
below:</p>

<p>Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:</p>

<ol>
  <li>Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following
      disclaimer.</li>
  <li>Redistributions in binary form must reproduce the above
      copyright notice, this list of conditions and the following
      disclaimer in the documentation and/or other materials provided
      with the distribution.</li>
</ol>

<p>THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.</p>

</blockquote>

<hr />

<p>The following changes have occurred to this license over time:</p>
<ul>
  <li>May 12, 2001 — copyright years updated</li>
  <li>September 5, 2002 — copyright years updated</li>
  <li>March 17, 2006 — software renamed from "ViewCVS"</li>
  <li>April 10, 2007 — copyright years updated</li>
  <li>February 22, 2008 — copyright years updated</li>
</ul>

</body>
</html>
@ -0,0 +1,6 @@

ViewVC -- viewing the content of CVS/SVN repositories with a web browser.

Please read the file INSTALL for more information.

And see windows/README for more information on running ViewVC on
Microsoft Windows.
@ -0,0 +1,61 @@

<%@ LANGUAGE = Python %>
<%

# -*-python-*-
#
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# query.asp: View CVS/SVN commit database by web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewVC app.  It checks the load
# average, then loads the (precompiled) query.py file and runs it.
#
# -----------------------------------------------------------------------
#

#########################################################################
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, they will remain None.
#

LIBRARY_DIR = None
CONF_PATHNAME = None

#########################################################################
#
# Adjust sys.path to include our library directory
#

import sys

if LIBRARY_DIR:
  if not LIBRARY_DIR in sys.path:
    sys.path.insert(0, LIBRARY_DIR)

#########################################################################

import sapi
import viewvc
import query

server = sapi.AspServer(Server, Request, Response, Application)
try:
  cfg = viewvc.load_config(CONF_PATHNAME, server)
  query.main(server, cfg, "viewvc.asp")
finally:
  server.close()

%>
@ -0,0 +1,65 @@

<%@ LANGUAGE = Python %>
<%

# -*-python-*-
#
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# viewvc: View CVS/SVN repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewVC app.  It checks the load
# average, then loads the (precompiled) viewvc.py file and runs it.
#
# -----------------------------------------------------------------------
#

#########################################################################
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, they will remain None.
#

LIBRARY_DIR = None
CONF_PATHNAME = None

#########################################################################
#
# Adjust sys.path to include our library directory
#

import sys

if LIBRARY_DIR:
  if not LIBRARY_DIR in sys.path:
    sys.path.insert(0, LIBRARY_DIR)

#########################################################################

### add code for checking the load average

#########################################################################

# go do the work
import sapi
import viewvc

server = sapi.AspServer(Server, Request, Response, Application)
try:
  cfg = viewvc.load_config(CONF_PATHNAME, server)
  viewvc.main(server, cfg)
finally:
  server.close()

%>
@ -0,0 +1,57 @@

#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# query.cgi: View CVS/SVN commit database by web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewVC app.  It checks the load
# average, then loads the (precompiled) query.py file and runs it.
#
# -----------------------------------------------------------------------
#

#########################################################################
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, they will remain None.
#

LIBRARY_DIR = None
CONF_PATHNAME = None

#########################################################################
#
# Adjust sys.path to include our library directory
#

import sys
import os

if LIBRARY_DIR:
  sys.path.insert(0, LIBRARY_DIR)
else:
  sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0],
                                                  "../../../lib")))

#########################################################################

import sapi
import viewvc
import query

server = sapi.CgiServer()
cfg = viewvc.load_config(CONF_PATHNAME, server)
query.main(server, cfg, "viewvc.cgi")
@ -0,0 +1,8 @@

#!/bin/sh
#
# Set this script up with something like:
#
#   ScriptAlias /viewvc-strace /home/gstein/src/viewvc/cgi/viewvc-strace.sh
#
thisdir="`dirname $0`"
exec strace -q -r -o /tmp/v-strace.log "${thisdir}/viewvc.cgi"
@ -0,0 +1,61 @@

#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# viewvc: View CVS/SVN repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewVC app.  It checks the load
# average, then loads the (precompiled) viewvc.py file and runs it.
#
# -----------------------------------------------------------------------
#

#########################################################################
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, they will remain None.
#

LIBRARY_DIR = None
CONF_PATHNAME = None

#########################################################################
#
# Adjust sys.path to include our library directory
#

import sys
import os

if LIBRARY_DIR:
  sys.path.insert(0, LIBRARY_DIR)
else:
  sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0],
                                                  "../../../lib")))

#########################################################################

### add code for checking the load average

#########################################################################

# go do the work
import sapi
import viewvc

server = sapi.CgiServer()
cfg = viewvc.load_config(CONF_PATHNAME, server)
viewvc.main(server, cfg)
@ -0,0 +1,191 @@

#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# administrative program for CVSdb; this is primarily
# used to add/rebuild CVS repositories to the database
#
# -----------------------------------------------------------------------
#

#########################################################################
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, they will remain None.
#

LIBRARY_DIR = None
CONF_PATHNAME = None

# Adjust sys.path to include our library directory
import sys
import os

if LIBRARY_DIR:
    sys.path.insert(0, LIBRARY_DIR)
else:
    sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0], "../../lib")))

#########################################################################

import os
import string
import cvsdb
import viewvc
import vclib.ccvs


def UpdateFile(db, repository, path, update, quiet_level):
    try:
        if update:
            commit_list = cvsdb.GetUnrecordedCommitList(repository, path, db)
        else:
            commit_list = cvsdb.GetCommitListFromRCSFile(repository, path)
    except cvsdb.error, e:
        print '[ERROR] %s' % (e)
        return

    file = string.join(path, "/")
    printing = 0
    if update:
        if quiet_level < 1 or (quiet_level < 2 and len(commit_list)):
            printing = 1
            print '[%s [%d new commits]]' % (file, len(commit_list)),
    else:
        if quiet_level < 2:
            printing = 1
            print '[%s [%d commits]]' % (file, len(commit_list)),

    ## add the commits into the database
    for commit in commit_list:
        db.AddCommit(commit)
        if printing:
            sys.stdout.write('.')
            sys.stdout.flush()
    if printing:
        print


def RecurseUpdate(db, repository, directory, update, quiet_level):
    for entry in repository.listdir(directory, None, {}):
        path = directory + [entry.name]

        if entry.errors:
            continue

        if entry.kind is vclib.DIR:
            RecurseUpdate(db, repository, path, update, quiet_level)
            continue

        if entry.kind is vclib.FILE:
            UpdateFile(db, repository, path, update, quiet_level)

def RootPath(path, quiet_level):
    """Break os path into cvs root path and other parts"""
    root = os.path.abspath(path)
    path_parts = []

    p = root
    while 1:
        if os.path.exists(os.path.join(p, 'CVSROOT')):
            root = p
            if quiet_level < 2:
                print "Using repository root `%s'" % root
            break

        p, pdir = os.path.split(p)
        if not pdir:
            del path_parts[:]
            if quiet_level < 1:
                print "Using repository root `%s'" % root
                print "Warning: CVSROOT directory not found."
            break

        path_parts.append(pdir)

    root = cvsdb.CleanRepository(root)
    path_parts.reverse()
    return root, path_parts

def usage():
    cmd = os.path.basename(sys.argv[0])
    sys.stderr.write(
"""Administer the ViewVC checkins database data for the CVS repository
located at REPOS-PATH.

Usage: 1. %s [[-q] -q] rebuild REPOS-PATH
       2. %s [[-q] -q] update REPOS-PATH
       3. %s [[-q] -q] purge REPOS-PATH

1.  Rebuild the commit database information for the repository located
    at REPOS-PATH, after first purging information specific to that
    repository (if any).

2.  Update the commit database information for all unrecorded commits
    in the repository located at REPOS-PATH.

3.  Purge information specific to the repository located at REPOS-PATH
    from the database.

Use the -q flag to cause this script to be less verbose; use it twice to
invoke a peaceful state of noiselessness.

""" % (cmd, cmd, cmd))
    sys.exit(1)


## main
if __name__ == '__main__':
    args = sys.argv

    # check the quietness level (0 = verbose, 1 = new commits, 2 = silent)
    quiet_level = 0
    while 1:
        try:
            index = args.index('-q')
            quiet_level = quiet_level + 1
            del args[index]
        except ValueError:
            break

    # validate the command
    if len(args) <= 2:
        usage()
    command = args[1].lower()
    if command not in ('rebuild', 'update', 'purge'):
        sys.stderr.write('ERROR: unknown command %s\n' % command)
        usage()

    # get repository and path, and do the work
    root, path_parts = RootPath(args[2], quiet_level)
    rootpath = vclib.ccvs.canonicalize_rootpath(root)
    try:
        cfg = viewvc.load_config(CONF_PATHNAME)
        db = cvsdb.ConnectDatabase(cfg)

        if command in ('rebuild', 'purge'):
            if quiet_level < 2:
                print "Purging existing data for repository root `%s'" % root
            db.PurgeRepository(root)

        if command in ('rebuild', 'update'):
            repository = vclib.ccvs.CVSRepository(None, rootpath, None,
                                                  cfg.utilities, 0)
            RecurseUpdate(db, repository, path_parts,
                          command == 'update', quiet_level)
    except KeyboardInterrupt:
        print
        print '** break **'

    sys.exit(0)
@ -0,0 +1,318 @@

#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# updates SQL database with new commit records
#
# -----------------------------------------------------------------------
#

#########################################################################
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, they will remain None.
#

LIBRARY_DIR = None
CONF_PATHNAME = None

# Adjust sys.path to include our library directory
import sys
import os

if LIBRARY_DIR:
    sys.path.insert(0, LIBRARY_DIR)
else:
    sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0], "../../lib")))

#########################################################################

import os
import string
import getopt
import re
import cvsdb
import viewvc
import vclib.ccvs

DEBUG_FLAG = 0

## output functions
def debug(text):
    if DEBUG_FLAG:
        if type(text) != (type([])):
            text = [text]
        for line in text:
            line = line.rstrip('\n\r')
            print 'DEBUG(viewvc-loginfo):', line

def warning(text):
    print 'WARNING(viewvc-loginfo):', text

def error(text):
    print 'ERROR(viewvc-loginfo):', text
    sys.exit(1)

_re_revisions = re.compile(
    r",(?P<old>(?:\d+\.\d+)(?:\.\d+\.\d+)*|NONE)" # comma and first revision
    r",(?P<new>(?:\d+\.\d+)(?:\.\d+\.\d+)*|NONE)" # comma and second revision
    r"(?:$| )" # space or end of string
    )

def Cvs1Dot12ArgParse(args):
    """CVS 1.12 introduced a new loginfo format which provides the various
    pieces of interesting version information to the handler script as
    individual arguments instead of as a single string."""

    if args[1] == '- New directory':
        return None, None
    elif args[1] == '- Imported sources':
        return None, None
    else:
        directory = args.pop(0)
        files = []
        while len(args) >= 3:
            files.append(args[0:3])
            args = args[3:]
        return directory, files

def HeuristicArgParse(s, repository):
    """Older versions of CVS (except for CVSNT) do not escape spaces in file
    and directory names that are passed to the loginfo handler.  Since the
    input to loginfo is a space-separated string, this can lead to
    ambiguities.  This function attempts to guess intelligently which
    spaces are separators and which are part of file or directory names.
    It disambiguates spaces in filenames from the separator spaces between
    files by assuming that every space which is preceded by two well-formed
    revision numbers is in fact a separator.  It disambiguates the first
    separator space from spaces in the directory name by choosing the
    longest possible directory name that actually exists in the
    repository."""

    if (s[-16:] == ' - New directory'
        or s[:26] == ' - New directory,NONE,NONE'):
        return None, None

    if (s[-19:] == ' - Imported sources'
        or s[-29:] == ' - Imported sources,NONE,NONE'):
        return None, None

    file_data_list = []
    start = 0

    while 1:
        m = _re_revisions.search(s, start)

        if start == 0:
            if m is None:
                error('Argument "%s" does not contain any revision numbers' \
                      % s)

            directory, filename = FindLongestDirectory(s[:m.start()],
                                                       repository)
            if directory is None:
                error('Argument "%s" does not start with a valid directory' \
                      % s)

            debug('Directory name is "%s"' % directory)

        else:
            if m is None:
                warning('Failed to interpret past position %i in the loginfo '
                        'argument, leftover string is "%s"' \
                        % (start, s[start:]))
                break

            filename = s[start:m.start()]

        old_version, new_version = m.group('old', 'new')

        file_data_list.append((filename, old_version, new_version))

        debug('File "%s", old revision %s, new revision %s'
              % (filename, old_version, new_version))

        start = m.end()

        if start == len(s): break

    return directory, file_data_list

def FindLongestDirectory(s, repository):
    """Splits the first part of the argument string into a directory name
    and a file name, either of which may contain spaces.  Returns the
    longest possible directory name that actually exists."""

    parts = string.split(s, " ")

    for i in range(len(parts)-1, 0, -1):
        directory = string.join(parts[:i])
        filename = string.join(parts[i:])
        if os.path.isdir(os.path.join(repository, directory)):
            return directory, filename

    return None, None

_re_cvsnt_revisions = re.compile(
    r"(?P<filename>.*)"                           # filename
    r",(?P<old>(?:\d+\.\d+)(?:\.\d+\.\d+)*|NONE)" # comma and first revision
    r",(?P<new>(?:\d+\.\d+)(?:\.\d+\.\d+)*|NONE)" # comma and second revision
    r"$"                                          # end of string
    )

def CvsNtArgParse(s, repository):
    """CVSNT escapes all spaces in filenames and directory names with
    backslashes."""

    if s[-18:] == r' -\ New\ directory':
        return None, None

    if s[-21:] == r' -\ Imported\ sources':
        return None, None

    file_data_list = []
    directory, pos = NextFile(s)

    debug('Directory name is "%s"' % directory)

    while 1:
        fileinfo, pos = NextFile(s, pos)
        if fileinfo is None:
            break

        m = _re_cvsnt_revisions.match(fileinfo)
        if m is None:
            warning('Can\'t parse file information in "%s"' % fileinfo)
            continue

        file_data = m.group('filename', 'old', 'new')
        file_data_list.append(file_data)

        debug('File "%s", old revision %s, new revision %s' % file_data)

    return directory, file_data_list

def NextFile(s, pos=0):
    escaped = 0
    ret = ''
    i = pos
    while i < len(s):
        c = s[i]
        if escaped:
            ret += c
            escaped = 0
        elif c == '\\':
            escaped = 1
        elif c == ' ':
            return ret, i + 1
        else:
            ret += c
        i += 1

    return ret or None, i

def ProcessLoginfo(rootpath, directory, files):
    cfg = viewvc.load_config(CONF_PATHNAME)
    db = cvsdb.ConnectDatabase(cfg)
    repository = vclib.ccvs.CVSRepository(None, rootpath, None,
                                          cfg.utilities, 0)

    # split up the directory components
    dirpath = filter(None, string.split(os.path.normpath(directory), os.sep))

    ## build a list of Commit objects
    commit_list = []
    for filename, old_version, new_version in files:
        filepath = dirpath + [filename]

        ## XXX: this is nasty: in the case of a removed file, we are not
        ##      given enough information to find it in the rlog output!
        ##      So instead, we rlog everything in the removed file, and
        ##      add any commits not already in the database
        if new_version == 'NONE':
            commits = cvsdb.GetUnrecordedCommitList(repository, filepath, db)
        else:
            commits = cvsdb.GetCommitListFromRCSFile(repository, filepath,
                                                     new_version)

        commit_list.extend(commits)

    ## add to the database
    db.AddCommitList(commit_list)


## MAIN
if __name__ == '__main__':
    ## get the repository from the environment
    try:
        repository = os.environ['CVSROOT']
    except KeyError:
        error('CVSROOT not in environment')

    debug('Repository name is "%s"' % repository)

    ## parse arguments

    argc = len(sys.argv)
    debug('Got %d arguments:' % (argc))
    debug(map(lambda x: ' ' + x, sys.argv))

    # if we have more than 3 arguments, we are likely using the
    # newer loginfo format introduced in CVS 1.12:
    #
    #    ALL <path>/bin/loginfo-handler %p %{sVv}
    if argc > 3:
        directory, files = Cvs1Dot12ArgParse(sys.argv[1:])
    else:
        if len(sys.argv) > 1:
            # the first argument should contain file version information
            arg = sys.argv[1]
        else:
            # if there are no arguments, read version information from
            # first line of input like old versions of ViewCVS did
            arg = string.rstrip(sys.stdin.readline())

        if len(sys.argv) > 2:
            # if there is a second argument it indicates which parser
            # should be used to interpret the version information
            if sys.argv[2] == 'cvs':
fun = HeuristicArgParse
|
||||
elif sys.argv[2] == 'cvsnt':
|
||||
fun = CvsNtArgParse
|
||||
else:
|
||||
error('Bad arguments')
|
||||
else:
|
||||
# if there is no second argument, guess which parser to use based
|
||||
# on the operating system. Since CVSNT now runs on Windows and
|
||||
# Linux, the guess isn't necessarily correct
|
||||
if sys.platform == "win32":
|
||||
fun = CvsNtArgParse
|
||||
else:
|
||||
fun = HeuristicArgParse
|
||||
|
||||
directory, files = fun(arg, repository)
|
||||
|
||||
debug('Discarded from stdin:')
|
||||
debug(map(lambda x: ' ' + x, sys.stdin.readlines())) # consume stdin
|
||||
|
||||
repository = cvsdb.CleanRepository(repository)
|
||||
|
||||
debug('Repository: %s' % (repository))
|
||||
debug('Directory: %s' % (directory))
|
||||
debug('Files: %s' % (str(files)))
|
||||
|
||||
if files is None:
|
||||
debug('Not a checkin, nothing to do')
|
||||
else:
|
||||
ProcessLoginfo(repository, directory, files)
|
||||
|
||||
sys.exit(0)
|
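For the CVS 1.12 branch above (`ALL <path>/bin/loginfo-handler %p %{sVv}`), the hook passes the directory as `%p` followed by filename/old-revision/new-revision triples, one argument each. A minimal Python 3 sketch of that layout (the function name and error handling here are illustrative, not ViewVC's actual `Cvs1Dot12ArgParse`):

```python
def parse_cvs112_args(args):
    """Parse loginfo arguments: <directory> then (file, old, new) triples.

    Matches the 'ALL <path>/loginfo-handler %p %{sVv}' hook line shown
    above, assuming CVS expands each attribute as its own argument.
    """
    if len(args) % 3 != 1:
        raise ValueError('expected a directory plus file/old/new triples')
    directory = args[0]
    files = []
    for i in range(1, len(args), 3):
        files.append((args[i], args[i + 1], args[i + 2]))
    return directory, files
```

Usage: `parse_cvs112_args(['mod/sub', 'foo.c', '1.1', '1.2'])` returns `('mod/sub', [('foo.c', '1.1', '1.2')])`.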
|
@ -0,0 +1,160 @@
|
|||
#!/usr/bin/env python
|
||||
# -*-python-*-
|
||||
#
|
||||
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
|
||||
#
|
||||
# By using this file, you agree to the terms and conditions set forth in
|
||||
# the LICENSE.html file which can be found at the top level of the ViewVC
|
||||
# distribution or at http://viewvc.org/license-1.html.
|
||||
#
|
||||
# For more information, visit http://viewvc.org/
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
#
|
||||
# administrative program for CVSdb; creates a clean database in
|
||||
# MySQL 3.22 or later
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
|
||||
import os, sys, string
|
||||
import popen2
|
||||
|
||||
INTRO_TEXT = """\
|
||||
This script creates the database and tables in MySQL used by the
|
||||
ViewVC checkin database. You will be prompted for: database server
|
||||
hostname, database user, database user password, and database name.
|
||||
This script will use the 'mysql' program to create the database for
|
||||
you. You will then need to set the appropriate parameters in the
|
||||
[cvsdb] section of your viewvc.conf file.
|
||||
"""
|
||||
|
||||
DATABASE_SCRIPT="""\
|
||||
DROP DATABASE IF EXISTS <dbname>;
|
||||
CREATE DATABASE <dbname>;
|
||||
|
||||
USE <dbname>;
|
||||
|
||||
DROP TABLE IF EXISTS branches;
|
||||
CREATE TABLE branches (
|
||||
id mediumint(9) NOT NULL auto_increment,
|
||||
branch varchar(64) binary DEFAULT '' NOT NULL,
|
||||
PRIMARY KEY (id),
|
||||
UNIQUE branch (branch)
|
||||
) TYPE=MyISAM;
|
||||
|
||||
DROP TABLE IF EXISTS checkins;
|
||||
CREATE TABLE checkins (
|
||||
type enum('Change','Add','Remove'),
|
||||
ci_when datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,
|
||||
whoid mediumint(9) DEFAULT '0' NOT NULL,
|
||||
repositoryid mediumint(9) DEFAULT '0' NOT NULL,
|
||||
dirid mediumint(9) DEFAULT '0' NOT NULL,
|
||||
fileid mediumint(9) DEFAULT '0' NOT NULL,
|
||||
revision varchar(32) binary DEFAULT '' NOT NULL,
|
||||
stickytag varchar(255) binary DEFAULT '' NOT NULL,
|
||||
branchid mediumint(9) DEFAULT '0' NOT NULL,
|
||||
addedlines int(11) DEFAULT '0' NOT NULL,
|
||||
removedlines int(11) DEFAULT '0' NOT NULL,
|
||||
descid mediumint(9),
|
||||
UNIQUE repositoryid (repositoryid,dirid,fileid,revision),
|
||||
KEY ci_when (ci_when),
|
||||
KEY whoid (whoid),
|
||||
KEY repositoryid_2 (repositoryid),
|
||||
KEY dirid (dirid),
|
||||
KEY fileid (fileid),
|
||||
KEY branchid (branchid)
|
||||
) TYPE=MyISAM;
|
||||
|
||||
DROP TABLE IF EXISTS descs;
|
||||
CREATE TABLE descs (
|
||||
id mediumint(9) NOT NULL auto_increment,
|
||||
description text,
|
||||
hash bigint(20) DEFAULT '0' NOT NULL,
|
||||
PRIMARY KEY (id),
|
||||
KEY hash (hash)
|
||||
) TYPE=MyISAM;
|
||||
|
||||
DROP TABLE IF EXISTS dirs;
|
||||
CREATE TABLE dirs (
|
||||
id mediumint(9) NOT NULL auto_increment,
|
||||
dir varchar(255) binary DEFAULT '' NOT NULL,
|
||||
PRIMARY KEY (id),
|
||||
UNIQUE dir (dir)
|
||||
) TYPE=MyISAM;
|
||||
|
||||
DROP TABLE IF EXISTS files;
|
||||
CREATE TABLE files (
|
||||
id mediumint(9) NOT NULL auto_increment,
|
||||
file varchar(255) binary DEFAULT '' NOT NULL,
|
||||
PRIMARY KEY (id),
|
||||
UNIQUE file (file)
|
||||
) TYPE=MyISAM;
|
||||
|
||||
DROP TABLE IF EXISTS people;
|
||||
CREATE TABLE people (
|
||||
id mediumint(9) NOT NULL auto_increment,
|
||||
who varchar(128) binary DEFAULT '' NOT NULL,
|
||||
PRIMARY KEY (id),
|
||||
UNIQUE who (who)
|
||||
) TYPE=MyISAM;
|
||||
|
||||
DROP TABLE IF EXISTS repositories;
|
||||
CREATE TABLE repositories (
|
||||
id mediumint(9) NOT NULL auto_increment,
|
||||
repository varchar(64) binary DEFAULT '' NOT NULL,
|
||||
PRIMARY KEY (id),
|
||||
UNIQUE repository (repository)
|
||||
) TYPE=MyISAM;
|
||||
|
||||
DROP TABLE IF EXISTS tags;
|
||||
CREATE TABLE tags (
|
||||
repositoryid mediumint(9) DEFAULT '0' NOT NULL,
|
||||
branchid mediumint(9) DEFAULT '0' NOT NULL,
|
||||
dirid mediumint(9) DEFAULT '0' NOT NULL,
|
||||
fileid mediumint(9) DEFAULT '0' NOT NULL,
|
||||
revision varchar(32) binary DEFAULT '' NOT NULL,
|
||||
UNIQUE repositoryid (repositoryid,dirid,fileid,branchid,revision),
|
||||
KEY repositoryid_2 (repositoryid),
|
||||
KEY dirid (dirid),
|
||||
KEY fileid (fileid),
|
||||
KEY branchid (branchid)
|
||||
) TYPE=MyISAM;
|
||||
"""
|
||||
|
||||
if __name__ == "__main__":
|
||||
try:
|
||||
print INTRO_TEXT
|
||||
|
||||
# Prompt for necessary information
|
||||
host = raw_input("MySQL Hostname [default: localhost]: ") or ""
|
||||
user = raw_input("MySQL User: ")
|
||||
passwd = raw_input("MySQL Password: ")
|
||||
dbase = raw_input("ViewVC Database Name [default: ViewVC]: ") or "ViewVC"
|
||||
|
||||
# Create the database
|
||||
dscript = string.replace(DATABASE_SCRIPT, "<dbname>", dbase)
|
||||
host_option = host and "--host=%s" % (host) or ""
|
||||
if sys.platform == "win32":
|
||||
cmd = "mysql --user=%s --password=%s %s "\
|
||||
% (user, passwd, host_option)
|
||||
mysql = os.popen(cmd, "w") # popen2.Popen3 is not provided on windows
|
||||
mysql.write(dscript)
|
||||
status = mysql.close()
|
||||
else:
|
||||
cmd = "{ mysql --user=%s --password=%s %s ; } 2>&1" \
|
||||
% (user, passwd, host_option)
|
||||
pipes = popen2.Popen3(cmd)
|
||||
pipes.tochild.write(dscript)
|
||||
pipes.tochild.close()
|
||||
print pipes.fromchild.read()
|
||||
status = pipes.wait()
|
||||
|
||||
if status:
|
||||
print "[ERROR] the database was not created successfully."
|
||||
sys.exit(1)
|
||||
|
||||
print "Database created successfully."
|
||||
except KeyboardInterrupt:
|
||||
pass
|
||||
sys.exit(0)
|
||||
|
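The `popen2` module used above was removed in Python 3, and the two platform-specific branches can be collapsed with `subprocess`, which works the same way on Windows and Unix. A hedged sketch (assuming `mysql` is on `PATH`; names are illustrative):

```python
import subprocess

def mysql_command(user, passwd, host=None):
    """Build the mysql CLI argument list, omitting --host when empty.

    Passing arguments as a list avoids the shell quoting issues of the
    string-interpolated commands above.
    """
    cmd = ['mysql', '--user=%s' % user, '--password=%s' % passwd]
    if host:
        cmd.append('--host=%s' % host)
    return cmd

def run_mysql_script(script, user, passwd, host=None):
    """Feed an SQL script to mysql on stdin and collect all output."""
    proc = subprocess.run(mysql_command(user, passwd, host),
                          input=script, capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr
```

A nonzero return code from `run_mysql_script` corresponds to the `if status:` failure branch above.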
|
@ -0,0 +1,3 @@
|
|||
AddHandler python-program .py
|
||||
PythonHandler handler
|
||||
PythonDebug On
|
|
@ -0,0 +1,31 @@
|
|||
# -*-python-*-
|
||||
#
|
||||
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
|
||||
#
|
||||
# By using this file, you agree to the terms and conditions set forth in
|
||||
# the LICENSE.html file which can be found at the top level of the ViewVC
|
||||
# distribution or at http://viewvc.org/license-1.html.
|
||||
#
|
||||
# For more information, visit http://viewvc.org/
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
#
|
||||
# Mod_Python handler based on mod_python.publisher
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
|
||||
from mod_python import apache
|
||||
import os.path
|
||||
|
||||
def handler(req):
|
||||
path, module_name = os.path.split(req.filename)
|
||||
module_name, module_ext = os.path.splitext(module_name)
|
||||
try:
|
||||
module = apache.import_module(module_name, path=[path])
|
||||
except ImportError:
|
||||
raise apache.SERVER_RETURN, apache.HTTP_NOT_FOUND
|
||||
|
||||
req.add_common_vars()
|
||||
module.index(req)
|
||||
|
||||
return apache.OK
|
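The `apache.import_module(module_name, path=[path])` call above loads a module from an explicit directory rather than from `sys.path`. A standalone Python 3 sketch of the same idea using `importlib` (mod_python's loader also caches modules and reloads changed files, which this minimal version does not):

```python
import importlib.util
import os.path

def import_from_path(module_name, search_path):
    """Load module_name from search_path, like apache.import_module above."""
    filename = os.path.join(search_path, module_name + '.py')
    if not os.path.exists(filename):
        raise ImportError(module_name)
    spec = importlib.util.spec_from_file_location(module_name, filename)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)   # run the module's top-level code
    return module
```

A handler built on this would raise the equivalent of the 404 above when `ImportError` is caught.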
|
@ -0,0 +1,71 @@
|
|||
# -*-python-*-
|
||||
#
|
||||
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
|
||||
#
|
||||
# By using this file, you agree to the terms and conditions set forth in
|
||||
# the LICENSE.html file which can be found at the top level of the ViewVC
|
||||
# distribution or at http://viewvc.org/license-1.html.
|
||||
#
|
||||
# For more information, visit http://viewvc.org/
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
#
|
||||
# ViewVC: View CVS/SVN repositories via a web browser
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
#
|
||||
# This is a teeny stub to launch the main ViewVC app. It checks the load
|
||||
# average, then loads the (precompiled) viewvc.py file and runs it.
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
#
|
||||
|
||||
#########################################################################
|
||||
#
|
||||
# INSTALL-TIME CONFIGURATION
|
||||
#
|
||||
# These values will be set during the installation process. During
|
||||
# development, they will remain None.
|
||||
#
|
||||
|
||||
LIBRARY_DIR = None
|
||||
CONF_PATHNAME = None
|
||||
|
||||
#########################################################################
|
||||
#
|
||||
# Adjust sys.path to include our library directory
|
||||
#
|
||||
|
||||
import sys
|
||||
|
||||
if LIBRARY_DIR:
|
||||
sys.path.insert(0, LIBRARY_DIR)
|
||||
|
||||
import sapi
|
||||
import imp
|
||||
|
||||
# Import real ViewVC module
|
||||
fp, pathname, description = imp.find_module('viewvc', [LIBRARY_DIR])
|
||||
try:
|
||||
viewvc = imp.load_module('viewvc', fp, pathname, description)
|
||||
finally:
|
||||
if fp:
|
||||
fp.close()
|
||||
|
||||
# Import real ViewVC Query modules
|
||||
fp, pathname, description = imp.find_module('query', [LIBRARY_DIR])
|
||||
try:
|
||||
query = imp.load_module('query', fp, pathname, description)
|
||||
finally:
|
||||
if fp:
|
||||
fp.close()
|
||||
|
||||
cfg = viewvc.load_config(CONF_PATHNAME)
|
||||
|
||||
def index(req):
|
||||
server = sapi.ModPythonServer(req)
|
||||
try:
|
||||
query.main(server, cfg, "viewvc.py")
|
||||
finally:
|
||||
server.close()
|
||||
|
|
@ -0,0 +1,61 @@
|
|||
# -*-python-*-
|
||||
#
|
||||
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
|
||||
#
|
||||
# By using this file, you agree to the terms and conditions set forth in
|
||||
# the LICENSE.html file which can be found at the top level of the ViewVC
|
||||
# distribution or at http://viewvc.org/license-1.html.
|
||||
#
|
||||
# For more information, visit http://viewvc.org/
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
#
|
||||
# viewvc: View CVS/SVN repositories via a web browser
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
#
|
||||
# This is a teeny stub to launch the main ViewVC app. It checks the load
|
||||
# average, then loads the (precompiled) viewvc.py file and runs it.
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
#
|
||||
|
||||
#########################################################################
|
||||
#
|
||||
# INSTALL-TIME CONFIGURATION
|
||||
#
|
||||
# These values will be set during the installation process. During
|
||||
# development, they will remain None.
|
||||
#
|
||||
|
||||
LIBRARY_DIR = None
|
||||
CONF_PATHNAME = None
|
||||
|
||||
#########################################################################
|
||||
#
|
||||
# Adjust sys.path to include our library directory
|
||||
#
|
||||
|
||||
import sys
|
||||
|
||||
if LIBRARY_DIR:
|
||||
sys.path.insert(0, LIBRARY_DIR)
|
||||
|
||||
import sapi
|
||||
import imp
|
||||
|
||||
# Import real ViewVC module
|
||||
fp, pathname, description = imp.find_module('viewvc', [LIBRARY_DIR])
|
||||
try:
|
||||
viewvc = imp.load_module('viewvc', fp, pathname, description)
|
||||
finally:
|
||||
if fp:
|
||||
fp.close()
|
||||
|
||||
def index(req):
|
||||
server = sapi.ModPythonServer(req)
|
||||
cfg = viewvc.load_config(CONF_PATHNAME, server)
|
||||
try:
|
||||
viewvc.main(server, cfg)
|
||||
finally:
|
||||
server.close()
|
|
@ -0,0 +1,672 @@
|
|||
#!/usr/bin/env python
|
||||
# -*-python-*-
|
||||
#
|
||||
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
|
||||
#
|
||||
# By using this file, you agree to the terms and conditions set forth in
|
||||
# the LICENSE.html file which can be found at the top level of the ViewVC
|
||||
# distribution or at http://viewvc.org/license-1.html.
|
||||
#
|
||||
# For more information, visit http://viewvc.org/
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
|
||||
"""Run "standalone.py -p <port>" to start an HTTP server on a given port
|
||||
on the local machine to generate ViewVC web pages.
|
||||
"""
|
||||
|
||||
__author__ = "Peter Funk <pf@artcom-gmbh.de>"
|
||||
__date__ = "11 November 2001"
|
||||
__version__ = "$Revision: 1962 $"
|
||||
__credits__ = """Guido van Rossum, for an excellent programming language.
|
||||
Greg Stein, for writing ViewCVS in the first place.
|
||||
Ka-Ping Yee, for the GUI code and the framework stolen from pydoc.py.
|
||||
"""
|
||||
|
||||
# INSTALL-TIME CONFIGURATION
|
||||
#
|
||||
# These values will be set during the installation process. During
|
||||
# development, they will remain None.
|
||||
#
|
||||
|
||||
LIBRARY_DIR = None
|
||||
CONF_PATHNAME = None
|
||||
|
||||
import sys
|
||||
import os
|
||||
import os.path
|
||||
import stat
|
||||
import string
|
||||
import urllib
|
||||
import rfc822
|
||||
import socket
|
||||
import select
|
||||
import BaseHTTPServer
|
||||
|
||||
if LIBRARY_DIR:
|
||||
sys.path.insert(0, LIBRARY_DIR)
|
||||
else:
|
||||
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0], "../../lib")))
|
||||
|
||||
import sapi
|
||||
import viewvc
|
||||
import compat; compat.for_standalone()
|
||||
|
||||
|
||||
class Options:
|
||||
port = 49152 # default TCP/IP port used for the server
|
||||
start_gui = 0 # No GUI unless requested.
|
||||
daemon = 0 # stay in the foreground by default
|
||||
repositories = {} # use default repositories specified in config
|
||||
if sys.platform == 'mac':
|
||||
host = '127.0.0.1'
|
||||
else:
|
||||
host = 'localhost'
|
||||
script_alias = 'viewvc'
|
||||
config_file = None
|
||||
|
||||
# --- web browser interface: ----------------------------------------------
|
||||
|
||||
class StandaloneServer(sapi.CgiServer):
|
||||
def __init__(self, handler):
|
||||
sapi.CgiServer.__init__(self, inheritableOut = sys.platform != "win32")
|
||||
self.handler = handler
|
||||
|
||||
def header(self, content_type='text/html', status=None):
|
||||
if not self.headerSent:
|
||||
self.headerSent = 1
|
||||
if status is None:
|
||||
statusCode = 200
|
||||
statusText = 'OK'
|
||||
else:
|
||||
p = string.find(status, ' ')
|
||||
if p < 0:
|
||||
statusCode = int(status)
|
||||
statusText = ''
|
||||
else:
|
||||
statusCode = int(status[:p])
|
||||
statusText = status[p+1:]
|
||||
self.handler.send_response(statusCode, statusText)
|
||||
self.handler.send_header("Content-type", content_type)
|
||||
for (name, value) in self.headers:
|
||||
self.handler.send_header(name, value)
|
||||
self.handler.end_headers()
|
||||
|
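The status handling in StandaloneServer.header() above splits CGI-style status strings such as "404 Not Found" into a numeric code and reason text. That branch logic can be sketched as a standalone helper (a hypothetical name, not part of ViewVC's API):

```python
def parse_status(status, default=(200, 'OK')):
    """Split an HTTP status string like '404 Not Found' into (code, text).

    None means success; a bare number like '500' yields empty text,
    mirroring the three branches in header() above.
    """
    if status is None:
        return default
    parts = status.split(' ', 1)
    if len(parts) == 1:
        return int(parts[0]), ''
    return int(parts[0]), parts[1]
```

For example, `parse_status('404 Not Found')` returns `(404, 'Not Found')`.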
||||
|
||||
def serve(host, port, callback=None):
|
||||
"""Start an HTTP server on the given port; call 'callback' when the
|
||||
server is ready to serve."""
|
||||
|
||||
class ViewVC_Handler(BaseHTTPServer.BaseHTTPRequestHandler):
|
||||
|
||||
def do_GET(self):
|
||||
"""Serve a GET request."""
|
||||
if not self.path or self.path == "/":
|
||||
self.redirect()
|
||||
elif self.is_viewvc():
|
||||
try:
|
||||
self.run_viewvc()
|
||||
except IOError:
|
||||
# ignore IOError: [Errno 32] Broken pipe
|
||||
pass
|
||||
else:
|
||||
self.send_error(404)
|
||||
|
||||
def do_POST(self):
|
||||
"""Serve a POST request."""
|
||||
if self.is_viewvc():
|
||||
self.run_viewvc()
|
||||
else:
|
||||
self.send_error(501, "Can only POST to %s"
|
||||
% (options.script_alias))
|
||||
|
||||
def is_viewvc(self):
|
||||
"""Check whether self.path is, or is a child of, the ScriptAlias"""
|
||||
if self.path == '/' + options.script_alias:
|
||||
return 1
|
||||
if self.path[:len(options.script_alias)+2] == \
|
||||
'/' + options.script_alias + '/':
|
||||
return 1
|
||||
if self.path[:len(options.script_alias)+2] == \
|
||||
'/' + options.script_alias + '?':
|
||||
return 1
|
||||
return 0
|
||||
|
||||
def redirect(self):
|
||||
"""redirect the browser to the viewvc URL"""
|
||||
new_url = self.server.url + options.script_alias + '/'
|
||||
self.send_response(301, "Moved (redirection follows)")
|
||||
self.send_header("Content-type", "text/html")
|
||||
self.send_header("Location", new_url)
|
||||
self.end_headers()
|
||||
self.wfile.write("""<html>
|
||||
<head>
|
||||
<meta http-equiv="refresh" content="1; URL=%s">
|
||||
</head>
|
||||
<body>
|
||||
<h1>Redirection to <a href="%s">ViewVC</a></h1>
|
||||
Wait a second. You will be automatically redirected to <b>ViewVC</b>.
|
||||
If this doesn't work, please click on the link above.
|
||||
</body>
|
||||
</html>
|
||||
""" % tuple([new_url]*2))
|
||||
|
||||
def run_viewvc(self):
|
||||
"""This is a quick-and-dirty adaptation of Python's standard
|
||||
library module CGIHTTPServer."""
|
||||
scriptname = '/' + options.script_alias
|
||||
assert string.find(self.path, scriptname) == 0
|
||||
viewvc_url = self.server.url[:-1] + scriptname
|
||||
rest = self.path[len(scriptname):]
|
||||
i = string.rfind(rest, '?')
|
||||
if i >= 0:
|
||||
rest, query = rest[:i], rest[i+1:]
|
||||
else:
|
||||
query = ''
|
||||
# sys.stderr.write("Debug: '"+scriptname+"' '"+rest+"' '"+query+"'\n")
|
||||
env = os.environ
|
||||
# Since we're going to modify the env in the parent, provide empty
|
||||
# values to override previously set values
|
||||
for k in env.keys():
|
||||
if k[:5] == 'HTTP_':
|
||||
del env[k]
|
||||
for k in ('QUERY_STRING', 'REMOTE_HOST', 'CONTENT_LENGTH',
|
||||
'HTTP_USER_AGENT', 'HTTP_COOKIE'):
|
||||
if env.has_key(k):
|
||||
env[k] = ""
|
||||
# XXX Much of the following could be prepared ahead of time!
|
||||
env['SERVER_SOFTWARE'] = self.version_string()
|
||||
env['SERVER_NAME'] = self.server.server_name
|
||||
env['GATEWAY_INTERFACE'] = 'CGI/1.1'
|
||||
env['SERVER_PROTOCOL'] = self.protocol_version
|
||||
env['SERVER_PORT'] = str(self.server.server_port)
|
||||
env['REQUEST_METHOD'] = self.command
|
||||
uqrest = urllib.unquote(rest)
|
||||
env['PATH_INFO'] = uqrest
|
||||
env['SCRIPT_NAME'] = scriptname
|
||||
if query:
|
||||
env['QUERY_STRING'] = query
|
||||
env['HTTP_HOST'] = self.server.address[0]
|
||||
host = self.address_string()
|
||||
if host != self.client_address[0]:
|
||||
env['REMOTE_HOST'] = host
|
||||
env['REMOTE_ADDR'] = self.client_address[0]
|
||||
# AUTH_TYPE
|
||||
# REMOTE_USER
|
||||
# REMOTE_IDENT
|
||||
if self.headers.typeheader is None:
|
||||
env['CONTENT_TYPE'] = self.headers.type
|
||||
else:
|
||||
env['CONTENT_TYPE'] = self.headers.typeheader
|
||||
length = self.headers.getheader('content-length')
|
||||
if length:
|
||||
env['CONTENT_LENGTH'] = length
|
||||
accept = []
|
||||
for line in self.headers.getallmatchingheaders('accept'):
|
||||
if line[:1] in string.whitespace:
|
||||
accept.append(string.strip(line))
|
||||
else:
|
||||
accept = accept + string.split(line[7:], ',')
|
||||
env['HTTP_ACCEPT'] = string.joinfields(accept, ',')
|
||||
ua = self.headers.getheader('user-agent')
|
||||
if ua:
|
||||
env['HTTP_USER_AGENT'] = ua
|
||||
modified = self.headers.getheader('if-modified-since')
|
||||
if modified:
|
||||
env['HTTP_IF_MODIFIED_SINCE'] = modified
|
||||
etag = self.headers.getheader('if-none-match')
|
||||
if etag:
|
||||
env['HTTP_IF_NONE_MATCH'] = etag
|
||||
# XXX Other HTTP_* headers
|
||||
decoded_query = string.replace(query, '+', ' ')
|
||||
|
||||
# Preserve state, because we execute script in current process:
|
||||
save_argv = sys.argv
|
||||
save_stdin = sys.stdin
|
||||
save_stdout = sys.stdout
|
||||
save_stderr = sys.stderr
|
||||
# For external tools like enscript we also need to redirect
|
||||
# the real stdout file descriptor. (On windows, reassigning the
|
||||
# sys.stdout variable is sufficient because pipe_cmds makes it
|
||||
# the standard output for child processes.)
|
||||
if sys.platform != "win32": save_realstdout = os.dup(1)
|
||||
try:
|
||||
try:
|
||||
sys.stdout = self.wfile
|
||||
if sys.platform != "win32":
|
||||
os.dup2(self.wfile.fileno(), 1)
|
||||
sys.stdin = self.rfile
|
||||
viewvc.main(StandaloneServer(self), cfg)
|
||||
finally:
|
||||
sys.argv = save_argv
|
||||
sys.stdin = save_stdin
|
||||
sys.stdout.flush()
|
||||
if sys.platform != "win32":
|
||||
os.dup2(save_realstdout, 1)
|
||||
os.close(save_realstdout)
|
||||
sys.stdout = save_stdout
|
||||
sys.stderr = save_stderr
|
||||
except SystemExit, status:
|
||||
self.log_error("ViewVC exit status %s", str(status))
|
||||
else:
|
||||
self.log_error("ViewVC exited ok")
|
||||
|
||||
class ViewVC_Server(BaseHTTPServer.HTTPServer):
|
||||
def __init__(self, host, port, callback):
|
||||
self.address = (host, port)
|
||||
self.url = 'http://%s:%d/' % (host, port)
|
||||
self.callback = callback
|
||||
BaseHTTPServer.HTTPServer.__init__(self, self.address,
|
||||
self.handler)
|
||||
|
||||
def serve_until_quit(self):
|
||||
self.quit = 0
|
||||
while not self.quit:
|
||||
rd, wr, ex = select.select([self.socket.fileno()], [], [], 1)
|
||||
if rd:
|
||||
self.handle_request()
|
||||
|
||||
def server_activate(self):
|
||||
BaseHTTPServer.HTTPServer.server_activate(self)
|
||||
if self.callback:
|
||||
self.callback(self)
|
||||
|
||||
def server_bind(self):
|
||||
# set SO_REUSEADDR (if available on this platform)
|
||||
if hasattr(socket, 'SOL_SOCKET') \
|
||||
and hasattr(socket, 'SO_REUSEADDR'):
|
||||
self.socket.setsockopt(socket.SOL_SOCKET,
|
||||
socket.SO_REUSEADDR, 1)
|
||||
BaseHTTPServer.HTTPServer.server_bind(self)
|
||||
|
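The server_bind() override above sets SO_REUSEADDR so a restarted server can rebind its port immediately instead of waiting out the TIME_WAIT state of the previous socket. A minimal sketch of the same setup on a raw socket (illustrative helper, not ViewVC code):

```python
import socket

def make_listener(host='127.0.0.1', port=0):
    """Create a listening TCP socket with SO_REUSEADDR set when available.

    Guarding on hasattr() mirrors server_bind() above, since not every
    platform exposes the option. port=0 asks the OS for a free port.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if hasattr(socket, 'SOL_SOCKET') and hasattr(socket, 'SO_REUSEADDR'):
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen(5)
    return sock
```

The bound address (including the OS-chosen port) is then available via `sock.getsockname()`.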
||||
ViewVC_Server.handler = ViewVC_Handler
|
||||
|
||||
try:
|
||||
# XXX Move this code out of this function.
|
||||
# Early loading of configuration here. Used to
|
||||
# allow tinkering with some configuration settings:
|
||||
handle_config(options.config_file)
|
||||
if options.repositories:
|
||||
cfg.general.default_root = "Development"
|
||||
for repo_name in options.repositories.keys():
|
||||
repo_path = os.path.normpath(options.repositories[repo_name])
|
||||
if os.path.exists(os.path.join(repo_path, "CVSROOT",
|
||||
"config")):
|
||||
cfg.general.cvs_roots[repo_name] = repo_path
|
||||
elif os.path.exists(os.path.join(repo_path, "format")):
|
||||
cfg.general.svn_roots[repo_name] = repo_path
|
||||
elif cfg.general.cvs_roots.has_key("Development") and \
|
||||
not os.path.isdir(cfg.general.cvs_roots["Development"]):
|
||||
sys.stderr.write("*** No repository found. Please use the -r option.\n")
|
||||
sys.stderr.write(" Use --help for more info.\n")
|
||||
raise KeyboardInterrupt # Hack!
|
||||
os.close(0) # To avoid problems with shell job control
|
||||
|
||||
# always use default docroot location
|
||||
cfg.options.docroot = None
|
||||
|
||||
# if cvsnt isn't found, fall back to rcs
|
||||
if (cfg.conf_path is None and cfg.utilities.cvsnt):
|
||||
import popen
|
||||
cvsnt_works = 0
|
||||
try:
|
||||
fp = popen.popen(cfg.utilities.cvsnt, ['--version'], 'rt')
|
||||
try:
|
||||
while 1:
|
||||
line = fp.readline()
|
||||
if not line: break
|
||||
if string.find(line, "Concurrent Versions System (CVSNT)")>=0:
|
||||
cvsnt_works = 1
|
||||
while fp.read(4096):
|
||||
pass
|
||||
break
|
||||
finally:
|
||||
fp.close()
|
||||
except:
|
||||
pass
|
||||
if not cvsnt_works:
|
||||
cfg.utilities.cvsnt = None
|
||||
|
||||
ViewVC_Server(host, port, callback).serve_until_quit()
|
||||
except (KeyboardInterrupt, select.error):
|
||||
pass
|
||||
print 'server stopped'
|
||||
|
||||
def handle_config(config_file):
|
||||
global cfg
|
||||
cfg = viewvc.load_config(config_file or CONF_PATHNAME)
|
||||
|
||||
# --- graphical interface: --------------------------------------------------
|
||||
|
||||
def nogui(missing_module):
|
||||
sys.stderr.write(
|
||||
"Sorry! Your Python was compiled without the %s module"%missing_module+
|
||||
" enabled.\nI'm unable to run the GUI part. Please omit the '-g'\n"+
|
||||
"and '--gui' options or install another Python interpreter.\n")
|
||||
raise SystemExit, 1
|
||||
|
||||
def gui(host, port):
|
||||
"""Graphical interface (starts web server and pops up a control window)."""
|
||||
class GUI:
|
||||
def __init__(self, window, host, port):
|
||||
self.window = window
|
||||
self.server = None
|
||||
self.scanner = None
|
||||
|
||||
try:
|
||||
import Tkinter
|
||||
except ImportError:
|
||||
nogui("Tkinter")
|
||||
|
||||
self.server_frm = Tkinter.Frame(window)
|
||||
self.title_lbl = Tkinter.Label(self.server_frm,
|
||||
text='Starting server...\n ')
|
||||
self.open_btn = Tkinter.Button(self.server_frm,
|
||||
text='open browser', command=self.open, state='disabled')
|
||||
self.quit_btn = Tkinter.Button(self.server_frm,
|
||||
text='quit serving', command=self.quit, state='disabled')
|
||||
|
||||
|
||||
self.window.title('ViewVC standalone')
|
||||
self.window.protocol('WM_DELETE_WINDOW', self.quit)
|
||||
self.title_lbl.pack(side='top', fill='x')
|
||||
self.open_btn.pack(side='left', fill='x', expand=1)
|
||||
self.quit_btn.pack(side='right', fill='x', expand=1)
|
||||
|
||||
# Early loading of configuration here. Used to
|
||||
# allow tinkering with configuration settings through the gui:
|
||||
handle_config()
|
||||
if not LIBRARY_DIR:
|
||||
cfg.options.cvsgraph_conf = "../cgi/cvsgraph.conf.dist"
|
||||
|
||||
self.options_frm = Tkinter.Frame(window)
|
||||
|
||||
# cvsgraph toggle:
|
||||
self.cvsgraph_ivar = Tkinter.IntVar()
|
||||
self.cvsgraph_ivar.set(cfg.options.use_cvsgraph)
|
||||
self.cvsgraph_toggle = Tkinter.Checkbutton(self.options_frm,
|
||||
text="enable cvsgraph (needs binary)", var=self.cvsgraph_ivar,
|
||||
command=self.toggle_use_cvsgraph)
|
||||
self.cvsgraph_toggle.pack(side='top', anchor='w')
|
||||
|
||||
# enscript toggle:
|
||||
self.enscript_ivar = Tkinter.IntVar()
|
||||
self.enscript_ivar.set(cfg.options.use_enscript)
|
||||
self.enscript_toggle = Tkinter.Checkbutton(self.options_frm,
|
||||
text="enable enscript (needs binary)", var=self.enscript_ivar,
|
||||
command=self.toggle_use_enscript)
|
||||
self.enscript_toggle.pack(side='top', anchor='w')
|
||||
|
||||
# show_subdir_lastmod toggle:
|
||||
self.subdirmod_ivar = Tkinter.IntVar()
|
||||
self.subdirmod_ivar.set(cfg.options.show_subdir_lastmod)
|
||||
self.subdirmod_toggle = Tkinter.Checkbutton(self.options_frm,
|
||||
text="show subdir last mod (dir view)", var=self.subdirmod_ivar,
|
||||
command=self.toggle_subdirmod)
|
||||
self.subdirmod_toggle.pack(side='top', anchor='w')
|
||||
|
||||
# use_re_search toggle:
|
||||
self.useresearch_ivar = Tkinter.IntVar()
|
||||
self.useresearch_ivar.set(cfg.options.use_re_search)
|
||||
self.useresearch_toggle = Tkinter.Checkbutton(self.options_frm,
|
||||
text="allow regular expr search", var=self.useresearch_ivar,
|
||||
command=self.toggle_useresearch)
|
||||
self.useresearch_toggle.pack(side='top', anchor='w')
|
||||
|
||||
# use_localtime toggle:
|
||||
self.use_localtime_ivar = Tkinter.IntVar()
|
||||
self.use_localtime_ivar.set(cfg.options.use_localtime)
|
||||
self.use_localtime_toggle = Tkinter.Checkbutton(self.options_frm,
|
||||
text="use localtime (instead of UTC)",
|
||||
var=self.use_localtime_ivar,
|
||||
command=self.toggle_use_localtime)
|
||||
self.use_localtime_toggle.pack(side='top', anchor='w')
|
||||
|
||||
# use_pagesize integer var:
|
||||
self.usepagesize_lbl = Tkinter.Label(self.options_frm,
|
||||
text='Paging (number of items per page, 0 disables):')
|
||||
        self.usepagesize_lbl.pack(side='top', anchor='w')
        self.use_pagesize_ivar = Tkinter.IntVar()
        self.use_pagesize_ivar.set(cfg.options.use_pagesize)
        self.use_pagesize_entry = Tkinter.Entry(self.options_frm,
            width=10, textvariable=self.use_pagesize_ivar)
        self.use_pagesize_entry.bind('<Return>', self.set_use_pagesize)
        self.use_pagesize_entry.pack(side='top', anchor='w')

        # directory view template:
        self.dirtemplate_lbl = Tkinter.Label(self.options_frm,
            text='Choose HTML Template for the Directory pages:')
        self.dirtemplate_lbl.pack(side='top', anchor='w')
        self.dirtemplate_svar = Tkinter.StringVar()
        self.dirtemplate_svar.set(cfg.templates.directory)
        self.dirtemplate_entry = Tkinter.Entry(self.options_frm,
            width=40, textvariable=self.dirtemplate_svar)
        self.dirtemplate_entry.bind('<Return>', self.set_templates_directory)
        self.dirtemplate_entry.pack(side='top', anchor='w')
        self.templates_dir = Tkinter.Radiobutton(self.options_frm,
            text="directory.ezt", value="templates/directory.ezt",
            var=self.dirtemplate_svar, command=self.set_templates_directory)
        self.templates_dir.pack(side='top', anchor='w')
        self.templates_dir_alt = Tkinter.Radiobutton(self.options_frm,
            text="dir_alternate.ezt", value="templates/dir_alternate.ezt",
            var=self.dirtemplate_svar, command=self.set_templates_directory)
        self.templates_dir_alt.pack(side='top', anchor='w')

        # log view template:
        self.logtemplate_lbl = Tkinter.Label(self.options_frm,
            text='Choose HTML Template for the Log pages:')
        self.logtemplate_lbl.pack(side='top', anchor='w')
        self.logtemplate_svar = Tkinter.StringVar()
        self.logtemplate_svar.set(cfg.templates.log)
        self.logtemplate_entry = Tkinter.Entry(self.options_frm,
            width=40, textvariable=self.logtemplate_svar)
        self.logtemplate_entry.bind('<Return>', self.set_templates_log)
        self.logtemplate_entry.pack(side='top', anchor='w')
        self.templates_log = Tkinter.Radiobutton(self.options_frm,
            text="log.ezt", value="templates/log.ezt",
            var=self.logtemplate_svar, command=self.set_templates_log)
        self.templates_log.pack(side='top', anchor='w')
        self.templates_log_table = Tkinter.Radiobutton(self.options_frm,
            text="log_table.ezt", value="templates/log_table.ezt",
            var=self.logtemplate_svar, command=self.set_templates_log)
        self.templates_log_table.pack(side='top', anchor='w')

        # query view template:
        self.querytemplate_lbl = Tkinter.Label(self.options_frm,
            text='Template for the database query page:')
        self.querytemplate_lbl.pack(side='top', anchor='w')
        self.querytemplate_svar = Tkinter.StringVar()
        self.querytemplate_svar.set(cfg.templates.query)
        self.querytemplate_entry = Tkinter.Entry(self.options_frm,
            width=40, textvariable=self.querytemplate_svar)
        self.querytemplate_entry.bind('<Return>', self.set_templates_query)
        self.querytemplate_entry.pack(side='top', anchor='w')
        self.templates_query = Tkinter.Radiobutton(self.options_frm,
            text="query.ezt", value="templates/query.ezt",
            var=self.querytemplate_svar, command=self.set_templates_query)
        self.templates_query.pack(side='top', anchor='w')

        # pack and set window manager hints:
        self.server_frm.pack(side='top', fill='x')
        self.options_frm.pack(side='top', fill='x')

        self.window.update()
        self.minwidth = self.window.winfo_width()
        self.minheight = self.window.winfo_height()
        self.expanded = 0
        self.window.wm_geometry('%dx%d' % (self.minwidth, self.minheight))
        self.window.wm_minsize(self.minwidth, self.minheight)

        try:
            import threading
        except ImportError:
            nogui("thread")
        threading.Thread(target=serve,
                         args=(host, port, self.ready)).start()

    def toggle_use_cvsgraph(self, event=None):
        cfg.options.use_cvsgraph = self.cvsgraph_ivar.get()

    def toggle_use_enscript(self, event=None):
        cfg.options.use_enscript = self.enscript_ivar.get()

    def toggle_use_localtime(self, event=None):
        cfg.options.use_localtime = self.use_localtime_ivar.get()

    def toggle_subdirmod(self, event=None):
        cfg.options.show_subdir_lastmod = self.subdirmod_ivar.get()

    def toggle_useresearch(self, event=None):
        cfg.options.use_re_search = self.useresearch_ivar.get()

    def set_use_pagesize(self, event=None):
        cfg.options.use_pagesize = self.use_pagesize_ivar.get()

    def set_templates_log(self, event=None):
        cfg.templates.log = self.logtemplate_svar.get()

    def set_templates_directory(self, event=None):
        cfg.templates.directory = self.dirtemplate_svar.get()

    def set_templates_query(self, event=None):
        cfg.templates.query = self.querytemplate_svar.get()

    def ready(self, server):
        """used as callback parameter to the serve() function"""
        self.server = server
        self.title_lbl.config(
            text='ViewVC standalone server at\n' + server.url)
        self.open_btn.config(state='normal')
        self.quit_btn.config(state='normal')

    def open(self, event=None, url=None):
        """opens a browser window on the local machine"""
        url = url or self.server.url
        try:
            import webbrowser
            webbrowser.open(url)
        except ImportError:  # pre-webbrowser.py compatibility
            if sys.platform == 'win32':
                os.system('start "%s"' % url)
            elif sys.platform == 'mac':
                try:
                    import ic
                    ic.launchurl(url)
                except ImportError:
                    pass
            else:
                rc = os.system('netscape -remote "openURL(%s)" &' % url)
                if rc:
                    os.system('netscape "%s" &' % url)

    def quit(self, event=None):
        if self.server:
            self.server.quit = 1
        self.window.quit()

import Tkinter
try:
    gui = GUI(Tkinter.Tk(), host, port)
    Tkinter.mainloop()
except KeyboardInterrupt:
    pass

# --- command-line interface: ----------------------------------------------

def cli(argv):
    """Command-line interface (looks at argv to decide what to do)."""
    import getopt
    class BadUsage(Exception): pass

    try:
        opts, args = getopt.getopt(argv[1:], 'gdc:p:r:h:s:',
                                   ['gui', 'daemon', 'config-file=', 'host=',
                                    'port=', 'repository=', 'script-alias='])
        for opt, val in opts:
            if opt in ('-g', '--gui'):
                options.start_gui = 1
            elif opt in ('-r', '--repository'):
                if options.repositories:  # option may be used more than once:
                    num = len(options.repositories.keys()) + 1
                    symbolic_name = "Repository" + str(num)
                    options.repositories[symbolic_name] = val
                else:
                    options.repositories["Development"] = val
            elif opt in ('-d', '--daemon'):
                options.daemon = 1
            elif opt in ('-p', '--port'):
                try:
                    options.port = int(val)
                except ValueError:
                    raise BadUsage, "Port '%s' is not a valid port number" \
                                    % (val)
            elif opt in ('-h', '--host'):
                options.host = val
            elif opt in ('-s', '--script-alias'):
                options.script_alias = \
                    string.join(filter(None, string.split(val, '/')), '/')
            elif opt in ('-c', '--config-file'):
                options.config_file = val
        if options.start_gui and options.config_file:
            raise BadUsage, "--config-file option is not valid in GUI mode."
        if not options.start_gui and not options.port:
            raise BadUsage, "You must supply a valid port, or run in GUI mode."
        if options.daemon:
            pid = os.fork()
            if pid != 0:
                sys.exit()
        if options.start_gui:
            gui(options.host, options.port, options.config_file)
            return
        elif options.port:
            def ready(server):
                print 'server ready at %s%s' % (server.url,
                                                options.script_alias)
            serve(options.host, options.port, ready)
            return
    except (getopt.error, BadUsage), err:
        cmd = os.path.basename(sys.argv[0])
        port = options.port
        host = options.host
        script_alias = options.script_alias
        if str(err):
            sys.stderr.write("ERROR: %s\n\n" % (str(err)))
        sys.stderr.write("""Usage: %(cmd)s [OPTIONS]

Run a simple, standalone HTTP server configured to serve up ViewVC
requests.

Options:

  --config-file=PATH (-c)  Use the file at PATH as the ViewVC configuration
                           file.  If not specified, ViewVC will try to use
                           the configuration file in its installation tree;
                           otherwise, built-in default values are used.
                           (Not valid in GUI mode.)

  --daemon (-d)            Background the server process.

  --host=HOST (-h)         Start the server listening on HOST.  You need
                           to provide the hostname if you want to
                           access the standalone server from a remote
                           machine.  [default: %(host)s]

  --port=PORT (-p)         Start the server on the given PORT.
                           [default: %(port)d]

  --repository=PATH (-r)   Serve up the Subversion or CVS repository located
                           at PATH.  This option may be used more than once.

  --script-alias=PATH (-s) Specify the ScriptAlias, the artificial path
                           location at which ViewVC appears to be
                           located.  For example, if your ScriptAlias is
                           "cgi-bin/viewvc", then ViewVC will be accessible
                           at "http://%(host)s:%(port)s/cgi-bin/viewvc".
                           [default: %(script_alias)s]

  --gui (-g)               Pop up a graphical interface for serving and
                           testing ViewVC.  NOTE: this requires a valid
                           X11 display connection.
""" % locals())

if __name__ == '__main__':
    options = Options()
    cli(sys.argv)
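The `-r`/`--repository` handling in `cli()` above gives repeated repositories generated symbolic names. A standalone sketch of just that naming scheme (`add_repository` is a hypothetical helper, not part of the script):

```python
def add_repository(repositories, path):
    # First repository gets the name "Development"; later ones are
    # numbered "Repository2", "Repository3", and so on, mirroring the
    # -r option handling in cli() above.
    if repositories:
        num = len(repositories) + 1
        repositories["Repository" + str(num)] = path
    else:
        repositories["Development"] = path

repos = {}
add_repository(repos, '/svn/one')
add_repository(repos, '/svn/two')
print(sorted(repos.keys()))  # ['Development', 'Repository2']
```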
@ -0,0 +1,360 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 2004-2007 James Henstridge
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# administrative program for loading Subversion revision information
# into the checkin database.  It can be used to add a single revision
# to the database, or rebuild/update all revisions.
#
# To add all the checkins from a Subversion repository to the checkin
# database, run the following:
#    /path/to/svndbadmin rebuild /path/to/repo
#
# This script can also be called from the Subversion post-commit hook,
# something like this:
#    REPOS="$1"
#    REV="$2"
#    /path/to/svndbadmin update "$REPOS" "$REV"
#
# If you allow changes to revision properties in your repository, you
# might also want to set up something similar in the
# post-revprop-change hook using "update" with the --force option to
# keep the checkin database consistent with the repository.
#
# -----------------------------------------------------------------------
#

#########################################################################
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process.  During
# development, they will remain None.
#

LIBRARY_DIR = None
CONF_PATHNAME = None

# Adjust sys.path to include our library directory
import sys
import os

if LIBRARY_DIR:
    sys.path.insert(0, LIBRARY_DIR)
else:
    sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0], "../../lib")))

#########################################################################

import string
import re

import svn.core
import svn.repos
import svn.fs
import svn.delta

import cvsdb
import viewvc
import vclib

class SvnRepo:
    """Class used to manage a connection to a SVN repository."""
    def __init__(self, path):
        self.path = path
        self.repo = svn.repos.svn_repos_open(path)
        self.fs = svn.repos.svn_repos_fs(self.repo)
        self.rev_max = svn.fs.youngest_rev(self.fs)

    def __getitem__(self, rev):
        if rev is None:
            rev = self.rev_max
        elif rev < 0:
            rev = rev + self.rev_max + 1
        assert 0 <= rev <= self.rev_max
        rev = SvnRev(self, rev)
        return rev

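`SvnRepo.__getitem__` above resolves `None` to the youngest revision and treats negative numbers like Python sequence indices. A minimal sketch of just that resolution step (`resolve_rev` is a hypothetical name, not part of the script):

```python
def resolve_rev(rev, rev_max):
    # None means "the youngest revision"; a negative revision counts
    # back from the youngest, like Python's negative list indices.
    if rev is None:
        rev = rev_max
    elif rev < 0:
        rev = rev + rev_max + 1
    assert 0 <= rev <= rev_max
    return rev

print(resolve_rev(None, 100))  # 100
print(resolve_rev(-1, 100))    # 100
print(resolve_rev(-3, 100))    # 98
```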
_re_diff_change_command = re.compile(r'(\d+)(?:,(\d+))?([acd])(\d+)(?:,(\d+))?')

def _get_diff_counts(diff_fp):
    """Calculate the plus/minus counts by parsing the output of a
    normal diff.  The reasons for choosing normal diff format are:
      - the output is short, so should be quicker to parse.
      - only the change commands need be parsed to calculate the counts.
      - all file data is prefixed, so won't be mistaken for a change
        command.
    This code is based on the description of the format found in the
    GNU diff manual."""

    plus, minus = 0, 0
    line = diff_fp.readline()
    while line:
        match = re.match(_re_diff_change_command, line)
        if match:
            # size of first range
            if match.group(2):
                count1 = int(match.group(2)) - int(match.group(1)) + 1
            else:
                count1 = 1
            cmd = match.group(3)
            # size of second range
            if match.group(5):
                count2 = int(match.group(5)) - int(match.group(4)) + 1
            else:
                count2 = 1

            if cmd == 'a':
                # LaR - insert after line L of file1 range R of file2
                plus = plus + count2
            elif cmd == 'c':
                # FcT - replace range F of file1 with range T of file2
                minus = minus + count1
                plus = plus + count2
            elif cmd == 'd':
                # RdL - remove range R of file1, which would have been
                # at line L of file2
                minus = minus + count1
        line = diff_fp.readline()
    return plus, minus

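The counting logic above can be exercised against a literal normal-diff fragment. `count_changes` below is a self-contained sketch using the same change-command regex, taking a string instead of the FileDiff pipe the script actually reads:

```python
import re

# Same change-command pattern as _re_diff_change_command above.
_change_cmd = re.compile(r'(\d+)(?:,(\d+))?([acd])(\d+)(?:,(\d+))?')

def count_changes(diff_text):
    plus = minus = 0
    for line in diff_text.splitlines():
        m = _change_cmd.match(line)
        if not m:
            continue  # '<'/'>' content lines never match
        count1 = int(m.group(2)) - int(m.group(1)) + 1 if m.group(2) else 1
        count2 = int(m.group(5)) - int(m.group(4)) + 1 if m.group(5) else 1
        cmd = m.group(3)
        if cmd == 'a':        # LaR: lines added from file2
            plus += count2
        elif cmd == 'c':      # FcT: both ranges counted
            minus += count1
            plus += count2
        else:                 # RdL: lines removed from file1
            minus += count1
    return plus, minus

sample = """3a4,5
> new line one
> new line two
7d7
< removed line
"""
print(count_changes(sample))  # (2, 1)
```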
class SvnRev:
    """Class used to hold information about a particular revision of
    the repository."""
    def __init__(self, repo, rev):
        self.repo = repo
        self.rev = rev
        self.rev_roots = {}  # cache of revision roots

        # revision properties ...
        revprops = svn.fs.revision_proplist(repo.fs, rev)
        self.author = str(revprops.get(svn.core.SVN_PROP_REVISION_AUTHOR, ''))
        self.date = str(revprops.get(svn.core.SVN_PROP_REVISION_DATE, ''))
        self.log = str(revprops.get(svn.core.SVN_PROP_REVISION_LOG, ''))

        # convert the date string to seconds since epoch ...
        try:
            self.date = svn.core.svn_time_from_cstring(self.date) / 1000000
        except:
            self.date = None

        # get a root for the current revision
        fsroot = self._get_root_for_rev(rev)

        # find changes in the revision
        editor = svn.repos.RevisionChangeCollector(repo.fs, rev)
        e_ptr, e_baton = svn.delta.make_editor(editor)
        svn.repos.svn_repos_replay(fsroot, e_ptr, e_baton)

        self.changes = []
        for path, change in editor.changes.items():
            # skip non-file changes
            if change.item_kind != svn.core.svn_node_file:
                continue

            # deal with the change types we handle
            base_root = None
            if change.base_path:
                base_root = self._get_root_for_rev(change.base_rev)

            if not change.path:
                action = 'remove'
            elif change.added:
                action = 'add'
            else:
                action = 'change'

            diffobj = svn.fs.FileDiff(base_root and base_root or None,
                                      base_root and change.base_path or None,
                                      change.path and fsroot or None,
                                      change.path and change.path or None)
            diff_fp = diffobj.get_pipe()
            plus, minus = _get_diff_counts(diff_fp)
            self.changes.append((path, action, plus, minus))

    def _get_root_for_rev(self, rev):
        """Fetch a revision root from a cache of such, or a fresh root
        (which is then cached for later use)."""
        if not self.rev_roots.has_key(rev):
            self.rev_roots[rev] = svn.fs.revision_root(self.repo.fs, rev)
        return self.rev_roots[rev]

def handle_revision(db, command, repo, rev, verbose, force=0):
    """Adds a particular revision of the repository to the checkin database."""
    revision = repo[rev]
    committed = 0

    if verbose: print "Building commit info for revision %d..." % (rev),

    if not revision.changes:
        if verbose: print "skipped (no changes)."
        return

    for (path, action, plus, minus) in revision.changes:
        directory, file = os.path.split(path)
        commit = cvsdb.CreateCommit()
        commit.SetRepository(repo.path)
        commit.SetDirectory(directory)
        commit.SetFile(file)
        commit.SetRevision(str(rev))
        commit.SetAuthor(revision.author)
        commit.SetDescription(revision.log)
        commit.SetTime(revision.date)
        commit.SetPlusCount(plus)
        commit.SetMinusCount(minus)
        commit.SetBranch(None)

        if action == 'add':
            commit.SetTypeAdd()
        elif action == 'remove':
            commit.SetTypeRemove()
        elif action == 'change':
            commit.SetTypeChange()

        if command == 'update':
            result = db.CheckCommit(commit)
            if result and not force:
                continue  # already recorded

        # commit to database
        db.AddCommit(commit)
        committed = 1

    if verbose:
        if committed:
            print "done."
        else:
            print "skipped (already recorded)."

def main(command, repository, revs=[], verbose=0, force=0):
    cfg = viewvc.load_config(CONF_PATHNAME)
    db = cvsdb.ConnectDatabase(cfg)

    if command in ('rebuild', 'purge'):
        if verbose:
            print "Purging commit info for repository root `%s'" % repository
        db.PurgeRepository(repository)

    repo = SvnRepo(repository)
    if command == 'rebuild' or (command == 'update' and not revs):
        for rev in range(repo.rev_max + 1):
            handle_revision(db, command, repo, rev, verbose)
    elif command == 'update':
        if revs[0] is None:
            revs[0] = repo.rev_max
        if revs[1] is None:
            revs[1] = repo.rev_max
        revs.sort()
        for rev in range(revs[0], revs[1] + 1):
            handle_revision(db, command, repo, rev, verbose, force)

def _rev2int(r):
    if r == 'HEAD':
        r = None
    else:
        r = int(r)
        if r < 0:
            raise ValueError, "invalid revision '%d'" % (r)
    return r

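The `update` command's `REV:REV2` argument handling above ("HEAD" maps to the youngest revision, a single revision becomes a one-element range, and the range is walked in ascending order) can be sketched in one self-contained function. `normalize_revs` is a hypothetical helper and the youngest-revision value is assumed for illustration:

```python
def normalize_revs(spec, youngest):
    # Parse "REV", "REV:REV2" or "HEAD" into an ascending revision range.
    revs = []
    for part in spec.split(':'):
        if part == 'HEAD':
            revs.append(youngest)
        else:
            r = int(part)
            if r < 0:
                raise ValueError("invalid revision '%d'" % r)
            revs.append(r)
    if len(revs) > 2:
        raise ValueError("too many revisions in range")
    if len(revs) == 1:
        revs.append(revs[0])   # single revision -> degenerate range
    revs.sort()                # always process in ascending order
    return range(revs[0], revs[1] + 1)

print(list(normalize_revs('12:HEAD', 15)))  # [12, 13, 14, 15]
print(list(normalize_revs('7', 15)))        # [7]
```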
def usage():
    cmd = os.path.basename(sys.argv[0])
    sys.stderr.write(
        """Administer the ViewVC checkins database data for the Subversion repository
located at REPOS-PATH.

Usage: 1. %s [-v] rebuild REPOS-PATH
       2. %s [-v] update REPOS-PATH [REV[:REV2]] [--force]
       3. %s [-v] purge REPOS-PATH

1.  Rebuild the commit database information for the repository located
    at REPOS-PATH across all revisions, after first purging
    information specific to that repository (if any).

2.  Update the commit database information for the repository located
    at REPOS-PATH across all revisions or, optionally, only for the
    specified revision REV (or revision range REV:REV2).  This is just
    like rebuilding, except that, unless --force is specified, no
    commit information will be stored for commits already present in
    the database.  If a range is specified, the revisions will be
    processed in ascending order, and you may specify "HEAD" to
    indicate "the youngest revision currently in the repository".

3.  Purge information specific to the repository located at REPOS-PATH
    from the database.

Use the -v flag to cause this script to give progress information as it works.

""" % (cmd, cmd, cmd))
    sys.exit(1)

if __name__ == '__main__':
    verbose = 0
    force = 0
    args = sys.argv
    try:
        index = args.index('-v')
        verbose = 1
        del args[index]
    except ValueError:
        pass
    try:
        index = args.index('--force')
        force = 1
        del args[index]
    except ValueError:
        pass

    if len(args) < 3:
        usage()

    command = string.lower(args[1])
    if command not in ('rebuild', 'update', 'purge'):
        sys.stderr.write('ERROR: unknown command %s\n' % command)
        usage()

    repository = args[2]
    if not os.path.exists(repository):
        sys.stderr.write('ERROR: could not find repository %s\n' % args[2])
        usage()
    repository = vclib.svn.canonicalize_rootpath(repository)

    revs = []
    if len(sys.argv) > 3:
        if command == 'rebuild':
            sys.stderr.write('ERROR: rebuild no longer accepts a revision '
                             'number argument.  Use update --force.\n')
            usage()
        elif command != 'update':
            usage()
        try:
            revs = map(lambda x: _rev2int(x), sys.argv[3].split(':'))
            if len(revs) > 2:
                raise ValueError, "too many revisions in range"
            if len(revs) == 1:
                revs.append(revs[0])
        except ValueError:
            sys.stderr.write('ERROR: invalid revision specification "%s"\n' \
                             % sys.argv[3])
            usage()

    try:
        repository = cvsdb.CleanRepository(os.path.abspath(repository))
        main(command, repository, revs, verbose, force)
    except KeyboardInterrupt:
        print
        print '** break **'
        sys.exit(0)
@ -0,0 +1,394 @@
# CvsGraph configuration
#
# - Empty lines and whitespace are ignored.
#
# - Comments start with '#' and everything until
#   end of line is ignored.
#
# - Strings are C-style strings in which characters
#   may be escaped with '\' and written in octal
#   and hex escapes. Note that '\' must be escaped
#   if it is to be entered as a character.
#
# - Some strings are expanded with printf like
#   conversions which start with '%'. Not all
#   are applicable at all times, in which case they
#   will expand to nothing.
#     %c = cvsroot (with trailing '/')
#     %C = cvsroot (*without* trailing '/')
#     %m = module (with trailing '/')
#     %M = module (*without* trailing '/')
#     %f = filename without path
#     %F = filename without path and with ",v" stripped
#     %p = path part of filename (with trailing '/')
#     %r = number of revisions
#     %b = number of branches
#     %% = '%'
#     %R = the revision number (e.g. '1.2.4.4')
#     %P = previous revision number
#     %B = the branch number (e.g. '1.2.4')
#     %d = date of revision
#     %a = author of revision
#     %s = state of revision
#     %t = current tag of branch or revision
#     %0..%9 = command-line argument -0 .. -9
#     %l = HTMLized log entry of the revision
#          NOTE: %l is obsolete. See %(%) and cvsgraph.conf(5) for
#          more details.
#     %L = log entry of revision
#          The log entry expansion takes an optional argument to
#          specify maximum length of the expansion like %L[25].
#     %(...%) = HTMLize the string within the parenthesis.
#   ViewVC currently uses the following four command-line arguments to
#   pass URL information to cvsgraph:
#     -3 link to current file's log page
#     -4 link to current file's checkout page minus "rev" parameter
#     -5 link to current file's diff page minus "r1" and "r2" parameters
#     -6 link to current directory page minus "pathrev" parameter
#
# - Numbers may be entered as octal, decimal or
#   hex as in 0117, 79 and 0x4f respectively.
#
# - Fonts are numbered 0..4 (defined as in libgd)
#     0 = tiny
#     1 = small
#     2 = medium (bold)
#     3 = large
#     4 = giant
#
# - Colors are a string like HTML type colors in
#   the form "#rrggbb" with parts written in hex
#     rr = red   (00..ff)
#     gg = green (00..ff)
#     bb = blue  (00..ff)
#
# - There are several reserved words besides the
#   feature-keywords. These additional reserved words
#   expand to numerical values:
#     * false  = 0
#     * true   = 1
#     * not    = -1
#     * left   = 0
#     * center = 1
#     * right  = 2
#     * gif    = 0
#     * png    = 1
#     * jpeg   = 2
#     * tiny   = 0
#     * small  = 1
#     * medium = 2
#     * large  = 3
#     * giant  = 4
#
# - Booleans have three possible arguments: true, false
#   and not. 'Not' means the inverse of what it was (logical
#   negation) and is represented by the value -1.
#   For the configuration file that means that the default
#   value is negated.
#
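The printf-like '%' expansion described above can be illustrated with a small sketch. `expand` below is a hypothetical helper covering only a few of the conversions (%f, %r, %b, %%); cvsgraph itself implements the full set in C:

```python
def expand(fmt, filename='', revisions=0, branches=0):
    out, i = [], 0
    while i < len(fmt):
        ch = fmt[i]
        if ch == '%' and i + 1 < len(fmt):
            conv = fmt[i + 1]
            if conv == 'f':
                out.append(filename)        # %f = filename without path
            elif conv == 'r':
                out.append(str(revisions))  # %r = number of revisions
            elif conv == 'b':
                out.append(str(branches))   # %b = number of branches
            elif conv == '%':
                out.append('%')             # %% = literal '%'
            # inapplicable/unknown conversions expand to nothing
            i += 2
        else:
            out.append(ch)
            i += 1
    return ''.join(out)

print(expand('%f\nRevisions: %r, Branches: %b', 'main.c,v', 42, 3))
# main.c,v
# Revisions: 42, Branches: 3
```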
# cvsroot <string>
#     The *absolute* base directory where the
#     CVS/RCS repository can be found
# cvsmodule <string>
#
cvsroot = "--unused--";   # unused with ViewVC, will be overridden
cvsmodule = "";           # unused with ViewVC -- please leave it blank

# color_bg <color>
#     The background color of the image
# transparent_bg <boolean>
#     Make color_bg the transparent color (only useful with PNG)
color_bg = "#ffffff";
transparent_bg = false;

# date_format <string>
#     The strftime(3) format string for date and time
date_format = "%d-%b-%Y %H:%M:%S";

# box_shadow <boolean>
#     Add a shadow around the boxes
# upside_down <boolean>
#     Reverse the order of the revisions
# left_right <boolean>
#     Draw the image left to right instead of top down,
#     or right to left if upside_down is set simultaneously.
# strip_untagged <boolean>
#     Remove all untagged revisions except the first, last and tagged ones
# strip_first_rev <boolean>
#     Also remove the first revision if untagged
# auto_stretch <boolean>
#     Try to reformat the tree to minimize image size
# use_ttf <boolean>
#     Use TrueType fonts for text
# anti_alias <boolean>
#     Enable pretty TrueType anti-alias drawing
# thick_lines <number>
#     Draw all connector lines thicker (range: 1..11)
box_shadow = true;
upside_down = false;
left_right = false;
strip_untagged = false;
strip_first_rev = false;
#auto_stretch = true;   # not yet stable
use_ttf = false;
anti_alias = true;
thick_lines = 1;

# msg_color <color>
#     Sets the error/warning message color
# msg_font <number>
# msg_ttfont <string>
# msg_ttsize <float>
#     Sets the error/warning message font
msg_color = "#800000";
msg_font = medium;
msg_ttfont = "/dos/windows/fonts/ariali.ttf";
msg_ttsize = 11.0;

# parse_logs <boolean>
#     Enable the parsing of the *entire* ,v file to read the
#     log-entries between revisions. This is necessary for
#     the %L expansion to work, but slows down parsing by
#     a very large factor. You're warned.
parse_logs = false;

# tag_font <number>
#     The font of the tag text
# tag_color <color>
#     The color of the tag text
# tag_ignore <string>
#     An extended regular expression to exclude certain tags from view.
#     See regex(7) for details on the format.
#     Note 1: tags matched in merge_from/merge_to are always displayed unless
#             tag_ignore_merge is set to true.
#     Note 2: normal string rules apply and special characters must be
#             escaped.
# tag_ignore_merge <boolean>
#     If set to true, allows tag_ignore to also hide merge_from and merge_to
#     tags.
# tag_nocase <boolean>
#     Ignore the case in tag_ignore expressions
# tag_negate <boolean>
#     Negate the matching criteria of tag_ignore. When true, only matching
#     tags will be shown.
#     Note: tags matched with merge_from/merge_to will still be displayed.
tag_font = medium;
#tag_ttfont = "/dos/windows/fonts/ariali.ttf";
#tag_ttsize = 11.0;
tag_color = "#007000";
#tag_ignore = "(test|alpha)_release";
#tag_ignore_merge = false;
#tag_nocase = false;
#tag_negate = false;

# rev_hidenumber <boolean>
#     If set to true no revision numbers will be printed in the graph.
#rev_hidenumber = false;
rev_font = giant;
#rev_ttfont = "/dos/windows/fonts/arial.ttf";
#rev_ttsize = 12.0;
rev_color = "#000000";
rev_bgcolor = "#f0f0f0";
rev_separator = 1;
rev_minline = 15;
rev_maxline = 75;
rev_lspace = 5;
rev_rspace = 5;
rev_tspace = 3;
rev_bspace = 3;
rev_text = "%d";   # or "%d\n%a, %s" for author and state too
rev_text_font = tiny;
#rev_text_ttfont = "/dos/windows/fonts/times.ttf";
#rev_text_ttsize = 9.0;
rev_text_color = "#500020";
rev_maxtags = 25;

# merge_color <color>
#     The color of the line connecting merges
# merge_front <boolean>
#     If true, draw the merge-lines on top of the image
# merge_nocase <boolean>
#     Ignore case in regular expressions
# merge_from <string>
#     A regex describing a tag that is used as the merge source
# merge_to <string>
#     A regex describing a tag that is the target of the merge
# merge_findall <boolean>
#     Try to match all merge_to targets possible. This can result in
#     multiple lines originating from one tag.
# merge_arrows <boolean>
#     Use arrows to point to the merge destination. Default is true.
# merge_cvsnt <boolean>
#     Use CVSNT's mergepoint registration for merges
# merge_cvsnt_color <color>
#     The color of the line connecting merges from/to registered
#     mergepoints.
# arrow_width <number>
# arrow_length <number>
#     Specify the size of the arrows. Default is 3 wide and 12 long.
#
# NOTE:
# - The merge_from is an extended regular expression as described in
#   regex(7) and POSIX 1003.2 (see also Single Unix Specification at
#   http://www.opengroup.com).
# - The merge_to is an extended regular expression with a twist. All
#   subexpressions from the merge_from are expanded into merge_to
#   using %[1-9] (in contrast to \[1-9] for backreferences). Care is
#   taken to escape the constructed expression.
# - A '$' at the end of the merge_to expression can be important to
#   prevent 'near match' references. Normally, you want the destination
#   to be a good representation of the source. However, this depends
#   on how well you defined the tags in the first place.
#
# Example:
#     merge_from = "^f_(.*)";
#     merge_to = "^t_%1$";
#     tags: f_foo, f_bar, f_foobar, t_foo, t_bar
#     result:
#        f_foo   -> "^t_foo$"    -> t_foo
#        f_bar   -> "^t_bar$"    -> t_bar
#        f_foobar-> "^t_foobar$" -> <no match>
#
merge_color = "#a000a0";
merge_front = false;
merge_nocase = false;
merge_from = "^f_(.*)";
merge_to = "^t_%1$";
merge_findall = false;

#merge_arrows = true;
#arrow_width = 3;
#arrow_length = 12;

merge_cvsnt = true;
merge_cvsnt_color = "#606000";
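The merge_from/merge_to expansion in the example above can be demonstrated with a short sketch. `merge_target` is a hypothetical helper that mimics the %1..%9 substitution (Python `re` standing in for the POSIX extended regexes cvsgraph actually uses):

```python
import re

def merge_target(tag, merge_from, merge_to, tags):
    m = re.match(merge_from, tag)
    if not m:
        return None
    # substitute %1..%9 with the captured (and re-escaped) groups,
    # as cvsgraph expands merge_from subexpressions into merge_to
    expanded = merge_to
    for i in range(1, 10):
        token = '%' + str(i)
        if token in expanded:
            expanded = expanded.replace(token, re.escape(m.group(i)))
    for candidate in tags:
        if re.match(expanded, candidate):
            return candidate
    return None

tags = ['f_foo', 'f_bar', 'f_foobar', 't_foo', 't_bar']
print(merge_target('f_foo', '^f_(.*)', '^t_%1$', tags))     # t_foo
print(merge_target('f_foobar', '^f_(.*)', '^t_%1$', tags))  # None
```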

# branch_font <number>
#     The font of the number and tags
# branch_color <color>
#     All branch element's color
# branch_[lrtb]space <number>
#     Interior spacing (margin)
# branch_margin <number>
#     Exterior spacing
# branch_connect <number>
#     Length of the vertical connector
# branch_dupbox <boolean>
#     Add the branch-tag also at the bottom/top of the trunk
# branch_fold <boolean>
#     Fold empty branches in one box to save space
# branch_foldall <boolean>
#     Put all empty branches in one box, even if they
#     were interspaced with branches with revisions.
# branch_resort <boolean>
#     Resort the branches by the number of revisions to save space
# branch_subtree <string>
#     Only show the branch denoted or all branches that sprout
#     from the denoted revision. The argument may be a symbolic
#     tag. This option you would normally want to set from the
#     command line with the -O option.
branch_font = medium;
#branch_ttfont = "/dos/windows/fonts/arialbd.ttf";
#branch_ttsize = 18.0;
branch_tag_color = "#000080";
branch_tag_font = medium;
#branch_tag_ttfont = "/dos/windows/fonts/arialbi.ttf";
#branch_tag_ttsize = 14.0;
branch_color = "#0000c0";
branch_bgcolor = "#ffffc0";
branch_lspace = 5;
branch_rspace = 5;
branch_tspace = 3;
branch_bspace = 3;
branch_margin = 15;
branch_connect = 8;
branch_dupbox = false;
branch_fold = true;
branch_foldall = false;
branch_resort = false;
#branch_subtree = "1.2.4";

# title <string>
|
||||
# The title string is expanded (see above for details)
|
||||
# title_[xy] <number>
|
||||
# Position of title
|
||||
# title_font <number>
|
||||
# The font
|
||||
# title_align <number>
|
||||
# 0 = left
|
||||
# 1 = center
|
||||
# 2 = right
|
||||
# title_color <color>
|
||||
title = "%c%p%f\nRevisions: %r, Branches: %b";
|
||||
title_x = 10;
|
||||
title_y = 5;
|
||||
title_font = small;
|
||||
#title_ttfont = "/dos/windows/fonts/times.ttf";
|
||||
#title_ttsize = 10.0;
|
||||
title_align = left;
|
||||
title_color = "#800000";
|
||||
|
||||
# Margins of the image
|
||||
# Note: the title is outside the margin
|
||||
margin_top = 35;
|
||||
margin_bottom = 10;
|
||||
margin_left = 10;
|
||||
margin_right = 10;
|
||||
|
||||
# Image format(s)
|
||||
# image_type <number|{gif,jpeg,png}>
|
||||
# gif (0) = Create gif image
|
||||
# png (1) = Create png image
|
||||
# jpeg (2) = Create jpeg image
|
||||
# Image types are available if they can be found in
|
||||
# the gd library. Newer versions of gd do not have
|
||||
# gif anymore. CvsGraph will automatically generate
|
||||
# png images instead.
|
||||
# image_quality <number>
|
||||
# The quality of a jpeg image (1..100)
|
||||
# image_compress <number>
|
||||
# Set the compression of a PNG image (gd version >= 2.0.12).
|
||||
# Values range from -1 to 9 where:
|
||||
# - -1 default compression (usually 3)
|
||||
# - 0 no compression
|
||||
# - 1 lowest level compression
|
||||
# - ... ...
|
||||
# - 9 highest level of compression
|
||||
# image_interlace <boolean>
|
||||
# Write interlaces PNG/JPEG images for progressive loading.
|
||||
image_type = png;
|
||||
image_quality = 75;
|
||||
image_compress = 3;
|
||||
image_interlace = true;
|
||||
|
||||
# HTML image map generation
|
||||
# map_name <string>
|
||||
# The name= attribute in <map name="mapname">...</map>
|
||||
# map_branch_href <string>
|
||||
# map_branch_alt <string>
|
||||
# map_rev_href <string>
|
||||
# map_rev_alt <string>
|
||||
# map_diff_href <string>
|
||||
# map_diff_alt <string>
|
||||
# map_merge_href <string>
|
||||
# map_merge_alt <string>
|
||||
# These are the href= and alt= attributes in the <area>
|
||||
# tags of HTML. The strings are expanded (see above).
|
||||
map_name = "MyMapName\" name=\"MyMapName";
|
||||
map_branch_href = "href=\"%6pathrev=%(%t%)\"";
|
||||
map_branch_alt = "alt=\"%0 %(%t%) (%B)\"";
|
||||
# You might want to experiment with the following setting:
|
||||
# 1. The default setting will take you to a ViewVC generated page displaying
|
||||
# that revision of the file, if you click into a revision box:
|
||||
map_rev_href = "href=\"%4rev=%R\"";
|
||||
# 2. This alternative setting will take you to the anchor representing this
|
||||
# revision on a ViewVC generated Log page for that file:
|
||||
# map_rev_href = "href=\"%3#rev%R\"";
|
||||
#
|
||||
map_rev_alt = "alt=\"%1 %(%t%) (%R)\"";
|
||||
map_diff_href = "href=\"%5r1=%P&r2=%R\"";
|
||||
map_diff_alt = "alt=\"%2 %P <-> %R\"";
|
||||
map_merge_href = "href=\"%5r1=%P&r2=%R\"";
|
||||
map_merge_alt = "alt=\"%2 %P <-> %R\"";
|
||||
|
|
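The merge_from/merge_to pairing above works by capturing groups from a "from" tag and substituting them into the "to" pattern, where %1 stands for the first captured group. A minimal Python 3 sketch of that pairing rule, under stated assumptions: the function name and the %N-to-group substitution loop below are illustrative, not CvsGraph's actual implementation.

```python
import re

def merge_target(tag, merge_from=r"^f_(.*)", merge_to=r"^t_%1$"):
    """Return the regex a 'from' tag pairs with, or None if it doesn't match.

    CvsGraph expands %1..%9 in merge_to with the groups captured by
    merge_from; here we mimic that by substituting each captured group.
    """
    m = re.match(merge_from, tag)
    if m is None:
        return None
    target = merge_to
    for i, group in enumerate(m.groups(), start=1):
        target = target.replace('%%%d' % i, group)
    return target

# f_foo pairs with any tag matching "^t_foo$"; non-matching tags pair with nothing.
print(merge_target("f_foo"))   # -> ^t_foo$
print(merge_target("other"))   # -> None
```

As the comment table above notes, f_foobar still produces the pattern "^t_foobar$"; it simply finds no matching tag in that example.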
@ -0,0 +1,170 @@
"""Module to analyze Python source code; for syntax coloring tools.

Interface:

    tags = fontify(pytext, searchfrom, searchto)

The PYTEXT argument is a string containing Python source code.  The
(optional) arguments SEARCHFROM and SEARCHTO may contain a slice in
PYTEXT.

The returned value is a list of tuples, formatted like this:

    [('keyword', 0, 6, None),
     ('keyword', 11, 17, None),
     ('comment', 23, 53, None),
     ...
    ]

The tuple contents are always like this:

    (tag, startindex, endindex, sublist)

TAG is one of 'keyword', 'string', 'comment' or 'identifier'.
SUBLIST is not used, hence always None.
"""

# Based on FontText.py by Mitchell S. Chapman,
# which was modified by Zachary Roadhouse,
# then un-Tk'd by Just van Rossum.
# Many thanks for regular expression debugging & authoring are due to:
#    Tim (the-incredib-ly y'rs) Peters and Cristian Tismer
# So, who owns the copyright? ;-)  How about this:
#    Copyright 1996-1997:
#        Mitchell S. Chapman,
#        Zachary Roadhouse,
#        Tim Peters,
#        Just van Rossum

__version__ = "0.3.1"

import string, re


# This list of keywords is taken from ref/node13.html of the
# Python 1.3 HTML documentation.  ("access" is intentionally omitted.)
keywordsList = ["and", "assert", "break", "class", "continue", "def",
                "del", "elif", "else", "except", "exec", "finally",
                "for", "from", "global", "if", "import", "in", "is",
                "lambda", "not", "or", "pass", "print", "raise",
                "return", "try", "while",
               ]

# A regexp for matching Python comments.
commentPat = "#.*"

# A regexp for matching simple quoted strings.
pat = "q[^q\\n]*(\\[\000-\377][^q\\n]*)*q"
quotePat = string.replace(pat, "q", "'") + "|" + string.replace(pat, 'q', '"')

# A regexp for matching multi-line triple-quoted strings.  (Way to go, Tim!)
pat = """
      qqq
      [^q]*
      (
         (  \\[\000-\377]
          | q
            (  \\[\000-\377]
             | [^q]
             | q
               (  \\[\000-\377]
                | [^q]
               )
            )
         )
         [^q]*
      )*
      qqq
"""
pat = string.join(string.split(pat), '')   # get rid of whitespace
tripleQuotePat = string.replace(pat, "q", "'") + "|" \
                 + string.replace(pat, 'q', '"')

# A regexp which matches all and only Python keywords.  This will let
# us skip the uninteresting identifier references.
nonKeyPat = "(^|[^a-zA-Z0-9_.\"'])"   # legal keyword-preceding characters
keyPat = nonKeyPat + "(" + string.join(keywordsList, "|") + ")" + nonKeyPat

# Our final syntax-matching regexp is the concatenation of the regexps we
# constructed above.
syntaxPat = keyPat + \
            "|" + commentPat + \
            "|" + tripleQuotePat + \
            "|" + quotePat
syntaxRE = re.compile(syntaxPat)

# Finally, we construct a regexp for matching identifiers (with
# optional leading whitespace).
idKeyPat = "[ \t]*[A-Za-z_][A-Za-z_0-9.]*"
idRE = re.compile(idKeyPat)


def fontify(pytext, searchfrom=0, searchto=None):
    if searchto is None:
        searchto = len(pytext)
    tags = []
    commentTag = 'comment'
    stringTag = 'string'
    keywordTag = 'keyword'
    identifierTag = 'identifier'

    start = 0
    end = searchfrom
    while 1:
        # Look for some syntax token we're interested in.  If we find
        # nothing, we're done.
        matchobj = syntaxRE.search(pytext, end)
        if not matchobj:
            break

        # If we found something outside our search area, it doesn't
        # count (and we're done).
        start = matchobj.start()
        if start >= searchto:
            break

        match = matchobj.group(0)
        end = start + len(match)
        c = match[0]
        if c == '#':
            # We matched a comment.
            tags.append((commentTag, start, end, None))
        elif c == '"' or c == '\'':
            # We matched a string.
            tags.append((stringTag, start, end, None))
        else:
            # We matched a keyword.
            if start != searchfrom:
                # There's still a redundant char before and after it; strip!
                match = match[1:-1]
                start = start + 1
            else:
                # This is the first keyword in the text.
                # Only a space at the end.
                match = match[:-1]
            end = end - 1
            tags.append((keywordTag, start, end, None))
            # If this was a defining keyword, look ahead to the
            # following identifier.
            if match in ["def", "class"]:
                matchobj = idRE.search(pytext, end)
                if matchobj:
                    start = matchobj.start()
                    if start == end and start < searchto:
                        end = start + len(matchobj.group(0))
                        tags.append((identifierTag, start, end, None))
    return tags


def test(path):
    f = open(path)
    text = f.read()
    f.close()
    tags = fontify(text)
    for tag, start, end, sublist in tags:
        print tag, `text[start:end]`

if __name__ == "__main__":
    import sys
    test(sys.argv[0])
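The scanning loop above builds one combined regexp and dispatches on which alternative matched. The same approach can be sketched compactly in modern Python 3 with named groups; this is an illustrative re-implementation under stated assumptions (it handles only single-line comments, simple and triple-quoted strings, and keywords), not the module itself.

```python
import keyword
import re

# One combined pattern; the first alternative that matches at a position
# wins, mirroring PyFontify's keyPat|commentPat|stringPat ordering.
_token_re = re.compile(
    r"(?P<comment>#[^\n]*)"
    r"|(?P<string>'''[\s\S]*?'''|\"\"\"[\s\S]*?\"\"\"|'[^'\n]*'|\"[^\"\n]*\")"
    r"|(?P<keyword>\b(?:%s)\b)" % "|".join(keyword.kwlist))

def fontify(pytext):
    """Return (tag, start, end, None) tuples like PyFontify.fontify()."""
    tags = []
    for m in _token_re.finditer(pytext):
        # m.lastgroup names the alternative that matched.
        tags.append((m.lastgroup, m.start(), m.end(), None))
    return tags

sample = "def spam():\n    # a comment\n    return 'eggs'\n"
for tag, start, end, _ in fontify(sample):
    print(tag, repr(sample[start:end]))
```

Because finditer() scans left to right, a keyword inside a string is consumed by the string alternative first, which is the same effect the original achieves with its alternative ordering.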
@ -0,0 +1,236 @@
# -*-python-*-
#
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# accept.py: parse/handle the various Accept headers from the client
#
# -----------------------------------------------------------------------

import re
import string


def language(hdr):
  "Parse an Accept-Language header."

  # parse the header, storing results in a _LanguageSelector object
  return _parse(hdr, _LanguageSelector())

# -----------------------------------------------------------------------

_re_token = re.compile(r'\s*([^\s;,"]+|"[^"]*")+\s*')
_re_param = re.compile(r';\s*([^;,"]+|"[^"]*")+\s*')
_re_split_param = re.compile(r'([^\s=])\s*=\s*(.*)')

def _parse(hdr, result):
  # quick exit for empty or not-supplied header
  if not hdr:
    return result

  pos = 0
  while pos < len(hdr):
    name = _re_token.match(hdr, pos)
    if not name:
      raise AcceptParseError()
    a = result.item_class(string.lower(name.group(1)))
    pos = name.end()
    while 1:
      # are we looking at a parameter?
      match = _re_param.match(hdr, pos)
      if not match:
        break
      param = match.group(1)
      pos = match.end()

      # split up the pieces of the parameter
      match = _re_split_param.match(param)
      if not match:
        # the "=" was probably missing
        continue

      pname = string.lower(match.group(1))
      if pname == 'q' or pname == 'qs':
        try:
          a.quality = float(match.group(2))
        except ValueError:
          # bad float literal
          pass
      elif pname == 'level':
        try:
          a.level = float(match.group(2))
        except ValueError:
          # bad float literal
          pass
      elif pname == 'charset':
        a.charset = string.lower(match.group(2))

    result.append(a)
    if hdr[pos:pos+1] == ',':
      pos = pos + 1

  return result

class _AcceptItem:
  def __init__(self, name):
    self.name = name
    self.quality = 1.0
    self.level = 0.0
    self.charset = ''

  def __str__(self):
    s = self.name
    if self.quality != 1.0:
      s = '%s;q=%.3f' % (s, self.quality)
    if self.level != 0.0:
      s = '%s;level=%.3f' % (s, self.level)
    if self.charset:
      s = '%s;charset=%s' % (s, self.charset)
    return s

class _LanguageRange(_AcceptItem):
  def matches(self, tag):
    "Match the tag against self. Returns the qvalue, or None if non-matching."
    if tag == self.name:
      return self.quality

    # are we a prefix of the available language-tag
    name = self.name + '-'
    if tag[:len(name)] == name:
      return self.quality
    return None

class _LanguageSelector:
  """Instances select an available language based on the user's request.

  Languages found in the user's request are added to this object with the
  append() method (they should be instances of _LanguageRange). After the
  languages have been added, the caller can use select_from() to
  determine which user-requested language(s) best match the set of
  available languages.

  Strictly speaking, this class is useful for more than just language
  matching. It has been implemented to enable q-value based matching
  between requests and availability. Some minor tweaks may be
  necessary, but simply using a new 'item_class' should be sufficient
  to allow the _parse() function to construct a selector which holds
  the appropriate item implementations (e.g. _LanguageRange is the
  concrete _AcceptItem class that handles matching of language tags).
  """

  item_class = _LanguageRange

  def __init__(self):
    self.requested = [ ]

  def select_from(self, avail):
    """Select one of the available choices based on the request.

    Note: if there isn't a match, then the first available choice is
    considered the default. Also, if a number of matches are equally
    relevant, then the first-requested will be used.

    avail is a list of language-tag strings of available languages.
    """

    # tuples of (qvalue, language-tag)
    matches = [ ]

    # try matching all pairs of desired vs. available, recording the
    # resulting qvalues. we also need to record the longest language-range
    # that matches since the most specific range "wins"
    for tag in avail:
      longest = 0
      final = 0.0

      # check this tag against the requests from the user
      for want in self.requested:
        qvalue = want.matches(tag)
        #print 'have %s. want %s. qvalue=%s' % (tag, want.name, qvalue)
        if qvalue is not None and len(want.name) > longest:
          # we have a match and it is longer than any we may have had.
          # the final qvalue should be from this tag.
          final = qvalue
          longest = len(want.name)

      # a non-zero qvalue is a potential match
      if final:
        matches.append((final, tag))

    # if there are no matches, then return the default language tag
    if not matches:
      return avail[0]

    # get the highest qvalue and its corresponding tag
    matches.sort()
    qvalue, tag = matches[-1]

    # if the qvalue is zero, then we have no valid matches. return the
    # default language tag.
    if not qvalue:
      return avail[0]

    # if there are two or more matches, and the second-highest has a
    # qvalue equal to the best, then we have multiple "best" options.
    # select the one that occurs first in self.requested
    if len(matches) >= 2 and matches[-2][0] == qvalue:
      # remove non-best matches
      while matches[0][0] != qvalue:
        del matches[0]
      #print "non-deterministic choice", matches

      # sequence through self.requested, in order
      for want in self.requested:
        # try to find this one in our best matches
        for qvalue, tag in matches:
          if want.matches(tag):
            # this requested item is one of the "best" options
            ### note: this request item could match *other* "best" options,
            ### so returning *this* one is rather non-deterministic.
            ### theoretically, we could go further here, and do another
            ### search based on the ordering in 'avail'. however, note
            ### that this generally means that we are picking from multiple
            ### *SUB* languages, so I'm all right with the non-determinism
            ### at this point. stupid client should send a qvalue if they
            ### want to refine.
            return tag

      # NOTREACHED

    # return the best match
    return tag

  def append(self, item):
    self.requested.append(item)

class AcceptParseError(Exception):
  pass

def _test():
  s = language('en')
  assert s.select_from(['en']) == 'en'
  assert s.select_from(['en', 'de']) == 'en'
  assert s.select_from(['de', 'en']) == 'en'

  # Netscape 4.x and early versions of Mozilla may not send a q value
  s = language('en, ja')
  assert s.select_from(['en', 'ja']) == 'en'

  s = language('fr, de;q=0.9, en-gb;q=0.7, en;q=0.6, en-gb-foo;q=0.8')
  assert s.select_from(['en']) == 'en'
  assert s.select_from(['en-gb-foo']) == 'en-gb-foo'
  assert s.select_from(['de', 'fr']) == 'fr'
  assert s.select_from(['de', 'en-gb']) == 'de'
  assert s.select_from(['en-gb', 'en-gb-foo']) == 'en-gb-foo'
  assert s.select_from(['en-bar']) == 'en-bar'
  assert s.select_from(['en-gb-bar', 'en-gb-foo']) == 'en-gb-foo'

  # non-deterministic. en-gb;q=0.7 matches both avail tags.
  #assert s.select_from(['en-gb-bar', 'en-gb']) == 'en-gb'
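The selection policy implemented by _LanguageSelector (a range matches a tag exactly or as a "prefix-", the longest matching range supplies the qvalue, and the first available tag is the fallback) can be sketched in modern Python 3. The function names below are illustrative, and tie-breaking is simplified to first-available rather than first-requested.

```python
def parse_accept_language(hdr):
    """Parse 'fr, de;q=0.9' into a list of (language-range, qvalue) pairs."""
    ranges = []
    for item in hdr.split(','):
        parts = item.strip().split(';')
        name = parts[0].strip().lower()
        q = 1.0
        for p in parts[1:]:
            k, _, v = p.partition('=')
            if k.strip() == 'q':
                try:
                    q = float(v)
                except ValueError:
                    pass  # ignore a bad float literal, as accept.py does
        ranges.append((name, q))
    return ranges

def select_language(hdr, avail):
    """Pick the available tag whose longest matching range has the best q."""
    ranges = parse_accept_language(hdr)
    best_tag, best_q = avail[0], 0.0      # first available tag is the default
    for tag in avail:
        longest, final = 0, 0.0
        for name, q in ranges:
            # exact match, or the range is a prefix of the tag ('en' ~ 'en-gb')
            if (tag == name or tag.startswith(name + '-')) and len(name) > longest:
                longest, final = len(name), q
        if final > best_q:
            best_tag, best_q = tag, final
    return best_tag
```

With the header from _test() above, `select_language('fr, de;q=0.9, en-gb;q=0.7, en;q=0.6', ['de', 'en-gb'])` picks 'de', because the en-gb range's q=0.7 beats the shorter en range but loses to de's q=0.9.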
@ -0,0 +1,151 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2000 Curt Hagenlocher <curt@hagenlocher.org>
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# blame.py: Annotate each line of a CVS file with its author,
#           revision #, date, etc.
#
# -----------------------------------------------------------------------
#
# This file is based on the cvsblame.pl portion of the Bonsai CVS tool,
# developed by Steve Lamm for Netscape Communications Corporation.  More
# information about Bonsai can be found at
#    http://www.mozilla.org/bonsai.html
#
# cvsblame.pl, in turn, was based on Scott Furman's cvsblame script.
#
# -----------------------------------------------------------------------

import sys
import string
import os
import re
import time
import math
import cgi
import vclib


re_includes = re.compile('\\#(\\s*)include(\\s*)"(.*?)"')

def link_includes(text, repos, path_parts, include_url):
  match = re_includes.match(text)
  if match:
    incfile = match.group(3)
    include_path_parts = path_parts[:-1]
    for part in filter(None, string.split(incfile, '/')):
      if part == "..":
        if not include_path_parts:
          # nothing left to pop; don't bother marking up this include.
          return text
        include_path_parts.pop()
      elif part and part != ".":
        include_path_parts.append(part)

    include_path = None
    try:
      if repos.itemtype(include_path_parts, None) == vclib.FILE:
        include_path = string.join(include_path_parts, '/')
    except vclib.ItemNotFound:
      pass

    if include_path:
      return '#%sinclude%s<a href="%s">"%s"</a>' % \
             (match.group(1), match.group(2),
              string.replace(include_url, '/WHERE/', include_path), incfile)

  return text


class HTMLBlameSource:
  """Wrapper around the object returned by vclib.annotate() which does
  HTML escaping, diff URL generation, and #include linking."""
  def __init__(self, repos, path_parts, diff_url, include_url, opt_rev=None):
    self.repos = repos
    self.path_parts = path_parts
    self.diff_url = diff_url
    self.include_url = include_url
    self.annotation, self.revision = self.repos.annotate(path_parts, opt_rev)

  def __getitem__(self, idx):
    item = self.annotation.__getitem__(idx)
    diff_url = None
    if item.prev_rev:
      diff_url = '%sr1=%s&r2=%s' % (self.diff_url, item.prev_rev, item.rev)
    thisline = link_includes(cgi.escape(item.text), self.repos,
                             self.path_parts, self.include_url)
    return _item(text=thisline, line_number=item.line_number,
                 rev=item.rev, prev_rev=item.prev_rev,
                 diff_url=diff_url, date=item.date, author=item.author)


def blame(repos, path_parts, diff_url, include_url, opt_rev=None):
  source = HTMLBlameSource(repos, path_parts, diff_url, include_url, opt_rev)
  return source, source.revision


class _item:
  def __init__(self, **kw):
    vars(self).update(kw)


def make_html(root, rcs_path):
  import vclib.ccvs.blame
  bs = vclib.ccvs.blame.BlameSource(os.path.join(root, rcs_path))

  line = 0
  old_revision = 0
  row_color = 'ffffff'
  rev_count = 0

  align = ' style="text-align: %s;"'

  sys.stdout.write('<table cellpadding="2" cellspacing="2" style="font-family: monospace; whitespace: pre;">\n')
  for line_data in bs:
    revision = line_data.rev
    thisline = line_data.text
    line = line_data.line_number
    author = line_data.author
    prev_rev = line_data.prev_rev

    if old_revision != revision and line != 1:
      if row_color == 'ffffff':
        row_color = 'e7e7e7'
      else:
        row_color = 'ffffff'

    sys.stdout.write('<tr id="l%d" style="background-color: #%s; vertical-align: center;">' % (line, row_color))
    sys.stdout.write('<td%s>%d</td>' % (align % 'right', line))

    if old_revision != revision or rev_count > 20:
      sys.stdout.write('<td%s>%s</td>' % (align % 'right', author or '&nbsp;'))
      sys.stdout.write('<td%s>%s</td>' % (align % 'left', revision))
      old_revision = revision
      rev_count = 0
    else:
      sys.stdout.write('<td>&nbsp;</td><td>&nbsp;</td>')
    rev_count = rev_count + 1

    sys.stdout.write('<td%s>%s</td></tr>\n' % (align % 'left', string.rstrip(thisline) or '&nbsp;'))
  sys.stdout.write('</table>\n')


def main():
  import sys
  if len(sys.argv) != 3:
    print 'USAGE: %s cvsroot rcs-file' % sys.argv[0]
    sys.exit(1)
  make_html(sys.argv[1], sys.argv[2])

if __name__ == '__main__':
  main()
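link_includes() resolves the quoted path relative to the including file's directory, collapsing "." and ".." components before checking the repository. A standalone Python 3 sketch of just that resolution step (the function name is illustrative; blame.py does this inline):

```python
def resolve_include(path_parts, incfile):
    """Resolve an include target relative to the including file's directory.

    path_parts is the repository path of the including file, split on '/'.
    Returns the resolved parts, or None when '..' escapes the root (in
    which case blame.py leaves the include unlinked).
    """
    parts = list(path_parts[:-1])      # directory of the including file
    for part in incfile.split('/'):
        if part == '..':
            if not parts:
                return None            # popped past the repository root
            parts.pop()
        elif part and part != '.':     # skip '.' and empty components
            parts.append(part)
    return parts
```

For example, a `#include "../include/api.h"` seen in src/main.c resolves to include/api.h, while a ".." at the repository root aborts the markup entirely.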
@ -0,0 +1,180 @@
# -*-python-*-
#
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# compat.py: compatibility functions for operation across Python 1.5.x to 2.2.x
#
# -----------------------------------------------------------------------

import urllib
import string
import time
import calendar
import re
import os
import rfc822
import tempfile
import errno

#
# urllib.urlencode() is new to Python 1.5.2
#
try:
  urlencode = urllib.urlencode
except AttributeError:
  def urlencode(dict):
    "Encode a dictionary as application/x-www-form-urlencoded."
    if not dict:
      return ''
    quote = urllib.quote_plus
    keyvalue = [ ]
    for key, value in dict.items():
      keyvalue.append(quote(key) + '=' + quote(str(value)))
    return string.join(keyvalue, '&')

#
# time.strptime() is new to Python 1.5.2
#
if hasattr(time, 'strptime'):
  def cvs_strptime(timestr):
    'Parse a CVS-style date/time value.'
    return time.strptime(timestr, '%Y/%m/%d %H:%M:%S')[:-1] + (0,)
else:
  _re_rev_date = re.compile('([0-9]{4})/([0-9][0-9])/([0-9][0-9]) '
                            '([0-9][0-9]):([0-9][0-9]):([0-9][0-9])')
  def cvs_strptime(timestr):
    'Parse a CVS-style date/time value.'
    match = _re_rev_date.match(timestr)
    if match:
      return tuple(map(int, match.groups())) + (0, 1, 0)
    else:
      raise ValueError('date is not in cvs format')

#
# os.makedirs() is new to Python 1.5.2
#
try:
  makedirs = os.makedirs
except AttributeError:
  def makedirs(path, mode=0777):
    head, tail = os.path.split(path)
    if head and tail and not os.path.exists(head):
      makedirs(head, mode)
    os.mkdir(path, mode)

#
# rfc822.formatdate() is new to Python 1.6
#
try:
  formatdate = rfc822.formatdate
except AttributeError:
  def formatdate(timeval):
    if timeval is None:
      timeval = time.time()
    timeval = time.gmtime(timeval)
    return "%s, %02d %s %04d %02d:%02d:%02d GMT" % (
            ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"][timeval[6]],
            timeval[2],
            ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
             "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"][timeval[1]-1],
            timeval[0], timeval[3], timeval[4], timeval[5])

#
# calendar.timegm() is new to Python 2.x and
# calendar.leapdays() was wrong in Python 1.5.2
#
try:
  timegm = calendar.timegm
except AttributeError:
  def leapdays(year1, year2):
    """Return number of leap years in range [year1, year2).
    Assume year1 <= year2."""
    year1 = year1 - 1
    year2 = year2 - 1
    return (year2/4 - year1/4) - (year2/100 -
                                  year1/100) + (year2/400 - year1/400)

  EPOCH = 1970
  def timegm(tuple):
    """Unrelated but handy function to calculate Unix timestamp from GMT."""
    year, month, day, hour, minute, second = tuple[:6]
    # assert year >= EPOCH
    # assert 1 <= month <= 12
    days = 365*(year-EPOCH) + leapdays(EPOCH, year)
    for i in range(1, month):
      days = days + calendar.mdays[i]
    if month > 2 and calendar.isleap(year):
      days = days + 1
    days = days + day - 1
    hours = days*24 + hour
    minutes = hours*60 + minute
    seconds = minutes*60 + second
    return seconds

#
# tempfile.mkdtemp() is new to Python 2.3
#
try:
  mkdtemp = tempfile.mkdtemp
except AttributeError:
  def mkdtemp(suffix="", prefix="tmp", dir=None):
    # mktemp() only took a single suffix argument until Python 2.3.
    # We'll do the best we can.
    oldtmpdir = os.environ.get('TMPDIR')
    try:
      for i in range(10):
        if dir:
          os.environ['TMPDIR'] = dir
        dir = tempfile.mktemp(suffix)
        if prefix:
          parent, base = os.path.split(dir)
          dir = os.path.join(parent, prefix + base)
        try:
          os.mkdir(dir, 0700)
          return dir
        except OSError, e:
          if e.errno == errno.EEXIST:
            continue # try again
          raise
    finally:
      if oldtmpdir:
        os.environ['TMPDIR'] = oldtmpdir
      elif os.environ.has_key('TMPDIR'):
        del(os.environ['TMPDIR'])

    raise IOError, (errno.EEXIST, "No usable temporary directory name found")

#
# the following stuff is *ONLY* needed for standalone.py.
# For that reason I've encapsulated it into a function.
#

def for_standalone():
  import SocketServer
  if not hasattr(SocketServer.TCPServer, "close_request"):
    #
    # method close_request() was missing until Python 2.1
    #
    class TCPServer(SocketServer.TCPServer):
      def process_request(self, request, client_address):
        """Call finish_request.

        Overridden by ForkingMixIn and ThreadingMixIn.

        """
        self.finish_request(request, client_address)
        self.close_request(request)

      def close_request(self, request):
        """Called to clean up an individual request."""
        request.close()

    SocketServer.TCPServer = TCPServer
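The timegm() fallback above counts whole days since the 1970 epoch (years, leap days, prior months, this year's leap day) and then converts to seconds. The same arithmetic in modern Python 3, with integer division (`//`) replacing Python 1.5's `/`, can be checked against the standard library's calendar.timegm:

```python
import calendar

EPOCH = 1970

def leapdays(year1, year2):
    """Number of leap years in range [year1, year2), assuming year1 <= year2."""
    year1 -= 1
    year2 -= 1
    return (year2 // 4 - year1 // 4) \
           - (year2 // 100 - year1 // 100) \
           + (year2 // 400 - year1 // 400)

def timegm(t):
    """Unix timestamp for a UTC time tuple (year, month, day, h, m, s, ...)."""
    year, month, day, hour, minute, second = t[:6]
    days = 365 * (year - EPOCH) + leapdays(EPOCH, year)
    for i in range(1, month):
        days += calendar.mdays[i]       # days in the months before this one
    if month > 2 and calendar.isleap(year):
        days += 1                       # this year's leap day already passed
    days += day - 1
    return ((days * 24 + hour) * 60 + minute) * 60 + second
```

For instance, 2000-03-01 00:00:00 UTC works out to 10957 epoch days to Jan 1, plus 31 + 28 + 1 leap day, giving 951868800 seconds.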
@ -0,0 +1,786 @@
#! /usr/bin/env python
# Backported to Python 1.5.2 for the ViewCVS project by pf@artcom-gmbh.de
# 24-Dec-2001, original version "stolen" from Python-2.1.1
"""
Module difflib -- helpers for computing deltas between objects.

Function get_close_matches(word, possibilities, n=3, cutoff=0.6):

    Use SequenceMatcher to return a list of the best "good enough" matches.

    word is a sequence for which close matches are desired (typically a
    string).

    possibilities is a list of sequences against which to match word
    (typically a list of strings).

    Optional arg n (default 3) is the maximum number of close matches to
    return.  n must be > 0.

    Optional arg cutoff (default 0.6) is a float in [0, 1].  Possibilities
    that don't score at least that similar to word are ignored.

    The best (no more than n) matches among the possibilities are returned
    in a list, sorted by similarity score, most similar first.

    >>> get_close_matches("appel", ["ape", "apple", "peach", "puppy"])
    ['apple', 'ape']
    >>> import keyword
    >>> get_close_matches("wheel", keyword.kwlist)
    ['while']
    >>> get_close_matches("apple", keyword.kwlist)
    []
    >>> get_close_matches("accept", keyword.kwlist)
    ['except']

Class SequenceMatcher

    SequenceMatcher is a flexible class for comparing pairs of sequences of any
    type, so long as the sequence elements are hashable.  The basic algorithm
    predates, and is a little fancier than, an algorithm published in the late
    1980's by Ratcliff and Obershelp under the hyperbolic name "gestalt pattern
    matching".  The basic idea is to find the longest contiguous matching
    subsequence that contains no "junk" elements (R-O doesn't address junk).
    The same idea is then applied recursively to the pieces of the sequences to
    the left and to the right of the matching subsequence.  This does not yield
    minimal edit sequences, but does tend to yield matches that "look right"
    to people.

    Example, comparing two strings, and considering blanks to be "junk":

    >>> s = SequenceMatcher(lambda x: x == " ",
    ...                     "private Thread currentThread;",
    ...                     "private volatile Thread currentThread;")
    >>>

    .ratio() returns a float in [0, 1], measuring the "similarity" of the
    sequences.  As a rule of thumb, a .ratio() value over 0.6 means the
    sequences are close matches:

    >>> print round(s.ratio(), 3)
    0.866
    >>>

    If you're only interested in where the sequences match,
    .get_matching_blocks() is handy:

    >>> for block in s.get_matching_blocks():
    ...     print "a[%d] and b[%d] match for %d elements" % block
    a[0] and b[0] match for 8 elements
    a[8] and b[17] match for 6 elements
    a[14] and b[23] match for 15 elements
    a[29] and b[38] match for 0 elements

    Note that the last tuple returned by .get_matching_blocks() is always a
    dummy, (len(a), len(b), 0), and this is the only case in which the last
    tuple element (number of elements matched) is 0.

    If you want to know how to change the first sequence into the second, use
    .get_opcodes():

    >>> for opcode in s.get_opcodes():
    ...     print "%6s a[%d:%d] b[%d:%d]" % opcode
     equal a[0:8] b[0:8]
    insert a[8:8] b[8:17]
     equal a[8:14] b[17:23]
     equal a[14:29] b[23:38]

    See Tools/scripts/ndiff.py for a fancy human-friendly file differencer,
    which uses SequenceMatcher both to view files as sequences of lines, and
    lines as sequences of characters.

    See also function get_close_matches() in this module, which shows how
    simple code building on SequenceMatcher can be used to do useful work.

    Timing:  Basic R-O is cubic time worst case and quadratic time expected
    case.  SequenceMatcher is quadratic time for the worst case and has
    expected-case behavior dependent in a complicated way on how many
    elements the sequences have in common; best case time is linear.

    SequenceMatcher methods:

    __init__(isjunk=None, a='', b='')
        Construct a SequenceMatcher.

        Optional arg isjunk is None (the default), or a one-argument function
        that takes a sequence element and returns true iff the element is junk.
        None is equivalent to passing "lambda x: 0", i.e. no elements are
        considered to be junk.  For example, pass
            lambda x: x in " \\t"
        if you're comparing lines as sequences of characters, and don't want to
        synch up on blanks or hard tabs.

        Optional arg a is the first of two sequences to be compared.  By
        default, an empty string.  The elements of a must be hashable.

        Optional arg b is the second of two sequences to be compared.  By
        default, an empty string.  The elements of b must be hashable.

    set_seqs(a, b)
        Set the two sequences to be compared.

        >>> s = SequenceMatcher()
        >>> s.set_seqs("abcd", "bcde")
        >>> s.ratio()
        0.75

    set_seq1(a)
        Set the first sequence to be compared.

        The second sequence to be compared is not changed.
|
||||
|
||||
>>> s = SequenceMatcher(None, "abcd", "bcde")
|
||||
>>> s.ratio()
|
||||
0.75
|
||||
>>> s.set_seq1("bcde")
|
||||
>>> s.ratio()
|
||||
1.0
|
||||
>>>
|
||||
|
||||
SequenceMatcher computes and caches detailed information about the
|
||||
second sequence, so if you want to compare one sequence S against many
|
||||
sequences, use .set_seq2(S) once and call .set_seq1(x) repeatedly for
|
||||
each of the other sequences.
|
||||
|
||||
See also set_seqs() and set_seq2().
|
||||
|
||||
set_seq2(b)
|
||||
Set the second sequence to be compared.
|
||||
|
||||
The first sequence to be compared is not changed.
|
||||
|
||||
>>> s = SequenceMatcher(None, "abcd", "bcde")
|
||||
>>> s.ratio()
|
||||
0.75
|
||||
>>> s.set_seq2("abcd")
|
||||
>>> s.ratio()
|
||||
1.0
|
||||
>>>
|
||||
|
||||
SequenceMatcher computes and caches detailed information about the
|
||||
second sequence, so if you want to compare one sequence S against many
|
||||
sequences, use .set_seq2(S) once and call .set_seq1(x) repeatedly for
|
||||
each of the other sequences.
|
||||
|
||||
See also set_seqs() and set_seq1().
|
||||
|
||||
find_longest_match(alo, ahi, blo, bhi)
|
||||
Find longest matching block in a[alo:ahi] and b[blo:bhi].
|
||||
|
||||
If isjunk is not defined:
|
||||
|
||||
Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where
|
||||
alo <= i <= i+k <= ahi
|
||||
blo <= j <= j+k <= bhi
|
||||
and for all (i',j',k') meeting those conditions,
|
||||
k >= k'
|
||||
i <= i'
|
||||
and if i == i', j <= j'
|
||||
|
||||
In other words, of all maximal matching blocks, return one that starts
|
||||
earliest in a, and of all those maximal matching blocks that start
|
||||
earliest in a, return the one that starts earliest in b.
|
||||
|
||||
>>> s = SequenceMatcher(None, " abcd", "abcd abcd")
|
||||
>>> s.find_longest_match(0, 5, 0, 9)
|
||||
(0, 4, 5)
|
||||
|
||||
If isjunk is defined, first the longest matching block is determined as
|
||||
above, but with the additional restriction that no junk element appears
|
||||
in the block. Then that block is extended as far as possible by
|
||||
matching (only) junk elements on both sides. So the resulting block
|
||||
never matches on junk except as identical junk happens to be adjacent
|
||||
to an "interesting" match.
|
||||
|
||||
Here's the same example as before, but considering blanks to be junk.
|
||||
That prevents " abcd" from matching the " abcd" at the tail end of the
|
||||
second sequence directly. Instead only the "abcd" can match, and
|
||||
matches the leftmost "abcd" in the second sequence:
|
||||
|
||||
>>> s = SequenceMatcher(lambda x: x==" ", " abcd", "abcd abcd")
|
||||
>>> s.find_longest_match(0, 5, 0, 9)
|
||||
(1, 0, 4)
|
||||
|
||||
If no blocks match, return (alo, blo, 0).
|
||||
|
||||
>>> s = SequenceMatcher(None, "ab", "c")
|
||||
>>> s.find_longest_match(0, 2, 0, 1)
|
||||
(0, 0, 0)
|
||||
|
||||
get_matching_blocks()
|
||||
Return list of triples describing matching subsequences.
|
||||
|
||||
Each triple is of the form (i, j, n), and means that
|
||||
a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in i
|
||||
and in j.
|
||||
|
||||
The last triple is a dummy, (len(a), len(b), 0), and is the only triple
|
||||
with n==0.
|
||||
|
||||
>>> s = SequenceMatcher(None, "abxcd", "abcd")
|
||||
>>> s.get_matching_blocks()
|
||||
[(0, 0, 2), (3, 2, 2), (5, 4, 0)]
|
||||
|
||||
get_opcodes()
|
||||
Return list of 5-tuples describing how to turn a into b.
|
||||
|
||||
Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple has
|
||||
i1 == j1 == 0, and remaining tuples have i1 == the i2 from the tuple
|
||||
preceding it, and likewise for j1 == the previous j2.
|
||||
|
||||
The tags are strings, with these meanings:
|
||||
|
||||
'replace': a[i1:i2] should be replaced by b[j1:j2]
|
||||
'delete': a[i1:i2] should be deleted.
|
||||
Note that j1==j2 in this case.
|
||||
'insert': b[j1:j2] should be inserted at a[i1:i1].
|
||||
Note that i1==i2 in this case.
|
||||
'equal': a[i1:i2] == b[j1:j2]
|
||||
|
||||
>>> a = "qabxcd"
|
||||
>>> b = "abycdf"
|
||||
>>> s = SequenceMatcher(None, a, b)
|
||||
>>> for tag, i1, i2, j1, j2 in s.get_opcodes():
|
||||
... print ("%7s a[%d:%d] (%s) b[%d:%d] (%s)" %
|
||||
... (tag, i1, i2, a[i1:i2], j1, j2, b[j1:j2]))
|
||||
delete a[0:1] (q) b[0:0] ()
|
||||
equal a[1:3] (ab) b[0:2] (ab)
|
||||
replace a[3:4] (x) b[2:3] (y)
|
||||
equal a[4:6] (cd) b[3:5] (cd)
|
||||
insert a[6:6] () b[5:6] (f)
|
||||
|
||||
ratio()
|
||||
Return a measure of the sequences' similarity (float in [0,1]).
|
||||
|
||||
Where T is the total number of elements in both sequences, and M is the
|
||||
number of matches, this is 2,0*M / T. Note that this is 1 if the
|
||||
sequences are identical, and 0 if they have nothing in common.
|
||||
|
||||
.ratio() is expensive to compute if you haven't already computed
|
||||
.get_matching_blocks() or .get_opcodes(), in which case you may want to
|
||||
try .quick_ratio() or .real_quick_ratio() first to get an upper bound.
|
||||
|
||||
>>> s = SequenceMatcher(None, "abcd", "bcde")
|
||||
>>> s.ratio()
|
||||
0.75
|
||||
>>> s.quick_ratio()
|
||||
0.75
|
||||
>>> s.real_quick_ratio()
|
||||
1.0
|
||||
|
||||
quick_ratio()
|
||||
Return an upper bound on .ratio() relatively quickly.
|
||||
|
||||
This isn't defined beyond that it is an upper bound on .ratio(), and
|
||||
is faster to compute.
|
||||
|
||||
real_quick_ratio():
|
||||
Return an upper bound on ratio() very quickly.
|
||||
|
||||
This isn't defined beyond that it is an upper bound on .ratio(), and
|
||||
is faster to compute than either .ratio() or .quick_ratio().
|
||||
"""
|
||||
|
||||
TRACE = 0

class SequenceMatcher:
    def __init__(self, isjunk=None, a='', b=''):
        """Construct a SequenceMatcher.

        Optional arg isjunk is None (the default), or a one-argument
        function that takes a sequence element and returns true iff the
        element is junk.  None is equivalent to passing "lambda x: 0", i.e.
        no elements are considered to be junk.  For example, pass
            lambda x: x in " \\t"
        if you're comparing lines as sequences of characters, and don't
        want to synch up on blanks or hard tabs.

        Optional arg a is the first of two sequences to be compared.  By
        default, an empty string.  The elements of a must be hashable.  See
        also .set_seqs() and .set_seq1().

        Optional arg b is the second of two sequences to be compared.  By
        default, an empty string.  The elements of b must be hashable.  See
        also .set_seqs() and .set_seq2().
        """

        # Members:
        # a
        #      first sequence
        # b
        #      second sequence; differences are computed as "what do
        #      we need to do to 'a' to change it into 'b'?"
        # b2j
        #      for x in b, b2j[x] is a list of the indices (into b)
        #      at which x appears; junk elements do not appear
        # b2jhas
        #      b2j.has_key
        # fullbcount
        #      for x in b, fullbcount[x] == the number of times x
        #      appears in b; only materialized if really needed (used
        #      only for computing quick_ratio())
        # matching_blocks
        #      a list of (i, j, k) triples, where a[i:i+k] == b[j:j+k];
        #      ascending & non-overlapping in i and in j; terminated by
        #      a dummy (len(a), len(b), 0) sentinel
        # opcodes
        #      a list of (tag, i1, i2, j1, j2) tuples, where tag is
        #      one of
        #          'replace'   a[i1:i2] should be replaced by b[j1:j2]
        #          'delete'    a[i1:i2] should be deleted
        #          'insert'    b[j1:j2] should be inserted
        #          'equal'     a[i1:i2] == b[j1:j2]
        # isjunk
        #      a user-supplied function taking a sequence element and
        #      returning true iff the element is "junk" -- this has
        #      subtle but helpful effects on the algorithm, which I'll
        #      get around to writing up someday <0.9 wink>.
        #      DON'T USE!  Only __chain_b uses this.  Use isbjunk.
        # isbjunk
        #      for x in b, isbjunk(x) == isjunk(x) but much faster;
        #      it's really the has_key method of a hidden dict.
        #      DOES NOT WORK for x in a!

        self.isjunk = isjunk
        self.a = self.b = None
        self.set_seqs(a, b)

    def set_seqs(self, a, b):
        """Set the two sequences to be compared.

        >>> s = SequenceMatcher()
        >>> s.set_seqs("abcd", "bcde")
        >>> s.ratio()
        0.75
        """

        self.set_seq1(a)
        self.set_seq2(b)

    def set_seq1(self, a):
        """Set the first sequence to be compared.

        The second sequence to be compared is not changed.

        >>> s = SequenceMatcher(None, "abcd", "bcde")
        >>> s.ratio()
        0.75
        >>> s.set_seq1("bcde")
        >>> s.ratio()
        1.0
        >>>

        SequenceMatcher computes and caches detailed information about the
        second sequence, so if you want to compare one sequence S against
        many sequences, use .set_seq2(S) once and call .set_seq1(x)
        repeatedly for each of the other sequences.

        See also set_seqs() and set_seq2().
        """

        if a is self.a:
            return
        self.a = a
        self.matching_blocks = self.opcodes = None

    def set_seq2(self, b):
        """Set the second sequence to be compared.

        The first sequence to be compared is not changed.

        >>> s = SequenceMatcher(None, "abcd", "bcde")
        >>> s.ratio()
        0.75
        >>> s.set_seq2("abcd")
        >>> s.ratio()
        1.0
        >>>

        SequenceMatcher computes and caches detailed information about the
        second sequence, so if you want to compare one sequence S against
        many sequences, use .set_seq2(S) once and call .set_seq1(x)
        repeatedly for each of the other sequences.

        See also set_seqs() and set_seq1().
        """

        if b is self.b:
            return
        self.b = b
        self.matching_blocks = self.opcodes = None
        self.fullbcount = None
        self.__chain_b()

    # For each element x in b, set b2j[x] to a list of the indices in
    # b where x appears; the indices are in increasing order; note that
    # the number of times x appears in b is len(b2j[x]) ...
    # when self.isjunk is defined, junk elements don't show up in this
    # map at all, which stops the central find_longest_match method
    # from starting any matching block at a junk element ...
    # also creates the fast isbjunk function ...
    # note that this is only called when b changes; so for cross-product
    # kinds of matches, it's best to call set_seq2 once, then set_seq1
    # repeatedly

    def __chain_b(self):
        # Because isjunk is a user-defined (not C) function, and we test
        # for junk a LOT, it's important to minimize the number of calls.
        # Before the tricks described here, __chain_b was by far the most
        # time-consuming routine in the whole module!  If anyone sees
        # Jim Roskind, thank him again for profile.py -- I never would
        # have guessed that.
        # The first trick is to build b2j ignoring the possibility
        # of junk.  I.e., we don't call isjunk at all yet.  Throwing
        # out the junk later is much cheaper than building b2j "right"
        # from the start.
        b = self.b
        self.b2j = b2j = {}
        self.b2jhas = b2jhas = b2j.has_key
        for i in xrange(len(b)):
            elt = b[i]
            if b2jhas(elt):
                b2j[elt].append(i)
            else:
                b2j[elt] = [i]

        # Now b2j.keys() contains elements uniquely, and especially when
        # the sequence is a string, that's usually a good deal smaller
        # than len(string).  The difference is the number of isjunk calls
        # saved.
        isjunk, junkdict = self.isjunk, {}
        if isjunk:
            for elt in b2j.keys():
                if isjunk(elt):
                    junkdict[elt] = 1   # value irrelevant; it's a set
                    del b2j[elt]

        # Now for x in b, isjunk(x) == junkdict.has_key(x), but the
        # latter is much faster.  Note too that while there may be a
        # lot of junk in the sequence, the number of *unique* junk
        # elements is probably small.  So the memory burden of keeping
        # this dict alive is likely trivial compared to the size of b2j.
        self.isbjunk = junkdict.has_key

    def find_longest_match(self, alo, ahi, blo, bhi):
        """Find longest matching block in a[alo:ahi] and b[blo:bhi].

        If isjunk is not defined:

        Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where
            alo <= i <= i+k <= ahi
            blo <= j <= j+k <= bhi
        and for all (i',j',k') meeting those conditions,
            k >= k'
            i <= i'
            and if i == i', j <= j'

        In other words, of all maximal matching blocks, return one that
        starts earliest in a, and of all those maximal matching blocks that
        start earliest in a, return the one that starts earliest in b.

        >>> s = SequenceMatcher(None, " abcd", "abcd abcd")
        >>> s.find_longest_match(0, 5, 0, 9)
        (0, 4, 5)

        If isjunk is defined, first the longest matching block is
        determined as above, but with the additional restriction that no
        junk element appears in the block.  Then that block is extended as
        far as possible by matching (only) junk elements on both sides.  So
        the resulting block never matches on junk except as identical junk
        happens to be adjacent to an "interesting" match.

        Here's the same example as before, but considering blanks to be
        junk.  That prevents " abcd" from matching the " abcd" at the tail
        end of the second sequence directly.  Instead only the "abcd" can
        match, and matches the leftmost "abcd" in the second sequence:

        >>> s = SequenceMatcher(lambda x: x==" ", " abcd", "abcd abcd")
        >>> s.find_longest_match(0, 5, 0, 9)
        (1, 0, 4)

        If no blocks match, return (alo, blo, 0).

        >>> s = SequenceMatcher(None, "ab", "c")
        >>> s.find_longest_match(0, 2, 0, 1)
        (0, 0, 0)
        """

        # CAUTION:  stripping common prefix or suffix would be incorrect.
        # E.g.,
        #    ab
        #    acab
        # Longest matching block is "ab", but if common prefix is
        # stripped, it's "a" (tied with "b").  UNIX(tm) diff does so
        # strip, so ends up claiming that ab is changed to acab by
        # inserting "ca" in the middle.  That's minimal but unintuitive:
        # "it's obvious" that someone inserted "ac" at the front.
        # Windiff ends up at the same place as diff, but by pairing up
        # the unique 'b's and then matching the first two 'a's.

        a, b, b2j, isbjunk = self.a, self.b, self.b2j, self.isbjunk
        besti, bestj, bestsize = alo, blo, 0
        # find longest junk-free match
        # during an iteration of the loop, j2len[j] = length of longest
        # junk-free match ending with a[i-1] and b[j]
        j2len = {}
        nothing = []
        for i in xrange(alo, ahi):
            # look at all instances of a[i] in b; note that because
            # b2j has no junk keys, the loop is skipped if a[i] is junk
            j2lenget = j2len.get
            newj2len = {}
            for j in b2j.get(a[i], nothing):
                # a[i] matches b[j]
                if j < blo:
                    continue
                if j >= bhi:
                    break
                k = newj2len[j] = j2lenget(j-1, 0) + 1
                if k > bestsize:
                    besti, bestj, bestsize = i-k+1, j-k+1, k
            j2len = newj2len

        # Now that we have a wholly interesting match (albeit possibly
        # empty!), we may as well suck up the matching junk on each
        # side of it too.  Can't think of a good reason not to, and it
        # saves post-processing the (possibly considerable) expense of
        # figuring out what to do with it.  In the case of an empty
        # interesting match, this is clearly the right thing to do,
        # because no other kind of match is possible in the regions.
        while besti > alo and bestj > blo and \
              isbjunk(b[bestj-1]) and \
              a[besti-1] == b[bestj-1]:
            besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
        while besti+bestsize < ahi and bestj+bestsize < bhi and \
              isbjunk(b[bestj+bestsize]) and \
              a[besti+bestsize] == b[bestj+bestsize]:
            bestsize = bestsize + 1

        if TRACE:
            print "get_matching_blocks", alo, ahi, blo, bhi
            print "    returns", besti, bestj, bestsize
        return besti, bestj, bestsize

    def get_matching_blocks(self):
        """Return list of triples describing matching subsequences.

        Each triple is of the form (i, j, n), and means that
        a[i:i+n] == b[j:j+n].  The triples are monotonically increasing in
        i and in j.

        The last triple is a dummy, (len(a), len(b), 0), and is the only
        triple with n==0.

        >>> s = SequenceMatcher(None, "abxcd", "abcd")
        >>> s.get_matching_blocks()
        [(0, 0, 2), (3, 2, 2), (5, 4, 0)]
        """

        if self.matching_blocks is not None:
            return self.matching_blocks
        self.matching_blocks = []
        la, lb = len(self.a), len(self.b)
        self.__helper(0, la, 0, lb, self.matching_blocks)
        self.matching_blocks.append( (la, lb, 0) )
        if TRACE:
            print '*** matching blocks', self.matching_blocks
        return self.matching_blocks

    # builds list of matching blocks covering a[alo:ahi] and
    # b[blo:bhi], appending them in increasing order to answer

    def __helper(self, alo, ahi, blo, bhi, answer):
        i, j, k = x = self.find_longest_match(alo, ahi, blo, bhi)
        # a[alo:i] vs b[blo:j] unknown
        # a[i:i+k] same as b[j:j+k]
        # a[i+k:ahi] vs b[j+k:bhi] unknown
        if k:
            if alo < i and blo < j:
                self.__helper(alo, i, blo, j, answer)
            answer.append(x)
            if i+k < ahi and j+k < bhi:
                self.__helper(i+k, ahi, j+k, bhi, answer)

    def get_opcodes(self):
        """Return list of 5-tuples describing how to turn a into b.

        Each tuple is of the form (tag, i1, i2, j1, j2).  The first tuple
        has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the
        tuple preceding it, and likewise for j1 == the previous j2.

        The tags are strings, with these meanings:

        'replace':  a[i1:i2] should be replaced by b[j1:j2]
        'delete':   a[i1:i2] should be deleted.
                    Note that j1==j2 in this case.
        'insert':   b[j1:j2] should be inserted at a[i1:i1].
                    Note that i1==i2 in this case.
        'equal':    a[i1:i2] == b[j1:j2]

        >>> a = "qabxcd"
        >>> b = "abycdf"
        >>> s = SequenceMatcher(None, a, b)
        >>> for tag, i1, i2, j1, j2 in s.get_opcodes():
        ...    print ("%7s a[%d:%d] (%s) b[%d:%d] (%s)" %
        ...           (tag, i1, i2, a[i1:i2], j1, j2, b[j1:j2]))
         delete a[0:1] (q) b[0:0] ()
          equal a[1:3] (ab) b[0:2] (ab)
        replace a[3:4] (x) b[2:3] (y)
          equal a[4:6] (cd) b[3:5] (cd)
         insert a[6:6] () b[5:6] (f)
        """

        if self.opcodes is not None:
            return self.opcodes
        i = j = 0
        self.opcodes = answer = []
        for ai, bj, size in self.get_matching_blocks():
            # invariant:  we've pumped out correct diffs to change
            # a[:i] into b[:j], and the next matching block is
            # a[ai:ai+size] == b[bj:bj+size].  So we need to pump
            # out a diff to change a[i:ai] into b[j:bj], pump out
            # the matching block, and move (i,j) beyond the match
            tag = ''
            if i < ai and j < bj:
                tag = 'replace'
            elif i < ai:
                tag = 'delete'
            elif j < bj:
                tag = 'insert'
            if tag:
                answer.append( (tag, i, ai, j, bj) )
            i, j = ai+size, bj+size
            # the list of matching blocks is terminated by a
            # sentinel with size 0
            if size:
                answer.append( ('equal', ai, i, bj, j) )
        return answer
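As a sanity check of the opcode semantics documented above, the following sketch (using the modern standard-library difflib, whose get_opcodes() matches this compat API) follows an edit script to rebuild b from a:

```python
from difflib import SequenceMatcher

a, b = "qabxcd", "abycdf"
s = SequenceMatcher(None, a, b)

# Follow the edit script: copy 'equal' stretches from a, take the
# replacement/insertion text from b, skip 'delete' stretches.
out = []
for tag, i1, i2, j1, j2 in s.get_opcodes():
    if tag == 'equal':
        out.append(a[i1:i2])
    elif tag in ('replace', 'insert'):
        out.append(b[j1:j2])
    # 'delete' contributes nothing

print("".join(out))  # the edit script rebuilds b exactly
```

Because the opcodes tile both sequences without gaps, the joined pieces reproduce b exactly.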
    def ratio(self):
        """Return a measure of the sequences' similarity (float in [0,1]).

        Where T is the total number of elements in both sequences, and
        M is the number of matches, this is 2.0*M / T.
        Note that this is 1 if the sequences are identical, and 0 if
        they have nothing in common.

        .ratio() is expensive to compute if you haven't already computed
        .get_matching_blocks() or .get_opcodes(), in which case you may
        want to try .quick_ratio() or .real_quick_ratio() first to get an
        upper bound.

        >>> s = SequenceMatcher(None, "abcd", "bcde")
        >>> s.ratio()
        0.75
        >>> s.quick_ratio()
        0.75
        >>> s.real_quick_ratio()
        1.0
        """

        matches = reduce(lambda sum, triple: sum + triple[-1],
                         self.get_matching_blocks(), 0)
        return 2.0 * matches / (len(self.a) + len(self.b))

    def quick_ratio(self):
        """Return an upper bound on ratio() relatively quickly.

        This isn't defined beyond that it is an upper bound on .ratio(), and
        is faster to compute.
        """

        # viewing a and b as multisets, set matches to the cardinality
        # of their intersection; this counts the number of matches
        # without regard to order, so is clearly an upper bound
        if self.fullbcount is None:
            self.fullbcount = fullbcount = {}
            for elt in self.b:
                fullbcount[elt] = fullbcount.get(elt, 0) + 1
        fullbcount = self.fullbcount
        # avail[x] is the number of times x appears in 'b' less the
        # number of times we've seen it in 'a' so far ... kinda
        avail = {}
        availhas, matches = avail.has_key, 0
        for elt in self.a:
            if availhas(elt):
                numb = avail[elt]
            else:
                numb = fullbcount.get(elt, 0)
            avail[elt] = numb - 1
            if numb > 0:
                matches = matches + 1
        return 2.0 * matches / (len(self.a) + len(self.b))

    def real_quick_ratio(self):
        """Return an upper bound on ratio() very quickly.

        This isn't defined beyond that it is an upper bound on .ratio(), and
        is faster to compute than either .ratio() or .quick_ratio().
        """

        la, lb = len(self.a), len(self.b)
        # can't have more matches than the number of elements in the
        # shorter sequence
        return 2.0 * min(la, lb) / (la + lb)
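The three ratios form a chain of successively cheaper upper bounds; a small sketch using the modern standard-library difflib (same API as this compat module) makes the ordering visible:

```python
from difflib import SequenceMatcher

s = SequenceMatcher(None, "abcd", "bcde")

# Each bound is at least as large as the tighter value below it:
# real_quick_ratio() >= quick_ratio() >= ratio().
r, q, rq = s.ratio(), s.quick_ratio(), s.real_quick_ratio()
print(r, q, rq)
```

This ordering is what makes the cheap bounds useful as early-out filters before paying for the full ratio().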
def get_close_matches(word, possibilities, n=3, cutoff=0.6):
    """Use SequenceMatcher to return list of the best "good enough" matches.

    word is a sequence for which close matches are desired (typically a
    string).

    possibilities is a list of sequences against which to match word
    (typically a list of strings).

    Optional arg n (default 3) is the maximum number of close matches to
    return.  n must be > 0.

    Optional arg cutoff (default 0.6) is a float in [0, 1].  Possibilities
    that don't score at least that similar to word are ignored.

    The best (no more than n) matches among the possibilities are returned
    in a list, sorted by similarity score, most similar first.

    >>> get_close_matches("appel", ["ape", "apple", "peach", "puppy"])
    ['apple', 'ape']
    >>> import keyword
    >>> get_close_matches("wheel", keyword.kwlist)
    ['while']
    >>> get_close_matches("apple", keyword.kwlist)
    []
    >>> get_close_matches("accept", keyword.kwlist)
    ['except']
    """

    if not n >  0:
        raise ValueError("n must be > 0: " + `n`)
    if not 0.0 <= cutoff <= 1.0:
        raise ValueError("cutoff must be in [0.0, 1.0]: " + `cutoff`)
    result = []
    s = SequenceMatcher()
    s.set_seq2(word)
    for x in possibilities:
        s.set_seq1(x)
        if s.real_quick_ratio() >= cutoff and \
           s.quick_ratio() >= cutoff and \
           s.ratio() >= cutoff:
            result.append((s.ratio(), x))
    # Sort by score.
    result.sort()
    # Retain only the best n.
    result = result[-n:]
    # Move best-scorer to head of list.
    result.reverse()
    # Strip scores.
    # Python 2.x list comprehensions: return [x for score, x in result]
    return_result = []
    for score, x in result:
        return_result.append(x)
    return return_result
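The filtering idea above (cheap bounds first, with set_seq2 fixed on the word so only set_seq1 varies per candidate) can be exercised directly; this sketch uses the modern standard-library get_close_matches, which keeps the same signature:

```python
from difflib import get_close_matches
import keyword

# Spell-check a misspelled keyword against the keyword list, as in
# the docstring example above.
matches = get_close_matches("wheel", keyword.kwlist)
print(matches)
```

Only "while" clears the 0.6 similarity cutoff among the keywords.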
def _test():
    import doctest, difflib
    return doctest.testmod(difflib)

if __name__ == "__main__":
    _test()
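For a quick end-to-end check of the module docstring's headline example, here is a sketch against the modern standard-library difflib (whose SequenceMatcher this compat module mirrors):

```python
from difflib import SequenceMatcher

# Blanks are junk, exactly as in the module docstring example.
s = SequenceMatcher(lambda x: x == " ",
                    "private Thread currentThread;",
                    "private volatile Thread currentThread;")

# ratio() is 2.0*M / T: all 29 elements of the first string match,
# so 2*29 / (29+38) rounds to 0.866.
print(round(s.ratio(), 3))
```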
@ -0,0 +1,346 @@
#! /usr/bin/env python

# Module ndiff version 1.6.0
# Released to the public domain 08-Dec-2000,
# by Tim Peters (tim.one@home.com).

# Backported to Python 1.5.2 for ViewCVS by pf@artcom-gmbh.de, 24-Dec-2001

# Provided as-is; use at your own risk; no warranty; no promises; enjoy!

"""ndiff [-q] file1 file2
    or
ndiff (-r1 | -r2) < ndiff_output > file1_or_file2

Print a human-friendly file difference report to stdout.  Both inter-
and intra-line differences are noted.  In the second form, recreate file1
(-r1) or file2 (-r2) on stdout, from an ndiff report on stdin.

In the first form, if -q ("quiet") is not specified, the first two lines
of output are

-: file1
+: file2

Each remaining line begins with a two-letter code:

    "- "    line unique to file1
    "+ "    line unique to file2
    "  "    line common to both files
    "? "    line not present in either input file

Lines beginning with "? " attempt to guide the eye to intraline
differences, and were not present in either input file.  These lines can be
confusing if the source files contain tab characters.

The first file can be recovered by retaining only lines that begin with
"  " or "- ", and deleting those 2-character prefixes; use ndiff with -r1.

The second file can be recovered similarly, but by retaining only "  " and
"+ " lines; use ndiff with -r2; or, on Unix, the second file can be
recovered by piping the output through

    sed -n '/^[+ ] /s/^..//p'

See module comments for details and programmatic interface.
"""

__version__ = 1, 6, 1

# SequenceMatcher tries to compute a "human-friendly diff" between
# two sequences (chiefly picturing a file as a sequence of lines,
# and a line as a sequence of characters, here).  Unlike e.g. UNIX(tm)
# diff, the fundamental notion is the longest *contiguous* & junk-free
# matching subsequence.  That's what catches peoples' eyes.  The
# Windows(tm) windiff has another interesting notion, pairing up elements
# that appear uniquely in each sequence.  That, and the method here,
# appear to yield more intuitive difference reports than does diff.  This
# method appears to be the least vulnerable to synching up on blocks
# of "junk lines", though (like blank lines in ordinary text files,
# or maybe "<P>" lines in HTML files).  That may be because this is
# the only method of the 3 that has a *concept* of "junk" <wink>.
#
# Note that ndiff makes no claim to produce a *minimal* diff.  To the
# contrary, minimal diffs are often counter-intuitive, because they
# synch up anywhere possible, sometimes accidental matches 100 pages
# apart.  Restricting synch points to contiguous matches preserves some
# notion of locality, at the occasional cost of producing a longer diff.
#
# With respect to junk, an earlier version of ndiff simply refused to
# *start* a match with a junk element.  The result was cases like this:
#     before: private Thread currentThread;
#     after:  private volatile Thread currentThread;
# If you consider whitespace to be junk, the longest contiguous match
# not starting with junk is "e Thread currentThread".  So ndiff reported
# that "e volatil" was inserted between the 't' and the 'e' in "private".
# While an accurate view, to people that's absurd.  The current version
# looks for matching blocks that are entirely junk-free, then extends the
# longest one of those as far as possible but only with matching junk.
# So now "currentThread" is matched, then extended to suck up the
# preceding blank; then "private" is matched, and extended to suck up the
# following blank; then "Thread" is matched; and finally ndiff reports
# that "volatile " was inserted before "Thread".  The only quibble
# remaining is that perhaps it was really the case that " volatile"
# was inserted after "private".  I can live with that <wink>.
#
# NOTE on junk:  the module-level names
#     IS_LINE_JUNK
#     IS_CHARACTER_JUNK
# can be set to any functions you like.  The first one should accept
# a single string argument, and return true iff the string is junk.
# The default is whether the regexp r"\s*#?\s*$" matches (i.e., a
# line without visible characters, except for at most one splat).
# The second should accept a string of length 1 etc.  The default is
# whether the character is a blank or tab (note: bad idea to include
# newline in this!).
#
# After setting those, you can call fcompare(f1name, f2name) with the
# names of the files you want to compare.  The difference report
# is sent to stdout.  Or you can call main(args), passing what would
# have been in sys.argv[1:] had the cmd-line form been used.

from compat_difflib import SequenceMatcher

TRACE = 0

# define what "junk" means
import re

def IS_LINE_JUNK(line, pat=re.compile(r"\s*#?\s*$").match):
    return pat(line) is not None

def IS_CHARACTER_JUNK(ch, ws=" \t"):
    return ch in ws
|
||||
|
||||
del re
|
||||
|
||||
# meant for dumping lines
|
||||
def dump(tag, x, lo, hi):
|
||||
for i in xrange(lo, hi):
|
||||
print tag, x[i],
|
||||
|
||||
def plain_replace(a, alo, ahi, b, blo, bhi):
|
||||
assert alo < ahi and blo < bhi
|
||||
# dump the shorter block first -- reduces the burden on short-term
|
||||
# memory if the blocks are of very different sizes
|
||||
if bhi - blo < ahi - alo:
|
||||
dump('+', b, blo, bhi)
|
||||
dump('-', a, alo, ahi)
|
||||
else:
|
||||
dump('-', a, alo, ahi)
|
||||
dump('+', b, blo, bhi)
|
||||
|
||||
# When replacing one block of lines with another, this guy searches
|
||||
# the blocks for *similar* lines; the best-matching pair (if any) is
|
||||
# used as a synch point, and intraline difference marking is done on
|
||||
# the similar pair. Lots of work, but often worth it.
|
||||
|
||||
def fancy_replace(a, alo, ahi, b, blo, bhi):
|
||||
if TRACE:
|
||||
print '*** fancy_replace', alo, ahi, blo, bhi
|
||||
dump('>', a, alo, ahi)
|
||||
dump('<', b, blo, bhi)
|
||||
|
||||
# don't synch up unless the lines have a similarity score of at
|
||||
# least cutoff; best_ratio tracks the best score seen so far
|
||||
best_ratio, cutoff = 0.74, 0.75
|
||||
cruncher = SequenceMatcher(IS_CHARACTER_JUNK)
|
||||
eqi, eqj = None, None # 1st indices of equal lines (if any)
|
||||
|
||||
# search for the pair that matches best without being identical
|
||||
# (identical lines must be junk lines, & we don't want to synch up
|
||||
# on junk -- unless we have to)
|
||||
for j in xrange(blo, bhi):
|
||||
bj = b[j]
|
||||
cruncher.set_seq2(bj)
|
||||
for i in xrange(alo, ahi):
|
||||
ai = a[i]
|
||||
if ai == bj:
|
||||
if eqi is None:
|
||||
eqi, eqj = i, j
|
||||
continue
|
||||
cruncher.set_seq1(ai)
|
||||
# computing similarity is expensive, so use the quick
|
||||
# upper bounds first -- have seen this speed up messy
|
||||
# compares by a factor of 3.
|
||||
# note that ratio() is only expensive to compute the first
|
||||
# time it's called on a sequence pair; the expensive part
|
||||
# of the computation is cached by cruncher
|
||||
if cruncher.real_quick_ratio() > best_ratio and \
|
||||
cruncher.quick_ratio() > best_ratio and \
|
||||
cruncher.ratio() > best_ratio:
|
||||
best_ratio, best_i, best_j = cruncher.ratio(), i, j
|
||||
if best_ratio < cutoff:
|
||||
# no non-identical "pretty close" pair
|
||||
if eqi is None:
|
||||
# no identical pair either -- treat it as a straight replace
|
||||
plain_replace(a, alo, ahi, b, blo, bhi)
|
||||
return
|
||||
# no close pair, but an identical pair -- synch up on that
|
||||
best_i, best_j, best_ratio = eqi, eqj, 1.0
|
||||
else:
|
||||
# there's a close pair, so forget the identical pair (if any)
|
||||
eqi = None
|
||||
|
||||
# a[best_i] very similar to b[best_j]; eqi is None iff they're not
|
||||
# identical
|
||||
if TRACE:
|
||||
print '*** best_ratio', best_ratio, best_i, best_j
|
||||
dump('>', a, best_i, best_i+1)
|
||||
dump('<', b, best_j, best_j+1)
|
||||
|
||||
# pump out diffs from before the synch point
|
||||
fancy_helper(a, alo, best_i, b, blo, best_j)
|
||||
|
||||
# do intraline marking on the synch pair
|
||||
aelt, belt = a[best_i], b[best_j]
|
||||
if eqi is None:
|
||||
# pump out a '-', '?', '+', '?' quad for the synched lines
|
||||
atags = btags = ""
|
||||
cruncher.set_seqs(aelt, belt)
|
||||
for tag, ai1, ai2, bj1, bj2 in cruncher.get_opcodes():
|
||||
la, lb = ai2 - ai1, bj2 - bj1
|
||||
if tag == 'replace':
|
||||
atags = atags + '^' * la
|
||||
btags = btags + '^' * lb
|
||||
elif tag == 'delete':
|
||||
atags = atags + '-' * la
|
||||
elif tag == 'insert':
|
||||
btags = btags + '+' * lb
|
||||
elif tag == 'equal':
|
||||
atags = atags + ' ' * la
|
||||
btags = btags + ' ' * lb
|
||||
else:
|
||||
raise ValueError, 'unknown tag ' + `tag`
|
||||
printq(aelt, belt, atags, btags)
|
||||
else:
|
||||
# the synch pair is identical
|
||||
print ' ', aelt,
|
||||
|
||||
# pump out diffs from after the synch point
|
||||
fancy_helper(a, best_i+1, ahi, b, best_j+1, bhi)
|
||||
|
||||
def fancy_helper(a, alo, ahi, b, blo, bhi):
|
||||
if alo < ahi:
|
||||
if blo < bhi:
|
||||
fancy_replace(a, alo, ahi, b, blo, bhi)
|
||||
else:
|
||||
dump('-', a, alo, ahi)
|
||||
elif blo < bhi:
|
||||
dump('+', b, blo, bhi)
|
||||
|
||||
# Crap to deal with leading tabs in "?" output. Can hurt, but will
|
||||
# probably help most of the time.
|
||||
|
||||
def printq(aline, bline, atags, btags):
|
||||
common = min(count_leading(aline, "\t"),
|
||||
count_leading(bline, "\t"))
|
||||
common = min(common, count_leading(atags[:common], " "))
|
||||
print "-", aline,
|
||||
if count_leading(atags, " ") < len(atags):
|
||||
print "?", "\t" * common + atags[common:]
|
||||
print "+", bline,
|
||||
if count_leading(btags, " ") < len(btags):
|
||||
print "?", "\t" * common + btags[common:]
|
||||
|
||||
def count_leading(line, ch):
|
||||
i, n = 0, len(line)
|
||||
while i < n and line[i] == ch:
|
||||
i = i+1
|
||||
return i
|
||||
|
||||
def fail(msg):
|
||||
import sys
|
||||
out = sys.stderr.write
|
||||
out(msg + "\n\n")
|
||||
out(__doc__)
|
||||
return 0
|
||||
|
||||
# open a file & return the file object; gripe and return 0 if it
|
||||
# couldn't be opened
|
||||
def fopen(fname):
|
||||
try:
|
||||
return open(fname, 'r')
|
||||
except IOError, detail:
|
||||
return fail("couldn't open " + fname + ": " + str(detail))
|
||||
|
||||
# open two files & spray the diff to stdout; return false iff a problem
|
||||
def fcompare(f1name, f2name):
|
||||
f1 = fopen(f1name)
|
||||
f2 = fopen(f2name)
|
||||
if not f1 or not f2:
|
||||
return 0
|
||||
|
||||
a = f1.readlines(); f1.close()
|
||||
b = f2.readlines(); f2.close()
|
||||
|
||||
cruncher = SequenceMatcher(IS_LINE_JUNK, a, b)
|
||||
for tag, alo, ahi, blo, bhi in cruncher.get_opcodes():
|
||||
if tag == 'replace':
|
||||
fancy_replace(a, alo, ahi, b, blo, bhi)
|
||||
elif tag == 'delete':
|
||||
dump('-', a, alo, ahi)
|
||||
elif tag == 'insert':
|
||||
dump('+', b, blo, bhi)
|
||||
elif tag == 'equal':
|
||||
dump(' ', a, alo, ahi)
|
||||
else:
|
||||
raise ValueError, 'unknown tag ' + `tag`
|
||||
|
||||
return 1
|
||||
|
||||
# crack args (sys.argv[1:] is normal) & compare;
|
||||
# return false iff a problem
|
||||
|
||||
def main(args):
|
||||
import getopt
|
||||
try:
|
||||
opts, args = getopt.getopt(args, "qr:")
|
||||
except getopt.error, detail:
|
||||
return fail(str(detail))
|
||||
noisy = 1
|
||||
qseen = rseen = 0
|
||||
for opt, val in opts:
|
||||
if opt == "-q":
|
||||
qseen = 1
|
||||
noisy = 0
|
||||
elif opt == "-r":
|
||||
rseen = 1
|
||||
whichfile = val
|
||||
if qseen and rseen:
|
||||
return fail("can't specify both -q and -r")
|
||||
if rseen:
|
||||
if args:
|
||||
return fail("no args allowed with -r option")
|
||||
if whichfile in "12":
|
||||
restore(whichfile)
|
||||
return 1
|
||||
return fail("-r value must be 1 or 2")
|
||||
if len(args) != 2:
|
||||
return fail("need 2 filename args")
|
||||
f1name, f2name = args
|
||||
if noisy:
|
||||
print '-:', f1name
|
||||
print '+:', f2name
|
||||
return fcompare(f1name, f2name)
|
||||
|
||||
def restore(which):
|
||||
import sys
|
||||
tag = {"1": "- ", "2": "+ "}[which]
|
||||
prefixes = (" ", tag)
|
||||
for line in sys.stdin.readlines():
|
||||
if line[:2] in prefixes:
|
||||
print line[2:],
|
||||
|
||||
if __name__ == '__main__':
|
||||
import sys
|
||||
args = sys.argv[1:]
|
||||
if "-profile" in args:
|
||||
import profile, pstats
|
||||
args.remove("-profile")
|
||||
statf = "ndiff.pro"
|
||||
profile.run("main(args)", statf)
|
||||
stats = pstats.Stats(statf)
|
||||
stats.strip_dirs().sort_stats('time').print_stats()
|
||||
else:
|
||||
main(args)
|
|
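The intraline marking that fancy_replace and printq produce above survives, essentially unchanged, in the modern standard library as difflib.ndiff. A minimal sketch of that '-', '+', '?' output style, reusing the "volatile" example from the comment block (this assumes a Python 3 difflib, not the compat_difflib module this file imports):

```python
import difflib

# The example from the comments above: a pair of similar lines is synched
# up, and the inserted "volatile " run is marked on a trailing '?' line.
a = ['private Thread currentThread;\n']
b = ['private volatile Thread currentThread;\n']
for line in difflib.ndiff(a, b):
    print(line, end='')
```

The '-' and '+' lines carry the two versions of the line; the '?' guide line that follows the '+' marks the inserted characters with '+' symbols, which is exactly the quad format printq emits (minus the '?' line for the old version, since nothing was deleted).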
@ -0,0 +1,317 @@
# -*-python-*-
#
# Copyright (C) 1999-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# config.py: configuration utilities
#
# -----------------------------------------------------------------------

import sys
import os
import string
import ConfigParser
import fnmatch


#########################################################################
#
# CONFIGURATION
#
# There are three forms of configuration:
#
#   1) edit the viewvc.conf created by the viewvc-install(er)
#   2) as (1), but delete all unchanged entries from viewvc.conf
#   3) do not use viewvc.conf and just edit the defaults in this file
#
# Most users will want to use (1), but there are slight speed advantages
# to the other two options. Note that viewvc.conf values are a bit easier
# to work with since they are raw text, rather than python literal values.
#
#########################################################################

class Config:
  _sections = ('general', 'utilities', 'options', 'cvsdb', 'templates')
  _force_multi_value = ('cvs_roots', 'svn_roots', 'languages', 'kv_files',
                        'root_parents', 'allowed_views')

  def __init__(self):
    for section in self._sections:
      setattr(self, section, _sub_config())

  def load_config(self, pathname, vhost=None, rootname=None):
    self.conf_path = os.path.isfile(pathname) and pathname or None
    self.base = os.path.dirname(pathname)
    self.parser = ConfigParser.ConfigParser()
    self.parser.read(self.conf_path or [])

    for section in self._sections:
      if self.parser.has_section(section):
        self._process_section(self.parser, section, section)

    if vhost and self.parser.has_section('vhosts'):
      self._process_vhost(self.parser, vhost)

    if rootname:
      self._process_root_options(self.parser, rootname)

  def load_kv_files(self, language):
    kv = _sub_config()

    for fname in self.general.kv_files:
      if fname[0] == '[':
        idx = string.index(fname, ']')
        parts = string.split(fname[1:idx], '.')
        fname = string.strip(fname[idx+1:])
      else:
        parts = [ ]
      fname = string.replace(fname, '%lang%', language)

      parser = ConfigParser.ConfigParser()
      parser.read(os.path.join(self.base, fname))
      for section in parser.sections():
        for option in parser.options(section):
          full_name = parts + [section]
          ob = kv
          for name in full_name:
            try:
              ob = getattr(ob, name)
            except AttributeError:
              c = _sub_config()
              setattr(ob, name, c)
              ob = c
          setattr(ob, option, parser.get(section, option))

    return kv

  def path(self, path):
    """Return path relative to the config file directory"""
    return os.path.join(self.base, path)

  def _process_section(self, parser, section, subcfg_name):
    sc = getattr(self, subcfg_name)

    for opt in parser.options(section):
      value = parser.get(section, opt)
      if opt in self._force_multi_value:
        value = map(string.strip, filter(None, string.split(value, ',')))
      else:
        try:
          value = int(value)
        except ValueError:
          pass

      if opt == 'cvs_roots' or opt == 'svn_roots':
        value = _parse_roots(opt, value)

      setattr(sc, opt, value)

  def _process_vhost(self, parser, vhost):
    # find a vhost name for this vhost, if any (if not, we've nothing to do)
    canon_vhost = self._find_canon_vhost(parser, vhost)
    if not canon_vhost:
      return

    # overlay any option sections associated with this vhost name
    cv = 'vhost-%s/' % (canon_vhost)
    lcv = len(cv)
    for section in parser.sections():
      if section[:lcv] == cv:
        base_section = section[lcv:]
        if base_section not in self._sections:
          raise IllegalOverrideSection('vhost', section)
        self._process_section(parser, section, base_section)

  def _find_canon_vhost(self, parser, vhost):
    vhost = string.split(string.lower(vhost), ':')[0]  # lower-case, no port
    for canon_vhost in parser.options('vhosts'):
      value = parser.get('vhosts', canon_vhost)
      patterns = map(string.lower, map(string.strip,
                                       filter(None, string.split(value, ','))))
      for pat in patterns:
        if fnmatch.fnmatchcase(vhost, pat):
          return canon_vhost

    return None

  def _process_root_options(self, parser, rootname):
    rn = 'root-%s/' % (rootname)
    lrn = len(rn)
    for section in parser.sections():
      if section[:lrn] == rn:
        base_section = section[lrn:]
        if base_section in self._sections:
          if base_section == 'general':
            raise IllegalOverrideSection('root', section)
          self._process_section(parser, section, base_section)
        elif _startswith(base_section, 'authz-'):
          pass
        else:
          raise IllegalOverrideSection('root', section)

  def overlay_root_options(self, rootname):
    "Overlay per-root options atop the existing option set."
    if not self.conf_path:
      return
    self._process_root_options(self.parser, rootname)

  def _get_parser_items(self, parser, section):
    """Basically implement ConfigParser.items() for pre-Python-2.3 versions."""
    try:
      return self.parser.items(section)
    except AttributeError:
      d = {}
      for option in parser.options(section):
        d[option] = parser.get(section, option)
      return d.items()

  def get_authorizer_params(self, authorizer, rootname=None):
    if not self.conf_path:
      return {}

    params = {}
    authz_section = 'authz-%s' % (authorizer)
    for section in self.parser.sections():
      if section == authz_section:
        for key, value in self._get_parser_items(self.parser, section):
          params[key] = value
    if rootname:
      root_authz_section = 'root-%s/authz-%s' % (rootname, authorizer)
      for section in self.parser.sections():
        if section == root_authz_section:
          for key, value in self._get_parser_items(self.parser, section):
            params[key] = value
    return params

  def set_defaults(self):
    "Set some default values in the configuration."

    self.general.cvs_roots = { }
    self.general.svn_roots = { }
    self.general.root_parents = []
    self.general.default_root = ''
    self.general.mime_types_file = ''
    self.general.address = ''
    self.general.kv_files = [ ]
    self.general.languages = ['en-us']

    self.utilities.rcs_dir = ''
    if sys.platform == "win32":
      self.utilities.cvsnt = 'cvs'
    else:
      self.utilities.cvsnt = None
    self.utilities.svn = ''
    self.utilities.diff = ''
    self.utilities.cvsgraph = ''

    self.options.root_as_url_component = 1
    self.options.checkout_magic = 0
    self.options.allowed_views = ['markup', 'annotate', 'roots']
    self.options.authorizer = 'forbidden'
    self.options.mangle_email_addresses = 0
    self.options.default_file_view = "log"
    self.options.http_expiration_time = 600
    self.options.generate_etags = 1
    self.options.svn_config_dir = None
    self.options.use_rcsparse = 0
    self.options.sort_by = 'file'
    self.options.sort_group_dirs = 1
    self.options.hide_attic = 1
    self.options.hide_errorful_entries = 0
    self.options.log_sort = 'date'
    self.options.diff_format = 'h'
    self.options.hide_cvsroot = 1
    self.options.hr_breakable = 1
    self.options.hr_funout = 1
    self.options.hr_ignore_white = 0
    self.options.hr_ignore_keyword_subst = 1
    self.options.hr_intraline = 0
    self.options.allow_compress = 0
    self.options.template_dir = "templates"
    self.options.docroot = None
    self.options.show_subdir_lastmod = 0
    self.options.show_logs = 1
    self.options.show_log_in_markup = 1
    self.options.cross_copies = 0
    self.options.use_localtime = 0
    self.options.short_log_len = 80
    self.options.enable_syntax_coloration = 1
    self.options.use_cvsgraph = 0
    self.options.cvsgraph_conf = "cvsgraph.conf"
    self.options.use_re_search = 0
    self.options.use_pagesize = 0
    self.options.limit_changes = 100

    self.templates.diff = None
    self.templates.directory = None
    self.templates.error = None
    self.templates.file = None
    self.templates.graph = None
    self.templates.log = None
    self.templates.query = None
    self.templates.query_form = None
    self.templates.query_results = None
    self.templates.roots = None

    self.cvsdb.enabled = 0
    self.cvsdb.host = ''
    self.cvsdb.port = 3306
    self.cvsdb.database_name = ''
    self.cvsdb.user = ''
    self.cvsdb.passwd = ''
    self.cvsdb.readonly_user = ''
    self.cvsdb.readonly_passwd = ''
    self.cvsdb.row_limit = 1000
    self.cvsdb.rss_row_limit = 100
    self.cvsdb.check_database_for_root = 0

def _startswith(somestr, substr):
  return somestr[:len(substr)] == substr

def _parse_roots(config_name, config_value):
  roots = { }
  for root in config_value:
    pos = string.find(root, ':')
    if pos < 0:
      raise MalformedRoot(config_name, root)
    name, path = map(string.strip, (root[:pos], root[pos+1:]))
    roots[name] = path
  return roots

class ViewVCConfigurationError(Exception):
  pass

class IllegalOverrideSection(ViewVCConfigurationError):
  def __init__(self, override_type, section_name):
    self.section_name = section_name
    self.override_type = override_type
  def __str__(self):
    return "malformed configuration: illegal %s override section: %s" \
           % (self.override_type, self.section_name)

class MalformedRoot(ViewVCConfigurationError):
  def __init__(self, config_name, value_given):
    Exception.__init__(self, config_name, value_given)
    self.config_name = config_name
    self.value_given = value_given
  def __str__(self):
    return "malformed configuration: '%s' uses invalid syntax: %s" \
           % (self.config_name, self.value_given)


class _sub_config:
  pass

if not hasattr(sys, 'hexversion'):
  # Python 1.5 or 1.5.1; fix the syntax for ConfigParser options.
  import regex
  ConfigParser.option_cre = regex.compile('^\([-A-Za-z0-9._]+\)\(:\|['
                                          + string.whitespace
                                          + ']*=\)\(.*\)$')
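The per-root override scheme that Config._process_root_options implements above (options from a "root-NAME/section" section overlaid onto the already-loaded base section) can be sketched with the modern configparser module. This is a hypothetical Python 3 rendering for illustration only; the root name "myroot" and the option values are invented:

```python
import configparser

# Sketch of the "root-NAME/section" overlay: per-root options are applied
# on top of the matching base section, so the per-root value wins.
conf = configparser.ConfigParser()
conf.read_string("""
[options]
diff_format = h
use_cvsgraph = 0

[root-myroot/options]
use_cvsgraph = 1
""")

options = dict(conf.items('options'))      # base settings
prefix = 'root-%s/' % 'myroot'
for section in conf.sections():
    if section.startswith(prefix):
        base_section = section[len(prefix):]
        if base_section == 'options':      # overlay onto the base section
            options.update(conf.items(section))

print(options['use_cvsgraph'])  # per-root override wins: '1'
print(options['diff_format'])   # base value is preserved: 'h'
```

The real code additionally rejects "root-NAME/general" overrides (so a per-root section cannot redefine the roots themselves) and passes "root-NAME/authz-*" sections through untouched for the authorizer machinery.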
@ -0,0 +1,839 @@
# -*-python-*-
#
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

import os
import sys
import string
import time
import fnmatch
import re

import vclib
import dbi


## error
error = "cvsdb error"

## CheckinDatabase provides all interfaces needed to the SQL database
## back-end; it needs to be subclassed, and have its "Connect" method
## defined to actually be complete; it should run well off of any DBI 2.0
## compliant database interface

class CheckinDatabase:
    def __init__(self, host, port, user, passwd, database, row_limit):
        self._host = host
        self._port = port
        self._user = user
        self._passwd = passwd
        self._database = database
        self._row_limit = row_limit

        ## database lookup caches
        self._get_cache = {}
        self._get_id_cache = {}
        self._desc_id_cache = {}

    def Connect(self):
        self.db = dbi.connect(
            self._host, self._port, self._user, self._passwd, self._database)
        cursor = self.db.cursor()
        cursor.execute("SET AUTOCOMMIT=1")

    def sql_get_id(self, table, column, value, auto_set):
        sql = "SELECT id FROM %s WHERE %s=%%s" % (table, column)
        sql_args = (value, )

        cursor = self.db.cursor()
        cursor.execute(sql, sql_args)
        try:
            (id, ) = cursor.fetchone()
        except TypeError:
            if not auto_set:
                return None
        else:
            return str(int(id))

        ## insert the new identifier
        sql = "INSERT INTO %s(%s) VALUES(%%s)" % (table, column)
        sql_args = (value, )
        cursor.execute(sql, sql_args)

        return self.sql_get_id(table, column, value, 0)

    def get_id(self, table, column, value, auto_set):
        ## attempt to retrieve from cache
        try:
            return self._get_id_cache[table][column][value]
        except KeyError:
            pass

        id = self.sql_get_id(table, column, value, auto_set)
        if id == None:
            return None

        ## add to cache
        try:
            temp = self._get_id_cache[table]
        except KeyError:
            temp = self._get_id_cache[table] = {}

        try:
            temp2 = temp[column]
        except KeyError:
            temp2 = temp[column] = {}

        temp2[value] = id
        return id

    def sql_get(self, table, column, id):
        sql = "SELECT %s FROM %s WHERE id=%%s" % (column, table)
        sql_args = (id, )

        cursor = self.db.cursor()
        cursor.execute(sql, sql_args)
        try:
            (value, ) = cursor.fetchone()
        except TypeError:
            return None

        return value

    def get(self, table, column, id):
        ## attempt to retrieve from cache
        try:
            return self._get_cache[table][column][id]
        except KeyError:
            pass

        value = self.sql_get(table, column, id)
        if value == None:
            return None

        ## add to cache
        try:
            temp = self._get_cache[table]
        except KeyError:
            temp = self._get_cache[table] = {}

        try:
            temp2 = temp[column]
        except KeyError:
            temp2 = temp[column] = {}

        temp2[id] = value
        return value

    def get_list(self, table, field_index):
        sql = "SELECT * FROM %s" % (table)
        cursor = self.db.cursor()
        cursor.execute(sql)

        list = []
        while 1:
            row = cursor.fetchone()
            if row == None:
                break
            list.append(row[field_index])

        return list

    def GetBranchID(self, branch, auto_set = 1):
        return self.get_id("branches", "branch", branch, auto_set)

    def GetBranch(self, id):
        return self.get("branches", "branch", id)

    def GetDirectoryID(self, dir, auto_set = 1):
        return self.get_id("dirs", "dir", dir, auto_set)

    def GetDirectory(self, id):
        return self.get("dirs", "dir", id)

    def GetFileID(self, file, auto_set = 1):
        return self.get_id("files", "file", file, auto_set)

    def GetFile(self, id):
        return self.get("files", "file", id)

    def GetAuthorID(self, author, auto_set = 1):
        return self.get_id("people", "who", author, auto_set)

    def GetAuthor(self, id):
        return self.get("people", "who", id)

    def GetRepositoryID(self, repository, auto_set = 1):
        return self.get_id("repositories", "repository", repository, auto_set)

    def GetRepository(self, id):
        return self.get("repositories", "repository", id)

    def SQLGetDescriptionID(self, description, auto_set = 1):
        ## lame string hash, blame Netscape -JMP
        hash = len(description)

        sql = "SELECT id FROM descs WHERE hash=%s AND description=%s"
        sql_args = (hash, description)

        cursor = self.db.cursor()
        cursor.execute(sql, sql_args)
        try:
            (id, ) = cursor.fetchone()
        except TypeError:
            if not auto_set:
                return None
        else:
            return str(int(id))

        sql = "INSERT INTO descs (hash,description) values (%s,%s)"
        sql_args = (hash, description)
        cursor.execute(sql, sql_args)

        return self.GetDescriptionID(description, 0)

    def GetDescriptionID(self, description, auto_set = 1):
        ## attempt to retrieve from cache
        hash = len(description)
        try:
            return self._desc_id_cache[hash][description]
        except KeyError:
            pass

        id = self.SQLGetDescriptionID(description, auto_set)
        if id == None:
            return None

        ## add to cache
        try:
            temp = self._desc_id_cache[hash]
        except KeyError:
            temp = self._desc_id_cache[hash] = {}

        temp[description] = id
        return id

    def GetDescription(self, id):
        return self.get("descs", "description", id)

    def GetRepositoryList(self):
        return self.get_list("repositories", 1)

    def GetBranchList(self):
        return self.get_list("branches", 1)

    def GetAuthorList(self):
        return self.get_list("people", 1)

    def AddCommitList(self, commit_list):
        for commit in commit_list:
            self.AddCommit(commit)

    def AddCommit(self, commit):
        ci_when = dbi.DateTimeFromTicks(commit.GetTime() or 0.0)
        ci_type = commit.GetTypeString()
        who_id = self.GetAuthorID(commit.GetAuthor())
        repository_id = self.GetRepositoryID(commit.GetRepository())
        directory_id = self.GetDirectoryID(commit.GetDirectory())
        file_id = self.GetFileID(commit.GetFile())
        revision = commit.GetRevision()
        sticky_tag = "NULL"
        branch_id = self.GetBranchID(commit.GetBranch())
        plus_count = commit.GetPlusCount() or '0'
        minus_count = commit.GetMinusCount() or '0'
        description_id = self.GetDescriptionID(commit.GetDescription())

        sql = "REPLACE INTO checkins"\
              " (type,ci_when,whoid,repositoryid,dirid,fileid,revision,"\
              "  stickytag,branchid,addedlines,removedlines,descid)"\
              " VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)"
        sql_args = (ci_type, ci_when, who_id, repository_id,
                    directory_id, file_id, revision, sticky_tag, branch_id,
                    plus_count, minus_count, description_id)

        cursor = self.db.cursor()
        try:
            cursor.execute(sql, sql_args)
        except Exception, e:
            raise Exception("Error adding commit: '%s'\n"
                            "Values were:\n"
                            "\ttype = %s\n"
                            "\tci_when = %s\n"
                            "\twhoid = %s\n"
                            "\trepositoryid = %s\n"
                            "\tdirid = %s\n"
                            "\tfileid = %s\n"
                            "\trevision = %s\n"
                            "\tstickytag = %s\n"
                            "\tbranchid = %s\n"
                            "\taddedlines = %s\n"
                            "\tremovedlines = %s\n"
                            "\tdescid = %s\n"
                            % ((str(e), ) + sql_args))

    def SQLQueryListString(self, field, query_entry_list):
        sqlList = []

        for query_entry in query_entry_list:
            data = query_entry.data
            ## figure out the correct match type
            if query_entry.match == "exact":
                match = "="
            elif query_entry.match == "like":
                match = " LIKE "
            elif query_entry.match == "glob":
                match = " REGEXP "
                # use fnmatch to translate the glob into a regexp
                data = fnmatch.translate(data)
                if data[0] != '^': data = '^' + data
            elif query_entry.match == "regex":
                match = " REGEXP "
            elif query_entry.match == "notregex":
                match = " NOT REGEXP "

            sqlList.append("%s%s%s" % (field, match, self.db.literal(data)))

        return "(%s)" % (string.join(sqlList, " OR "))

    def CreateSQLQueryString(self, query):
        tableList = [("checkins", None)]
        condList = []

        if len(query.repository_list):
            tableList.append(("repositories",
                              "(checkins.repositoryid=repositories.id)"))
            temp = self.SQLQueryListString("repositories.repository",
                                           query.repository_list)
            condList.append(temp)

        if len(query.branch_list):
            tableList.append(("branches", "(checkins.branchid=branches.id)"))
            temp = self.SQLQueryListString("branches.branch",
                                           query.branch_list)
            condList.append(temp)

        if len(query.directory_list):
            tableList.append(("dirs", "(checkins.dirid=dirs.id)"))
            temp = self.SQLQueryListString("dirs.dir", query.directory_list)
            condList.append(temp)

        if len(query.file_list):
            tableList.append(("files", "(checkins.fileid=files.id)"))
            temp = self.SQLQueryListString("files.file", query.file_list)
            condList.append(temp)

        if len(query.author_list):
            tableList.append(("people", "(checkins.whoid=people.id)"))
            temp = self.SQLQueryListString("people.who", query.author_list)
            condList.append(temp)

        if len(query.comment_list):
            tableList.append(("descs", "(checkins.descid=descs.id)"))
            temp = self.SQLQueryListString("descs.description",
                                           query.comment_list)
            condList.append(temp)

        if query.from_date:
            temp = "(checkins.ci_when>=\"%s\")" % (str(query.from_date))
            condList.append(temp)

        if query.to_date:
            temp = "(checkins.ci_when<=\"%s\")" % (str(query.to_date))
            condList.append(temp)

        if query.sort == "date":
            order_by = "ORDER BY checkins.ci_when DESC,descid"
        elif query.sort == "author":
            tableList.append(("people", "(checkins.whoid=people.id)"))
            order_by = "ORDER BY people.who,descid"
        elif query.sort == "file":
            tableList.append(("files", "(checkins.fileid=files.id)"))
            order_by = "ORDER BY files.file,descid"

        ## exclude duplicates from the table list, and split out join
        ## conditions from table names. In future, the join conditions
        ## might be handled by INNER JOIN statements instead of WHERE
        ## clauses, but MySQL 3.22 apparently doesn't support them well.
        tables = []
        joinConds = []
        for (table, cond) in tableList:
            if table not in tables:
                tables.append(table)
                if cond is not None: joinConds.append(cond)

        tables = string.join(tables, ",")
        conditions = string.join(joinConds + condList, " AND ")
        conditions = conditions and "WHERE %s" % conditions

        ## limit the number of rows requested or we could really slam
        ## a server with a large database
        limit = ""
        if query.limit:
            limit = "LIMIT %s" % (str(query.limit))
        elif self._row_limit:
            limit = "LIMIT %s" % (str(self._row_limit))

        sql = "SELECT checkins.* FROM %s %s %s %s" % (
            tables, conditions, order_by, limit)

        return sql

    def RunQuery(self, query):
        sql = self.CreateSQLQueryString(query)
        cursor = self.db.cursor()
        cursor.execute(sql)

        while 1:
            row = cursor.fetchone()
            if not row:
                break

            (dbType, dbCI_When, dbAuthorID, dbRepositoryID, dbDirID,
             dbFileID, dbRevision, dbStickyTag, dbBranchID, dbAddedLines,
             dbRemovedLines, dbDescID) = row

            commit = LazyCommit(self)
            if dbType == 'Add':
                commit.SetTypeAdd()
            elif dbType == 'Remove':
                commit.SetTypeRemove()
            else:
                commit.SetTypeChange()
            commit.SetTime(dbi.TicksFromDateTime(dbCI_When))
            commit.SetFileID(dbFileID)
            commit.SetDirectoryID(dbDirID)
            commit.SetRevision(dbRevision)
            commit.SetRepositoryID(dbRepositoryID)
            commit.SetAuthorID(dbAuthorID)
            commit.SetBranchID(dbBranchID)
            commit.SetPlusCount(dbAddedLines)
            commit.SetMinusCount(dbRemovedLines)
            commit.SetDescriptionID(dbDescID)

            query.AddCommit(commit)

    def CheckCommit(self, commit):
        repository_id = self.GetRepositoryID(commit.GetRepository(), 0)
        if repository_id == None:
            return None

        dir_id = self.GetDirectoryID(commit.GetDirectory(), 0)
        if dir_id == None:
            return None

        file_id = self.GetFileID(commit.GetFile(), 0)
        if file_id == None:
            return None

        sql = "SELECT * FROM checkins WHERE "\
              " repositoryid=%s AND dirid=%s AND fileid=%s AND revision=%s"
        sql_args = (repository_id, dir_id, file_id, commit.GetRevision())

        cursor = self.db.cursor()
        cursor.execute(sql, sql_args)
        try:
            (ci_type, ci_when, who_id, repository_id,
             dir_id, file_id, revision, sticky_tag, branch_id,
             plus_count, minus_count, description_id) = cursor.fetchone()
        except TypeError:
            return None

        return commit

    def sql_delete(self, table, key, value, keep_fkey = None):
        sql = "DELETE FROM %s WHERE %s=%%s" % (table, key)
|
||||
sql_args = (value, )
|
||||
if keep_fkey:
|
||||
sql += " AND %s NOT IN (SELECT %s FROM checkins WHERE %s = %%s)" \
|
||||
% (key, keep_fkey, keep_fkey)
|
||||
sql_args = (value, value)
|
||||
cursor = self.db.cursor()
|
||||
cursor.execute(sql, sql_args)
|
||||
|
||||
def PurgeRepository(self, repository):
|
||||
rep_id = self.GetRepositoryID(repository)
|
||||
if not rep_id:
|
||||
raise Exception, "Unknown repository '%s'" % (repository)
|
||||
|
||||
sql = "SELECT * FROM checkins WHERE repositoryid=%s"
|
||||
sql_args = (rep_id, )
|
||||
cursor = self.db.cursor()
|
||||
cursor.execute(sql, sql_args)
|
||||
checkins = []
|
||||
while 1:
|
||||
try:
|
||||
(ci_type, ci_when, who_id, repository_id,
|
||||
dir_id, file_id, revision, sticky_tag, branch_id,
|
||||
plus_count, minus_count, description_id) = cursor.fetchone()
|
||||
except TypeError:
|
||||
break
|
||||
checkins.append([file_id, dir_id, branch_id, description_id, who_id])
|
||||
|
||||
#self.sql_delete('repositories', 'id', rep_id)
|
||||
self.sql_delete('checkins', 'repositoryid', rep_id)
|
||||
for checkin in checkins:
|
||||
self.sql_delete('files', 'id', checkin[0], 'fileid')
|
||||
self.sql_delete('dirs', 'id', checkin[1], 'dirid')
|
||||
self.sql_delete('branches', 'id', checkin[2], 'branchid')
|
||||
self.sql_delete('descs', 'id', checkin[3], 'descid')
|
||||
self.sql_delete('people', 'id', checkin[4], 'whoid')
|
||||
|
||||
## the Commit class holds data on one commit; the representation is as
## close as possible to how it is committed to and retrieved from the
## database engine
class Commit:
  ## static constants for type of commit
  CHANGE = 0
  ADD = 1
  REMOVE = 2

  def __init__(self):
    self.__directory = ''
    self.__file = ''
    self.__repository = ''
    self.__revision = ''
    self.__author = ''
    self.__branch = ''
    self.__pluscount = ''
    self.__minuscount = ''
    self.__description = ''
    self.__gmt_time = 0.0
    self.__type = Commit.CHANGE

  def SetRepository(self, repository):
    self.__repository = repository

  def GetRepository(self):
    return self.__repository

  def SetDirectory(self, dir):
    self.__directory = dir

  def GetDirectory(self):
    return self.__directory

  def SetFile(self, file):
    self.__file = file

  def GetFile(self):
    return self.__file

  def SetRevision(self, revision):
    self.__revision = revision

  def GetRevision(self):
    return self.__revision

  def SetTime(self, gmt_time):
    if gmt_time is None:
      ### We're just going to assume that a datestamp of The Epoch
      ### ain't real.
      self.__gmt_time = 0.0
    else:
      self.__gmt_time = float(gmt_time)

  def GetTime(self):
    return self.__gmt_time and self.__gmt_time or None

  def SetAuthor(self, author):
    self.__author = author

  def GetAuthor(self):
    return self.__author

  def SetBranch(self, branch):
    self.__branch = branch or ''

  def GetBranch(self):
    return self.__branch

  def SetPlusCount(self, pluscount):
    self.__pluscount = pluscount

  def GetPlusCount(self):
    return self.__pluscount

  def SetMinusCount(self, minuscount):
    self.__minuscount = minuscount

  def GetMinusCount(self):
    return self.__minuscount

  def SetDescription(self, description):
    self.__description = description

  def GetDescription(self):
    return self.__description

  def SetTypeChange(self):
    self.__type = Commit.CHANGE

  def SetTypeAdd(self):
    self.__type = Commit.ADD

  def SetTypeRemove(self):
    self.__type = Commit.REMOVE

  def GetType(self):
    return self.__type

  def GetTypeString(self):
    if self.__type == Commit.CHANGE:
      return 'Change'
    elif self.__type == Commit.ADD:
      return 'Add'
    elif self.__type == Commit.REMOVE:
      return 'Remove'

## LazyCommit overrides a few methods of Commit to retrieve its
## properties only as they are needed
class LazyCommit(Commit):
  def __init__(self, db):
    Commit.__init__(self)
    self.__db = db

  def SetFileID(self, dbFileID):
    self.__dbFileID = dbFileID

  def GetFileID(self):
    return self.__dbFileID

  def GetFile(self):
    return self.__db.GetFile(self.__dbFileID)

  def SetDirectoryID(self, dbDirID):
    self.__dbDirID = dbDirID

  def GetDirectoryID(self):
    return self.__dbDirID

  def GetDirectory(self):
    return self.__db.GetDirectory(self.__dbDirID)

  def SetRepositoryID(self, dbRepositoryID):
    self.__dbRepositoryID = dbRepositoryID

  def GetRepositoryID(self):
    return self.__dbRepositoryID

  def GetRepository(self):
    return self.__db.GetRepository(self.__dbRepositoryID)

  def SetAuthorID(self, dbAuthorID):
    self.__dbAuthorID = dbAuthorID

  def GetAuthorID(self):
    return self.__dbAuthorID

  def GetAuthor(self):
    return self.__db.GetAuthor(self.__dbAuthorID)

  def SetBranchID(self, dbBranchID):
    self.__dbBranchID = dbBranchID

  def GetBranchID(self):
    return self.__dbBranchID

  def GetBranch(self):
    return self.__db.GetBranch(self.__dbBranchID)

  def SetDescriptionID(self, dbDescID):
    self.__dbDescID = dbDescID

  def GetDescriptionID(self):
    return self.__dbDescID

  def GetDescription(self):
    return self.__db.GetDescription(self.__dbDescID)

## QueryEntry holds data on one match-type in the SQL database;
## match is: "exact", "like", or "regex"
class QueryEntry:
  def __init__(self, data, match):
    self.data = data
    self.match = match

## CheckinDatabaseQuery is an object which contains the search parameters
## for a query to the CheckinDatabase
class CheckinDatabaseQuery:
  def __init__(self):
    ## sorting
    self.sort = "date"

    ## repository to query
    self.repository_list = []
    self.branch_list = []
    self.directory_list = []
    self.file_list = []
    self.author_list = []
    self.comment_list = []

    ## date range in DB API 2.0 datetime objects
    self.from_date = None
    self.to_date = None

    ## limit on number of rows to return
    self.limit = None

    ## list of commits -- filled in by CVS query
    self.commit_list = []

    ## commit_cb provides a callback for commits as they
    ## are added
    self.commit_cb = None

  def SetRepository(self, repository, match = "exact"):
    self.repository_list.append(QueryEntry(repository, match))

  def SetBranch(self, branch, match = "exact"):
    self.branch_list.append(QueryEntry(branch, match))

  def SetDirectory(self, directory, match = "exact"):
    self.directory_list.append(QueryEntry(directory, match))

  def SetFile(self, file, match = "exact"):
    self.file_list.append(QueryEntry(file, match))

  def SetAuthor(self, author, match = "exact"):
    self.author_list.append(QueryEntry(author, match))

  def SetComment(self, comment, match = "exact"):
    self.comment_list.append(QueryEntry(comment, match))

  def SetSortMethod(self, sort):
    self.sort = sort

  def SetFromDateObject(self, ticks):
    self.from_date = dbi.DateTimeFromTicks(ticks)

  def SetToDateObject(self, ticks):
    self.to_date = dbi.DateTimeFromTicks(ticks)

  def SetFromDateHoursAgo(self, hours_ago):
    ticks = time.time() - (3600 * hours_ago)
    self.from_date = dbi.DateTimeFromTicks(ticks)

  def SetFromDateDaysAgo(self, days_ago):
    ticks = time.time() - (86400 * days_ago)
    self.from_date = dbi.DateTimeFromTicks(ticks)

  def SetToDateDaysAgo(self, days_ago):
    ticks = time.time() - (86400 * days_ago)
    self.to_date = dbi.DateTimeFromTicks(ticks)

  def SetLimit(self, limit):
    self.limit = limit

  def AddCommit(self, commit):
    self.commit_list.append(commit)


##
## entrypoints
##
def CreateCommit():
  return Commit()

def CreateCheckinQuery():
  return CheckinDatabaseQuery()

def ConnectDatabase(cfg, readonly=0):
  if readonly:
    user = cfg.cvsdb.readonly_user
    passwd = cfg.cvsdb.readonly_passwd
  else:
    user = cfg.cvsdb.user
    passwd = cfg.cvsdb.passwd
  db = CheckinDatabase(cfg.cvsdb.host, cfg.cvsdb.port, user, passwd,
                       cfg.cvsdb.database_name, cfg.cvsdb.row_limit)
  db.Connect()
  return db

def ConnectDatabaseReadOnly(cfg):
  return ConnectDatabase(cfg, 1)

def GetCommitListFromRCSFile(repository, path_parts, revision=None):
  commit_list = []

  directory = string.join(path_parts[:-1], "/")
  file = path_parts[-1]

  revs = repository.itemlog(path_parts, revision, vclib.SORTBY_DEFAULT,
                            0, 0, {"cvs_pass_rev": 1})
  for rev in revs:
    commit = CreateCommit()
    commit.SetRepository(repository.rootpath)
    commit.SetDirectory(directory)
    commit.SetFile(file)
    commit.SetRevision(rev.string)
    commit.SetAuthor(rev.author)
    commit.SetDescription(rev.log)
    commit.SetTime(rev.date)

    if rev.changed:
      # extract the plus/minus and drop the sign
      plus, minus = string.split(rev.changed)
      commit.SetPlusCount(plus[1:])
      commit.SetMinusCount(minus[1:])

      if rev.dead:
        commit.SetTypeRemove()
      else:
        commit.SetTypeChange()
    else:
      commit.SetTypeAdd()

    commit_list.append(commit)

    # if revision is on a branch which has at least one tag
    if len(rev.number) > 2 and rev.branches:
      commit.SetBranch(rev.branches[0].name)

  return commit_list

def GetUnrecordedCommitList(repository, path_parts, db):
  commit_list = GetCommitListFromRCSFile(repository, path_parts)

  unrecorded_commit_list = []
  for commit in commit_list:
    result = db.CheckCommit(commit)
    if not result:
      unrecorded_commit_list.append(commit)

  return unrecorded_commit_list

_re_likechars = re.compile(r"([_%\\])")

def EscapeLike(literal):
  """Escape literal string for use in a MySQL LIKE pattern"""
  return re.sub(_re_likechars, r"\\\1", literal)

def FindRepository(db, path):
  """Find repository path in database given path to subdirectory

  Returns normalized repository path and relative directory path"""
  path = os.path.normpath(path)
  dirs = []
  while path:
    rep = os.path.normcase(path)
    if db.GetRepositoryID(rep, 0) is None:
      path, pdir = os.path.split(path)
      if not pdir:
        return None, None
      dirs.append(pdir)
    else:
      break
  dirs.reverse()
  return rep, dirs

def CleanRepository(path):
  """Return normalized top-level repository path"""
  return os.path.normcase(os.path.normpath(path))
@ -0,0 +1,63 @@
# -*-python-*-
#
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

import sys
import time
import types
import re
import compat
import MySQLdb

# set to 1 to store commit times in UTC, or 0 to use the ViewVC machine's
# local timezone. Using UTC is recommended because it ensures that the
# database will remain valid even if it is moved to another machine or the host
# computer's time zone is changed. UTC also avoids the ambiguity associated
# with daylight saving time (for example if a computer in New York recorded the
# local time 2002/10/27 1:30 am, there would be no way to tell whether the
# actual time was recorded before or after clocks were rolled back). Use local
# times for compatibility with databases used by ViewCVS 0.92 and earlier
# versions.
utc_time = 1

def DateTimeFromTicks(ticks):
  """Return a MySQL DATETIME value from a unix timestamp"""

  if utc_time:
    t = time.gmtime(ticks)
  else:
    t = time.localtime(ticks)
  return "%04d-%02d-%02d %02d:%02d:%02d" % t[:6]

_re_datetime = re.compile('([0-9]{4})-([0-9][0-9])-([0-9][0-9]) '
                          '([0-9][0-9]):([0-9][0-9]):([0-9][0-9])')

def TicksFromDateTime(datetime):
  """Return a unix timestamp from a MySQL DATETIME value"""

  if type(datetime) == types.StringType:
    # datetime is a MySQL DATETIME string
    matches = _re_datetime.match(datetime).groups()
    t = tuple(map(int, matches)) + (0, 0, 0)
  elif hasattr(datetime, "timetuple"):
    # datetime is a Python >=2.3 datetime.datetime object
    t = datetime.timetuple()
  else:
    # datetime is an eGenix mx.DateTime object
    t = datetime.tuple()

  if utc_time:
    return compat.timegm(t)
  else:
    return time.mktime(t[:8] + (-1,))

def connect(host, port, user, passwd, db):
  return MySQLdb.connect(host=host, port=port, user=user, passwd=passwd, db=db)
@ -0,0 +1,199 @@
# -*-python-*-
#
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

#
# Note: a t_start/t_end pair consumes about 0.00005 seconds on a P3/700.
# the lambda form (when debugging is disabled) should be even faster.
#

import sys

# Set to non-zero to track and print processing times
SHOW_TIMES = 0

# Set to non-zero to display child process info
SHOW_CHILD_PROCESSES = 0

# Set to a server-side path to force the tarball view to generate the
# tarball as a file on the server, instead of transmitting the data
# back to the browser. This enables easy display of error
# conditions in the browser, as well as tarball inspection on the
# server. NOTE: The file will be a TAR archive, *not* gzip-compressed.
TARFILE_PATH = ''


if SHOW_TIMES:

  import time

  _timers = { }
  _times = { }

  def t_start(which):
    _timers[which] = time.time()

  def t_end(which):
    t = time.time() - _timers[which]
    if _times.has_key(which):
      _times[which] = _times[which] + t
    else:
      _times[which] = t

  def dump():
    for name, value in _times.items():
      print '%s: %.6f<br />' % (name, value)

else:

  t_start = t_end = dump = lambda *args: None


class ViewVCException:
  def __init__(self, msg, status=None):
    self.msg = msg
    self.status = status

  def __str__(self):
    if self.status:
      return '%s: %s' % (self.status, self.msg)
    return "ViewVC Unrecoverable Error: %s" % self.msg


def PrintException(server, exc_data):
  status = exc_data['status']
  msg = exc_data['msg']
  tb = exc_data['stacktrace']

  server.header(status=status)
  server.write("<h3>An Exception Has Occurred</h3>\n")

  s = ''
  if msg:
    s = '<p><pre>%s</pre></p>' % server.escape(msg)
  if status:
    s = s + ('<h4>HTTP Response Status</h4>\n<p><pre>\n%s</pre></p><hr />\n'
             % status)
  server.write(s)

  server.write("<h4>Python Traceback</h4>\n<p><pre>")
  server.write(server.escape(tb))
  server.write("</pre></p>\n")


def GetExceptionData():
  # capture the exception before doing anything else
  exc_type, exc, exc_tb = sys.exc_info()

  exc_dict = {
    'status' : None,
    'msg' : None,
    'stacktrace' : None,
    }

  try:
    import traceback, string

    if isinstance(exc, ViewVCException):
      exc_dict['msg'] = exc.msg
      exc_dict['status'] = exc.status

    tb = string.join(traceback.format_exception(exc_type, exc, exc_tb), '')
    exc_dict['stacktrace'] = tb

  finally:
    # prevent circular reference. sys.exc_info documentation warns
    # "Assigning the traceback return value to a local variable in a function
    # that is handling an exception will cause a circular reference..."
    # This is all based on 'exc_tb', and we're now done with it. Toss it.
    del exc_tb

  return exc_dict


if SHOW_CHILD_PROCESSES:
  class Process:
    def __init__(self, command, inStream, outStream, errStream):
      self.command = command
      self.debugIn = inStream
      self.debugOut = outStream
      self.debugErr = errStream

      import sapi
      if not sapi.server is None:
        if not sapi.server.pageGlobals.has_key('processes'):
          sapi.server.pageGlobals['processes'] = [self]
        else:
          sapi.server.pageGlobals['processes'].append(self)

  def DumpChildren(server):
    import os

    if not server.pageGlobals.has_key('processes'):
      return

    server.header()
    lastOut = None
    i = 0

    for k in server.pageGlobals['processes']:
      i = i + 1
      server.write("<table>\n")
      server.write("<tr><td colspan=\"2\">Child Process%i</td></tr>" % i)
      server.write("<tr>\n <td style=\"vertical-align:top\">Command Line</td> <td><pre>")
      server.write(server.escape(k.command))
      server.write("</pre></td>\n</tr>\n")
      server.write("<tr>\n <td style=\"vertical-align:top\">Standard In:</td> <td>")

      if k.debugIn is lastOut and not lastOut is None:
        server.write("<em>Output from process %i</em>" % (i - 1))
      elif k.debugIn:
        server.write("<pre>")
        server.write(server.escape(k.debugIn.getvalue()))
        server.write("</pre>")

      server.write("</td>\n</tr>\n")

      if k.debugOut is k.debugErr:
        server.write("<tr>\n <td style=\"vertical-align:top\">Standard Out &amp; Error:</td> <td><pre>")
        if k.debugOut:
          server.write(server.escape(k.debugOut.getvalue()))
        server.write("</pre></td>\n</tr>\n")

      else:
        server.write("<tr>\n <td style=\"vertical-align:top\">Standard Out:</td> <td><pre>")
        if k.debugOut:
          server.write(server.escape(k.debugOut.getvalue()))
        server.write("</pre></td>\n</tr>\n")
        server.write("<tr>\n <td style=\"vertical-align:top\">Standard Error:</td> <td><pre>")
        if k.debugErr:
          server.write(server.escape(k.debugErr.getvalue()))
        server.write("</pre></td>\n</tr>\n")

      server.write("</table>\n")
      server.flush()
      lastOut = k.debugOut

    server.write("<table>\n")
    server.write("<tr><td colspan=\"2\">Environment Variables</td></tr>")
    for k, v in os.environ.items():
      server.write("<tr>\n <td style=\"vertical-align:top\"><pre>")
      server.write(server.escape(k))
      server.write("</pre></td>\n <td style=\"vertical-align:top\"><pre>")
      server.write(server.escape(v))
      server.write("</pre></td>\n</tr>")
    server.write("</table>")

else:

  def DumpChildren(server):
    pass
@ -0,0 +1,830 @@
|
|||
#!/usr/bin/env python
|
||||
"""ezt.py -- easy templating
|
||||
|
||||
ezt templates are simply text files in whatever format you so desire
|
||||
(such as XML, HTML, etc.) which contain directives sprinkled
|
||||
throughout. With these directives it is possible to generate the
|
||||
dynamic content from the ezt templates.
|
||||
|
||||
These directives are enclosed in square brackets. If you are a
|
||||
C-programmer, you might be familar with the #ifdef directives of the C
|
||||
preprocessor 'cpp'. ezt provides a similar concept. Additionally EZT
|
||||
has a 'for' directive, which allows it to iterate (repeat) certain
|
||||
subsections of the template according to sequence of data items
|
||||
provided by the application.
|
||||
|
||||
The final rendering is performed by the method generate() of the Template
|
||||
class. Building template instances can either be done using external
|
||||
EZT files (convention: use the suffix .ezt for such files):
|
||||
|
||||
>>> template = Template("../templates/log.ezt")
|
||||
|
||||
or by calling the parse() method of a template instance directly with
|
||||
a EZT template string:
|
||||
|
||||
>>> template = Template()
|
||||
>>> template.parse('''<html><head>
|
||||
... <title>[title_string]</title></head>
|
||||
... <body><h1>[title_string]</h1>
|
||||
... [for a_sequence] <p>[a_sequence]</p>
|
||||
... [end] <hr />
|
||||
... The [person] is [if-any state]in[else]out[end].
|
||||
... </body>
|
||||
... </html>
|
||||
... ''')
|
||||
|
||||
The application should build a dictionary 'data' and pass it together
|
||||
with the output fileobject to the templates generate method:
|
||||
|
||||
>>> data = {'title_string' : "A Dummy Page",
|
||||
... 'a_sequence' : ['list item 1', 'list item 2', 'another element'],
|
||||
... 'person': "doctor",
|
||||
... 'state' : None }
|
||||
>>> import sys
|
||||
>>> template.generate(sys.stdout, data)
|
||||
<html><head>
|
||||
<title>A Dummy Page</title></head>
|
||||
<body><h1>A Dummy Page</h1>
|
||||
<p>list item 1</p>
|
||||
<p>list item 2</p>
|
||||
<p>another element</p>
|
||||
<hr />
|
||||
The doctor is out.
|
||||
</body>
|
||||
</html>
|
||||
|
||||
Template syntax error reporting should be improved. Currently it is
|
||||
very sparse (template line numbers would be nice):
|
||||
|
||||
>>> Template().parse("[if-any where] foo [else] bar [end unexpected args]")
|
||||
Traceback (innermost last):
|
||||
File "<stdin>", line 1, in ?
|
||||
File "ezt.py", line 220, in parse
|
||||
self.program = self._parse(text)
|
||||
File "ezt.py", line 275, in _parse
|
||||
raise ArgCountSyntaxError(str(args[1:]))
|
||||
ArgCountSyntaxError: ['unexpected', 'args']
|
||||
>>> Template().parse("[if unmatched_end]foo[end]")
|
||||
Traceback (innermost last):
|
||||
File "<stdin>", line 1, in ?
|
||||
File "ezt.py", line 206, in parse
|
||||
self.program = self._parse(text)
|
||||
File "ezt.py", line 266, in _parse
|
||||
raise UnmatchedEndError()
|
||||
UnmatchedEndError
|
||||
|
||||
|
||||
Directives
|
||||
==========
|
||||
|
||||
Several directives allow the use of dotted qualified names refering to objects
|
||||
or attributes of objects contained in the data dictionary given to the
|
||||
.generate() method.
|
||||
|
||||
Qualified names
|
||||
---------------
|
||||
|
||||
Qualified names have two basic forms: a variable reference, or a string
|
||||
constant. References are a name from the data dictionary with optional
|
||||
dotted attributes (where each intermediary is an object with attributes,
|
||||
of course).
|
||||
|
||||
Examples:
|
||||
|
||||
[varname]
|
||||
|
||||
[ob.attr]
|
||||
|
||||
["string"]
|
||||
|
||||
Simple directives
|
||||
-----------------
|
||||
|
||||
[QUAL_NAME]
|
||||
|
||||
This directive is simply replaced by the value of the qualified name.
|
||||
If the value is a number it's converted to a string before being
|
||||
outputted. If it is None, nothing is outputted. If it is a python file
|
||||
object (i.e. any object with a "read" method), it's contents are
|
||||
outputted. If it is a callback function (any callable python object
|
||||
is assumed to be a callback function), it is invoked and passed an EZT
|
||||
Context object as an argument.
|
||||
|
||||
[QUAL_NAME QUAL_NAME ...]
|
||||
|
||||
If the first value is a callback function, it is invoked with an EZT
|
||||
Context object as a first argument, and the rest of the values as
|
||||
additional arguments.
|
||||
|
||||
Otherwise, the first value defines a substitution format, specifying
|
||||
constant text and indices of the additional arguments. The arguments
|
||||
are substituted and the result is inserted into the output stream.
|
||||
|
||||
Example:
|
||||
["abc %0 def %1 ghi %0" foo bar.baz]
|
||||
|
||||
Note that the first value can be any type of qualified name -- a string
|
||||
constant or a variable reference. Use %% to substitute a percent sign.
|
||||
Argument indices are 0-based.
|
||||
|
||||
[include "filename"] or [include QUAL_NAME]
|
||||
|
||||
This directive is replaced by content of the named include file. Note
|
||||
that a string constant is more efficient -- the target file is compiled
|
||||
inline. In the variable form, the target file is compiled and executed
|
||||
at runtime.
|
||||
|
||||
Block directives
|
||||
----------------
|
||||
|
||||
[for QUAL_NAME] ... [end]
|
||||
|
||||
The text within the [for ...] directive and the corresponding [end]
|
||||
is repeated for each element in the sequence referred to by the
|
||||
qualified name in the for directive. Within the for block this
|
||||
identifiers now refers to the actual item indexed by this loop
|
||||
iteration.
|
||||
|
||||
[if-any QUAL_NAME [QUAL_NAME2 ...]] ... [else] ... [end]
|
||||
|
||||
Test if any QUAL_NAME value is not None or an empty string or list.
|
||||
The [else] clause is optional. CAUTION: Numeric values are
|
||||
converted to string, so if QUAL_NAME refers to a numeric value 0,
|
||||
the then-clause is substituted!
|
||||
|
||||
[if-index INDEX_FROM_FOR odd] ... [else] ... [end]
|
||||
[if-index INDEX_FROM_FOR even] ... [else] ... [end]
|
||||
[if-index INDEX_FROM_FOR first] ... [else] ... [end]
|
||||
[if-index INDEX_FROM_FOR last] ... [else] ... [end]
|
||||
[if-index INDEX_FROM_FOR NUMBER] ... [else] ... [end]
|
||||
|
||||
These five directives work similar to [if-any], but are only useful
|
||||
within a [for ...]-block (see above). The odd/even directives are
|
||||
for example useful to choose different background colors for
|
||||
adjacent rows in a table. Similar the first/last directives might
|
||||
be used to remove certain parts (for example "Diff to previous"
|
||||
doesn't make sense, if there is no previous).
|
||||
|
||||
[is QUAL_NAME STRING] ... [else] ... [end]
|
||||
[is QUAL_NAME QUAL_NAME] ... [else] ... [end]
|
||||
|
||||
The [is ...] directive is similar to the other conditional
|
||||
directives above. But it allows to compare two value references or
|
||||
a value reference with some constant string.
|
||||
|
||||
[define VARIABLE] ... [end]
|
||||
|
||||
The [define ...] directive allows you to create and modify template
|
||||
variables from within the template itself. Essentially, any data
|
||||
between inside the [define ...] and its matching [end] will be
|
||||
expanded using the other template parsing and output generation
|
||||
rules, and then stored as a string value assigned to the variable
|
||||
VARIABLE. The new (or changed) variable is then available for use
|
||||
with other mechanisms such as [is ...] or [if-any ...], as long as
|
||||
they appear later in the template.
|
||||
|
||||
[format STRING] ... [end]
|
||||
|
||||
The format directive controls how the values substituted into
|
||||
templates are escaped before they are put into the output stream. It
|
||||
has no effect on the literal text of the templates, only the output
|
||||
from [QUAL_NAME ...] directives. STRING can be one of "raw" "html"
|
||||
"xml" or "uri". The "raw" mode leaves the output unaltered; the "html"
|
||||
and "xml" modes escape special characters using entity escapes (like
|
||||
" and >); the "uri" mode escapes characters using hexadecimal
|
||||
escape sequences (like %20 and %7e).
|
||||
|
||||
[format CALLBACK]
|
||||
|
||||
Python applications using EZT can provide custom formatters as callback
|
||||
variables. "[format CALLBACK][QUAL_NAME][end]" is in most cases
|
||||
equivalent to "[CALLBACK QUAL_NAME]"
|
||||
"""
|
||||
#
# Copyright (C) 2001-2007 Greg Stein. All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
#   notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
#   notice, this list of conditions and the following disclaimer in the
#   documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
#
# This software is maintained by Greg and is available at:
#    http://svn.webdav.org/repos/projects/ezt/trunk/
#

import string
import re
from types import StringType, IntType, FloatType, LongType, TupleType
import os
import cgi
import urllib

try:
  import cStringIO
except ImportError:
  import StringIO
  cStringIO = StringIO

#
# Formatting types
#
FORMAT_RAW = 'raw'
FORMAT_HTML = 'html'
FORMAT_XML = 'xml'
FORMAT_URI = 'uri'

#
# This regular expression matches three alternatives:
#   expr: DIRECTIVE | BRACKET | COMMENT
#   DIRECTIVE: '[' ITEM (whitespace ITEM)* ']'
#   ITEM: STRING | NAME
#   STRING: '"' (not-slash-or-dquote | '\' anychar)* '"'
#   NAME: (alphanum | '_' | '-' | '.')+
#   BRACKET: '[[]'
#   COMMENT: '[#' not-rbracket* ']'
#
# When used with the split() method, the return value will be composed of
# non-matching text and the two paren groups (DIRECTIVE and BRACKET). Since
# the COMMENT matches are not placed into a group, they are considered a
# "splitting" value and simply dropped.
#
_item = r'(?:"(?:[^\\"]|\\.)*"|[-\w.]+)'
_re_parse = re.compile(r'\[(%s(?: +%s)*)\]|(\[\[\])|\[#[^\]]*\]' % (_item, _item))

_re_args = re.compile(r'"(?:[^\\"]|\\.)*"|[-\w.]+')

# block commands and their argument counts
_block_cmd_specs = { 'if-index':2, 'for':1, 'is':2, 'define':1, 'format':1 }
_block_cmds = _block_cmd_specs.keys()

# two regular expressions for compressing whitespace. the first is used to
# compress any whitespace including a newline into a single newline. the
# second regex is used to compress runs of whitespace into a single space.
_re_newline = re.compile('[ \t\r\f\v]*\n\\s*')
_re_whitespace = re.compile(r'\s\s+')

# this regex is used to substitute arguments into a value. we split the value,
# replace the relevant pieces, and then put it all back together. splitting
# will produce a list of: TEXT ( splitter TEXT )*. splitter will be '%' or
# an integer.
_re_subst = re.compile('%(%|[0-9]+)')

class Template:

  def __init__(self, fname=None, compress_whitespace=1,
               base_format=FORMAT_RAW):
    self.compress_whitespace = compress_whitespace
    if fname:
      self.parse_file(fname, base_format)

  def parse_file(self, fname, base_format=FORMAT_RAW):
    "fname -> a string object with pathname of file containing an EZT template."

    self.parse(_FileReader(fname), base_format)

  def parse(self, text_or_reader, base_format=FORMAT_RAW):
    """Parse the template specified by text_or_reader.

    The argument should be a string containing the template, or it should
    specify a subclass of ezt.Reader which can read templates. The base
    format for printing values is given by base_format.
    """
    if not isinstance(text_or_reader, Reader):
      # assume the argument is a plain text string
      text_or_reader = _TextReader(text_or_reader)

    self.program = self._parse(text_or_reader, base_format=base_format)

  def generate(self, fp, data):
    if hasattr(data, '__getitem__') or callable(getattr(data, 'keys', None)):
      # a dictionary-like object was passed. convert it to an
      # attribute-based object.
      class _data_ob:
        def __init__(self, d):
          vars(self).update(d)
      data = _data_ob(data)

    ctx = Context(fp)
    ctx.data = data
    ctx.for_iterators = { }
    ctx.defines = { }
    self._execute(self.program, ctx)

  def _parse(self, reader, for_names=None, file_args=(), base_format=None):
    """text -> string object containing the template.

    This is a private helper function doing the real work for method parse.
    It returns the parsed template as a 'program'. This program is a sequence
    made out of strings or (function, argument) 2-tuples.

    Note: comment directives [# ...] are automatically dropped by _re_parse.
    """

    # parse the template program into: (TEXT DIRECTIVE BRACKET)* TEXT
    parts = _re_parse.split(reader.text)

    program = [ ]
    stack = [ ]
    if not for_names:
      for_names = [ ]

    if base_format:
      program.append((self._cmd_format, _printers[base_format]))

    for i in range(len(parts)):
      piece = parts[i]
      which = i % 3  # discriminate between: TEXT DIRECTIVE BRACKET
      if which == 0:
        # TEXT. append if non-empty.
        if piece:
          if self.compress_whitespace:
            piece = _re_whitespace.sub(' ', _re_newline.sub('\n', piece))
          program.append(piece)
      elif which == 2:
        # BRACKET directive. append '[' if present.
        if piece:
          program.append('[')
      elif piece:
        # DIRECTIVE is present.
        args = _re_args.findall(piece)
        cmd = args[0]
        if cmd == 'else':
          if len(args) > 1:
            raise ArgCountSyntaxError(str(args[1:]))
          ### check: don't allow for 'for' cmd
          idx = stack[-1][1]
          true_section = program[idx:]
          del program[idx:]
          stack[-1][3] = true_section
        elif cmd == 'end':
          if len(args) > 1:
            raise ArgCountSyntaxError(str(args[1:]))
          # note: true-section may be None
          try:
            cmd, idx, args, true_section = stack.pop()
          except IndexError:
            raise UnmatchedEndError()
          else_section = program[idx:]
          if cmd == 'format':
            program.append((self._cmd_end_format, None))
          else:
            func = getattr(self, '_cmd_' + re.sub('-', '_', cmd))
            program[idx:] = [ (func, (args, true_section, else_section)) ]
            if cmd == 'for':
              for_names.pop()
        elif cmd in _block_cmds:
          if len(args) > _block_cmd_specs[cmd] + 1:
            raise ArgCountSyntaxError(str(args[1:]))
          ### this assumes arg1 is always a ref unless cmd is 'define'
          if cmd != 'define':
            args[1] = _prepare_ref(args[1], for_names, file_args)

          # handle arg2 for the 'is' command
          if cmd == 'is':
            args[2] = _prepare_ref(args[2], for_names, file_args)
          elif cmd == 'for':
            for_names.append(args[1][0])  # append the refname
          elif cmd == 'format':
            if args[1][0]:
              # argument is a variable reference
              printer = args[1]
            else:
              # argument is a string constant referring to built-in printer
              printer = _printers.get(args[1][1])
              if not printer:
                raise UnknownFormatConstantError(str(args[1:]))
            program.append((self._cmd_format, printer))

          # remember the cmd, current pos, args, and a section placeholder
          stack.append([cmd, len(program), args[1:], None])
        elif cmd == 'include':
          if args[1][0] == '"':
            include_filename = args[1][1:-1]
            f_args = [ ]
            for arg in args[2:]:
              f_args.append(_prepare_ref(arg, for_names, file_args))
            program.extend(self._parse(reader.read_other(include_filename),
                                       for_names, f_args))
          else:
            if len(args) != 2:
              raise ArgCountSyntaxError(str(args))
            program.append((self._cmd_include,
                            (_prepare_ref(args[1], for_names, file_args),
                             reader)))
        elif cmd == 'if-any':
          f_args = [ ]
          for arg in args[1:]:
            f_args.append(_prepare_ref(arg, for_names, file_args))
          stack.append(['if-any', len(program), f_args, None])
        else:
          # implied PRINT command
          f_args = [ ]
          for arg in args:
            f_args.append(_prepare_ref(arg, for_names, file_args))
          program.append((self._cmd_print, f_args))

    if stack:
      ### would be nice to say which blocks...
      raise UnclosedBlocksError()
    return program

  def _execute(self, program, ctx):
    """This private helper function takes a 'program' sequence as created
    by the method '_parse' and executes it step by step. strings are written
    to the file object 'fp' and functions are called.
    """
    for step in program:
      if isinstance(step, StringType):
        ctx.fp.write(step)
      else:
        step[0](step[1], ctx)

  def _cmd_print(self, valrefs, ctx):
    value = _get_value(valrefs[0], ctx)
    args = map(lambda valref, ctx=ctx: _get_value(valref, ctx), valrefs[1:])
    _write_value(value, args, ctx)

  def _cmd_format(self, printer, ctx):
    if type(printer) is TupleType:
      printer = _get_value(printer, ctx)
    ctx.printers.append(printer)

  def _cmd_end_format(self, valref, ctx):
    ctx.printers.pop()

  def _cmd_include(self, (valref, reader), ctx):
    fname = _get_value(valref, ctx)
    ### note: we don't have the set of for_names to pass into this parse.
    ### I don't think there is anything to do but document it.
    self._execute(self._parse(reader.read_other(fname)), ctx)

  def _cmd_if_any(self, args, ctx):
    "If any value is a non-empty string or non-empty list, then T else F."
    (valrefs, t_section, f_section) = args
    value = 0
    for valref in valrefs:
      if _get_value(valref, ctx):
        value = 1
        break
    self._do_if(value, t_section, f_section, ctx)

  def _cmd_if_index(self, args, ctx):
    ((valref, value), t_section, f_section) = args
    iterator = ctx.for_iterators[valref[0]]
    if value == 'even':
      value = iterator.index % 2 == 0
    elif value == 'odd':
      value = iterator.index % 2 == 1
    elif value == 'first':
      value = iterator.index == 0
    elif value == 'last':
      value = iterator.is_last()
    else:
      value = iterator.index == int(value)
    self._do_if(value, t_section, f_section, ctx)

  def _cmd_is(self, args, ctx):
    ((left_ref, right_ref), t_section, f_section) = args
    value = _get_value(right_ref, ctx)
    value = string.lower(_get_value(left_ref, ctx)) == string.lower(value)
    self._do_if(value, t_section, f_section, ctx)

  def _do_if(self, value, t_section, f_section, ctx):
    if t_section is None:
      t_section = f_section
      f_section = None
    if value:
      section = t_section
    else:
      section = f_section
    if section is not None:
      self._execute(section, ctx)

  def _cmd_for(self, args, ctx):
    ((valref,), unused, section) = args
    list = _get_value(valref, ctx)
    if isinstance(list, StringType):
      raise NeedSequenceError()
    refname = valref[0]
    ctx.for_iterators[refname] = iterator = _iter(list)
    for unused in iterator:
      self._execute(section, ctx)
    del ctx.for_iterators[refname]

  def _cmd_define(self, args, ctx):
    ((name,), unused, section) = args
    origfp = ctx.fp
    ctx.fp = cStringIO.StringIO()
    if section is not None:
      self._execute(section, ctx)
    ctx.defines[name] = ctx.fp.getvalue()
    ctx.fp = origfp

def boolean(value):
  "Return a value suitable for [if-any bool_var] usage in a template."
  if value:
    return 'yes'
  return None
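A quick standalone restatement of why `boolean()` returns 'yes' or None rather than True/False: `[if-any]` only tests for a non-empty value, so None reads as false and any non-empty string as true (the function below is a hypothetical mirror of the one above, written so the snippet runs on its own):

```python
def boolean(value):
    # same logic as the module's boolean() above: falsy -> None, truthy -> 'yes'
    return 'yes' if value else None

# [if-any var] treats None/'' as false and non-empty strings as true:
assert boolean(0) is None
assert boolean([1, 2]) == 'yes'
```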


def _prepare_ref(refname, for_names, file_args):
  """refname -> a string containing a dotted identifier. example:"foo.bar.bang"
  for_names -> a list of active for sequences.

  Returns a `value reference', a 3-tuple made out of (refname, start, rest),
  for fast access later.
  """
  # is the reference a string constant?
  if refname[0] == '"':
    return None, refname[1:-1], None

  parts = string.split(refname, '.')
  start = parts[0]
  rest = parts[1:]

  # if this is an include-argument, then just return the prepared ref
  if start[:3] == 'arg':
    try:
      idx = int(start[3:])
    except ValueError:
      pass
    else:
      if idx < len(file_args):
        orig_refname, start, more_rest = file_args[idx]
        if more_rest is None:
          # the include-argument was a string constant
          return None, start, None

        # prepend the argument's "rest" for our further processing
        rest[:0] = more_rest

        # rewrite the refname to ensure that any potential 'for' processing
        # has the correct name
        ### this can make it hard for debugging include files since we lose
        ### the 'argNNN' names
        if not rest:
          return start, start, [ ]
        refname = start + '.' + string.join(rest, '.')

  if for_names:
    # From last to first part, check if this reference is part of a for loop
    for i in range(len(parts), 0, -1):
      name = string.join(parts[:i], '.')
      if name in for_names:
        return refname, name, parts[i:]

  return refname, start, rest

def _get_value((refname, start, rest), ctx):
  """(refname, start, rest) -> a prepared `value reference' (see above).
  ctx -> an execution context instance.

  Does a name space lookup within the template name space. Active
  for blocks take precedence over data dictionary members with the
  same name.
  """
  if rest is None:
    # it was a string constant
    return start

  # get the starting object
  if ctx.for_iterators.has_key(start):
    ob = ctx.for_iterators[start].last_item
  elif ctx.defines.has_key(start):
    ob = ctx.defines[start]
  elif hasattr(ctx.data, start):
    ob = getattr(ctx.data, start)
  else:
    raise UnknownReference(refname)

  # walk the rest of the dotted reference
  for attr in rest:
    try:
      ob = getattr(ob, attr)
    except AttributeError:
      raise UnknownReference(refname)

  # make sure we return a string instead of some various Python types
  if isinstance(ob, IntType) \
     or isinstance(ob, LongType) \
     or isinstance(ob, FloatType):
    return str(ob)
  if ob is None:
    return ''

  # string or a sequence
  return ob

def _write_value(value, args, ctx):
  # value is a callback function, generates its own output
  if callable(value):
    apply(value, [ctx] + list(args))
    return

  # pop printer in case it recursively calls _write_value
  printer = ctx.printers.pop()

  try:
    # if the value has a 'read' attribute, then it is a stream: copy it
    if hasattr(value, 'read'):
      while 1:
        chunk = value.read(16384)
        if not chunk:
          break
        printer(ctx, chunk)

    # value is a substitution pattern
    elif args:
      parts = _re_subst.split(value)
      for i in range(len(parts)):
        piece = parts[i]
        if i%2 == 1 and piece != '%':
          idx = int(piece)
          if idx < len(args):
            piece = args[idx]
          else:
            piece = '<undef>'
        printer(ctx, piece)

    # plain old value, write to output
    else:
      printer(ctx, value)

  finally:
    ctx.printers.append(printer)


class Context:
  """A container for the execution context"""
  def __init__(self, fp):
    self.fp = fp
    self.printers = []
  def write(self, value, args=()):
    _write_value(value, args, self)

class Reader:
  "Abstract class which allows EZT to detect Reader objects."

class _FileReader(Reader):
  """Reads templates from the filesystem."""
  def __init__(self, fname):
    self.text = open(fname, 'rb').read()
    self._dir = os.path.dirname(fname)
  def read_other(self, relative):
    return _FileReader(os.path.join(self._dir, relative))

class _TextReader(Reader):
  """'Reads' a template from provided text."""
  def __init__(self, text):
    self.text = text
  def read_other(self, relative):
    raise BaseUnavailableError()

class _Iterator:
  """Specialized iterator for EZT that counts items and can look ahead

  Implements standard iterator interface and provides an is_last() method
  and two public members:

    index - integer index of the current item
    last_item - last item returned by next()"""

  def __init__(self, sequence):
    self._iter = iter(sequence)

  def next(self):
    if hasattr(self, '_next_item'):
      self.last_item = self._next_item
      del self._next_item
    else:
      self.last_item = self._iter.next()  # may raise StopIteration

    if hasattr(self, 'index'):
      self.index = self.index + 1
    else:
      self.index = 0

    return self.last_item

  def is_last(self):
    """Return true if the current item is the last in the sequence"""
    # the only way we can tell if the current item is last is to call next()
    # and store the return value so it doesn't get lost
    if not hasattr(self, '_next_item'):
      try:
        self._next_item = self._iter.next()
      except StopIteration:
        return 1
    return 0

  def __iter__(self):
    return self
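_Iterator targets Python 2's `next()` protocol. A hypothetical Python 3 rendering of the same one-item look-ahead idea (`LookaheadIterator` is an illustrative name, not part of this module) could look like:

```python
class LookaheadIterator:
    """Counts items and supports is_last() by buffering one item ahead."""
    _SENTINEL = object()

    def __init__(self, sequence):
        self._iter = iter(sequence)
        self._buffered = self._SENTINEL
        self.index = -1
        self.last_item = None

    def __iter__(self):
        return self

    def __next__(self):
        if self._buffered is not self._SENTINEL:
            # hand back the item that is_last() peeked at earlier
            self.last_item, self._buffered = self._buffered, self._SENTINEL
        else:
            self.last_item = next(self._iter)  # may raise StopIteration
        self.index += 1
        return self.last_item

    def is_last(self):
        # peek one item ahead and remember it for the next __next__ call
        if self._buffered is self._SENTINEL:
            try:
                self._buffered = next(self._iter)
            except StopIteration:
                return True
        return False
```

The sentinel replaces the original's `hasattr(self, '_next_item')` trick, but the buffering logic is the same.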

class _OldIterator:
  """Alternate implementation of _Iterator for old Pythons without iterators

  This class implements the sequence protocol, instead of the iterator
  interface, so it's really not an iterator at all. But it can be used in
  python "for" loops as a drop-in replacement for _Iterator. It also provides
  the is_last() method and "last_item" and "index" members described in the
  _Iterator docstring."""

  def __init__(self, sequence):
    self._seq = sequence

  def __getitem__(self, index):
    self.last_item = self._seq[index]  # may raise IndexError
    self.index = index
    return self.last_item

  def is_last(self):
    return self.index + 1 >= len(self._seq)

try:
  iter
except NameError:
  _iter = _OldIterator
else:
  _iter = _Iterator

class EZTException(Exception):
  """Parent class of all EZT exceptions."""

class ArgCountSyntaxError(EZTException):
  """A bracket directive got the wrong number of arguments."""

class UnknownReference(EZTException):
  """The template references an object not contained in the data dictionary."""

class NeedSequenceError(EZTException):
  """The object dereferenced by the template is not a sequence (tuple or list)."""

class UnclosedBlocksError(EZTException):
  """This error may be simply a missing [end]."""

class UnmatchedEndError(EZTException):
  """This error may be caused by a misspelled if directive."""

class BaseUnavailableError(EZTException):
  """Base location is unavailable, which disables includes."""

class UnknownFormatConstantError(EZTException):
  """The format specifier is an unknown value."""

def _raw_printer(ctx, s):
  ctx.fp.write(s)

def _html_printer(ctx, s):
  ctx.fp.write(cgi.escape(s))

def _uri_printer(ctx, s):
  ctx.fp.write(urllib.quote(s))

_printers = {
  FORMAT_RAW : _raw_printer,
  FORMAT_HTML : _html_printer,
  FORMAT_XML : _html_printer,
  FORMAT_URI : _uri_printer,
  }

# --- standard test environment ---
def test_parse():
  assert _re_parse.split('[a]') == ['', 'a', None, '']
  assert _re_parse.split('[a] [b]') == \
         ['', 'a', None, ' ', 'b', None, '']
  assert _re_parse.split('[a c] [b]') == \
         ['', 'a c', None, ' ', 'b', None, '']
  assert _re_parse.split('x [a] y [b] z') == \
         ['x ', 'a', None, ' y ', 'b', None, ' z']
  assert _re_parse.split('[a "b" c "d"]') == \
         ['', 'a "b" c "d"', None, '']
  assert _re_parse.split(r'["a \"b[foo]" c.d f]') == \
         ['', '"a \\"b[foo]" c.d f', None, '']

def _test(argv):
  import doctest, ezt
  verbose = "-v" in argv
  return doctest.testmod(ezt, verbose=verbose)

if __name__ == "__main__":
  # invoke unit test for this module:
  import sys
  sys.exit(_test(sys.argv)[0])
@ -0,0 +1,190 @@
# -*-python-*-
#
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# idiff: display differences between files highlighting intraline changes
#
# -----------------------------------------------------------------------

from __future__ import generators

import difflib
import sys
import re
import ezt
import cgi

def sidebyside(fromlines, tolines, context):
  """Generate side by side diff"""

  ### for some reason mdiff chokes on \n's in input lines
  line_strip = lambda line: line.rstrip("\n")
  fromlines = map(line_strip, fromlines)
  tolines = map(line_strip, tolines)

  gap = False
  for fromdata, todata, flag in difflib._mdiff(fromlines, tolines, context):
    if fromdata is None and todata is None and flag is None:
      gap = True
    else:
      from_item = _mdiff_split(flag, fromdata)
      to_item = _mdiff_split(flag, todata)
      yield _item(gap=ezt.boolean(gap), columns=(from_item, to_item))
      gap = False

_re_mdiff = re.compile("\0([+-^])(.*?)\1")

def _mdiff_split(flag, (line_number, text)):
  """Break up row from mdiff output into segments"""
  segments = []
  pos = 0
  while True:
    m = _re_mdiff.search(text, pos)
    if not m:
      segments.append(_item(text=cgi.escape(text[pos:]), type=None))
      break

    if m.start() > pos:
      segments.append(_item(text=cgi.escape(text[pos:m.start()]), type=None))

    if m.group(1) == "+":
      segments.append(_item(text=cgi.escape(m.group(2)), type="add"))
    elif m.group(1) == "-":
      segments.append(_item(text=cgi.escape(m.group(2)), type="remove"))
    elif m.group(1) == "^":
      segments.append(_item(text=cgi.escape(m.group(2)), type="change"))

    pos = m.end()

  return _item(segments=segments, line_number=line_number)
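difflib's `_mdiff` wraps intraline changes in `\0<type>...\1` markers. The hand-written sample below (not produced by a live `_mdiff` call) shows how a regex like `_re_mdiff` above carves such a row into typed segments; `split_row` is a simplified stand-in for `_mdiff_split` that returns plain tuples:

```python
import re

# same pattern as _re_mdiff above: \x00, a type character, lazy text, \x01
_re_mdiff = re.compile("\0([+-^])(.*?)\1")

def split_row(text):
    """Simplified sketch of _mdiff_split returning (text, type) tuples."""
    segments, pos = [], 0
    for m in _re_mdiff.finditer(text):
        if m.start() > pos:
            segments.append((text[pos:m.start()], None))
        kind = {'+': 'add', '-': 'remove', '^': 'change'}[m.group(1)]
        segments.append((m.group(2), kind))
        pos = m.end()
    segments.append((text[pos:], None))
    return segments

print(split_row("ab\0^c\1d"))  # [('ab', None), ('c', 'change'), ('d', None)]
```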

def unified(fromlines, tolines, context):
  """Generate unified diff"""

  diff = difflib.Differ().compare(fromlines, tolines)
  lastrow = None

  for row in _trim_context(diff, context):
    if row[0].startswith("? "):
      yield _differ_split(lastrow, row[0])
      lastrow = None
    else:
      if lastrow:
        yield _differ_split(lastrow, None)
      lastrow = row

  if lastrow:
    yield _differ_split(lastrow, None)
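unified() depends on difflib.Differ emitting "? " guide lines after changed rows; a quick standalone look at that behavior (Differ.compare is public API):

```python
import difflib

rows = list(difflib.Differ().compare(['abc\n'], ['abd\n']))
for row in rows:
    print(repr(row))

# '- '/'+ ' rows carry the old/new text; '? ' rows are the intraline
# change guides that _differ_split uses to mark "change" segments.
assert rows[0].startswith('- ')
assert any(r.startswith('? ') for r in rows)
```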

def _trim_context(lines, context_size):
  """Trim context lines that don't surround changes from Differ results

  yields (line, leftnum, rightnum, gap) tuples"""

  # circular buffer to hold context lines
  context_buffer = [None] * (context_size or 0)
  context_start = context_len = 0

  # number of context lines left to print after encountering a change
  context_owed = 0

  # current line numbers
  leftnum = rightnum = 0

  # whether context lines have been dropped
  gap = False

  for line in lines:
    row = save = None

    if line.startswith("- "):
      leftnum = leftnum + 1
      row = line, leftnum, None
      context_owed = context_size

    elif line.startswith("+ "):
      rightnum = rightnum + 1
      row = line, None, rightnum
      context_owed = context_size

    else:
      if line.startswith("  "):
        leftnum = leftnum + 1
        rightnum = rightnum + 1
        if context_owed > 0:
          context_owed = context_owed - 1
        elif context_size is not None:
          save = True

      row = line, leftnum, rightnum

    if save:
      # don't yield row right away, store it in buffer
      context_buffer[(context_start + context_len) % context_size] = row
      if context_len == context_size:
        context_start = (context_start + 1) % context_size
        gap = True
      else:
        context_len = context_len + 1
    else:
      # yield row, but first drain stuff in buffer
      while context_len:
        yield context_buffer[context_start] + (gap,)
        gap = False
        context_start = (context_start + 1) % context_size
        context_len = context_len - 1
      yield row + (gap,)
      gap = False

_re_differ = re.compile(r"[+-^]+")

def _differ_split(row, guide):
  """Break row into segments using guide line"""
  line, left_number, right_number, gap = row

  if left_number and right_number:
    type = ""
  elif left_number:
    type = "remove"
  elif right_number:
    type = "add"

  segments = []
  pos = 2

  if guide:
    assert guide.startswith("? ")

    for m in _re_differ.finditer(guide, pos):
      if m.start() > pos:
        segments.append(_item(text=cgi.escape(line[pos:m.start()]), type=None))
      segments.append(_item(text=cgi.escape(line[m.start():m.end()]),
                            type="change"))
      pos = m.end()

  segments.append(_item(text=cgi.escape(line[pos:]), type=None))

  return _item(gap=ezt.boolean(gap), type=type, segments=segments,
               left_number=left_number, right_number=right_number)

class _item:
  def __init__(self, **kw):
    vars(self).update(kw)

try:
  ### Using difflib._mdiff function here was the easiest way of obtaining
  ### intraline diffs for use in ViewVC, but it doesn't exist prior to
  ### Python 2.4 and is not part of the public difflib API, so for now
  ### fall back if it doesn't exist.
  difflib._mdiff
except AttributeError:
  sidebyside = None
@ -0,0 +1,379 @@
# -*-python-*-
#
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# popen.py: a replacement for os.popen()
#
# This implementation of popen() provides a cmd + args calling sequence,
# rather than a system() type of convention. The shell facilities are not
# available, but that implies we can avoid worrying about shell hacks in
# the arguments.
#
# -----------------------------------------------------------------------

import os
import sys
import sapi
import threading
import string

if sys.platform == "win32":
  import win32popen
  import win32event
  import win32process
  import debug
  import StringIO

def popen(cmd, args, mode, capture_err=1):
  if sys.platform == "win32":
    command = win32popen.CommandLine(cmd, args)

    if string.find(mode, 'r') >= 0:
      hStdIn = None

      if debug.SHOW_CHILD_PROCESSES:
        dbgIn, dbgOut = None, StringIO.StringIO()

        handle, hStdOut = win32popen.MakeSpyPipe(0, 1, (dbgOut,))

        if capture_err:
          hStdErr = hStdOut
          dbgErr = dbgOut
        else:
          dbgErr = StringIO.StringIO()
          x, hStdErr = win32popen.MakeSpyPipe(None, 1, (dbgErr,))
      else:
        handle, hStdOut = win32popen.CreatePipe(0, 1)
        if capture_err:
          hStdErr = hStdOut
        else:
          hStdErr = win32popen.NullFile(1)

    else:
      if debug.SHOW_CHILD_PROCESSES:
        dbgIn, dbgOut, dbgErr = StringIO.StringIO(), StringIO.StringIO(), StringIO.StringIO()
        hStdIn, handle = win32popen.MakeSpyPipe(1, 0, (dbgIn,))
        x, hStdOut = win32popen.MakeSpyPipe(None, 1, (dbgOut,))
        x, hStdErr = win32popen.MakeSpyPipe(None, 1, (dbgErr,))
      else:
        hStdIn, handle = win32popen.CreatePipe(0, 1)
        hStdOut = None
        hStdErr = None

    phandle, pid, thandle, tid = win32popen.CreateProcess(command, hStdIn, hStdOut, hStdErr)

    if debug.SHOW_CHILD_PROCESSES:
      debug.Process(command, dbgIn, dbgOut, dbgErr)

    return _pipe(win32popen.File2FileObject(handle, mode), phandle)

  # flush the stdio buffers since we are about to change the FD under them
  sys.stdout.flush()
  sys.stderr.flush()

  r, w = os.pipe()
  pid = os.fork()
  if pid:
    # in the parent

    # close the descriptor that we don't need and return the other one.
    if string.find(mode, 'r') >= 0:
      os.close(w)
      return _pipe(os.fdopen(r, mode), pid)
    os.close(r)
    return _pipe(os.fdopen(w, mode), pid)

  # in the child

  # we'll need /dev/null for the discarded I/O
  null = os.open('/dev/null', os.O_RDWR)

  if string.find(mode, 'r') >= 0:
    # hook stdout/stderr to the "write" channel
    os.dup2(w, 1)
    # "close" stdin; the child shouldn't use it
    ### this isn't quite right... we may want the child to read from stdin
    os.dup2(null, 0)
    # what to do with errors?
    if capture_err:
      os.dup2(w, 2)
    else:
      os.dup2(null, 2)
  else:
    # hook stdin to the "read" channel
    os.dup2(r, 0)
    # "close" stdout/stderr; the child shouldn't use them
    ### this isn't quite right... we may want the child to write to these
    os.dup2(null, 1)
    os.dup2(null, 2)

  # don't need these FDs any more
  os.close(null)
  os.close(r)
  os.close(w)

  # the stdin/stdout/stderr are all set up. exec the target
  try:
    os.execvp(cmd, (cmd,) + tuple(args))
  except:
    # aid debugging, if the os.execvp above fails for some reason:
    print "<h2>exec failed:</h2><pre>", cmd, string.join(args), "</pre>"
    raise

  # crap. shouldn't be here.
  sys.exit(127)
|
||||
|
||||
def pipe_cmds(cmds, out=None):
  """Executes a sequence of commands. The output of each command is
  directed to the input of the next command. A _pipe object is returned
  for writing to the first command's input. The output of the last
  command is directed to the "out" file object or the standard output if
  "out" is None. If "out" is not an OS file descriptor, a separate thread
  will be spawned to send data to its write() method."""

  if out is None:
    out = sys.stdout

  if sys.platform == "win32":
    ### FIXME: windows implementation ignores "out" argument, always
    ### writing last command's output to standard out

    if debug.SHOW_CHILD_PROCESSES:
      dbgIn = StringIO.StringIO()
      hStdIn, handle = win32popen.MakeSpyPipe(1, 0, (dbgIn,))

      i = 0
      for cmd in cmds:
        i = i + 1

        dbgOut, dbgErr = StringIO.StringIO(), StringIO.StringIO()

        if i < len(cmds):
          nextStdIn, hStdOut = win32popen.MakeSpyPipe(1, 1, (dbgOut,))
          x, hStdErr = win32popen.MakeSpyPipe(None, 1, (dbgErr,))
        else:
          ehandle = win32event.CreateEvent(None, 1, 0, None)
          nextStdIn, hStdOut = win32popen.MakeSpyPipe(None, 1, (dbgOut, sapi.server.file()), ehandle)
          x, hStdErr = win32popen.MakeSpyPipe(None, 1, (dbgErr,))

        command = win32popen.CommandLine(cmd[0], cmd[1:])
        phandle, pid, thandle, tid = win32popen.CreateProcess(command, hStdIn, hStdOut, hStdErr)
        if debug.SHOW_CHILD_PROCESSES:
          debug.Process(command, dbgIn, dbgOut, dbgErr)

        dbgIn = dbgOut
        hStdIn = nextStdIn


    else:

      hStdIn, handle = win32popen.CreatePipe(1, 0)
      spool = None

      i = 0
      for cmd in cmds:
        i = i + 1
        if i < len(cmds):
          nextStdIn, hStdOut = win32popen.CreatePipe(1, 1)
        else:
          # very last process
          nextStdIn = None

          if sapi.server.inheritableOut:
            # send child output to standard out
            hStdOut = win32popen.MakeInheritedHandle(win32popen.FileObject2File(sys.stdout),0)
            ehandle = None
          else:
            ehandle = win32event.CreateEvent(None, 1, 0, None)
            x, hStdOut = win32popen.MakeSpyPipe(None, 1, (sapi.server.file(),), ehandle)

        command = win32popen.CommandLine(cmd[0], cmd[1:])
        phandle, pid, thandle, tid = win32popen.CreateProcess(command, hStdIn, hStdOut, None)
        hStdIn = nextStdIn

    return _pipe(win32popen.File2FileObject(handle, 'wb'), phandle, ehandle)

  # flush the stdio buffers since we are about to change the FD under them
  sys.stdout.flush()
  sys.stderr.flush()

  prev_r, parent_w = os.pipe()

  null = os.open('/dev/null', os.O_RDWR)

  child_pids = []

  for cmd in cmds[:-1]:
    r, w = os.pipe()
    pid = os.fork()
    if not pid:
      # in the child

      # hook up stdin to the "read" channel
      os.dup2(prev_r, 0)

      # hook up stdout to the output channel
      os.dup2(w, 1)

      # toss errors
      os.dup2(null, 2)

      # close these extra descriptors
      os.close(prev_r)
      os.close(parent_w)
      os.close(null)
      os.close(r)
      os.close(w)

      # time to run the command
      try:
        os.execvp(cmd[0], cmd)
      except:
        pass

      sys.exit(127)

    # in the parent
    child_pids.append(pid)

    # we don't need these any more
    os.close(prev_r)
    os.close(w)

    # the read channel of this pipe will feed into the next command
    prev_r = r

  # no longer needed
  os.close(null)

  # done with most of the commands. set up the last command to write to "out"
  if not hasattr(out, 'fileno'):
    r, w = os.pipe()

  pid = os.fork()
  if not pid:
    # in the child (the last command)

    # hook up stdin to the "read" channel
    os.dup2(prev_r, 0)

    # hook up stdout to "out"
    if hasattr(out, 'fileno'):
      if out.fileno() != 1:
        os.dup2(out.fileno(), 1)
        out.close()
    else:
      # "out" can't be hooked up directly, so use a pipe and a thread
      os.dup2(w, 1)
      os.close(r)
      os.close(w)

    # close these extra descriptors
    os.close(prev_r)
    os.close(parent_w)

    # run the last command
    try:
      os.execvp(cmds[-1][0], cmds[-1])
    except:
      pass

    sys.exit(127)

  child_pids.append(pid)
  # not needed any more
  os.close(prev_r)

  if not hasattr(out, 'fileno'):
    os.close(w)
    thread = _copy(r, out)
    thread.start()
  else:
    thread = None

  # write into the first pipe, wait on the final process
  return _pipe(os.fdopen(parent_w, 'w'), child_pids, thread=thread)

class _copy(threading.Thread):
  def __init__(self, srcfd, destfile):
    self.srcfd = srcfd
    self.destfile = destfile
    threading.Thread.__init__(self)

  def run(self):
    try:
      while 1:
        s = os.read(self.srcfd, 1024)
        if not s:
          break
        self.destfile.write(s)
    finally:
      os.close(self.srcfd)

class _pipe:
  "Wrapper for a file which can wait() on a child process at close time."

  def __init__(self, file, child_pid, done_event = None, thread = None):
    self.file = file
    self.child_pid = child_pid
    if sys.platform == "win32":
      if done_event:
        self.wait_for = (child_pid, done_event)
      else:
        self.wait_for = (child_pid,)
    else:
      self.thread = thread

  def eof(self):
    ### should be calling file.eof() here instead of file.close(), there
    ### may be data in the pipe or buffer after the process exits
    if sys.platform == "win32":
      r = win32event.WaitForMultipleObjects(self.wait_for, 1, 0)
      if r == win32event.WAIT_OBJECT_0:
        self.file.close()
        self.file = None
        return win32process.GetExitCodeProcess(self.child_pid)
      return None

    if self.thread and self.thread.isAlive():
      return None

    pid, status = os.waitpid(self.child_pid, os.WNOHANG)
    if pid:
      self.file.close()
      self.file = None
      return status
    return None

  def close(self):
    if self.file:
      self.file.close()
      self.file = None
      if sys.platform == "win32":
        win32event.WaitForMultipleObjects(self.wait_for, 1, win32event.INFINITE)
        return win32process.GetExitCodeProcess(self.child_pid)
      else:
        if self.thread:
          self.thread.join()
        if type(self.child_pid) == type([]):
          for pid in self.child_pid:
            exit = os.waitpid(pid, 0)[1]
          return exit
        else:
          return os.waitpid(self.child_pid, 0)[1]
    return None

  def __getattr__(self, name):
    return getattr(self.file, name)

  def __del__(self):
    self.close()
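# The fork/pipe/dup2 plumbing in pipe_cmds() above predates the subprocess
# module. A minimal, portable sketch of the same chaining idea follows;
# `chain` is a hypothetical illustration, not part of this module (it omits
# the win32 handle code and the _pipe wrapper entirely).

```python
import subprocess, sys

def chain(cmds):
    """Feed each command's stdout into the next command's stdin and
    return the final command's output as bytes (a sketch of the idea
    behind pipe_cmds above, using the modern subprocess module)."""
    procs, prev = [], None
    for argv in cmds:
        p = subprocess.Popen(argv,
                             stdin=prev.stdout if prev else None,
                             stdout=subprocess.PIPE)
        if prev:
            prev.stdout.close()  # let the upstream process see SIGPIPE
        procs.append(p)
        prev = p
    out = procs[-1].communicate()[0]
    for p in procs[:-1]:
        p.wait()
    return out

# two tiny "commands" built from the running interpreter, for portability
producer = [sys.executable, '-c', 'print("hello")']
upcase = [sys.executable, '-c',
          'import sys; sys.stdout.write(sys.stdin.read().upper())']
result = chain([producer, upcase])
```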
#!/usr/bin/python -u

""" Python Highlighter Version: 0.8

    py2html.py [options] files...

    options:
     -h             print help
     -              read from stdin, write to stdout
     -stdout        read from files, write to stdout
     -files         read from files, write to filename+'.html' (default)
     -format:
         html       output XHTML page (default)
         rawhtml    output pure XHTML (without headers, titles, etc.)
     -mode:
         color      output in color (default)
         mono       output b/w (for printing)
     -title:Title   use 'Title' as title of the generated page
     -bgcolor:color use color as background-color for page
     -header:file   use contents of file as header
     -footer:file   use contents of file as footer
     -URL           replace all occurrences of 'URL: link' with
                    '<a href="link">link</a>'; this is always enabled
                    in CGI mode
     -v             verbose

    Takes the input, assuming it is Python code and formats it into
    colored XHTML. When called without parameters the script tries to
    work in CGI mode. It looks for a field 'script=URL' and tries to
    use that URL as input file. If it can't find this field, the path
    info (the part of the URL following the CGI script name) is
    tried. In case no host is given, the host where the CGI script
    lives and HTTP are used.

    * Uses Just van Rossum's PyFontify version 0.3 to tag Python scripts.
      You can get it via his homepage on starship:
      URL: http://starship.python.net/crew/just
"""
__comments__ = """

    The following snippet is a small shell script I use for viewing
    Python scripts via less on Unix:

    pyless:
    #!/bin/sh
    # Browse pretty printed Python code using ANSI codes for highlighting
    py2html -stdout -format:ansi -mode:color $* | less -r

    History:

    0.8: Added patch by Patrick Lynch to have py2html.py use style
         sheets for markup
    0.7: Added patch by Ville Skyttä to make py2html.py output
         valid XHTML.
    0.6: Fixed a bug in .escape_html(); thanks to Vespe Savikko for
         finding this one.
    0.5: Added a few suggestions by Kevin Ng to make the CGI version
         a little more robust.

"""
__copyright__ = """\
Copyright (c) 1998-2000, Marc-Andre Lemburg; mailto:mal@lemburg.com
Copyright (c) 2000-2002, eGenix.com Software GmbH; mailto:info@egenix.com
Distributed under the terms and conditions of the eGenix.com Public
License. See http://www.egenix.com/files/python/mxLicense.html for
details, or contact the author. All Rights Reserved.\
"""

__version__ = '0.8'

__cgifooter__ = ('\n<pre># code highlighted using <a href='
                 '"http://www.lemburg.com/files/python/">py2html.py</a> '
                 'version %s</pre>\n' % __version__)

import sys,string,re

# Adjust path so that PyFontify is found...
sys.path.append('.')

### Constants

# URL of the input form the user is redirected to in case no script=xxx
# form field is given. The URL *must* be absolute. Leave blank to
# have the script issue an error instead.
INPUT_FORM = 'http://www.lemburg.com/files/python/SoftwareDescriptions.html#py2html.py'

# HTML DOCTYPE and XML namespace
HTML_DOCTYPE = '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">'
HTML_XMLNS = ' xmlns="http://www.w3.org/1999/xhtml"'

### Helpers

def fileio(file, mode='rb', data=None, close=0):

    if type(file) == type(''):
        f = open(file,mode)
        close = 1
    else:
        f = file
    if data:
        f.write(data)
    else:
        data = f.read()
    if close: f.close()
    return data

### Converter class

class PrettyPrint:

    """ generic Pretty Printer class

        * supports tagging Python scripts in the following ways:

          # format/mode | color mono
          # --------------------------
          # rawhtml     |   x     x    (HTML without headers, etc.)
          # html        |   x     x    (a HTML page with HEAD&BODY:)
          # ansi        |   x     x    (with Ansi-escape sequences)

        * interfaces:

          file_filter -- takes two files: input & output (may be stdin/stdout)
          filter -- takes a string and returns the highlighted version

        * to create an instance use:

          c = PrettyPrint(tagfct,format,mode)

          where format and mode must be strings according to the
          above table if you plan to use PyFontify.fontify as
          tagfct

        * the tagfct has to take one argument, text, and return a taglist
          (format: [(id,left,right,sublist),...], where id is the
          "name" given to the slice left:right in text and sublist is a
          taglist for tags inside the slice or None)

    """

    # misc settings
    title = ''
    bgcolor = '#FFFFFF'
    css = ''
    header = ''
    footer = ''
    replace_URLs = 0
    # formats to be used
    formats = {}

    def __init__(self,tagfct=None,format='html',mode='color'):

        self.tag = tagfct
        self.set_mode = getattr(self,'set_mode_%s_%s' % (format, mode))
        self.filter = getattr(self,'filter_%s' % format)

    def file_filter(self,infile,outfile):

        self.set_mode()
        text = fileio(infile,'r')
        if type(infile) == type('') and self.title == '':
            self.title = infile
        fileio(outfile,'w',self.filter(text))

    ### Set pre- and postfixes for formats & modes
    #
    # These methods must set self.formats to a dictionary having
    # an entry for every tag returned by the tagging function.
    #
    # The format used is simple:
    #  tag:(prefix,postfix)
    # where prefix and postfix are either strings or callable objects,
    # that return a string (they are called with the matching tag text
    # as only parameter). prefix is inserted in front of the tag, postfix
    # is inserted right after the tag.

    def set_mode_html_color(self):

        self.css = """
<STYLE TYPE="text/css">
<!--
body{ background: %s; }
.PY_KEYWORD{ color: #0000C0; font-weight: bold; }
.PY_COMMENT{ color: #000080; }
.PY_PARAMETER{ color: #C00000; }
.PY_IDENTIFIER{ color: #C00000; font-weight: bold; }
.PY_STRING{ color: #008000; }
-->
</STYLE> """ % self.bgcolor

        self.formats = {
            'all':('<pre>','</pre>'),
            'comment':('<span class="PY_COMMENT">','</span>'),
            'keyword':('<span class="PY_KEYWORD">','</span>'),
            'parameter':('<span class="PY_PARAMETER">','</span>'),
            'identifier':( lambda x,strip=string.strip:
                           '<a name="%s"><span class="PY_IDENTIFIER">' % (strip(x)),
                           '</span></a>'),
            'string':('<span class="PY_STRING">','</span>')
            }

    set_mode_rawhtml_color = set_mode_html_color

    def set_mode_html_mono(self):

        self.css = """
<STYLE TYPE="text/css">
<!--
body{ background-color: %s }
.PY_KEYWORD{ text-decoration: underline }
.PY_COMMENT{ }
.PY_PARAMETER{ }
.PY_IDENTIFIER{ font-weight: bold}
.PY_STRING{ font-style: italic}
-->
</STYLE> """ % self.bgcolor

        self.formats = {
            'all':('<pre>','</pre>'),
            'comment':('<span class="PY_COMMENT">','</span>'),
            'keyword':( '<span class="PY_KEYWORD">','</span>'),
            'parameter':('<span class="PY_PARAMETER">','</span>'),
            'identifier':( lambda x,strip=string.strip:
                           '<a name="%s"><span class="PY_IDENTIFIER">' % (strip(x)),
                           '</span></a>'),
            'string':('<span class="PY_STRING">','</span>')
            }

    set_mode_rawhtml_mono = set_mode_html_mono

    def set_mode_ansi_mono(self):

        self.formats = {
            'all':('',''),
            'comment':('\033[2m','\033[m'),
            'keyword':('\033[4m','\033[m'),
            'parameter':('',''),
            'identifier':('\033[1m','\033[m'),
            'string':('','')
            }

    def set_mode_ansi_color(self):

        self.formats = {
            'all':('',''),
            'comment':('\033[34;2m','\033[m'),
            'keyword':('\033[1;34m','\033[m'),
            'parameter':('',''),
            'identifier':('\033[1;31m','\033[m'),
            'string':('\033[32;2m','\033[m')
            }

    ### Filters for Python scripts given as string

    def escape_html(self,text):

        t = (('&','&amp;'),('<','&lt;'),('>','&gt;'))
        for x,y in t:
            text = string.join(string.split(text,x),y)
        return text

    def filter_html(self,text):

        output = self.fontify(self.escape_html(text))
        if self.replace_URLs:
            output = re.sub('URL:([ \t]+)([^ \n\r<]+)',
                            'URL:\\1<a href="\\2">\\2</a>',output)
        html = """%s<html%s>
<head>
<title>%s</title>
<!--css-->
%s
</head>
<body>
<!--header-->
%s
<!--script-->
%s
<!--footer-->
%s
</body></html>\n""" % (HTML_DOCTYPE,
                       HTML_XMLNS,
                       self.title,
                       self.css,
                       self.header,
                       output,
                       self.footer)
        return html

    def filter_rawhtml(self,text):

        output = self.fontify(self.escape_html(text))
        if self.replace_URLs:
            output = re.sub('URL:([ \t]+)([^ \n\r<]+)',
                            'URL:\\1<a href="\\2">\\2</a>',output)
        return self.header + output + self.footer

    def filter_ansi(self,text):

        output = self.fontify(text)
        return self.header + output + self.footer

    ### Fontify engine

    def fontify(self,pytext):

        # parse
        taglist = self.tag(pytext)

        # prepend special 'all' tag:
        taglist[:0] = [('all',0,len(pytext),None)]

        # prepare splitting
        splits = []
        addsplits(splits,pytext,self.formats,taglist)

        # do splitting & inserting
        splits.sort()
        l = []
        li = 0
        for ri,dummy,insert in splits:
            if ri > li: l.append(pytext[li:ri])
            l.append(insert)
            li = ri
        if li < len(pytext): l.append(pytext[li:])

        return string.join(l,'')

def addsplits(splits,text,formats,taglist):

    """ Helper for .fontify()
    """
    for id,left,right,sublist in taglist:
        try:
            pre,post = formats[id]
        except KeyError:
            # sys.stderr.write('Warning: no format for %s specified\n'%repr(id))
            pre,post = '',''
        if type(pre) != type(''):
            pre = pre(text[left:right])
        if type(post) != type(''):
            post = post(text[left:right])
        # len(splits) is a dummy used to make sorting stable
        splits.append((left,len(splits),pre))
        if sublist:
            addsplits(splits,text,formats,sublist)
        splits.append((right,len(splits),post))

def write_html_error(titel,text):

    print """\
%s<html%s><head><title>%s</title></head>
<body>
<h2>%s</h2>
%s
</body></html>
""" % (HTML_DOCTYPE,HTML_XMLNS,titel,titel,text)

def redirect_to(url):

    sys.stdout.write('Content-Type: text/html\r\n')
    sys.stdout.write('Status: 302\r\n')
    sys.stdout.write('Location: %s\r\n\r\n' % url)
    print """
%s<html%s><head>
<title>302 Moved Temporarily</title>
</head><body>
<h1>302 Moved Temporarily</h1>
The document has moved to <a href="%s">%s</a>.<p></p>
</body></html>
""" % (HTML_DOCTYPE,HTML_XMLNS,url,url)

def main(cmdline):

    """ main(cmdline) -- process cmdline as if it were sys.argv
    """
    # parse options/files
    options = []
    optvalues = {}
    for o in cmdline[1:]:
        if o[0] == '-':
            if ':' in o:
                k,v = tuple(string.split(o,':'))
                optvalues[k] = v
                options.append(k)
            else:
                options.append(o)
        else:
            break
    files = cmdline[len(options)+1:]
    # set verbose early; the -header/-footer error paths below use it
    verbose = ('-v' in options)

    ### create converting object

    # load fontifier
    if '-marcs' in options:
        # use mxTextTool's tagging engine as fontifier
        from mx.TextTools import tag
        from mx.TextTools.Examples.Python import python_script
        tagfct = lambda text,tag=tag,pytable=python_script: \
                 tag(text,pytable)[1]
        print "Py2HTML: using Marc's tagging engine"
    else:
        # load Just's fontifier
        try:
            import PyFontify
            if PyFontify.__version__ < '0.3': raise ValueError
            tagfct = PyFontify.fontify
        except:
            print """
Sorry, but this script needs the PyFontify.py module version 0.3;
You can download it from Just's homepage at

    URL: http://starship.python.net/crew/just
"""
            sys.exit()


    if '-format' in options:
        format = optvalues['-format']
    else:
        # use default
        format = 'html'

    if '-mode' in options:
        mode = optvalues['-mode']
    else:
        # use default
        mode = 'color'

    c = PrettyPrint(tagfct,format,mode)
    convert = c.file_filter

    ### start working

    if '-title' in options:
        c.title = optvalues['-title']

    if '-bgcolor' in options:
        c.bgcolor = optvalues['-bgcolor']

    if '-header' in options:
        try:
            f = open(optvalues['-header'])
            c.header = f.read()
            f.close()
        except IOError:
            if verbose: print 'IOError: header file not found'

    if '-footer' in options:
        try:
            f = open(optvalues['-footer'])
            c.footer = f.read()
            f.close()
        except IOError:
            if verbose: print 'IOError: footer file not found'

    if '-URL' in options:
        c.replace_URLs = 1

    if '-' in options:
        convert(sys.stdin,sys.stdout)
        sys.exit()

    if '-h' in options:
        print __doc__
        sys.exit()

    if len(files) == 0:
        # Turn URL processing on
        c.replace_URLs = 1
        # Try CGI processing...
        import cgi,urllib,urlparse,os
        form = cgi.FieldStorage()
        if not form.has_key('script'):
            # Ok, then try pathinfo
            if not os.environ.has_key('PATH_INFO'):
                if INPUT_FORM:
                    redirect_to(INPUT_FORM)
                else:
                    sys.stdout.write('Content-Type: text/html\r\n\r\n')
                    write_html_error('Missing Parameter',
                                     'Missing script=URL field in request')
                sys.exit(1)
            url = os.environ['PATH_INFO'][1:] # skip the leading slash
        else:
            url = form['script'].value
        sys.stdout.write('Content-Type: text/html\r\n\r\n')
        scheme, host, path, params, query, frag = urlparse.urlparse(url)
        if not host:
            scheme = 'http'
            if os.environ.has_key('HTTP_HOST'):
                host = os.environ['HTTP_HOST']
            else:
                host = 'localhost'
            url = urlparse.urlunparse((scheme, host, path, params, query, frag))
            #print url; sys.exit()
        network = urllib.URLopener()
        try:
            tempfile,headers = network.retrieve(url)
        except IOError,reason:
            write_html_error('Error opening "%s"' % url,
                             'The given URL could not be opened. Reason: %s' %\
                             str(reason))
            sys.exit(1)
        f = open(tempfile,'rb')
        c.title = url
        c.footer = __cgifooter__
        convert(f,sys.stdout)
        f.close()
        network.close()
        sys.exit()

    if '-stdout' in options:
        filebreak = '-'*72
        for f in files:
            try:
                if len(files) > 1:
                    print filebreak
                    print 'File:',f
                    print filebreak
                convert(f,sys.stdout)
            except IOError:
                pass
    else:
        verbose = ('-v' in options)
        if verbose:
            print 'Py2HTML: working on',
        for f in files:
            try:
                if verbose: print f,
                convert(f,f+'.html')
            except IOError:
                if verbose: print '(IOError!)',
        if verbose:
            print
            print 'Done.'

if __name__=='__main__':
    main(sys.argv)
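# The heart of the fontify engine above is the taglist-to-splits
# transformation: each tag contributes a (position, prefix/postfix) pair,
# the pairs are stably sorted, and the text is sliced between them. A
# compact Python 3 rendering of the same idea (apply_tags is an
# illustrative stand-in, not this module's API):

```python
def apply_tags(text, taglist, formats):
    """Render a taglist of (id, left, right, sublist) tuples into
    marked-up text, mirroring fontify()/addsplits() above."""
    splits = []
    def add(tags):
        for tid, left, right, sub in tags:
            pre, post = formats.get(tid, ('', ''))
            # len(splits) keeps the sort stable, as in addsplits()
            splits.append((left, len(splits), pre))
            if sub:
                add(sub)
            splits.append((right, len(splits), post))
    add(taglist)
    splits.sort()
    out, li = [], 0
    for ri, _, insert in splits:
        if ri > li:
            out.append(text[li:ri])
        out.append(insert)
        li = ri
    if li < len(text):
        out.append(text[li:])
    return ''.join(out)

marked = apply_tags('def f(): pass',
                    [('keyword', 0, 3, None)],
                    {'keyword': ('<b>', '</b>')})
# marked is now '<b>def</b> f(): pass'
```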
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# CGI script to process and display queries to CVSdb
#
# This script is part of the ViewVC package. More information can be
# found at http://viewvc.org
#
# -----------------------------------------------------------------------

import os
import sys
import string
import time

import cvsdb
import viewvc
import ezt
import debug
import urllib
import fnmatch

class FormData:
    def __init__(self, form):
        self.valid = 0

        self.repository = ""
        self.branch = ""
        self.directory = ""
        self.file = ""
        self.who = ""
        self.sortby = ""
        self.date = ""
        self.hours = 0

        self.decode_thyself(form)

    def decode_thyself(self, form):
        try:
            self.repository = string.strip(form["repository"].value)
        except KeyError:
            pass
        except TypeError:
            pass
        else:
            self.valid = 1

        try:
            self.branch = string.strip(form["branch"].value)
        except KeyError:
            pass
        except TypeError:
            pass
        else:
            self.valid = 1

        try:
            self.directory = string.strip(form["directory"].value)
        except KeyError:
            pass
        except TypeError:
            pass
        else:
            self.valid = 1

        try:
            self.file = string.strip(form["file"].value)
        except KeyError:
            pass
        except TypeError:
            pass
        else:
            self.valid = 1

        try:
            self.who = string.strip(form["who"].value)
        except KeyError:
            pass
        except TypeError:
            pass
        else:
            self.valid = 1

        try:
            self.sortby = string.strip(form["sortby"].value)
        except KeyError:
            pass
        except TypeError:
            pass

        try:
            self.date = string.strip(form["date"].value)
        except KeyError:
            pass
        except TypeError:
            pass

        try:
            self.hours = int(form["hours"].value)
        except KeyError:
            pass
        except TypeError:
            pass
        except ValueError:
            pass
        else:
            self.valid = 1

## returns a tuple-list (mod-str, string)
def listparse_string(str):
    return_list = []

    cmd = ""
    temp = ""
    escaped = 0
    state = "eat leading whitespace"

    for c in str:
        ## handle escaped characters
        if not escaped and c == "\\":
            escaped = 1
            continue

        ## strip leading white space
        if state == "eat leading whitespace":
            if c in string.whitespace:
                continue
            else:
                state = "get command or data"

        ## parse to '"' or ","
        if state == "get command or data":

            ## just add escaped characters
            if escaped:
                escaped = 0
                temp = temp + c
                continue

            ## the data is in quotes after the command
            elif c == "\"":
                cmd = temp
                temp = ""
                state = "get quoted data"
                continue

            ## this tells us there was no quoted data, therefore no
            ## command; add the data and start over
            elif c == ",":
                ## strip ending whitespace on un-quoted data
                temp = string.rstrip(temp)
                return_list.append( ("", temp) )
                temp = ""
                state = "eat leading whitespace"
                continue

            ## record the data
            else:
                temp = temp + c
                continue

        ## parse until ending '"'
        if state == "get quoted data":

            ## just add escaped characters
            if escaped:
                escaped = 0
                temp = temp + c
                continue

            ## look for ending '"'
            elif c == "\"":
                return_list.append( (cmd, temp) )
                cmd = ""
                temp = ""
                state = "eat comma after quotes"
                continue

            ## record the data
            else:
                temp = temp + c
                continue

        ## parse until ","
        if state == "eat comma after quotes":
            if c in string.whitespace:
                continue

            elif c == ",":
                state = "eat leading whitespace"
                continue

            else:
                print "format error"
                sys.exit(1)

    if cmd or temp:
        return_list.append((cmd, temp))

    return return_list
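# listparse_string() implements a tiny grammar: comma-separated terms, each
# either a bare chunk or a command prefix glued to a quoted string with
# backslash escapes (e.g. 'r"foo.*", bar'). A regex-based Python 3 sketch of
# that grammar follows; `listparse` here is a hypothetical helper for
# illustration, not the shipped state-machine parser.

```python
import re

# one term: optional command word glued to a quoted string (with
# backslash escapes), or a bare unquoted chunk; commas separate terms
_TERM = re.compile(r'\s*(?:(\w*)"((?:\\.|[^"\\])*)"|([^,]+?))\s*(?:,|$)')

def listparse(s):
    out = []
    for m in _TERM.finditer(s):
        if m.group(2) is not None:
            # quoted: unescape backslashes, keep the command prefix
            out.append((m.group(1), re.sub(r'\\(.)', r'\1', m.group(2))))
        elif m.group(3):
            # bare data: no command, surrounding whitespace stripped
            out.append(('', m.group(3).strip()))
    return out

parsed = listparse('r"foo.*", bar')
# parsed is now [('r', 'foo.*'), ('', 'bar')]
```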

def decode_command(cmd):
    if cmd == "r":
        return "regex"
    elif cmd == "l":
        return "like"
    else:
        return "exact"

def form_to_cvsdb_query(form_data):
    query = cvsdb.CreateCheckinQuery()

    if form_data.repository:
        for cmd, str in listparse_string(form_data.repository):
            cmd = decode_command(cmd)
            query.SetRepository(str, cmd)

    if form_data.branch:
        for cmd, str in listparse_string(form_data.branch):
            cmd = decode_command(cmd)
            query.SetBranch(str, cmd)

    if form_data.directory:
        for cmd, str in listparse_string(form_data.directory):
            cmd = decode_command(cmd)
            query.SetDirectory(str, cmd)

    if form_data.file:
        for cmd, str in listparse_string(form_data.file):
            cmd = decode_command(cmd)
            query.SetFile(str, cmd)

    if form_data.who:
        for cmd, str in listparse_string(form_data.who):
            cmd = decode_command(cmd)
            query.SetAuthor(str, cmd)

    if form_data.sortby == "author":
        query.SetSortMethod("author")
    elif form_data.sortby == "file":
        query.SetSortMethod("file")
    else:
        query.SetSortMethod("date")

    if form_data.date:
        if form_data.date == "hours" and form_data.hours:
            query.SetFromDateHoursAgo(form_data.hours)
        elif form_data.date == "day":
            query.SetFromDateDaysAgo(1)
        elif form_data.date == "week":
            query.SetFromDateDaysAgo(7)
        elif form_data.date == "month":
            query.SetFromDateDaysAgo(31)

    return query

def prev_rev(rev):
    '''Returns a string representing the previous revision of the argument.'''
    r = string.split(rev, '.')
    # decrement final revision component
    r[-1] = str(int(r[-1]) - 1)
    # prune if we pass the beginning of the branch
    if len(r) > 2 and r[-1] == '0':
        r = r[:-2]
    return string.join(r, '.')
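# prev_rev() relies on CVS revision numbering, where branch revisions append
# two components (1.2 branches to 1.2.2.1); decrementing past the first
# revision on a branch prunes back to the branch point. A Python 3
# transcription for illustration:

```python
def prev_rev(rev):
    """Python 3 transcription of the prev_rev() above, for illustration."""
    r = rev.split('.')
    r[-1] = str(int(r[-1]) - 1)      # decrement the final component
    if len(r) > 2 and r[-1] == '0':  # fell off the start of a branch:
        r = r[:-2]                   # prune back to the branch point
    return '.'.join(r)

# prev_rev('1.5') yields '1.4'; prev_rev('1.2.2.1') yields '1.2'
```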
|
||||
|
||||
def is_forbidden(cfg, cvsroot_name, module):
|
||||
auth_params = cfg.get_authorizer_params('forbidden', cvsroot_name)
|
||||
forbidden = auth_params.get('forbidden', '')
|
||||
forbidden = map(string.strip, filter(None, string.split(forbidden, ',')))
|
||||
default = 0
|
||||
for pat in forbidden:
|
||||
if pat[0] == '!':
|
||||
default = 1
|
||||
if fnmatch.fnmatchcase(module, pat[1:]):
|
||||
return 0
|
||||
elif fnmatch.fnmatchcase(module, pat):
|
||||
return 1
|
||||
return default
|
||||
|
||||
def build_commit(server, cfg, desc, files, cvsroots, viewvc_link):
  ob = _item(num_files=len(files), files=[])

  if desc:
    ob.log = string.replace(server.escape(desc), '\n', '<br />')
  else:
    ob.log = '&nbsp;'

  for commit in files:
    repository = commit.GetRepository()
    directory = commit.GetDirectory()
    cvsroot_name = cvsroots.get(repository)

    ## find the module name (if any)
    try:
      module = filter(None, string.split(directory, '/'))[0]
    except IndexError:
      module = None

    ## skip commits we aren't supposed to show
    if module and ((module == 'CVSROOT' and cfg.options.hide_cvsroot) \
                   or is_forbidden(cfg, cvsroot_name, module)):
      continue

    ctime = commit.GetTime()
    if not ctime:
      ctime = "&nbsp;"
    else:
      if (cfg.options.use_localtime):
        ctime = time.strftime("%y/%m/%d %H:%M %Z", time.localtime(ctime))
      else:
        ctime = time.strftime("%y/%m/%d %H:%M", time.gmtime(ctime)) \
                + ' UTC'

    ## make the file link
    try:
      file = (directory and directory + "/") + commit.GetFile()
    except:
      raise Exception, str([directory, commit.GetFile()])

    ## if we couldn't find the cvsroot path configured in the
    ## viewvc.conf file, then don't make the link
    if cvsroot_name:
      flink = '[%s] <a href="%s/%s?root=%s">%s</a>' % (
        cvsroot_name, viewvc_link, urllib.quote(file),
        cvsroot_name, file)
      if commit.GetType() == commit.CHANGE:
        dlink = '%s/%s?root=%s&view=diff&r1=%s&r2=%s' % (
          viewvc_link, urllib.quote(file), cvsroot_name,
          prev_rev(commit.GetRevision()), commit.GetRevision())
      else:
        dlink = None
    else:
      flink = '[%s] %s' % (repository, file)
      dlink = None

    ob.files.append(_item(date=ctime,
                          author=commit.GetAuthor(),
                          link=flink,
                          rev=commit.GetRevision(),
                          branch=commit.GetBranch(),
                          plus=int(commit.GetPlusCount()),
                          minus=int(commit.GetMinusCount()),
                          type=commit.GetTypeString(),
                          difflink=dlink,
                          ))

  return ob

def run_query(server, cfg, form_data, viewvc_link):
  query = form_to_cvsdb_query(form_data)
  db = cvsdb.ConnectDatabaseReadOnly(cfg)
  db.RunQuery(query)

  if not query.commit_list:
    return [ ]

  commits = [ ]
  files = [ ]

  cvsroots = {}
  rootitems = cfg.general.svn_roots.items() + cfg.general.cvs_roots.items()
  for key, value in rootitems:
    cvsroots[cvsdb.CleanRepository(value)] = key

  current_desc = query.commit_list[0].GetDescription()
  for commit in query.commit_list:
    desc = commit.GetDescription()
    if current_desc == desc:
      files.append(commit)
      continue

    commits.append(build_commit(server, cfg, current_desc, files,
                                cvsroots, viewvc_link))

    files = [ commit ]
    current_desc = desc

  ## add the last file group to the commit list
  commits.append(build_commit(server, cfg, current_desc, files,
                              cvsroots, viewvc_link))

  # Strip out commits that don't have any files attached to them.  The
  # files probably aren't present because they've been blocked via
  # forbiddenness.
  def _only_with_files(commit):
    return len(commit.files) > 0
  commits = filter(_only_with_files, commits)

  return commits


def main(server, cfg, viewvc_link):
  try:

    form = server.FieldStorage()
    form_data = FormData(form)

    if form_data.valid:
      commits = run_query(server, cfg, form_data, viewvc_link)
      query = None
    else:
      commits = [ ]
      query = 'skipped'

    script_name = server.getenv('SCRIPT_NAME', '')

    data = {
      'cfg' : cfg,
      'address' : cfg.general.address,
      'vsn' : viewvc.__version__,

      'repository' : server.escape(form_data.repository, 1),
      'branch' : server.escape(form_data.branch, 1),
      'directory' : server.escape(form_data.directory, 1),
      'file' : server.escape(form_data.file, 1),
      'who' : server.escape(form_data.who, 1),
      'docroot' : cfg.options.docroot is None \
                  and viewvc_link + '/' + viewvc.docroot_magic_path \
                  or cfg.options.docroot,

      'sortby' : form_data.sortby,
      'date' : form_data.date,

      'query' : query,
      'commits' : commits,
      'num_commits' : len(commits),
      'rss_href' : None,
      }

    if form_data.hours:
      data['hours'] = form_data.hours
    else:
      data['hours'] = 2

    server.header()

    # generate the page
    template = viewvc.get_view_template(cfg, "query")
    template.generate(server.file(), data)

  except SystemExit, e:
    pass
  except:
    exc_info = debug.GetExceptionData()
    server.header(status=exc_info['status'])
    debug.PrintException(server, exc_info)


class _item:
  def __init__(self, **kw):
    vars(self).update(kw)

@ -0,0 +1,391 @@
# -*-python-*-
#
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# generic server api - currently supports normal cgi, mod_python, and
# active server pages
#
# -----------------------------------------------------------------------

import types
import string
import os
import sys
import re


# global server object. It will be either a CgiServer or a proxy to
# an AspServer or ModPythonServer object.
server = None


class Server:
  def __init__(self):
    self.pageGlobals = {}

  def self(self):
    return self

  def close(self):
    pass


class ThreadedServer(Server):
  def __init__(self):
    Server.__init__(self)

    self.inheritableOut = 0

    global server
    if not isinstance(server, ThreadedServerProxy):
      server = ThreadedServerProxy()
    if not isinstance(sys.stdout, File):
      sys.stdout = File(server)
    server.registerThread(self)

  def file(self):
    return File(self)

  def close(self):
    server.unregisterThread()


class ThreadedServerProxy:
  """In a multithreaded server environment, ThreadedServerProxy stores the
  different server objects being used to display pages and transparently
  forwards access to them based on the current thread id."""

  def __init__(self):
    self.__dict__['servers'] = { }
    global thread
    import thread

  def registerThread(self, server):
    self.__dict__['servers'][thread.get_ident()] = server

  def unregisterThread(self):
    del self.__dict__['servers'][thread.get_ident()]

  def self(self):
    """This function bypasses the getattr and setattr trickery and returns
    the actual server object."""
    return self.__dict__['servers'][thread.get_ident()]

  def __getattr__(self, key):
    return getattr(self.self(), key)

  def __setattr__(self, key, value):
    setattr(self.self(), key, value)

  def __delattr__(self, key):
    delattr(self.self(), key)

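The proxy's attribute trickery (writing `servers` straight into `__dict__` so `__setattr__` is not triggered, then forwarding every other attribute to the current thread's server) can be sketched with the modern `threading` module in place of the legacy `thread` module. `FakeServer` is a stand-in for illustration, not part of ViewVC:

```python
import threading

class ServerProxy:
    """Sketch of the thread-keyed forwarding used by ThreadedServerProxy."""
    def __init__(self):
        # Write into __dict__ directly so __setattr__ is bypassed.
        self.__dict__['servers'] = {}

    def register(self, server_obj):
        self.__dict__['servers'][threading.get_ident()] = server_obj

    def _self(self):
        return self.__dict__['servers'][threading.get_ident()]

    def __getattr__(self, key):
        # Any unknown attribute access is forwarded to this thread's server.
        return getattr(self._self(), key)

class FakeServer:
    def __init__(self, name):
        self.name = name

proxy = ServerProxy()
proxy.register(FakeServer('worker-1'))
assert proxy.name == 'worker-1'
```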
class File:
  def __init__(self, server):
    self.closed = 0
    self.mode = 'w'
    self.name = "<AspFile file>"
    self.softspace = 0
    self.server = server

  def write(self, s):
    self.server.write(s)

  def writelines(self, list):
    for s in list:
      self.server.write(s)

  def flush(self):
    self.server.flush()

  def truncate(self, size):
    pass

  def close(self):
    pass


class CgiServer(Server):
  def __init__(self, inheritableOut = 1):
    Server.__init__(self)
    self.headerSent = 0
    self.headers = []
    self.inheritableOut = inheritableOut
    self.iis = os.environ.get('SERVER_SOFTWARE', '')[:13] == 'Microsoft-IIS'

    if sys.platform == "win32" and inheritableOut:
      import msvcrt
      msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)

    global server
    server = self

    global cgi
    import cgi

  def addheader(self, name, value):
    self.headers.append((name, value))

  def header(self, content_type='text/html; charset=UTF-8', status=None):
    if not self.headerSent:
      self.headerSent = 1

      extraheaders = ''
      for (name, value) in self.headers:
        extraheaders = extraheaders + '%s: %s\r\n' % (name, value)

      # The only way ViewVC pages and error messages are visible under
      # IIS is if a 200 error code is returned. Otherwise IIS instead
      # sends the static error page corresponding to the code number.
      if status is None or (status[:3] != '304' and self.iis):
        status = ''
      else:
        status = 'Status: %s\r\n' % status

      sys.stdout.write('%sContent-Type: %s\r\n%s\r\n'
                       % (status, content_type, extraheaders))

  def redirect(self, url):
    if self.iis: url = fix_iis_url(self, url)
    self.addheader('Location', url)
    self.header(status='301 Moved')
    print 'This document is located <a href="%s">here</a>.' % url
    sys.exit(0)

  def escape(self, s, quote = None):
    return cgi.escape(s, quote)

  def getenv(self, name, value=None):
    ret = os.environ.get(name, value)
    if self.iis and name == 'PATH_INFO' and ret:
      ret = fix_iis_path_info(self, ret)
    return ret

  def params(self):
    return cgi.parse()

  def FieldStorage(self, fp=None, headers=None, outerboundary="",
                   environ=os.environ, keep_blank_values=0, strict_parsing=0):
    return cgi.FieldStorage(fp, headers, outerboundary, environ,
                            keep_blank_values, strict_parsing)

  def write(self, s):
    sys.stdout.write(s)

  def flush(self):
    sys.stdout.flush()

  def file(self):
    return sys.stdout


class AspServer(ThreadedServer):
  def __init__(self, Server, Request, Response, Application):
    ThreadedServer.__init__(self)
    self.headerSent = 0
    self.server = Server
    self.request = Request
    self.response = Response
    self.application = Application

  def addheader(self, name, value):
    self.response.AddHeader(name, value)

  def header(self, content_type=None, status=None):
    # Normally, setting self.response.ContentType after headers have already
    # been sent simply results in an AttributeError exception, but sometimes
    # it leads to a fatal ASP error. For this reason I'm keeping the
    # self.headerSent member and only checking for the exception as a
    # secondary measure
    if not self.headerSent:
      try:
        self.headerSent = 1
        if content_type is None:
          self.response.ContentType = 'text/html; charset=UTF-8'
        else:
          self.response.ContentType = content_type
        if status is not None: self.response.Status = status
      except AttributeError:
        pass

  def redirect(self, url):
    self.response.Redirect(url)
    sys.exit()

  def escape(self, s, quote = None):
    return self.server.HTMLEncode(str(s))

  def getenv(self, name, value = None):
    ret = self.request.ServerVariables(name)()
    if not type(ret) is types.UnicodeType:
      return value
    ret = str(ret)
    if name == 'PATH_INFO':
      ret = fix_iis_path_info(self, ret)
    return ret

  def params(self):
    p = {}
    for i in self.request.Form:
      p[str(i)] = map(str, self.request.Form[i])
    for i in self.request.QueryString:
      p[str(i)] = map(str, self.request.QueryString[i])
    return p

  def FieldStorage(self, fp=None, headers=None, outerboundary="",
                   environ=os.environ, keep_blank_values=0, strict_parsing=0):

    # Code based on a very helpful usenet post by "Max M" (maxm@mxm.dk)
    # Subject "Re: Help! IIS and Python"
    # http://groups.google.com/groups?selm=3C7C0AB6.2090307%40mxm.dk

    from StringIO import StringIO
    from cgi import FieldStorage

    environ = {}
    for i in self.request.ServerVariables:
      environ[str(i)] = str(self.request.ServerVariables(i)())

    # this would be bad for uploaded files, could use a lot of memory
    binaryContent, size = self.request.BinaryRead(int(environ['CONTENT_LENGTH']))

    fp = StringIO(str(binaryContent))
    fs = FieldStorage(fp, None, "", environ, keep_blank_values, strict_parsing)
    fp.close()
    return fs

  def write(self, s):
    t = type(s)
    if t is types.StringType:
      s = buffer(s)
    elif not t is types.BufferType:
      s = buffer(str(s))

    self.response.BinaryWrite(s)

  def flush(self):
    self.response.Flush()


_re_status = re.compile("\\d+")


class ModPythonServer(ThreadedServer):
  def __init__(self, request):
    ThreadedServer.__init__(self)
    self.request = request
    self.headerSent = 0

    global cgi
    import cgi

  def addheader(self, name, value):
    self.request.headers_out.add(name, value)

  def header(self, content_type=None, status=None):
    if content_type is None:
      self.request.content_type = 'text/html; charset=UTF-8'
    else:
      self.request.content_type = content_type
    self.headerSent = 1

    if status is not None:
      m = _re_status.match(status)
      if not m is None:
        self.request.status = int(m.group())

  def redirect(self, url):
    import mod_python.apache
    self.request.headers_out['Location'] = url
    self.request.status = mod_python.apache.HTTP_MOVED_TEMPORARILY
    self.request.write("You are being redirected to <a href=\"%s\">%s</a>"
                       % (url, url))
    sys.exit()

  def escape(self, s, quote = None):
    return cgi.escape(s, quote)

  def getenv(self, name, value = None):
    try:
      return self.request.subprocess_env[name]
    except KeyError:
      return value

  def params(self):
    import mod_python.util
    if self.request.args is None:
      return {}
    else:
      return mod_python.util.parse_qs(self.request.args)

  def FieldStorage(self, fp=None, headers=None, outerboundary="",
                   environ=os.environ, keep_blank_values=0, strict_parsing=0):
    import mod_python.util
    return mod_python.util.FieldStorage(self.request, keep_blank_values,
                                        strict_parsing)

  def write(self, s):
    self.request.write(s)

  def flush(self):
    pass


def fix_iis_url(server, url):
  """When a CGI application under IIS outputs a "Location" header with a url
  beginning with a forward slash, IIS tries to optimise the redirect by not
  returning any output from the original CGI script at all and instead just
  returning the new page in its place. Because of this, the browser does
  not know it is getting a different page than it requested. As a result,
  the address bar that appears in the browser window shows the wrong location
  and if the new page is in a different folder than the old one, any relative
  links on it will be broken.

  This function can be used to circumvent the IIS "optimization" of local
  redirects. If it is passed a location that begins with a forward slash it
  will return a URL constructed with the information in the CGI environment.
  If it is passed a URL or any location that doesn't begin with a forward
  slash, it will return just the argument unaltered.
  """
  if url[0] == '/':
    if server.getenv('HTTPS') == 'on':
      dport = "443"
      prefix = "https://"
    else:
      dport = "80"
      prefix = "http://"
    prefix = prefix + server.getenv('HTTP_HOST')
    if server.getenv('SERVER_PORT') != dport:
      prefix = prefix + ":" + server.getenv('SERVER_PORT')
    return prefix + url
  return url


def fix_iis_path_info(server, path_info):
  """Fix the PATH_INFO value in IIS"""
  # If the viewvc cgi's are in the /viewvc/ folder on the web server and a
  # request looks like
  #
  #      /viewvc/viewvc.cgi/myproject/?someoption
  #
  # The CGI environment variables on IIS will look like this:
  #
  #      SCRIPT_NAME  =  /viewvc/viewvc.cgi
  #      PATH_INFO    =  /viewvc/viewvc.cgi/myproject/
  #
  # Whereas on Apache they look like:
  #
  #      SCRIPT_NAME  =  /viewvc/viewvc.cgi
  #      PATH_INFO    =  /myproject/
  #
  # This function converts the IIS PATH_INFO into the nonredundant form
  # expected by ViewVC
  return path_info[len(server.getenv('SCRIPT_NAME', '')):]

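The comment above reduces to a one-line prefix strip. A Python 3 sketch with the environment lookups replaced by plain arguments:

```python
def fix_iis_path_info(script_name, path_info):
    """Sketch of the IIS PATH_INFO fix: drop the SCRIPT_NAME prefix."""
    return path_info[len(script_name):]

# IIS-style redundant PATH_INFO is reduced to the Apache-style form:
assert fix_iis_path_info('/viewvc/viewvc.cgi',
                         '/viewvc/viewvc.cgi/myproject/') == '/myproject/'
# With an empty SCRIPT_NAME the value passes through untouched:
assert fix_iis_path_info('', '/myproject/') == '/myproject/'
```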
@ -0,0 +1,49 @@
# -*-python-*-
#
# Copyright (C) 2006-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

"""Generic API for implementing authorization checks employed by ViewVC."""

import string
import vclib


class GenericViewVCAuthorizer:
  """Abstract class encapsulating version control authorization routines."""

  def __init__(self, username=None, params={}):
    """Create a GenericViewVCAuthorizer object which will be used to
    validate that USERNAME has the permissions needed to view version
    control repositories (in whole or in part).  PARAMS is a
    dictionary of custom parameters for the authorizer."""
    pass

  def check_root_access(self, rootname):
    """Return 1 iff the associated username is permitted to read ROOTNAME."""
    pass

  def check_path_access(self, rootname, path_parts, pathtype, rev=None):
    """Return 1 iff the associated username is permitted to read
    revision REV of the path PATH_PARTS (of type PATHTYPE) in
    repository ROOTNAME."""
    pass



##############################################################################

class ViewVCAuthorizer(GenericViewVCAuthorizer):
  """The uber-permissive authorizer."""
  def check_root_access(self, rootname):
    return 1

  def check_path_access(self, rootname, path_parts, pathtype, rev=None):
    return 1

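A concrete subclass makes the contract clearer. This `SingleRootAuthorizer` is purely hypothetical (not a ViewVC module); it also restates a minimal base class that, unlike the bare `pass` methods above, stores USERNAME and PARAMS for subclasses to use:

```python
class GenericViewVCAuthorizer:
    """Restated minimal form of the abstract base class."""
    def __init__(self, username=None, params={}):
        self.username = username
        self.params = params

    def check_root_access(self, rootname):
        raise NotImplementedError

    def check_path_access(self, rootname, path_parts, pathtype, rev=None):
        raise NotImplementedError


class SingleRootAuthorizer(GenericViewVCAuthorizer):
    """Hypothetical authorizer: permit one named root, deny all others."""
    def check_root_access(self, rootname):
        return 1 if rootname == self.params.get('root') else 0

    def check_path_access(self, rootname, path_parts, pathtype, rev=None):
        # In this sketch, path access simply follows root access.
        return self.check_root_access(rootname)


auth = SingleRootAuthorizer('alice', {'root': 'projA'})
assert auth.check_root_access('projA') == 1
assert auth.check_root_access('projB') == 0
```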
@ -0,0 +1,46 @@
# -*-python-*-
#
# Copyright (C) 2006-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
import vcauth
import vclib
import fnmatch
import string

class ViewVCAuthorizer(vcauth.GenericViewVCAuthorizer):
  """A simple top-level module authorizer."""
  def __init__(self, username, params={}):
    forbidden = params.get('forbidden', '')
    self.forbidden = map(string.strip,
                         filter(None, string.split(forbidden, ',')))

  def check_root_access(self, rootname):
    return 1

  def check_path_access(self, rootname, path_parts, pathtype, rev=None):
    # No path?  No problem.
    if not path_parts:
      return 1

    # Not a directory?  We aren't interested.
    if pathtype != vclib.DIR:
      return 1

    # At this point we're looking at a directory path.
    module = path_parts[0]
    default = 1
    for pat in self.forbidden:
      if pat[0] == '!':
        default = 0
        if fnmatch.fnmatchcase(module, pat[1:]):
          return 1
      elif fnmatch.fnmatchcase(module, pat):
        return 0
    return default

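Note the return values here are access decisions (1 = allowed), the opposite sense of the `is_forbidden` helper earlier. A Python 3 sketch of the directory check with the `params` parsing omitted:

```python
import fnmatch

def check_path_access(forbidden, path_parts, is_dir):
    """Sketch of the forbidden authorizer's check (1 = access allowed)."""
    # Non-directories and empty paths are always visible.
    if not path_parts or not is_dir:
        return 1
    module = path_parts[0]
    default = 1
    for pat in forbidden:
        if pat[0] == '!':
            # Negated patterns turn the list into a whitelist.
            default = 0
            if fnmatch.fnmatchcase(module, pat[1:]):
                return 1
        elif fnmatch.fnmatchcase(module, pat):
            return 0
    return default

assert check_path_access(['secret*'], ['secretproj', 'README'], True) == 0
assert check_path_access(['secret*'], ['public'], True) == 1
assert check_path_access(['!public*'], ['public'], True) == 1
assert check_path_access(['!public*'], ['other'], True) == 0
# Files are never filtered, only top-level directories:
assert check_path_access(['secret*'], ['secretproj'], False) == 1
```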
@ -0,0 +1,58 @@
# -*-python-*-
#
# Copyright (C) 2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
import vcauth
import vclib
import fnmatch
import string
import re


def _split_regexp(restr):
  """Return a 2-tuple consisting of a compiled regular expression
  object and a boolean flag indicating if that object should be
  interpreted inversely."""
  if restr[0] == '!':
    return re.compile(restr[1:]), 1
  return re.compile(restr), 0


class ViewVCAuthorizer(vcauth.GenericViewVCAuthorizer):
  """A simple regular-expression-based authorizer."""
  def __init__(self, username, params={}):
    forbidden = params.get('forbiddenre', '')
    self.forbidden = map(lambda x: _split_regexp(string.strip(x)),
                         filter(None, string.split(forbidden, ',')))

  def _check_root_path_access(self, root_path):
    default = 1
    for forbidden, negated in self.forbidden:
      if negated:
        default = 0
        if forbidden.search(root_path):
          return 1
      elif forbidden.search(root_path):
        return 0
    return default

  def check_root_access(self, rootname):
    return self._check_root_path_access(rootname)

  def check_path_access(self, rootname, path_parts, pathtype, rev=None):
    root_path = rootname
    if path_parts:
      root_path = root_path + '/' + string.join(path_parts, '/')
      if pathtype == vclib.DIR:
        root_path = root_path + '/'
    else:
      root_path = root_path + '/'
    return self._check_root_path_access(root_path)

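With the indentation restored, the two `search` calls in `_check_root_path_access` are not redundant: the first only runs for negated patterns (where a match grants access), the second for plain patterns (where a match denies it). A Python 3 sketch:

```python
import re

def split_regexp(restr):
    """Sketch of _split_regexp: a leading '!' marks an inverted pattern."""
    if restr[0] == '!':
        return re.compile(restr[1:]), 1
    return re.compile(restr), 0

def check_root_path(forbidden, root_path):
    """Mirror of _check_root_path_access (1 = access allowed)."""
    default = 1
    for pat, negated in forbidden:
        if negated:
            default = 0
            if pat.search(root_path):
                return 1
        elif pat.search(root_path):
            return 0
    return default

rules = [split_regexp(p) for p in ['secret']]
assert check_root_path(rules, 'myroot/secret/') == 0
assert check_root_path(rules, 'myroot/public/') == 1
# Negated rules flip the default: only matching paths stay visible.
rules = [split_regexp(p) for p in ['!public']]
assert check_root_path(rules, 'myroot/public/') == 1
assert check_root_path(rules, 'myroot/other/') == 0
```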
@ -0,0 +1,223 @@
# -*-python-*-
#
# Copyright (C) 2006-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
# (c) 2006 Sergey Lapin <slapin@dataart.com>

import vcauth
import string
import os.path
import debug

from ConfigParser import ConfigParser

class ViewVCAuthorizer(vcauth.GenericViewVCAuthorizer):
  """Subversion authz authorizer module"""

  def __init__(self, username, params={}):
    self.username = username
    self.rootpaths = { }  # {root -> { paths -> access boolean for USERNAME }}

    # Get the authz file location from a passed-in parameter.
    self.authz_file = params.get('authzfile')
    if not self.authz_file:
      raise debug.ViewVCException("No authzfile configured")
    if not os.path.exists(self.authz_file):
      raise debug.ViewVCException("Configured authzfile file not found")

  def _get_paths_for_root(self, rootname):
    if self.rootpaths.has_key(rootname):
      return self.rootpaths[rootname]

    paths_for_root = { }

    # Parse the authz file.
    cp = ConfigParser()
    cp.read(self.authz_file)

    # Figure out if there are any aliases for the current username
    aliases = []
    if cp.has_section('aliases'):
      for alias in cp.options('aliases'):
        entry = cp.get('aliases', alias)
        if entry == self.username:
          aliases.append(alias)

    # Figure out which groups USERNAME is a part of.
    groups = []
    if cp.has_section('groups'):
      all_groups = []

      def _process_group(groupname):
        """Inline function to handle groups within groups.

        For a group to be within another group in SVN, the group
        definitions must be in the correct order in the config file.
        ie. If group A is a member of group B then group A must be
        defined before group B in the [groups] section.

        Unfortunately, the ConfigParser class provides no way of
        finding the order in which groups were defined so, for reasons
        of practicality, this function lets you get away with them
        being defined in the wrong order.  Recursion is guarded
        against though."""

        # If we already know the user is part of this already-
        # processed group, return that fact.
        if groupname in groups:
          return 1
        # Otherwise, ensure we don't process a group twice.
        if groupname in all_groups:
          return 0
        # Store the group name in a global list so it won't be processed again
        all_groups.append(groupname)
        group_member = 0
        groupname = groupname.strip()
        entries = string.split(cp.get('groups', groupname), ',')
        for entry in entries:
          entry = string.strip(entry)
          if entry == self.username:
            group_member = 1
            break
          elif entry[0:1] == "@" and _process_group(entry[1:]):
            group_member = 1
            break
          elif entry[0:1] == "&" and entry[1:] in aliases:
            group_member = 1
            break
        if group_member:
          groups.append(groupname)
        return group_member

      # Process the groups
      for group in cp.options('groups'):
        _process_group(group)

    def _userspec_matches_user(userspec):
      # If there is an inversion character, recurse and return the
      # opposite result.
      if userspec[0:1] == '~':
        return not _userspec_matches_user(userspec[1:])

      # See if the userspec applies to our current user.
      return userspec == '*' \
             or userspec == self.username \
             or (self.username is not None and userspec == "$authenticated") \
             or (self.username is None and userspec == "$anonymous") \
             or (userspec[0:1] == "@" and userspec[1:] in groups) \
             or (userspec[0:1] == "&" and userspec[1:] in aliases)

    def _process_access_section(section):
      """Inline function for determining user access in a single
      config section.  Return a two-tuple (ALLOW, DENY) containing
      the access determination for USERNAME in a given authz file
      SECTION (if any)."""

      # Figure if this path is explicitly allowed or denied to USERNAME.
      allow = deny = 0
      for user in cp.options(section):
        user = string.strip(user)
        if _userspec_matches_user(user):
          # See if the 'r' permission is among the ones granted to
          # USER.  If so, we can stop looking.  (Entry order is not
          # relevant -- we'll use the most permissive entry, meaning
          # one 'allow' is all we need.)
          allow = string.find(cp.get(section, user), 'r') != -1
          deny = not allow
          if allow:
            break
      return allow, deny

    # Read the other (non-"groups") sections, and figure out in which
    # repositories USERNAME or his groups have read rights.  We'll
    # first check groups that have no specific repository designation,
    # then superimpose those that have a repository designation which
    # matches the one we're asking about.
    root_sections = []
    for section in cp.sections():

      # Skip the "groups" section -- we handled that already.
      if section == 'groups':
        continue

      if section == 'aliases':
        continue

      # Process root-agnostic access sections; skip (but remember)
      # root-specific ones that match our root; ignore altogether
      # root-specific ones that don't match our root.  While we're at
      # it, go ahead and figure out the repository path we're talking
      # about.
      if section.find(':') == -1:
        path = section
      else:
        name, path = string.split(section, ':', 1)
        if name == rootname:
          root_sections.append(section)
        continue

      # Check for a specific access determination.
      allow, deny = _process_access_section(section)

      # If we got an explicit access determination for this path and this
      # USERNAME, record it.
      if allow or deny:
        if path != '/':
          path = '/' + string.join(filter(None, string.split(path, '/')), '/')
        paths_for_root[path] = allow

    # Okay.  Superimpose those root-specific values now.
    for section in root_sections:

      # Get the path again.
      name, path = string.split(section, ':', 1)

      # Check for a specific access determination.
      allow, deny = _process_access_section(section)

      # If we got an explicit access determination for this path and this
      # USERNAME, record it.
      if allow or deny:
        if path != '/':
          path = '/' + string.join(filter(None, string.split(path, '/')), '/')
        paths_for_root[path] = allow

    # If the root isn't readable, there's no point in caring about all
    # the specific paths the user can't see.  Just point the rootname
    # to a None paths dictionary.
    root_is_readable = 0
    for path in paths_for_root.keys():
      if paths_for_root[path]:
        root_is_readable = 1
        break
    if not root_is_readable:
      paths_for_root = None

    self.rootpaths[rootname] = paths_for_root
    return paths_for_root

  def check_root_access(self, rootname):
    paths = self._get_paths_for_root(rootname)
    return (paths is not None) and 1 or 0

  def check_path_access(self, rootname, path_parts, pathtype, rev=None):
    # Crawl upward from the path represented by PATH_PARTS toward the
    # root of the repository, looking for an explicit grant or denial
    # of access.
    paths = self._get_paths_for_root(rootname)
    if paths is None:
      return 0
    parts = path_parts[:]
    while parts:
      path = '/' + string.join(parts, '/')
      if paths.has_key(path):
        return paths[path]
      del parts[-1]
    return paths.get('/', 0)

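The upward crawl in `check_path_access` gives the nearest explicitly-configured ancestor the final say, which is how Subversion authz inheritance works. A Python 3 sketch over a prebuilt paths dictionary (the authz parsing is omitted):

```python
def check_path_access(paths, path_parts):
    """Sketch of the crawl: nearest explicit authz entry wins (1 = read)."""
    if paths is None:
        return 0
    parts = path_parts[:]
    while parts:
        path = '/' + '/'.join(parts)
        if path in paths:
            return paths[path]
        del parts[-1]
    # No explicit entry anywhere on the path: fall back to the root rule.
    return paths.get('/', 0)

paths = {'/': 1, '/secret': 0, '/secret/shared': 1}
assert check_path_access(paths, ['secret', 'notes.txt']) == 0
assert check_path_access(paths, ['secret', 'shared', 'doc.txt']) == 1
assert check_path_access(paths, ['public']) == 1
```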
@ -0,0 +1,420 @@
# -*-python-*-
#
# Copyright (C) 1999-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

"""Version Control lib is an abstract API to access versioning systems
such as CVS.
"""

import string
import types


# item types returned by Repository.itemtype().
FILE = 'FILE'
DIR = 'DIR'

# diff types recognized by Repository.rawdiff().
UNIFIED = 1
CONTEXT = 2
SIDE_BY_SIDE = 3

# root types returned by Repository.roottype().
CVS = 'cvs'
SVN = 'svn'

# action kinds found in ChangedPath.action
ADDED = 'added'
DELETED = 'deleted'
REPLACED = 'replaced'
MODIFIED = 'modified'

# log sort keys
SORTBY_DEFAULT = 0  # default/no sorting
SORTBY_DATE = 1     # sorted by date, youngest first
SORTBY_REV = 2      # sorted by revision, youngest first


# ======================================================================
#
class Repository:
  """Abstract class representing a repository."""

  def rootname(self):
    """Return the name of this repository."""

  def roottype(self):
    """Return the type of this repository (vclib.CVS, vclib.SVN, ...)."""

  def rootpath(self):
    """Return the location of this repository."""

  def authorizer(self):
    """Return the vcauth.Authorizer object associated with this
    repository, or None if no such association has been made."""

  def open(self):
    """Open a connection to the repository."""

  def itemtype(self, path_parts, rev):
    """Return the type of the item (file or dir) at the given path and
    revision.

    The result will be vclib.DIR or vclib.FILE.

    The path is specified as a list of components, relative to the root
    of the repository, e.g. ["subdir1", "subdir2", "filename"].

    rev is the revision of the item to check.
    """
    pass

  def openfile(self, path_parts, rev):
    """Open a file object to read file contents at a given path and revision.

    The return value is a 2-tuple containing the file object and the
    revision number in canonical form.

    The path is specified as a list of components, relative to the root
    of the repository, e.g. ["subdir1", "subdir2", "filename"].

    rev is the revision of the file to check out.
    """

  def listdir(self, path_parts, rev, options):
    """Return a list of files in a directory.

    The result is a list of DirEntry objects.

    The path is specified as a list of components, relative to the root
    of the repository, e.g. ["subdir1", "subdir2", "filename"].

    rev is the revision of the directory to list.

    options is a dictionary of implementation-specific options.
    """

  def dirlogs(self, path_parts, rev, entries, options):
    """Augment directory entries with log information.

    New properties will be set on all of the DirEntry objects in the
    entries list.  At the very least, a "rev" property will be set to a
    revision number or None if the entry doesn't have a number.  Other
    properties that may be set include "date", "author", "log", "size",
    and "lockinfo".

    The path is specified as a list of components, relative to the root
    of the repository, e.g. ["subdir1", "subdir2", "filename"].

    rev is the revision of the directory listing and will affect which
    log messages are returned.

    entries is a list of DirEntry objects returned from a previous call
    to the listdir() method.

    options is a dictionary of implementation-specific options.
    """

  def itemlog(self, path_parts, rev, sortby, first, limit, options):
    """Retrieve an item's log information.

    The result is a list of Revision objects.

    The path is specified as a list of components, relative to the root
    of the repository, e.g. ["subdir1", "subdir2", "filename"].

    rev is the revision of the item to return information about.

    sortby indicates the way in which the returned list should be
    sorted (SORTBY_DEFAULT, SORTBY_DATE, SORTBY_REV).

    first is the 0-based index of the first Revision returned (after
    sorting, if any, has occurred).

    limit is the maximum number of returned Revisions, or 0 to return
    all available data.

    options is a dictionary of implementation-specific options.
    """

  def itemprops(self, path_parts, rev):
    """Return a dictionary mapping property names to property values
    for properties stored on an item.

    The path is specified as a list of components, relative to the root
    of the repository, e.g. ["subdir1", "subdir2", "filename"].

    rev is the revision of the item to return information about.
    """

  def rawdiff(self, path_parts1, rev1, path_parts2, rev2, type, options={}):
    """Return a diff (in GNU diff format) of two file revisions.

    type is the requested diff type (UNIFIED, CONTEXT, etc.)

    options is a dictionary that can contain the following options plus
    implementation-specific options:

      context - integer, number of context lines to include
      funout - boolean, include C function names
      ignore_white - boolean, ignore whitespace

    The return value is a python file object.
    """

  def annotate(self, path_parts, rev):
    """Return a list of annotated file content lines and a revision.

    The result is a list of Annotation objects, sorted by their
    line_number components.
    """

  def revinfo(self, rev):
    """Return information about a global revision.

    rev is the revision to return information about.

    The return value is a 4-tuple containing the date, author, log
    message, and a list of ChangedPath items representing the paths
    changed.

    Raise vclib.UnsupportedFeature if the version control system
    doesn't support a global revision concept.
    """

  def isexecutable(self, path_parts, rev):
    """Return true iff a given revision of a versioned file is to be
    considered an executable program or script.

    The path is specified as a list of components, relative to the root
    of the repository, e.g. ["subdir1", "subdir2", "filename"].

    rev is the revision of the item to return information about.
    """


# ======================================================================
class DirEntry:
  """Instances represent items in a directory listing."""

  def __init__(self, name, kind, errors=[]):
    """Create a new DirEntry() item:
      NAME:  The name of the directory entry
      KIND:  The path kind of the entry (vclib.DIR, vclib.FILE)
      ERRORS:  A list of error strings representing problems encountered
               while determining the other info about this entry
    """
    self.name = name
    self.kind = kind
    self.errors = errors


class Revision:
  """Instances hold information about revisions of versioned resources."""

  def __init__(self, number, string, date, author, changed, log, size,
               lockinfo):
    """Create a new Revision() item:
      NUMBER:  Revision in an integer-based, sortable format
      STRING:  Revision as a string
      DATE:  Seconds since Epoch (GMT) that this revision was created
      AUTHOR:  Author of the revision
      CHANGED:  Lines-changed (contextual diff) information
      LOG:  Log message associated with the creation of this revision
      SIZE:  Size (in bytes) of this revision's fulltext (files only)
      LOCKINFO:  Information about locks held on this revision
    """
    self.number = number
    self.string = string
    self.date = date
    self.author = author
    self.changed = changed
    self.log = log
    self.size = size
    self.lockinfo = lockinfo

  def __cmp__(self, other):
    return cmp(self.number, other.number)

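Revision objects compare by their sortable `number` component, which is what makes the SORTBY_REV ("youngest first") ordering cheap. A hedged modern-Python sketch of that idea, using a hypothetical `Rev` stand-in (Python 3 has no `__cmp__`, so a sort key replaces it):

```python
# Rev is an illustrative stand-in for vclib.Revision, not the real class.
class Rev:
    def __init__(self, number, string):
        self.number = number   # sortable form, e.g. the tuple (1, 4)
        self.string = string   # display form, e.g. "1.4"

revs = [Rev((1, 2), "1.2"), Rev((1, 10), "1.10"), Rev((1, 4), "1.4")]

# Youngest first: sort descending on the comparable number component.
youngest_first = sorted(revs, key=lambda r: r.number, reverse=True)
print([r.string for r in youngest_first])  # ['1.10', '1.4', '1.2']
```

Sorting on the tuple rather than the display string is what keeps "1.10" ordered after "1.4".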
class Annotation:
  """Instances represent per-line file annotation information."""

  def __init__(self, text, line_number, rev, prev_rev, author, date):
    """Create a new Annotation() item:
      TEXT:  Raw text of a line of file contents
      LINE_NUMBER:  Line number on which the line is found
      REV:  Revision in which the line was last modified
      PREV_REV:  Revision prior to 'rev'
      AUTHOR:  Author who last modified the line
      DATE:  Date on which the line was last modified, in seconds since
             the epoch, GMT
    """
    self.text = text
    self.line_number = line_number
    self.rev = rev
    self.prev_rev = prev_rev
    self.author = author
    self.date = date


class ChangedPath:
  """Instances represent changes to paths."""

  def __init__(self, path_parts, rev, pathtype, base_path_parts,
               base_rev, action, copied, text_changed, props_changed):
    """Create a new ChangedPath() item:
      PATH_PARTS:  Path that was changed
      REV:  Revision represented by this change
      PATHTYPE:  Type of this path (vclib.DIR, vclib.FILE, ...)
      BASE_PATH_PARTS:  Previous path for this changed item
      BASE_REV:  Previous revision for this changed item
      ACTION:  Kind of change (vclib.ADDED, vclib.DELETED, ...)
      COPIED:  Boolean -- was this path copied from elsewhere?
      TEXT_CHANGED:  Boolean -- did the file's text change?
      PROPS_CHANGED:  Boolean -- did the item's metadata change?
    """
    self.path_parts = path_parts
    self.rev = rev
    self.pathtype = pathtype
    self.base_path_parts = base_path_parts
    self.base_rev = base_rev
    self.action = action
    self.copied = copied
    self.text_changed = text_changed
    self.props_changed = props_changed


# ======================================================================

class Error(Exception):
  pass

class ReposNotFound(Error):
  pass

class UnsupportedFeature(Error):
  pass

class ItemNotFound(Error):
  def __init__(self, path):
    # use '/' rather than os.sep because this is for user consumption,
    # and it was defined using URL separators
    if type(path) in (types.TupleType, types.ListType):
      path = string.join(path, '/')
    Error.__init__(self, path)

class InvalidRevision(Error):
  def __init__(self, revision=None):
    if revision is None:
      Error.__init__(self, "Invalid revision")
    else:
      Error.__init__(self, "Invalid revision " + str(revision))

class NonTextualFileContents(Error):
  pass

# ======================================================================
# Implementation code used by multiple vclib modules

import popen
import os
import time

def _diff_args(type, options):
  """Generate the argument list to pass to diff or rcsdiff."""
  args = []
  if type == CONTEXT:
    if options.has_key('context'):
      if options['context'] is None:
        args.append('--context=-1')
      else:
        args.append('--context=%i' % options['context'])
    else:
      args.append('-c')
  elif type == UNIFIED:
    if options.has_key('context'):
      if options['context'] is None:
        args.append('--unified=-1')
      else:
        args.append('--unified=%i' % options['context'])
    else:
      args.append('-u')
  elif type == SIDE_BY_SIDE:
    args.append('--side-by-side')
    args.append('--width=164')
  else:
    raise NotImplementedError

  if options.get('funout', 0):
    args.append('-p')

  if options.get('ignore_white', 0):
    args.append('-w')

  return args

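The _diff_args logic above maps a diff type plus an options dict onto GNU diff command-line flags. A condensed, runnable re-statement of the UNIFIED/CONTEXT cases (modern Python; `diff_args` here is a sketch, not the shipped function):

```python
UNIFIED, CONTEXT = 1, 2

def diff_args(dtype, options):
    args = []
    long_flag, short_flag = {CONTEXT: ('--context', '-c'),
                             UNIFIED: ('--unified', '-u')}[dtype]
    if 'context' in options:
        ctx = options['context']
        # None means "all lines of context" (-1 by GNU diff convention)
        args.append('%s=%i' % (long_flag, -1 if ctx is None else ctx))
    else:
        args.append(short_flag)      # default amount of context
    if options.get('funout', 0):
        args.append('-p')            # show C function names
    if options.get('ignore_white', 0):
        args.append('-w')            # ignore whitespace
    return args

print(diff_args(UNIFIED, {'context': 3, 'ignore_white': 1}))  # ['--unified=3', '-w']
print(diff_args(CONTEXT, {}))                                 # ['-c']
```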
class _diff_fp:
  """File object reading a diff between temporary files, cleaning up
  on close."""

  def __init__(self, temp1, temp2, info1=None, info2=None,
               diff_cmd='diff', diff_opts=[]):
    self.temp1 = temp1
    self.temp2 = temp2
    args = diff_opts[:]
    if info1 and info2:
      args.extend(["-L", self._label(info1), "-L", self._label(info2)])
    args.extend([temp1, temp2])
    self.fp = popen.popen(diff_cmd, args, "r")

  def read(self, bytes):
    return self.fp.read(bytes)

  def readline(self):
    return self.fp.readline()

  def close(self):
    try:
      if self.fp:
        self.fp.close()
        self.fp = None
    finally:
      try:
        if self.temp1:
          os.remove(self.temp1)
          self.temp1 = None
      finally:
        if self.temp2:
          os.remove(self.temp2)
          self.temp2 = None

  def __del__(self):
    self.close()

  def _label(self, (path, date, rev)):
    date = date and time.strftime('%Y/%m/%d %H:%M:%S', time.gmtime(date))
    return "%s\t%s\t%s" % (path, date, rev)


def check_root_access(repos):
  """Return 1 iff the associated username is permitted to read REPOS,
  as determined by consulting REPOS's Authorizer object (if any)."""

  auth = repos.authorizer()
  if not auth:
    return 1
  return auth.check_root_access(repos.rootname())

def check_path_access(repos, path_parts, pathtype=None, rev=None):
  """Return 1 iff the associated username is permitted to read
  revision REV of the path PATH_PARTS (of type PATHTYPE) in repository
  REPOS, as determined by consulting REPOS's Authorizer object (if any)."""

  auth = repos.authorizer()
  if not auth:
    return 1
  if not pathtype:
    pathtype = repos.itemtype(path_parts, rev)
  return auth.check_path_access(repos.rootname(), path_parts, pathtype, rev)
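The check_root_access / check_path_access wrappers above encode an "open by default" convention: a repository with no Authorizer attached is readable by everyone. A small stand-in sketch (OpenRepos is a hypothetical example class, not part of vclib):

```python
class OpenRepos:
    """Illustrative repository stub with no authorizer attached."""
    def authorizer(self):
        return None
    def rootname(self):
        return 'demo'

def check_root_access(repos):
    # Mirrors the wrapper above: no authorizer means access is granted.
    auth = repos.authorizer()
    if not auth:
        return 1
    return auth.check_root_access(repos.rootname())

print(check_root_access(OpenRepos()))  # 1
```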
@ -0,0 +1,43 @@
# -*-python-*-
#
# Copyright (C) 1999-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

import os
import os.path


def canonicalize_rootpath(rootpath):
  return os.path.normpath(rootpath)


def expand_root_parent(parent_path):
  # Each subdirectory of PARENT_PATH that contains a child
  # "CVSROOT/config" is added to the set of returned roots.  Or, if
  # PARENT_PATH itself contains a child "CVSROOT/config", then all of
  # its subdirectories are returned as roots.
  roots = {}
  subpaths = os.listdir(parent_path)
  cvsroot = os.path.exists(os.path.join(parent_path, "CVSROOT", "config"))
  for rootname in subpaths:
    rootpath = os.path.join(parent_path, rootname)
    if cvsroot \
       or (os.path.exists(os.path.join(rootpath, "CVSROOT", "config"))):
      roots[rootname] = canonicalize_rootpath(rootpath)
  return roots

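The discovery rule in expand_root_parent above treats a child directory as a CVS root when `<child>/CVSROOT/config` exists. A runnable sketch of that check against a throwaway directory tree (the `proj1`/`proj2` names are invented for illustration):

```python
import os
import tempfile

parent = tempfile.mkdtemp()
# proj1 looks like a CVS repository; proj2 does not.
os.makedirs(os.path.join(parent, 'proj1', 'CVSROOT'))
open(os.path.join(parent, 'proj1', 'CVSROOT', 'config'), 'w').close()
os.makedirs(os.path.join(parent, 'proj2'))

roots = {}
for name in os.listdir(parent):
    path = os.path.join(parent, name)
    if os.path.exists(os.path.join(path, 'CVSROOT', 'config')):
        roots[name] = os.path.normpath(path)

print(sorted(roots.keys()))  # ['proj1']
```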
def CVSRepository(name, rootpath, authorizer, utilities, use_rcsparse):
  rootpath = canonicalize_rootpath(rootpath)
  if use_rcsparse:
    import ccvs
    return ccvs.CCVSRepository(name, rootpath, authorizer, utilities)
  else:
    import bincvs
    return bincvs.BinCVSRepository(name, rootpath, authorizer, utilities)
@ -0,0 +1,458 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2000 Curt Hagenlocher <curt@hagenlocher.org>
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# blame.py: Annotate each line of a CVS file with its author,
#           revision #, date, etc.
#
# -----------------------------------------------------------------------
#
# This file is based on the cvsblame.pl portion of the Bonsai CVS tool,
# developed by Steve Lamm for Netscape Communications Corporation.  More
# information about Bonsai can be found at
#    http://www.mozilla.org/bonsai.html
#
# cvsblame.pl, in turn, was based on Scott Furman's cvsblame script.
#
# -----------------------------------------------------------------------

import string
import re
import time
import math
import rcsparse
import vclib

class CVSParser(rcsparse.Sink):
  # Precompiled regular expressions
  trunk_rev   = re.compile('^[0-9]+\\.[0-9]+$')
  last_branch = re.compile('(.*)\\.[0-9]+')
  is_branch   = re.compile('^(.*)\\.0\\.([0-9]+)$')
  d_command   = re.compile('^d(\\d+)\\s(\\d+)')
  a_command   = re.compile('^a(\\d+)\\s(\\d+)')

  SECONDS_PER_DAY = 86400

  def __init__(self):
    self.Reset()

  def Reset(self):
    self.last_revision = {}
    self.prev_revision = {}
    self.revision_date = {}
    self.revision_author = {}
    self.revision_branches = {}
    self.next_delta = {}
    self.prev_delta = {}
    self.tag_revision = {}
    self.timestamp = {}
    self.revision_ctime = {}
    self.revision_age = {}
    self.revision_log = {}
    self.revision_deltatext = {}
    self.revision_map = []  # map line numbers to revisions
    self.lines_added = {}
    self.lines_removed = {}

  # Map a tag to a numerical revision number.  The tag can be a symbolic
  # branch tag, a symbolic revision tag, or an ordinary numerical
  # revision number.
  def map_tag_to_revision(self, tag_or_revision):
    try:
      revision = self.tag_revision[tag_or_revision]
      match = self.is_branch.match(revision)
      if match:
        branch = match.group(1) + '.' + match.group(2)
        if self.last_revision.get(branch):
          return self.last_revision[branch]
        else:
          return match.group(1)
      else:
        return revision
    except:
      return ''

  # Construct an ordered list of ancestor revisions to the given
  # revision, starting with the immediate ancestor and going back
  # to the primordial revision (1.1).
  #
  # Note: The generated path does not traverse the tree the same way
  #       that the individual revision deltas do.  In particular,
  #       the path traverses the tree "backwards" on branches.
  def ancestor_revisions(self, revision):
    ancestors = []
    revision = self.prev_revision.get(revision)
    while revision:
      ancestors.append(revision)
      revision = self.prev_revision.get(revision)

    return ancestors

  # Split the deltatext specified by rev into lines.
  def deltatext_split(self, rev):
    lines = string.split(self.revision_deltatext[rev], '\n')
    if lines[-1] == '':
      del lines[-1]
    return lines

  # Extract the given revision from the digested RCS file.
  # (Essentially the equivalent of cvs up -rXXX)
  def extract_revision(self, revision):
    path = []
    add_lines_remaining = 0
    start_line = 0
    count = 0
    while revision:
      path.append(revision)
      revision = self.prev_delta.get(revision)
    path.reverse()
    path = path[1:]  # Get rid of the head revision

    text = self.deltatext_split(self.head_revision)

    # Iterate, applying deltas to the previous revision
    for revision in path:
      adjust = 0
      diffs = self.deltatext_split(revision)
      self.lines_added[revision] = 0
      self.lines_removed[revision] = 0
      lines_added_now = 0
      lines_removed_now = 0

      for command in diffs:
        dmatch = self.d_command.match(command)
        amatch = self.a_command.match(command)
        if add_lines_remaining > 0:
          # Insertion lines from a prior "a" command
          text.insert(start_line + adjust, command)
          add_lines_remaining = add_lines_remaining - 1
          adjust = adjust + 1
        elif dmatch:
          # "d" - Delete command
          start_line = string.atoi(dmatch.group(1))
          count = string.atoi(dmatch.group(2))
          begin = start_line + adjust - 1
          del text[begin:begin + count]
          adjust = adjust - count
          lines_removed_now = lines_removed_now + count
        elif amatch:
          # "a" - Add command
          start_line = string.atoi(amatch.group(1))
          count = string.atoi(amatch.group(2))
          add_lines_remaining = count
          lines_added_now = lines_added_now + count
        else:
          raise RuntimeError, 'Error parsing diff commands'

      self.lines_added[revision] = self.lines_added[revision] \
                                   + lines_added_now
      self.lines_removed[revision] = self.lines_removed[revision] \
                                     + lines_removed_now
    return text

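extract_revision above replays RCS edit scripts: "dN M" deletes M lines starting at line N, and "aN M" inserts the following M lines after line N, with a running `adjust` tracking how earlier edits shift later line numbers. A self-contained modern-Python sketch of that replay loop (`apply_delta` is an illustrative reduction, not the shipped method):

```python
import re

D_CMD = re.compile(r'^d(\d+)\s(\d+)$')
A_CMD = re.compile(r'^a(\d+)\s(\d+)$')

def apply_delta(text, delta_lines):
    out = list(text)
    adjust = 0    # net line-number shift caused by earlier edits
    pending = 0   # insertion lines still expected from an "a" command
    start = 0
    for line in delta_lines:
        if pending:
            out.insert(start + adjust, line)   # insert after line `start`
            pending -= 1
            adjust += 1
            continue
        d, a = D_CMD.match(line), A_CMD.match(line)
        if d:
            s, count = int(d.group(1)), int(d.group(2))
            begin = s + adjust - 1             # 1-based -> 0-based
            del out[begin:begin + count]
            adjust -= count
        elif a:
            start, count = int(a.group(1)), int(a.group(2))
            pending = count
    return out

base = ['one', 'two', 'three']
# Replace line 2: delete it, then add one line after line 2.
print(apply_delta(base, ['d2 1', 'a2 1', 'TWO']))  # ['one', 'TWO', 'three']
```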
  def set_head_revision(self, revision):
    self.head_revision = revision

  def set_principal_branch(self, branch_name):
    self.principal_branch = branch_name

  def define_tag(self, name, revision):
    # Create an associative array that maps from tag name to
    # revision number and vice-versa.
    self.tag_revision[name] = revision

  def set_comment(self, comment):
    self.file_description = comment

  def set_description(self, description):
    self.rcs_file_description = description

  # Construct dicts that represent the topology of the RCS tree
  # and other arrays that contain info about individual revisions.
  #
  # The following dicts are created, keyed by revision number:
  #   self.revision_date     -- e.g. "96.02.23.00.21.52"
  #   self.timestamp         -- seconds since 12:00 AM, Jan 1, 1970 GMT
  #   self.revision_author   -- e.g. "tom"
  #   self.revision_branches -- descendant branch revisions, separated
  #                             by spaces, e.g. "1.21.4.1 1.21.2.6.1"
  #   self.prev_revision     -- revision number of the previous
  #                             *ancestor* in the RCS tree.  Traversal
  #                             of this array occurs in the direction
  #                             of the primordial (1.1) revision.
  #   self.prev_delta        -- revision number of the previous revision
  #                             which forms the basis for the edit
  #                             commands in this revision.  This causes
  #                             the tree to be traversed towards the
  #                             trunk when on a branch, and towards the
  #                             latest trunk revision when on the trunk.
  #   self.next_delta        -- revision number of the next "delta".
  #                             Inverts prev_delta.
  #
  # Also creates self.last_revision, keyed by a branch revision number,
  # which indicates the latest revision on a given branch,
  # e.g. self.last_revision{"1.2.8"} == 1.2.8.5
  def define_revision(self, revision, timestamp, author, state,
                      branches, next):
    self.tag_revision[revision] = revision
    branch = self.last_branch.match(revision).group(1)
    self.last_revision[branch] = revision

    #self.revision_date[revision] = date
    self.timestamp[revision] = timestamp

    # Pretty-print the date string
    ltime = time.localtime(self.timestamp[revision])
    formatted_date = time.strftime("%d %b %Y %H:%M", ltime)
    self.revision_ctime[revision] = formatted_date

    # Save the age
    self.revision_age[revision] = ((time.time() - self.timestamp[revision])
                                   / self.SECONDS_PER_DAY)

    # Save the author
    self.revision_author[revision] = author

    # Ignore the state

    # Process the branch information
    branch_text = ''
    for branch in branches:
      self.prev_revision[branch] = revision
      self.next_delta[revision] = branch
      self.prev_delta[branch] = revision
      branch_text = branch_text + branch + ' '
    self.revision_branches[revision] = branch_text

    # Process the "next revision" information
    if next:
      self.next_delta[revision] = next
      self.prev_delta[next] = revision
      is_trunk_revision = self.trunk_rev.match(revision) is not None
      if is_trunk_revision:
        self.prev_revision[revision] = next
      else:
        self.prev_revision[next] = revision

  # Construct associative arrays containing info about individual
  # revisions.
  #
  # The following associative arrays are created, keyed by revision
  # number:
  #   revision_log       -- log message
  #   revision_deltatext -- Either the complete text of the revision,
  #                         in the case of the head revision, or the
  #                         encoded delta between this revision and
  #                         another.  The delta is either with respect
  #                         to the successor revision if this revision
  #                         is on the trunk, or relative to its
  #                         immediate predecessor if this revision is
  #                         on a branch.
  def set_revision_info(self, revision, log, text):
    self.revision_log[revision] = log
    self.revision_deltatext[revision] = text

  def parse_cvs_file(self, rcs_pathname, opt_rev=None, opt_m_timestamp=None):
    # Args in:  opt_rev - requested revision
    #           opt_m_timestamp - time since modified
    # Args out: revision_map
    #           timestamp
    #           revision_deltatext

    # CheckHidden(rcs_pathname)
    try:
      rcsfile = open(rcs_pathname, 'rb')
    except:
      raise RuntimeError, ('error: %s appeared to be under CVS control, ' +
                           'but the RCS file is inaccessible.') % rcs_pathname

    rcsparse.parse(rcsfile, self)
    rcsfile.close()

    if opt_rev in [None, '', 'HEAD']:
      # Explicitly specified topmost revision in tree
      revision = self.head_revision
    else:
      # Symbolic tag or specific revision number specified.
      revision = self.map_tag_to_revision(opt_rev)
      if revision == '':
        raise RuntimeError, 'error: -r: No such revision: ' + opt_rev

    # The primordial revision is not always 1.1!  Go find it.
    primordial = revision
    while self.prev_revision.get(primordial):
      primordial = self.prev_revision[primordial]

    # Don't display the file at all if the -m option is specified and no
    # changes have been made in the specified file.
    if opt_m_timestamp and self.timestamp[revision] < opt_m_timestamp:
      return ''

    # Figure out how many lines were in the primordial, i.e. version 1.1,
    # check-in by moving backward in time from the head revision to the
    # first revision.
    line_count = 0
    if self.revision_deltatext.get(self.head_revision):
      tmp_array = self.deltatext_split(self.head_revision)
      line_count = len(tmp_array)

    skip = 0

    rev = self.prev_revision.get(self.head_revision)
    while rev:
      diffs = self.deltatext_split(rev)
      for command in diffs:
        dmatch = self.d_command.match(command)
        amatch = self.a_command.match(command)
        if skip > 0:
          # Skip insertion lines from a prior "a" command
          skip = skip - 1
        elif dmatch:
          # "d" - Delete command
          start_line = string.atoi(dmatch.group(1))
          count = string.atoi(dmatch.group(2))
          line_count = line_count - count
        elif amatch:
          # "a" - Add command
          start_line = string.atoi(amatch.group(1))
          count = string.atoi(amatch.group(2))
          skip = count
          line_count = line_count + count
        else:
          raise RuntimeError, 'error: illegal RCS file'

      rev = self.prev_revision.get(rev)

    # Now, play the delta edit commands *backwards* from the primordial
    # revision forward, but rather than applying the deltas to the text
    # of each revision, apply the changes to an array of revision
    # numbers.  This creates a "revision map" -- an array where each
    # element represents a line of text in the given revision but
    # contains only the revision number in which the line was introduced
    # rather than the line text itself.
    #
    # Note: These are backward deltas for revisions on the trunk and
    #       forward deltas for branch revisions.

    # Create the initial revision map for the primordial version.
    self.revision_map = [primordial] * line_count

    ancestors = [revision, ] + self.ancestor_revisions(revision)
    ancestors = ancestors[:-1]  # Remove "1.1"
    last_revision = primordial
    ancestors.reverse()
    for revision in ancestors:
      is_trunk_revision = self.trunk_rev.match(revision) is not None

      if is_trunk_revision:
        diffs = self.deltatext_split(last_revision)

        # Revisions on the trunk specify deltas that transform a
        # revision into an earlier revision, so invert the translation
        # of the 'diff' commands.
        for command in diffs:
          if skip > 0:
            skip = skip - 1
          else:
            dmatch = self.d_command.match(command)
            amatch = self.a_command.match(command)
            if dmatch:
              start_line = string.atoi(dmatch.group(1))
              count = string.atoi(dmatch.group(2))
              temp = []
              while count > 0:
                temp.append(revision)
                count = count - 1
              self.revision_map = (self.revision_map[:start_line - 1] +
                                   temp + self.revision_map[start_line - 1:])
            elif amatch:
              start_line = string.atoi(amatch.group(1))
              count = string.atoi(amatch.group(2))
              del self.revision_map[start_line:start_line + count]
              skip = count
            else:
              raise RuntimeError, 'Error parsing diff commands'

      else:
        # Revisions on a branch are arranged backwards from those on
        # the trunk.  They specify deltas that transform a revision
        # into a later revision.
        adjust = 0
        diffs = self.deltatext_split(revision)
        for command in diffs:
          if skip > 0:
            skip = skip - 1
          else:
            dmatch = self.d_command.match(command)
            amatch = self.a_command.match(command)
            if dmatch:
              start_line = string.atoi(dmatch.group(1))
              count = string.atoi(dmatch.group(2))
              adj_begin = start_line + adjust - 1
              adj_end = start_line + adjust - 1 + count
              del self.revision_map[adj_begin:adj_end]
              adjust = adjust - count
            elif amatch:
              start_line = string.atoi(amatch.group(1))
              count = string.atoi(amatch.group(2))
              skip = count
              temp = []
              while count > 0:
                temp.append(revision)
                count = count - 1
              self.revision_map = (self.revision_map[:start_line + adjust] +
                                   temp +
                                   self.revision_map[start_line + adjust:])
              adjust = adjust + skip
            else:
              raise RuntimeError, 'Error parsing diff commands'

      last_revision = revision

    return revision

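The "revision map" built by parse_cvs_file above is just a parallel array: one entry per line of the file, holding the revision that introduced that line instead of the line's text. A toy illustration of how an edit updates it (the revision strings are made-up example values):

```python
# Primordial revision 1.1 had 3 lines, so every line maps to '1.1'.
revision_map = ['1.1'] * 3

# Simulate revision 1.2 inserting one new line after line 1: the map
# gets a '1.2' entry at that position while the '1.1' entries shift down.
revision_map = revision_map[:1] + ['1.2'] + revision_map[1:]
print(revision_map)  # ['1.1', '1.2', '1.1', '1.1']
```

Annotating a line is then a constant-time lookup: `revision_map[line_index]` names the revision to blame.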
class BlameSource:
|
||||
def __init__(self, rcs_file, opt_rev=None):
|
||||
# Parse the CVS file
|
||||
parser = CVSParser()
|
||||
revision = parser.parse_cvs_file(rcs_file, opt_rev)
|
||||
count = len(parser.revision_map)
|
||||
lines = parser.extract_revision(revision)
|
||||
if len(lines) != count:
|
||||
raise RuntimeError, 'Internal consistency error'
|
||||
|
||||
# set up some state variables
|
||||
self.revision = revision
|
||||
self.lines = lines
|
||||
self.num_lines = count
|
||||
self.parser = parser
|
||||
|
||||
# keep track of where we are during an iteration
|
||||
self.idx = -1
|
||||
self.last = None
|
||||
|
||||
def __getitem__(self, idx):
|
||||
if idx == self.idx:
|
||||
return self.last
|
||||
if idx >= self.num_lines:
|
||||
raise IndexError("No more annotations")
|
||||
if idx != self.idx + 1:
|
||||
raise BlameSequencingError()
|
||||
|
||||
# Get the line and metadata for it.
|
||||
rev = self.parser.revision_map[idx]
|
||||
prev_rev = self.parser.prev_revision.get(rev)
|
||||
line_number = idx + 1
|
||||
author = self.parser.revision_author[rev]
|
||||
thisline = self.lines[idx]
|
||||
### TODO: Put a real date in here.
|
||||
item = vclib.Annotation(thisline, line_number, rev, prev_rev, author, None)
|
||||
self.last = item
|
||||
self.idx = idx
|
||||
return item
|
||||
|
||||
|
||||
class BlameSequencingError(Exception):
|
||||
pass
|
|
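The revision-mapping loops above interpret RCS deltatext edit scripts: `dN M` deletes M lines starting at line N, and `aN M` inserts the next M script lines after line N, with a running `adjust` tracking the shift between original and edited line numbers. A minimal standalone sketch of that edit-script application, in modern Python (names are illustrative; this is an editor's example, not part of this commit):

```python
import re

# 'd<start> <count>' deletes lines; 'a<start> <count>' inserts the
# following <count> script lines after line <start> (1-based, numbered
# against the file as it was before the script ran).
_CMD = re.compile(r'^([ad])(\d+)\s(\d+)$')

def apply_rcs_delta(lines, script):
    """Apply an RCS edit script (a list of command/insertion lines)."""
    out = list(lines)
    adjust = 0          # net line-count shift from edits applied so far
    i = 0
    while i < len(script):
        m = _CMD.match(script[i])
        if not m:
            raise ValueError('bad edit command: %r' % script[i])
        cmd, start, count = m.group(1), int(m.group(2)), int(m.group(3))
        i += 1
        if cmd == 'd':
            begin = start + adjust - 1          # convert to 0-based index
            del out[begin:begin + count]
            adjust -= count
        else:  # 'a': the next <count> script lines are the insertion text
            out[start + adjust:start + adjust] = script[i:i + count]
            i += count
            adjust += count
    return out
```

The `adjust` bookkeeping mirrors the branch-delta case in the parser above; the trunk case in `blame.py` instead applies the commands in reverse sense because trunk deltas reconstruct *earlier* revisions.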
@ -0,0 +1,398 @@
# -*-python-*-
#
# Copyright (C) 1999-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

import os
import string
import re
import cStringIO
import tempfile

import vclib
import rcsparse
import blame

### The functionality shared with bincvs should probably be moved to a
### separate module
from bincvs import BaseCVSRepository, Revision, Tag, _file_log, _log_path, _logsort_date_cmp, _logsort_rev_cmp

class CCVSRepository(BaseCVSRepository):
  def dirlogs(self, path_parts, rev, entries, options):
    """see vclib.Repository.dirlogs docstring

    rev can be a tag name or None. if set, only information from revisions
    matching the tag will be retrieved

    Option values recognized by this implementation:

      cvs_subdirs
        boolean. true to fetch logs of the most recently modified file in each
        subdirectory

    Option values returned by this implementation:

      cvs_tags, cvs_branches
        lists of tag and branch names encountered in the directory
    """
    if self.itemtype(path_parts, rev) != vclib.DIR:  # does auth-check
      raise vclib.Error("Path '%s' is not a directory."
                        % (string.join(path_parts, "/")))
    entries_to_fetch = []
    for entry in entries:
      if vclib.check_path_access(self, path_parts + [entry.name], None, rev):
        entries_to_fetch.append(entry)

    subdirs = options.get('cvs_subdirs', 0)

    dirpath = self._getpath(path_parts)
    alltags = {           # all the tags seen in the files of this dir
      'MAIN' : '',
      'HEAD' : '1.1'
      }

    for entry in entries_to_fetch:
      entry.rev = entry.date = entry.author = None
      entry.dead = entry.absent = entry.log = entry.lockinfo = None
      path = _log_path(entry, dirpath, subdirs)
      if path:
        entry.path = path
        try:
          rcsparse.parse(open(path, 'rb'), InfoSink(entry, rev, alltags))
        except IOError, e:
          entry.errors.append("rcsparse error: %s" % e)
        except RuntimeError, e:
          entry.errors.append("rcsparse error: %s" % e)
        except rcsparse.RCSStopParser:
          pass

    branches = options['cvs_branches'] = []
    tags = options['cvs_tags'] = []
    for name, rev in alltags.items():
      if Tag(None, rev).is_branch:
        branches.append(name)
      else:
        tags.append(name)

  def itemlog(self, path_parts, rev, sortby, first, limit, options):
    """see vclib.Repository.itemlog docstring

    rev parameter can be a revision number, a branch number, a tag name,
    or None. If None, will return information about all revisions, otherwise,
    will only return information about the specified revision or branch.

    Option values returned by this implementation:

      cvs_tags
        dictionary of Tag objects for all tags encountered
    """
    if self.itemtype(path_parts, rev) != vclib.FILE:  # does auth-check
      raise vclib.Error("Path '%s' is not a file."
                        % (string.join(path_parts, "/")))

    path = self.rcsfile(path_parts, 1)
    sink = TreeSink()
    rcsparse.parse(open(path, 'rb'), sink)
    filtered_revs = _file_log(sink.revs.values(), sink.tags, sink.lockinfo,
                              sink.default_branch, rev)
    for rev in filtered_revs:
      if rev.prev and len(rev.number) == 2:
        rev.changed = rev.prev.next_changed
    options['cvs_tags'] = sink.tags

    if sortby == vclib.SORTBY_DATE:
      filtered_revs.sort(_logsort_date_cmp)
    elif sortby == vclib.SORTBY_REV:
      filtered_revs.sort(_logsort_rev_cmp)

    if len(filtered_revs) < first:
      return []
    if limit:
      return filtered_revs[first:first+limit]
    return filtered_revs

  def rawdiff(self, path_parts1, rev1, path_parts2, rev2, type, options={}):
    if self.itemtype(path_parts1, rev1) != vclib.FILE:  # does auth-check
      raise vclib.Error("Path '%s' is not a file."
                        % (string.join(path_parts1, "/")))
    if self.itemtype(path_parts2, rev2) != vclib.FILE:  # does auth-check
      raise vclib.Error("Path '%s' is not a file."
                        % (string.join(path_parts2, "/")))

    temp1 = tempfile.mktemp()
    open(temp1, 'wb').write(self.openfile(path_parts1, rev1)[0].getvalue())
    temp2 = tempfile.mktemp()
    open(temp2, 'wb').write(self.openfile(path_parts2, rev2)[0].getvalue())

    r1 = self.itemlog(path_parts1, rev1, vclib.SORTBY_DEFAULT, 0, 0, {})[-1]
    r2 = self.itemlog(path_parts2, rev2, vclib.SORTBY_DEFAULT, 0, 0, {})[-1]

    info1 = (self.rcsfile(path_parts1, root=1, v=0), r1.date, r1.string)
    info2 = (self.rcsfile(path_parts2, root=1, v=0), r2.date, r2.string)

    diff_args = vclib._diff_args(type, options)

    return vclib._diff_fp(temp1, temp2, info1, info2,
                          self.utilities.diff or 'diff', diff_args)

  def annotate(self, path_parts, rev=None):
    if self.itemtype(path_parts, rev) != vclib.FILE:  # does auth-check
      raise vclib.Error("Path '%s' is not a file."
                        % (string.join(path_parts, "/")))
    source = blame.BlameSource(self.rcsfile(path_parts, 1), rev)
    return source, source.revision

  def revinfo(self, rev):
    raise vclib.UnsupportedFeature

  def openfile(self, path_parts, rev=None):
    if self.itemtype(path_parts, rev) != vclib.FILE:  # does auth-check
      raise vclib.Error("Path '%s' is not a file."
                        % (string.join(path_parts, "/")))
    path = self.rcsfile(path_parts, 1)
    sink = COSink(rev)
    rcsparse.parse(open(path, 'rb'), sink)
    revision = sink.last and sink.last.string
    return cStringIO.StringIO(string.join(sink.sstext.text, "\n")), revision

class MatchingSink(rcsparse.Sink):
  """Superclass for sinks that search for revisions based on tag or number"""

  def __init__(self, find):
    """Initialize with tag name or revision number string to match against"""
    if not find or find == 'MAIN' or find == 'HEAD':
      self.find = None
    else:
      self.find = find

    self.find_tag = None

  def set_principal_branch(self, branch_number):
    if self.find is None:
      self.find_tag = Tag(None, branch_number)

  def define_tag(self, name, revision):
    if name == self.find:
      self.find_tag = Tag(None, revision)

  def admin_completed(self):
    if self.find_tag is None:
      if self.find is None:
        self.find_tag = Tag(None, '')
      else:
        try:
          self.find_tag = Tag(None, self.find)
        except ValueError:
          pass

class InfoSink(MatchingSink):
  def __init__(self, entry, tag, alltags):
    MatchingSink.__init__(self, tag)
    self.entry = entry
    self.alltags = alltags
    self.matching_rev = None
    self.perfect_match = 0
    self.lockinfo = { }

  def define_tag(self, name, revision):
    MatchingSink.define_tag(self, name, revision)
    self.alltags[name] = revision

  def admin_completed(self):
    MatchingSink.admin_completed(self)
    if self.find_tag is None:
      # tag we're looking for doesn't exist
      if self.entry.kind == vclib.FILE:
        self.entry.absent = 1
      raise rcsparse.RCSStopParser

  def set_locker(self, rev, locker):
    self.lockinfo[rev] = locker

  def define_revision(self, revision, date, author, state, branches, next):
    if self.perfect_match:
      return

    tag = self.find_tag
    rev = Revision(revision, date, author, state == "dead")
    rev.lockinfo = self.lockinfo.get(revision)

    # perfect match if revision number matches tag number or if revision is on
    # trunk and tag points to trunk.  imperfect match if tag refers to a branch
    # and this revision is the highest revision so far found on that branch
    perfect = ((rev.number == tag.number) or
               (not tag.number and len(rev.number) == 2))
    if perfect or (tag.is_branch and tag.number == rev.number[:-1] and
                   (not self.matching_rev or
                    rev.number > self.matching_rev.number)):
      self.matching_rev = rev
      self.perfect_match = perfect

  def set_revision_info(self, revision, log, text):
    if self.matching_rev:
      if revision == self.matching_rev.string:
        self.entry.rev = self.matching_rev.string
        self.entry.date = self.matching_rev.date
        self.entry.author = self.matching_rev.author
        self.entry.dead = self.matching_rev.dead
        self.entry.lockinfo = self.matching_rev.lockinfo
        self.entry.absent = 0
        self.entry.log = log
        raise rcsparse.RCSStopParser
    else:
      raise rcsparse.RCSStopParser

class TreeSink(rcsparse.Sink):
  d_command = re.compile('^d(\d+)\\s(\\d+)')
  a_command = re.compile('^a(\d+)\\s(\\d+)')

  def __init__(self):
    self.revs = { }
    self.tags = { }
    self.head = None
    self.default_branch = None
    self.lockinfo = { }

  def set_head_revision(self, revision):
    self.head = revision

  def set_principal_branch(self, branch_number):
    self.default_branch = branch_number

  def set_locker(self, rev, locker):
    self.lockinfo[rev] = locker

  def define_tag(self, name, revision):
    # check !tags.has_key(tag_name)
    self.tags[name] = revision

  def define_revision(self, revision, date, author, state, branches, next):
    # check !revs.has_key(revision)
    self.revs[revision] = Revision(revision, date, author, state == "dead")

  def set_revision_info(self, revision, log, text):
    # check revs.has_key(revision)
    rev = self.revs[revision]
    rev.log = log

    changed = None
    added = 0
    deled = 0
    if self.head != revision:
      changed = 1
      lines = string.split(text, '\n')
      idx = 0
      while idx < len(lines):
        command = lines[idx]
        dmatch = self.d_command.match(command)
        idx = idx + 1
        if dmatch:
          deled = deled + string.atoi(dmatch.group(2))
        else:
          amatch = self.a_command.match(command)
          if amatch:
            count = string.atoi(amatch.group(2))
            added = added + count
            idx = idx + count
          elif command:
            raise "error while parsing deltatext: %s" % command

    if len(rev.number) == 2:
      rev.next_changed = changed and "+%i -%i" % (deled, added)
    else:
      rev.changed = changed and "+%i -%i" % (added, deled)

class StreamText:
  d_command = re.compile('^d(\d+)\\s(\\d+)')
  a_command = re.compile('^a(\d+)\\s(\\d+)')

  def __init__(self, text):
    self.text = string.split(text, "\n")

  def command(self, cmd):
    adjust = 0
    add_lines_remaining = 0
    diffs = string.split(cmd, "\n")
    if diffs[-1] == "":
      del diffs[-1]
    if len(diffs) == 0:
      return
    if diffs[0] == "":
      del diffs[0]
    for command in diffs:
      if add_lines_remaining > 0:
        # Insertion lines from a prior "a" command
        self.text.insert(start_line + adjust, command)
        add_lines_remaining = add_lines_remaining - 1
        adjust = adjust + 1
        continue
      dmatch = self.d_command.match(command)
      amatch = self.a_command.match(command)
      if dmatch:
        # "d" - Delete command
        start_line = string.atoi(dmatch.group(1))
        count = string.atoi(dmatch.group(2))
        begin = start_line + adjust - 1
        del self.text[begin:begin + count]
        adjust = adjust - count
      elif amatch:
        # "a" - Add command
        start_line = string.atoi(amatch.group(1))
        count = string.atoi(amatch.group(2))
        add_lines_remaining = count
      else:
        raise RuntimeError, 'Error parsing diff commands'

def secondnextdot(s, start):
  # find the position of the second dot after the start index.
  return string.find(s, '.', string.find(s, '.', start) + 1)


class COSink(MatchingSink):
  def __init__(self, rev):
    MatchingSink.__init__(self, rev)

  def set_head_revision(self, revision):
    self.head = Revision(revision)
    self.last = None
    self.sstext = None

  def admin_completed(self):
    MatchingSink.admin_completed(self)
    if self.find_tag is None:
      raise vclib.InvalidRevision(self.find)

  def set_revision_info(self, revision, log, text):
    tag = self.find_tag
    rev = Revision(revision)

    if rev.number == tag.number:
      self.log = log

    depth = len(rev.number)

    if rev.number == self.head.number:
      assert self.sstext is None
      self.sstext = StreamText(text)
    elif (depth == 2 and tag.number and rev.number >= tag.number[:depth]):
      assert len(self.last.number) == 2
      assert rev.number < self.last.number
      self.sstext.command(text)
    elif (depth > 2 and rev.number[:depth-1] == tag.number[:depth-1] and
          (rev.number <= tag.number or len(tag.number) == depth-1)):
      assert len(rev.number) - len(self.last.number) in (0, 2)
      assert rev.number > self.last.number
      self.sstext.command(text)
    else:
      rev = None

    if rev:
      #print "tag =", tag.number, "rev =", rev.number, "<br>"
      self.last = rev
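The sink classes in the file above lean on one representation trick: a revision "number" is a tuple of integers, so prefix slicing (`rev.number[:-1] == tag.number`) tests branch membership and tuple comparison (`rev.number > self.matching_rev.number`) orders revisions correctly where string comparison would not. A tiny illustrative sketch of that idea (editor's example with made-up helper names, not ViewVC API):

```python
def revnum(s):
    """Parse a dotted CVS/RCS revision string into a tuple of ints.

    Tuples compare component-wise, which is exactly the ordering the
    sinks rely on: revnum('1.10') sorts after revnum('1.9'), whereas
    the raw strings would compare the other way around.
    """
    return tuple(int(part) for part in s.split('.'))

def on_branch(rev, branch):
    """True if revision tuple `rev` lies on branch tuple `branch`,
    e.g. 1.2.4.x lies on branch 1.2.4 (prefix test plus depth check)."""
    return len(rev) == len(branch) + 1 and rev[:-1] == branch
```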
@ -0,0 +1,26 @@
# -*-python-*-
#
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

"""This package provides parsing tools for RCS files."""

from common import *

try:
  from tparse import parse
except ImportError:
  try:
    from texttools import Parser
  except ImportError:
    from default import Parser

  def parse(file, sink):
    return Parser().parse(file, sink)
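The package initializer above uses a cascading import fallback: prefer the compiled `tparse` extension, then the `texttools`-based parser, then the pure-Python default. The same pattern in a generic, testable form (an editor's sketch; the helper name is invented):

```python
import importlib

def load_first(candidates):
    """Return the first importable module from `candidates`.

    Mirrors the accelerated-vs-pure-Python fallback chain used by
    rcsparse/__init__.py, generalized to any list of module names.
    """
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue  # try the next, slower implementation
    raise ImportError('no implementation available: %r' % (candidates,))
```

A caller sees one uniform interface regardless of which implementation won, which is why the fallback also wraps the class-based parsers behind a module-level `parse(file, sink)` function.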
@ -0,0 +1,324 @@
# -*-python-*-
#
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

"""common.py: common classes and functions for the RCS parsing tools."""

import calendar
import string

class Sink:
  def set_head_revision(self, revision):
    pass

  def set_principal_branch(self, branch_name):
    pass

  def set_access(self, accessors):
    pass

  def define_tag(self, name, revision):
    pass

  def set_locker(self, revision, locker):
    pass

  def set_locking(self, mode):
    """Used to signal locking mode.

    Called with mode argument 'strict' if strict locking.
    Not called when no locking is used."""

    pass

  def set_comment(self, comment):
    pass

  def set_expansion(self, mode):
    pass

  def admin_completed(self):
    pass

  def define_revision(self, revision, timestamp, author, state,
                      branches, next):
    pass

  def tree_completed(self):
    pass

  def set_description(self, description):
    pass

  def set_revision_info(self, revision, log, text):
    pass

  def parse_completed(self):
    pass


# --------------------------------------------------------------------------
#
# EXCEPTIONS USED BY RCSPARSE
#

class RCSParseError(Exception):
  pass


class RCSIllegalCharacter(RCSParseError):
  pass


class RCSExpected(RCSParseError):
  def __init__(self, got, wanted):
    RCSParseError.__init__(
      self,
      'Unexpected parsing error in RCS file.\n'
      'Expected token: %s, but saw: %s'
      % (wanted, got)
      )


class RCSStopParser(Exception):
  pass


# --------------------------------------------------------------------------
#
# STANDARD TOKEN STREAM-BASED PARSER
#

class _Parser:
  stream_class = None   # subclasses need to define this

  def _read_until_semicolon(self):
    """Read all tokens up to and including the next semicolon token.

    Return the tokens (not including the semicolon) as a list."""

    tokens = []

    while 1:
      token = self.ts.get()
      if token == ';':
        break
      tokens.append(token)

    return tokens

  def _parse_admin_head(self, token):
    rev = self.ts.get()
    if rev == ';':
      # The head revision is not specified.  Just drop the semicolon
      # on the floor.
      pass
    else:
      self.sink.set_head_revision(rev)
      self.ts.match(';')

  def _parse_admin_branch(self, token):
    branch = self.ts.get()
    if branch != ';':
      self.sink.set_principal_branch(branch)
      self.ts.match(';')

  def _parse_admin_access(self, token):
    accessors = self._read_until_semicolon()
    if accessors:
      self.sink.set_access(accessors)

  def _parse_admin_symbols(self, token):
    while 1:
      tag_name = self.ts.get()
      if tag_name == ';':
        break
      self.ts.match(':')
      tag_rev = self.ts.get()
      self.sink.define_tag(tag_name, tag_rev)

  def _parse_admin_locks(self, token):
    while 1:
      locker = self.ts.get()
      if locker == ';':
        break
      self.ts.match(':')
      rev = self.ts.get()
      self.sink.set_locker(rev, locker)

  def _parse_admin_strict(self, token):
    self.sink.set_locking("strict")
    self.ts.match(';')

  def _parse_admin_comment(self, token):
    self.sink.set_comment(self.ts.get())
    self.ts.match(';')

  def _parse_admin_expand(self, token):
    expand_mode = self.ts.get()
    self.sink.set_expansion(expand_mode)
    self.ts.match(';')

  admin_token_map = {
    'head' : _parse_admin_head,
    'branch' : _parse_admin_branch,
    'access' : _parse_admin_access,
    'symbols' : _parse_admin_symbols,
    'locks' : _parse_admin_locks,
    'strict' : _parse_admin_strict,
    'comment' : _parse_admin_comment,
    'expand' : _parse_admin_expand,
    'desc' : None,
    }

  def parse_rcs_admin(self):
    while 1:
      # Read initial token at beginning of line
      token = self.ts.get()

      try:
        f = self.admin_token_map[token]
      except KeyError:
        # We're done once we reach the description of the RCS tree
        if token[0] in string.digits:
          self.ts.unget(token)
          return
        else:
          # Chew up "newphrase"
          # warn("Unexpected RCS token: $token\n")
          pass
      else:
        if f is None:
          self.ts.unget(token)
          return
        else:
          f(self, token)

  def _parse_rcs_tree_entry(self, revision):
    # Parse date
    self.ts.match('date')
    date = self.ts.get()
    self.ts.match(';')

    # Convert date into timestamp
    date_fields = string.split(date, '.')
    # According to rcsfile(5): the year "contains just the last two
    # digits of the year for years from 1900 through 1999, and all the
    # digits of years thereafter".
    if len(date_fields[0]) == 2:
      date_fields[0] = '19' + date_fields[0]
    date_fields = map(string.atoi, date_fields)
    EPOCH = 1970
    if date_fields[0] < EPOCH:
      raise ValueError, 'invalid year'
    timestamp = calendar.timegm(tuple(date_fields) + (0, 0, 0,))

    # Parse author
    ### NOTE: authors containing whitespace are violations of the
    ### RCS specification.  We are making an allowance here because
    ### CVSNT is known to produce these sorts of authors.
    self.ts.match('author')
    author = ' '.join(self._read_until_semicolon())

    # Parse state
    self.ts.match('state')
    state = ''
    while 1:
      token = self.ts.get()
      if token == ';':
        break
      state = state + token + ' '
    state = state[:-1]   # toss the trailing space

    # Parse branches
    self.ts.match('branches')
    branches = self._read_until_semicolon()

    # Parse revision of next delta in chain
    self.ts.match('next')
    next = self.ts.get()
    if next == ';':
      next = None
    else:
      self.ts.match(';')

    # there are some files with extra tags in them. for example:
    #    owner        640;
    #    group        15;
    #    permissions  644;
    #    hardlinks    @configure.in@;
    # this is "newphrase" in RCSFILE(5). we just want to skip over these.
    while 1:
      token = self.ts.get()
      if token == 'desc' or token[0] in string.digits:
        self.ts.unget(token)
        break
      # consume everything up to the semicolon
      self._read_until_semicolon()

    self.sink.define_revision(revision, timestamp, author, state, branches,
                              next)

  def parse_rcs_tree(self):
    while 1:
      revision = self.ts.get()

      # End of RCS tree description ?
      if revision == 'desc':
        self.ts.unget(revision)
        return

      self._parse_rcs_tree_entry(revision)

  def parse_rcs_description(self):
    self.ts.match('desc')
    self.sink.set_description(self.ts.get())

  def parse_rcs_deltatext(self):
    while 1:
      revision = self.ts.get()
      if revision is None:
        # EOF
        break
      text, sym2, log, sym1 = self.ts.mget(4)
      if sym1 != 'log':
        print `text[:100], sym2[:100], log[:100], sym1[:100]`
        raise RCSExpected(sym1, 'log')
      if sym2 != 'text':
        raise RCSExpected(sym2, 'text')
      ### need to add code to chew up "newphrase"
      self.sink.set_revision_info(revision, log, text)

  def parse(self, file, sink):
    self.ts = self.stream_class(file)
    self.sink = sink

    self.parse_rcs_admin()

    # let sink know when the admin section has been completed
    self.sink.admin_completed()

    self.parse_rcs_tree()

    # many sinks want to know when the tree has been completed so they can
    # do some work to prep for the arrival of the deltatext
    self.sink.tree_completed()

    self.parse_rcs_description()
    self.parse_rcs_deltatext()

    # easiest for us to tell the sink it is done, rather than worry about
    # higher level software doing it.
    self.sink.parse_completed()

    self.ts = self.sink = None

# --------------------------------------------------------------------------
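The date handling in `_parse_rcs_tree_entry` above follows the rcsfile(5) rule: a two-digit year means 1900-1999, while later years are written in full, and the timestamp is interpreted as UTC. A self-contained modern-Python sketch of that conversion (the function name is the editor's, not ViewVC's):

```python
import calendar

def rcs_date_to_timestamp(date):
    """Convert an RCS delta date ('YY.MM.DD.hh.mm.ss' or
    'YYYY.MM.DD.hh.mm.ss', interpreted as UTC) to a Unix timestamp,
    mirroring the two-digit-year rule in _parse_rcs_tree_entry."""
    fields = date.split('.')
    if len(fields[0]) == 2:
        # rcsfile(5): two digits mean a year from 1900 through 1999
        fields[0] = '19' + fields[0]
    fields = [int(f) for f in fields]
    if fields[0] < 1970:
        raise ValueError('invalid year')
    # pad out to a struct_time-like tuple for calendar.timegm (UTC)
    return calendar.timegm(tuple(fields) + (0, 0, 0))
```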
@ -0,0 +1,122 @@
# -*-python-*-
#
# Copyright (C) 1999-2006 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

"""debug.py: various debugging tools for the rcsparse package."""

import time

from __init__ import parse
import common


class DebugSink(common.Sink):
  def set_head_revision(self, revision):
    print 'head:', revision

  def set_principal_branch(self, branch_name):
    print 'branch:', branch_name

  def define_tag(self, name, revision):
    print 'tag:', name, '=', revision

  def set_comment(self, comment):
    print 'comment:', comment

  def set_description(self, description):
    print 'description:', description

  def define_revision(self, revision, timestamp, author, state,
                      branches, next):
    print 'revision:', revision
    print '    timestamp:', timestamp
    print '       author:', author
    print '        state:', state
    print '     branches:', branches
    print '         next:', next

  def set_revision_info(self, revision, log, text):
    print 'revision:', revision
    print '    log:', log
    print '    text:', text[:100], '...'


class DumpSink(common.Sink):
  """Dump all the parse information directly to stdout.

  The output is relatively unformatted and untagged.  It is intended as a
  raw dump of the data in the RCS file.  A copy can be saved, then changes
  made to the parsing engine, then a comparison of the new output against
  the old output.
  """
  def __init__(self):
    global sha
    import sha

  def set_head_revision(self, revision):
    print revision

  def set_principal_branch(self, branch_name):
    print branch_name

  def define_tag(self, name, revision):
    print name, revision

  def set_comment(self, comment):
    print comment

  def set_description(self, description):
    print description

  def define_revision(self, revision, timestamp, author, state,
                      branches, next):
    print revision, timestamp, author, state, branches, next

  def set_revision_info(self, revision, log, text):
    print revision, sha.new(log).hexdigest(), sha.new(text).hexdigest()

  def tree_completed(self):
    print 'tree_completed'

  def parse_completed(self):
    print 'parse_completed'


def dump_file(fname):
  parse(open(fname, 'rb'), DumpSink())

def time_file(fname):
  f = open(fname, 'rb')
  s = common.Sink()
  t = time.time()
  parse(f, s)
  t = time.time() - t
  print t

def _usage():
  print 'This is normally a module for importing, but it has a couple'
  print 'features for testing as an executable script.'
  print 'USAGE: %s COMMAND filename,v' % sys.argv[0]
  print '  where COMMAND is one of:'
  print '    dump: filename is "dumped" to stdout'
  print '    time: filename is parsed with the time written to stdout'
  sys.exit(1)

if __name__ == '__main__':
  import sys
  if len(sys.argv) != 3:
    _usage()
  if sys.argv[1] == 'dump':
    dump_file(sys.argv[2])
  elif sys.argv[1] == 'time':
    time_file(sys.argv[2])
  else:
    _usage()
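The token stream in the next file scans RCS files under three rules: `;` and `:` are self-delimiting single-character tokens, `@`-quoted strings use a doubled `@@` as the escape for a literal `@`, and everything else is a whitespace/`;`/`:`-terminated word. The file's `_TokenStream` implements this over buffered chunks; a simplified in-memory sketch of the same token rules, in modern Python (editor's example, not the shipped parser):

```python
import re

_AT_STRING = re.compile(r'@((?:[^@]|@@)*)@(?!@)')

def rcs_tokens(text):
    """Yield RCS tokens from a string: ';' and ':' as single tokens,
    '@...@' strings (with '@@' unescaped to '@'), and bare words."""
    i, n = 0, len(text)
    while i < n:
        c = text[i]
        if c.isspace():
            i += 1                       # skip whitespace between tokens
        elif c in ';:':
            yield c                      # self-delimiting punctuation
            i += 1
        elif c == '@':
            m = _AT_STRING.match(text, i)
            if not m:
                raise ValueError('unterminated @-string')
            yield m.group(1).replace('@@', '@')
            i = m.end()
        else:
            j = i                        # bare word: run to a terminator
            while j < n and not text[j].isspace() and text[j] not in ';:':
                j += 1
            yield text[i:j]
            i = j
```

The real implementation reads CHUNK_SIZE blocks and stitches tokens that span buffer boundaries, but the token grammar it recognizes is the one shown here.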
@ -0,0 +1,167 @@
# -*-python-*-
#
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# This file was originally based on portions of the blame.py script by
# Curt Hagenlocher.
#
# -----------------------------------------------------------------------

import string
import common

class _TokenStream:
  token_term = string.whitespace + ';:'

  # the algorithm is about the same speed for any CHUNK_SIZE chosen.
  # grab a good-sized chunk, but not too large to overwhelm memory.
  # note: we use a multiple of a standard block size
  CHUNK_SIZE = 192 * 512  # about 100k

#  CHUNK_SIZE = 5   # for debugging, make the function grind...

  def __init__(self, file):
    self.rcsfile = file
    self.idx = 0
    self.buf = self.rcsfile.read(self.CHUNK_SIZE)
    if self.buf == '':
      raise RuntimeError, 'EOF'

  def get(self):
    "Get the next token from the RCS file."

    # Note: we can afford to loop within Python, examining individual
    # characters. For the whitespace and tokens, the number of iterations
    # is typically quite small. Thus, a simple iterative loop will beat
    # out more complex solutions.

    buf = self.buf
    idx = self.idx

    while 1:
      if idx == len(buf):
        buf = self.rcsfile.read(self.CHUNK_SIZE)
        if buf == '':
          # signal EOF by returning None as the token
          del self.buf   # so we fail if get() is called again
          return None
        idx = 0

      if buf[idx] not in string.whitespace:
        break

      idx = idx + 1

    if buf[idx] == ';' or buf[idx] == ':':
      self.buf = buf
      self.idx = idx + 1
      return buf[idx]

    if buf[idx] != '@':
      end = idx + 1
      token = ''
      while 1:
        # find token characters in the current buffer
        while end < len(buf) and buf[end] not in self.token_term:
          end = end + 1
        token = token + buf[idx:end]

        if end < len(buf):
          # we stopped before the end, so we have a full token
          idx = end
          break

        # we stopped at the end of the buffer, so we may have a partial token
        buf = self.rcsfile.read(self.CHUNK_SIZE)
        idx = end = 0

      self.buf = buf
      self.idx = idx
      return token

    # a "string" which starts with the "@" character. we'll skip it when we
    # search for content.
    idx = idx + 1

    chunks = [ ]

    while 1:
      if idx == len(buf):
        idx = 0
        buf = self.rcsfile.read(self.CHUNK_SIZE)
        if buf == '':
          raise RuntimeError, 'EOF'
      i = string.find(buf, '@', idx)
      if i == -1:
        chunks.append(buf[idx:])
        idx = len(buf)
        continue
      if i == len(buf) - 1:
        chunks.append(buf[idx:i])
        idx = 0
        buf = '@' + self.rcsfile.read(self.CHUNK_SIZE)
        if buf == '@':
          raise RuntimeError, 'EOF'
        continue
      if buf[i + 1] == '@':
        chunks.append(buf[idx:i+1])
        idx = i + 2
        continue

      chunks.append(buf[idx:i])

      self.buf = buf
      self.idx = i + 1

      return string.join(chunks, '')

#  _get = get
#  def get(self):
#    token = self._get()
#    print 'T:', `token`
#    return token

  def match(self, match):
    "Try to match the next token from the input buffer."

    token = self.get()
    if token != match:
      raise common.RCSExpected(token, match)

  def unget(self, token):
    "Put this token back, for the next get() to return."

    # Override the class' .get method with a function which clears the
    # overridden method then returns the pushed token. Since this function
    # will not be looked up via the class mechanism, it should be a "normal"
    # function, meaning it won't have "self" automatically inserted.
    # Therefore, we need to pass both self and the token thru via defaults.

    # note: we don't put this into the input buffer because it may have been
|
||||
# @-unescaped already.
|
||||
|
||||
def give_it_back(self=self, token=token):
|
||||
del self.get
|
||||
return token
|
||||
|
||||
self.get = give_it_back
|
||||
|
||||
def mget(self, count):
|
||||
"Return multiple tokens. 'next' is at the end."
|
||||
result = [ ]
|
||||
for i in range(count):
|
||||
result.append(self.get())
|
||||
result.reverse()
|
||||
return result
|
||||
|
||||
|
||||
class Parser(common._Parser):
|
||||
stream_class = _TokenStream
|
|
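The `get()` method above decodes RCS `@`-delimited strings, in which a literal `@` is escaped by doubling. Outside the chunked reader, the unescaping rule itself is a one-liner; this is a minimal Python 3 sketch (`unescape_at_string` is a hypothetical helper, not part of rcsparse):

```python
def unescape_at_string(body):
    # Inside an RCS @-string, a literal '@' is written as '@@'; the
    # parser above likewise emits each doubled '@' as one character.
    return body.replace('@@', '@')

# The desc/log/text fields of an RCS file are stored this way:
assert unescape_at_string('mail me @@ user@@example.com') == \
       'mail me @ user@example.com'
```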
@@ -0,0 +1,73 @@
#! /usr/bin/python

# (Be in -*- python -*- mode.)
#
# ====================================================================
# Copyright (c) 2006-2007 CollabNet.  All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution.  The terms
# are also available at http://subversion.tigris.org/license-1.html.
# If newer versions of this license are posted there, you may use a
# newer version instead, at your option.
#
# This software consists of voluntary contributions made by many
# individuals.  For exact contribution history, see the revision
# history and logs, available at http://cvs2svn.tigris.org/.
# ====================================================================

"""Parse an RCS file, showing the rcsparse callbacks that are called.

This program is useful to see whether an RCS file has a problem (in
the sense of not being parseable by rcsparse) and also to illuminate
the correspondence between RCS file contents and rcsparse callbacks.

The output of this program can also be considered to be a kind of
'canonical' format for RCS files, at least in so far as rcsparse
returns all relevant information in the file and provided that the
order of callbacks is always the same."""


import sys
import os


class Logger:
  def __init__(self, f, name):
    self.f = f
    self.name = name

  def __call__(self, *args):
    self.f.write(
        '%s(%s)\n' % (self.name, ', '.join(['%r' % arg for arg in args]),)
        )


class LoggingSink:
  def __init__(self, f):
    self.f = f

  def __getattr__(self, name):
    return Logger(self.f, name)


if __name__ == '__main__':
  # Since there is nontrivial logic in __init__.py, we have to import
  # parse() via that file.  First make sure that the directory
  # containing this script is in the path:
  sys.path.insert(0, os.path.dirname(sys.argv[0]))

  from __init__ import parse

  if sys.argv[1:]:
    for path in sys.argv[1:]:
      if os.path.isfile(path) and path.endswith(',v'):
        parse(
            open(path, 'rb'), LoggingSink(sys.stdout)
            )
      else:
        sys.stderr.write('%r is being ignored.\n' % path)
  else:
    parse(sys.stdin, LoggingSink(sys.stdout))
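The `LoggingSink` trick above, returning a callable `Logger` from `__getattr__`, records any callback-style protocol without implementing it. A self-contained Python 3 sketch of the same pattern:

```python
import io

class Logger:
    def __init__(self, f, name):
        self.f = f
        self.name = name

    def __call__(self, *args):
        # Record the callback name and its repr()'d arguments.
        self.f.write('%s(%s)\n' % (self.name,
                                   ', '.join(repr(a) for a in args)))

class LoggingSink:
    def __init__(self, f):
        self.f = f

    def __getattr__(self, name):
        # Any attribute lookup yields a logger for that callback name.
        return Logger(self.f, name)

buf = io.StringIO()
sink = LoggingSink(buf)
sink.set_head_revision('1.2')          # logged, not executed
sink.define_tag('B_SPLIT', '1.2.0.4')
assert buf.getvalue() == ("set_head_revision('1.2')\n"
                          "define_tag('B_SPLIT', '1.2.0.4')\n")
```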
@@ -0,0 +1,73 @@
#! /usr/bin/python

# (Be in -*- python -*- mode.)
#
# ====================================================================
# Copyright (c) 2007 CollabNet.  All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution.  The terms
# are also available at http://subversion.tigris.org/license-1.html.
# If newer versions of this license are posted there, you may use a
# newer version instead, at your option.
#
# This software consists of voluntary contributions made by many
# individuals.  For exact contribution history, see the revision
# history and logs, available at http://viewvc.tigris.org/.
# ====================================================================

"""Run tests of rcsparse code."""

import sys
import os
import glob
from cStringIO import StringIO
from difflib import Differ

# Since there is nontrivial logic in __init__.py, we have to import
# parse() via that file.  First make sure that the directory
# containing this script is in the path:
script_dir = os.path.dirname(sys.argv[0])
sys.path.insert(0, script_dir)

from __init__ import parse
from parse_rcs_file import LoggingSink


test_dir = os.path.join(script_dir, 'test-data')

filelist = glob.glob(os.path.join(test_dir, '*,v'))
filelist.sort()

all_tests_ok = 1

for filename in filelist:
  sys.stderr.write('%s: ' % (filename,))
  f = StringIO()
  try:
    parse(open(filename, 'rb'), LoggingSink(f))
  except Exception, e:
    sys.stderr.write('Error parsing file: %s!\n' % (e,))
    all_tests_ok = 0
  else:
    output = f.getvalue()

    expected_output_filename = filename[:-2] + '.out'
    expected_output = open(expected_output_filename, 'rb').read()

    if output == expected_output:
      sys.stderr.write('OK\n')
    else:
      sys.stderr.write('Output does not match expected output!\n')
      differ = Differ()
      for diffline in differ.compare(
          expected_output.splitlines(1), output.splitlines(1)
          ):
        sys.stderr.write(diffline)
      all_tests_ok = 0

if all_tests_ok:
  sys.exit(0)
else:
  sys.exit(1)
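The script above compares each parse against a stored `.out` golden file and prints a `difflib` diff on mismatch. The comparison step can be sketched on its own in Python 3 (`check_output` is a hypothetical helper, not part of the test suite):

```python
import difflib

def check_output(actual, expected):
    # Return (ok, diff_text): an empty diff when the outputs match,
    # otherwise a Differ-style line diff like run-tests.py prints.
    if actual == expected:
        return True, ''
    diff = difflib.Differ().compare(expected.splitlines(True),
                                    actual.splitlines(True))
    return False, ''.join(diff)

ok, _ = check_output('parse_completed()\n', 'parse_completed()\n')
assert ok
ok, diff = check_output('a()\n', 'b()\n')
assert not ok and '- b()' in diff and '+ a()' in diff
```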
@@ -0,0 +1,102 @@
head	1.2;
access;
symbols
	B_SPLIT:1.2.0.4
	B_MIXED:1.2.0.2
	T_MIXED:1.2
	B_FROM_INITIALS_BUT_ONE:1.1.1.1.0.4
	B_FROM_INITIALS:1.1.1.1.0.2
	T_ALL_INITIAL_FILES_BUT_ONE:1.1.1.1
	T_ALL_INITIAL_FILES:1.1.1.1
	vendortag:1.1.1.1
	vendorbranch:1.1.1;
locks; strict;
comment	@# @;


1.2
date	2003.05.23.00.17.53;	author jrandom;	state Exp;
branches
	1.2.2.1
	1.2.4.1;
next	1.1;

1.1
date	98.05.22.23.20.19;	author jrandom;	state Exp;
branches
	1.1.1.1;
next	;

1.1.1.1
date	98.05.22.23.20.19;	author jrandom;	state Exp;
branches;
next	;

1.2.2.1
date	2003.05.23.00.31.36;	author jrandom;	state Exp;
branches;
next	;

1.2.4.1
date	2003.06.03.03.20.31;	author jrandom;	state Exp;
branches;
next	;


desc
@@


1.2
log
@Second commit to proj, affecting all 7 files.
@
text
@This is the file `default' in the top level of the project.

Every directory in the `proj' project has a file named `default'.

This line was added in the second commit (affecting all 7 files).
@


1.2.4.1
log
@First change on branch B_SPLIT.

This change excludes sub3/default, because it was not part of this
commit, and sub1/subsubB/default, which is not even on the branch yet.
@
text
@a5 2

First change on branch B_SPLIT.
@


1.2.2.1
log
@Modify three files, on branch B_MIXED.
@
text
@a5 2

This line was added on branch B_MIXED only (affecting 3 files).
@


1.1
log
@Initial revision
@
text
@d4 2
@


1.1.1.1
log
@Initial import.
@
text
@@
@@ -0,0 +1,26 @@
set_head_revision('1.2')
define_tag('B_SPLIT', '1.2.0.4')
define_tag('B_MIXED', '1.2.0.2')
define_tag('T_MIXED', '1.2')
define_tag('B_FROM_INITIALS_BUT_ONE', '1.1.1.1.0.4')
define_tag('B_FROM_INITIALS', '1.1.1.1.0.2')
define_tag('T_ALL_INITIAL_FILES_BUT_ONE', '1.1.1.1')
define_tag('T_ALL_INITIAL_FILES', '1.1.1.1')
define_tag('vendortag', '1.1.1.1')
define_tag('vendorbranch', '1.1.1')
set_locking('strict')
set_comment('# ')
admin_completed()
define_revision('1.2', 1053649073, 'jrandom', 'Exp', ['1.2.2.1', '1.2.4.1'], '1.1')
define_revision('1.1', 895879219, 'jrandom', 'Exp', ['1.1.1.1'], None)
define_revision('1.1.1.1', 895879219, 'jrandom', 'Exp', [], None)
define_revision('1.2.2.1', 1053649896, 'jrandom', 'Exp', [], None)
define_revision('1.2.4.1', 1054610431, 'jrandom', 'Exp', [], None)
tree_completed()
set_description('')
set_revision_info('1.2', 'Second commit to proj, affecting all 7 files.\n', "This is the file `default' in the top level of the project.\n\nEvery directory in the `proj' project has a file named `default'.\n\nThis line was added in the second commit (affecting all 7 files).\n")
set_revision_info('1.2.4.1', 'First change on branch B_SPLIT.\n\nThis change excludes sub3/default, because it was not part of this\ncommit, and sub1/subsubB/default, which is not even on the branch yet.\n', 'a5 2\n\nFirst change on branch B_SPLIT.\n')
set_revision_info('1.2.2.1', 'Modify three files, on branch B_MIXED.\n', 'a5 2\n\nThis line was added on branch B_MIXED only (affecting 3 files).\n')
set_revision_info('1.1', 'Initial revision\n', 'd4 2\n')
set_revision_info('1.1.1.1', 'Initial import.\n', '')
parse_completed()
@@ -0,0 +1,10 @@
head	;
access;
symbols;
locks; strict;
comment	@# @;


desc
@@
@@ -0,0 +1,6 @@
set_locking('strict')
set_comment('# ')
admin_completed()
tree_completed()
set_description('')
parse_completed()
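The tags in the test file above use RCS "magic branch" numbers: a branch tag is the branch number with a zero spliced in before its last component, so `B_SPLIT:1.2.0.4` names branch `1.2.4`, whose first revision is `1.2.4.1`. A small Python 3 sketch of the decoding (`branch_of_tag` is a hypothetical helper, not part of rcsparse):

```python
def branch_of_tag(revnum):
    # '1.2.0.4' -> '1.2.4'; a non-branch tag such as '1.2' -> None.
    parts = revnum.split('.')
    if len(parts) >= 4 and len(parts) % 2 == 0 and parts[-2] == '0':
        return '.'.join(parts[:-2] + [parts[-1]])
    return None

assert branch_of_tag('1.2.0.4') == '1.2.4'        # B_SPLIT
assert branch_of_tag('1.1.1.1.0.2') == '1.1.1.1.2'  # B_FROM_INITIALS
assert branch_of_tag('1.2') is None               # T_MIXED: plain revision tag
```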
@@ -0,0 +1,348 @@
# -*-python-*-
#
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

import string

# note: this will raise an ImportError if it isn't available. the rcsparse
# package will recognize this and switch over to the default parser.
from mx import TextTools

import common


# for convenience
_tt = TextTools

_idchar_list = map(chr, range(33, 127)) + map(chr, range(160, 256))
_idchar_list.remove('$')
_idchar_list.remove(',')
#_idchar_list.remove('.')  # leave as part of 'num' symbol
_idchar_list.remove(':')
_idchar_list.remove(';')
_idchar_list.remove('@')
_idchar = string.join(_idchar_list, '')
_idchar_set = _tt.set(_idchar)

_onechar_token_set = _tt.set(':;')

_not_at_set = _tt.invset('@')

_T_TOKEN = 30
_T_STRING_START = 40
_T_STRING_SPAN = 60
_T_STRING_END = 70

_E_COMPLETE = 100     # ended on a complete token
_E_TOKEN = 110        # ended mid-token
_E_STRING_SPAN = 130  # ended within a string
_E_STRING_END = 140   # ended with string-end ('@') (could be mid-@@)

_SUCCESS = +100

_EOF = 'EOF'
_CONTINUE = 'CONTINUE'
_UNUSED = 'UNUSED'


# continuation of a token over a chunk boundary
_c_token_table = (
    (_T_TOKEN, _tt.AllInSet, _idchar_set),
    )

class _mxTokenStream:

  # the algorithm is about the same speed for any CHUNK_SIZE chosen.
  # grab a good-sized chunk, but not too large to overwhelm memory.
  # note: we use a multiple of a standard block size
  CHUNK_SIZE = 192 * 512  # about 100k

#  CHUNK_SIZE = 5  # for debugging, make the function grind...

  def __init__(self, file):
    self.rcsfile = file
    self.tokens = [ ]
    self.partial = None

    self.string_end = None

  def _parse_chunk(self, buf, start=0):
    "Get the next token from the RCS file."

    buflen = len(buf)

    assert start < buflen

    # construct a tag table which refers to the buffer we need to parse.
    table = (
      #1: ignore whitespace. with or without whitespace, move to the next rule.
      (None, _tt.AllInSet, _tt.whitespace_set, +1),

      #2
      (_E_COMPLETE, _tt.EOF + _tt.AppendTagobj, _tt.Here, +1, _SUCCESS),

      #3: accumulate token text and exit, or move to the next rule.
      (_UNUSED, _tt.AllInSet + _tt.AppendMatch, _idchar_set, +2),

      #4
      (_E_TOKEN, _tt.EOF + _tt.AppendTagobj, _tt.Here, -3, _SUCCESS),

      #5: single character tokens exit immediately, or move to the next rule
      (_UNUSED, _tt.IsInSet + _tt.AppendMatch, _onechar_token_set, +2),

      #6
      (_E_COMPLETE, _tt.EOF + _tt.AppendTagobj, _tt.Here, -5, _SUCCESS),

      #7: if this isn't an '@' symbol, then we have a syntax error (go to a
      # negative index to indicate that condition). otherwise, suck it up
      # and move to the next rule.
      (_T_STRING_START, _tt.Is + _tt.AppendTagobj, '@'),

      #8
      (None, _tt.Is, '@', +4, +1),
      #9
      (buf, _tt.Is, '@', +1, -1),
      #10
      (_T_STRING_END, _tt.Skip + _tt.AppendTagobj, 0, 0, +1),
      #11
      (_E_STRING_END, _tt.EOF + _tt.AppendTagobj, _tt.Here, -10, _SUCCESS),

      #12
      (_E_STRING_SPAN, _tt.EOF + _tt.AppendTagobj, _tt.Here, +1, _SUCCESS),

      #13: suck up everything that isn't an AT. go to next rule to look for EOF
      (buf, _tt.AllInSet, _not_at_set, 0, +1),

      #14: go back to look for double AT if we aren't at the end of the string
      (_E_STRING_SPAN, _tt.EOF + _tt.AppendTagobj, _tt.Here, -6, _SUCCESS),
      )

    # Fast, texttools may be, but it's somewhat lacking in clarity.
    # Here's an attempt to document the logic encoded in the table above:
    #
    # Flowchart:
    #                                        _____
    #                                       /    /\
    # 1 -> 2 -> 3 -> 5 -> 7 -> 8 -> 9 -> 10 -> 11
    # |    \/   \/        \/   /\               \/
    # \    4    6         12   14               /
    #  \_______/_____/     \   /               /
    #   \                   13                /
    #    \___________________________________/
    #
    # #1: Skip over any whitespace.
    # #2: If now EOF, exit with code _E_COMPLETE.
    # #3: If we have a series of characters in _idchar_set, then:
    # #4:   Output them as a token, and go back to #1.
    # #5: If we have a character in _onechar_token_set, then:
    # #6:   Output it as a token, and go back to #1.
    # #7: If we do not have an '@', then error.
    #     If we do, then log a _T_STRING_START and continue.
    # #8: If we have another '@', continue on to #9.  Otherwise:
    # #12:   If now EOF, exit with code _E_STRING_SPAN.
    # #13:   Record the slice up to the next '@' (or EOF).
    # #14:   If now EOF, exit with code _E_STRING_SPAN.
    #        Otherwise, go back to #8.
    # #9: If we have another '@', then we've just seen an escaped
    #     (by doubling) '@' within an @-string.  Record a slice including
    #     just one '@' character, and jump back to #8.
    #     Otherwise, we've *either* seen the terminating '@' of an @-string,
    #     *or* we've seen one half of an escaped @@ sequence that just
    #     happened to be split over a chunk boundary - in either case,
    #     we continue on to #10.
    # #10: Log a _T_STRING_END.
    # #11: If now EOF, exit with _E_STRING_END.  Otherwise, go back to #1.

    success, taglist, idx = _tt.tag(buf, table, start)

    if not success:
      ### need a better way to report this error
      raise common.RCSIllegalCharacter()
    assert idx == buflen

    # pop off the last item
    last_which = taglist.pop()

    i = 0
    tlen = len(taglist)
    while i < tlen:
      if taglist[i] == _T_STRING_START:
        j = i + 1
        while j < tlen:
          if taglist[j] == _T_STRING_END:
            s = _tt.join(taglist, '', i+1, j)
            del taglist[i:j]
            tlen = len(taglist)
            taglist[i] = s
            break
          j = j + 1
        else:
          assert last_which == _E_STRING_SPAN
          s = _tt.join(taglist, '', i+1)
          del taglist[i:]
          self.partial = (_T_STRING_SPAN, [ s ])
          break
      i = i + 1

    # figure out whether we have a partial last-token
    if last_which == _E_TOKEN:
      self.partial = (_T_TOKEN, [ taglist.pop() ])
    elif last_which == _E_COMPLETE:
      pass
    elif last_which == _E_STRING_SPAN:
      assert self.partial
    else:
      assert last_which == _E_STRING_END
      self.partial = (_T_STRING_END, [ taglist.pop() ])

    taglist.reverse()
    taglist.extend(self.tokens)
    self.tokens = taglist

  def _set_end(self, taglist, text, l, r, subtags):
    self.string_end = l

  def _handle_partial(self, buf):
    which, chunks = self.partial
    if which == _T_TOKEN:
      success, taglist, idx = _tt.tag(buf, _c_token_table)
      if not success:
        # The start of this buffer was not a token. So the end of the
        # prior buffer was a complete token.
        self.tokens.insert(0, string.join(chunks, ''))
      else:
        assert len(taglist) == 1 and taglist[0][0] == _T_TOKEN \
               and taglist[0][1] == 0 and taglist[0][2] == idx
        if idx == len(buf):
          #
          # The whole buffer was one huge token, so we may have a
          # partial token again.
          #
          # Note: this modifies the list of chunks in self.partial
          #
          chunks.append(buf)

          # consumed the whole buffer
          return len(buf)

        # got the rest of the token.
        chunks.append(buf[:idx])
        self.tokens.insert(0, string.join(chunks, ''))

      # no more partial token
      self.partial = None

      return idx

    if which == _T_STRING_END:
      if buf[0] != '@':
        self.tokens.insert(0, string.join(chunks, ''))
        return 0
      chunks.append('@')
      start = 1
    else:
      start = 0

    self.string_end = None
    string_table = (
      (None, _tt.Is, '@', +3, +1),
      (_UNUSED, _tt.Is + _tt.AppendMatch, '@', +1, -1),
      (self._set_end, _tt.Skip + _tt.CallTag, 0, 0, _SUCCESS),

      (None, _tt.EOF, _tt.Here, +1, _SUCCESS),

      # suck up everything that isn't an AT. move to next rule to look
      # for EOF
      (_UNUSED, _tt.AllInSet + _tt.AppendMatch, _not_at_set, 0, +1),

      # go back to look for double AT if we aren't at the end of the string
      (None, _tt.EOF, _tt.Here, -5, _SUCCESS),
      )

    success, unused, idx = _tt.tag(buf, string_table,
                                   start, len(buf), chunks)

    # must have matched at least one item
    assert success

    if self.string_end is None:
      assert idx == len(buf)
      self.partial = (_T_STRING_SPAN, chunks)
    elif self.string_end < len(buf):
      self.partial = None
      self.tokens.insert(0, string.join(chunks, ''))
    else:
      self.partial = (_T_STRING_END, chunks)

    return idx

  def _parse_more(self):
    buf = self.rcsfile.read(self.CHUNK_SIZE)
    if not buf:
      return _EOF

    if self.partial:
      idx = self._handle_partial(buf)
      if idx is None:
        return _CONTINUE
      if idx < len(buf):
        self._parse_chunk(buf, idx)
    else:
      self._parse_chunk(buf)

    return _CONTINUE

  def get(self):
    try:
      return self.tokens.pop()
    except IndexError:
      pass

    while not self.tokens:
      action = self._parse_more()
      if action == _EOF:
        return None

    return self.tokens.pop()

#  _get = get
#  def get(self):
#    token = self._get()
#    print 'T:', `token`
#    return token

  def match(self, match):
    if self.tokens:
      token = self.tokens.pop()
    else:
      token = self.get()

    if token != match:
      raise common.RCSExpected(token, match)

  def unget(self, token):
    self.tokens.append(token)

  def mget(self, count):
    "Return multiple tokens. 'next' is at the end."
    while len(self.tokens) < count:
      action = self._parse_more()
      if action == _EOF:
        ### fix this
        raise RuntimeError, 'EOF hit while expecting tokens'
    result = self.tokens[-count:]
    del self.tokens[-count:]
    return result


class Parser(common._Parser):
  stream_class = _mxTokenStream
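The partial-token machinery above exists because a token (or @-string) may straddle a `CHUNK_SIZE` boundary. Ignoring @-strings and the `;`/`:` terminators, the carry-over idea can be sketched in a few lines of Python 3 (a simplified whitespace-only tokenizer, not the mx.TextTools tag table):

```python
import re

def tokens_from_chunks(chunks):
    # Accumulate whitespace-separated tokens across chunk boundaries:
    # when a chunk ends mid-token, the trailing fragment is carried
    # over and prepended to the next chunk before splitting again.
    tokens, partial = [], ''
    for chunk in chunks:
        pieces = re.split(r'(\s+)', partial + chunk)
        # The last piece may be an unterminated token (or '' after
        # trailing whitespace); hold it back as the new carry-over.
        partial = pieces.pop() if pieces else ''
        tokens.extend(p for p in pieces if p and not p.isspace())
    if partial:
        tokens.append(partial)
    return tokens

# 'access;' is split across two reads but comes out as one token:
assert tokens_from_chunks(['head 1.', '2;\nacce', 'ss;']) == \
       ['head', '1.2;', 'access;']
```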
@@ -0,0 +1,55 @@
# -*-python-*-
#
# Copyright (C) 1999-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

"Version Control lib driver for Subversion repositories"

import os
import os.path
import re

_re_url = re.compile('^(http|https|file|svn|svn\+[^:]+)://')

def canonicalize_rootpath(rootpath):
  try:
    import svn.core
    return svn.core.svn_path_canonicalize(rootpath)
  except:
    if re.search(_re_url, rootpath):
      return rootpath[-1] == '/' and rootpath[:-1] or rootpath
    return os.path.normpath(rootpath)


def expand_root_parent(parent_path):
  roots = {}
  if re.search(_re_url, parent_path):
    pass
  else:
    # Any subdirectories of PARENT_PATH which themselves have a child
    # "format" are returned as roots.
    subpaths = os.listdir(parent_path)
    for rootname in subpaths:
      rootpath = os.path.join(parent_path, rootname)
      if os.path.exists(os.path.join(rootpath, "format")):
        roots[rootname] = canonicalize_rootpath(rootpath)
  return roots


def SubversionRepository(name, rootpath, authorizer, utilities, config_dir):
  rootpath = canonicalize_rootpath(rootpath)
  if re.search(_re_url, rootpath):
    import svn_ra
    return svn_ra.RemoteSubversionRepository(name, rootpath, authorizer,
                                             utilities, config_dir)
  else:
    import svn_repos
    return svn_repos.LocalSubversionRepository(name, rootpath, authorizer,
                                               utilities, config_dir)
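When the Subversion bindings are unavailable, `canonicalize_rootpath()` above falls back to stripping one trailing slash from URLs and applying `os.path.normpath` to local paths. That fallback in isolation, as a Python 3 sketch (the real function prefers `svn.core.svn_path_canonicalize`; `svn.example.com` is a made-up host):

```python
import os.path
import re

_re_url = re.compile(r'^(http|https|file|svn|svn\+[^:]+)://')

def fallback_canonicalize_rootpath(rootpath):
    # URLs: drop at most one trailing '/'; local paths: normalize.
    if _re_url.search(rootpath):
        return rootpath[:-1] if rootpath.endswith('/') else rootpath
    return os.path.normpath(rootpath)

assert fallback_canonicalize_rootpath('http://svn.example.com/repos/') == \
       'http://svn.example.com/repos'
```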
@ -0,0 +1,546 @@
|
|||
# -*-python-*-
|
||||
#
|
||||
# Copyright (C) 1999-2008 The ViewCVS Group. All Rights Reserved.
|
||||
#
|
||||
# By using this file, you agree to the terms and conditions set forth in
|
||||
# the LICENSE.html file which can be found at the top level of the ViewVC
|
||||
# distribution or at http://viewvc.org/license-1.html.
|
||||
#
|
||||
# For more information, visit http://viewvc.org/
|
||||
#
|
||||
# -----------------------------------------------------------------------
|
||||
|
||||
"Version Control lib driver for remotely accessible Subversion repositories."
|
||||
|
||||
import vclib
|
||||
import sys
|
||||
import os
|
||||
import string
|
||||
import re
|
||||
import tempfile
|
||||
import popen2
|
||||
import time
|
||||
import urllib
|
||||
from svn_repos import Revision, SVNChangedPath, _datestr_to_date, _compare_paths, _path_parts, _cleanup_path, _rev2optrev
|
||||
from svn import core, delta, client, wc, ra
|
||||
|
||||
|
||||
### Require Subversion 1.3.1 or better. (for svn_ra_get_locations support)
|
||||
if (core.SVN_VER_MAJOR, core.SVN_VER_MINOR, core.SVN_VER_PATCH) < (1, 3, 1):
|
||||
raise Exception, "Version requirement not met (needs 1.3.1 or better)"
|
||||
|
||||
|
||||
### BEGIN COMPATABILITY CODE ###
|
||||
|
||||
try:
|
||||
SVN_INVALID_REVNUM = core.SVN_INVALID_REVNUM
|
||||
except AttributeError: # The 1.4.x bindings are missing core.SVN_INVALID_REVNUM
|
||||
SVN_INVALID_REVNUM = -1
|
||||
|
||||
def list_directory(url, peg_rev, rev, flag, ctx):
|
||||
try:
|
||||
dirents, locks = client.svn_client_ls3(url, peg_rev, rev, flag, ctx)
|
||||
except TypeError: # 1.4.x bindings are goofed
|
||||
dirents = client.svn_client_ls3(None, url, peg_rev, rev, flag, ctx)
|
||||
locks = {}
|
||||
return dirents, locks
|
||||
|
||||
def get_directory_props(ra_session, path, rev):
|
||||
try:
|
||||
dirents, fetched_rev, props = ra.svn_ra_get_dir(ra_session, path, rev)
|
||||
except ValueError: # older bindings are goofed
|
||||
props = ra.svn_ra_get_dir(ra_session, path, rev)
|
||||
return props
|
||||
|
||||
### END COMPATABILITY CODE ###
|
||||
|
||||
|
||||
class LogCollector:
|
||||
### TODO: Make this thing authz-aware
|
||||
|
||||
def __init__(self, path, show_all_logs, lockinfo):
|
||||
# This class uses leading slashes for paths internally
|
||||
if not path:
|
||||
self.path = '/'
|
||||
else:
|
||||
self.path = path[0] == '/' and path or '/' + path
|
||||
self.logs = []
|
||||
self.show_all_logs = show_all_logs
|
||||
self.lockinfo = lockinfo
|
||||
|
||||
def add_log(self, paths, revision, author, date, message, pool):
|
||||
# Changed paths have leading slashes
|
||||
changed_paths = paths.keys()
|
||||
changed_paths.sort(lambda a, b: _compare_paths(a, b))
|
||||
this_path = None
|
||||
if self.path in changed_paths:
|
||||
this_path = self.path
|
||||
change = paths[self.path]
|
||||
if change.copyfrom_path:
|
||||
this_path = change.copyfrom_path
|
||||
for changed_path in changed_paths:
|
||||
if changed_path != self.path:
|
||||
# If a parent of our path was copied, our "next previous"
|
||||
# (huh?) path will exist elsewhere (under the copy source).
|
||||
if (string.rfind(self.path, changed_path) == 0) and \
|
||||
self.path[len(changed_path)] == '/':
|
||||
change = paths[changed_path]
|
||||
if change.copyfrom_path:
|
||||
this_path = change.copyfrom_path + self.path[len(changed_path):]
|
||||
if self.show_all_logs or this_path:
|
||||
entry = Revision(revision, _datestr_to_date(date), author, message, None,
|
||||
self.lockinfo, self.path[1:], None, None)
|
||||
self.logs.append(entry)
|
||||
if this_path:
|
||||
self.path = this_path
|
||||
|
||||
def temp_checkout(svnrepos, path, rev):
|
||||
"""Check out file revision to temporary file"""
|
||||
temp = tempfile.mktemp()
|
||||
stream = core.svn_stream_from_aprfile(temp)
|
||||
url = svnrepos._geturl(path)
|
||||
client.svn_client_cat(core.Stream(stream), url, _rev2optrev(rev),
|
||||
svnrepos.ctx)
|
||||
core.svn_stream_close(stream)
|
||||
return temp
|
||||
|
||||
class SelfCleanFP:
|
||||
def __init__(self, path):
|
||||
self._fp = open(path, 'r')
|
||||
self._path = path
|
||||
self._eof = 0
|
||||
|
||||
def read(self, len=None):
|
||||
if len:
|
||||
chunk = self._fp.read(len)
|
||||
else:
|
||||
chunk = self._fp.read()
|
||||
if chunk == '':
|
||||
self._eof = 1
|
||||
return chunk
|
||||
|
||||
def readline(self):
|
||||
chunk = self._fp.readline()
|
||||
if chunk == '':
|
||||
self._eof = 1
|
||||
return chunk
|
||||
|
||||
def readlines(self):
|
||||
lines = self._fp.readlines()
|
||||
self._eof = 1
|
||||
return lines
|
||||
|
||||
def close(self):
|
||||
self._fp.close()
|
||||
os.remove(self._path)
|
||||
|
||||
def __del__(self):
|
||||
self.close()
|
||||
|
||||
def eof(self):
|
||||
return self._eof
|
||||
|
||||
|
||||
class RemoteSubversionRepository(vclib.Repository):
|
||||
def __init__(self, name, rootpath, authorizer, utilities, config_dir):
|
||||
self.name = name
|
||||
self.rootpath = rootpath
|
||||
self.auth = authorizer
|
||||
self.diff_cmd = utilities.diff or 'diff'
|
||||
self.config_dir = config_dir or None
|
||||
|
||||
# See if this repository is even viewable, authz-wise.
|
||||
if not vclib.check_root_access(self):
|
||||
raise vclib.ReposNotFound(name)
|
||||
|
||||
def open(self):
|
||||
# Setup the client context baton, complete with non-prompting authstuffs.
|
||||
# TODO: svn_cmdline_setup_auth_baton() is mo' better (when available)
|
||||
core.svn_config_ensure(self.config_dir)
|
||||
self.ctx = client.svn_client_ctx_t()
|
||||
self.ctx.auth_baton = core.svn_auth_open([
|
||||
client.svn_client_get_simple_provider(),
|
||||
client.svn_client_get_username_provider(),
|
||||
client.svn_client_get_ssl_server_trust_file_provider(),
|
||||
client.svn_client_get_ssl_client_cert_file_provider(),
|
||||
client.svn_client_get_ssl_client_cert_pw_file_provider(),
|
||||
])
|
||||
self.ctx.config = core.svn_config_get_config(self.config_dir)
|
||||
if self.config_dir is not None:
|
||||
core.svn_auth_set_parameter(self.ctx.auth_baton,
|
||||
core.SVN_AUTH_PARAM_CONFIG_DIR,
|
||||
self.config_dir)
|
||||
ra_callbacks = ra.svn_ra_callbacks_t()
|
||||
ra_callbacks.auth_baton = self.ctx.auth_baton
|
||||
self.ra_session = ra.svn_ra_open(self.rootpath, ra_callbacks, None,
|
||||
self.ctx.config)
|
||||
self.youngest = ra.svn_ra_get_latest_revnum(self.ra_session)
|
||||
self._dirent_cache = { }
|
||||
self._revinfo_cache = { }
|
||||
|
||||
def rootname(self):
|
||||
return self.name
|
||||
|
||||
def rootpath(self):
|
||||
return self.rootpath
|
||||
|
||||
def roottype(self):
|
||||
return vclib.SVN
|
||||
|
||||
def authorizer(self):
|
||||
return self.auth
|
||||
|
||||
  def itemtype(self, path_parts, rev):
    pathtype = None
    if not len(path_parts):
      pathtype = vclib.DIR
    else:
      path = self._getpath(path_parts)
      rev = self._getrev(rev)
      try:
        kind = ra.svn_ra_check_path(self.ra_session, path, rev)
        if kind == core.svn_node_file:
          pathtype = vclib.FILE
        elif kind == core.svn_node_dir:
          pathtype = vclib.DIR
      except:
        pass
    if pathtype is None:
      raise vclib.ItemNotFound(path_parts)
    if not vclib.check_path_access(self, path_parts, pathtype, rev):
      raise vclib.ItemNotFound(path_parts)
    return pathtype

  def openfile(self, path_parts, rev):
    path = self._getpath(path_parts)
    if self.itemtype(path_parts, rev) != vclib.FILE:  # does auth-check
      raise vclib.Error("Path '%s' is not a file." % path)
    rev = self._getrev(rev)
    url = self._geturl(path)
    tmp_file = tempfile.mktemp()
    stream = core.svn_stream_from_aprfile(tmp_file)
    ### rev here should be the last history revision of the URL
    client.svn_client_cat(core.Stream(stream), url, _rev2optrev(rev), self.ctx)
    core.svn_stream_close(stream)
    return SelfCleanFP(tmp_file), self._get_last_history_rev(path_parts, rev)

  def listdir(self, path_parts, rev, options):
    path = self._getpath(path_parts)
    if self.itemtype(path_parts, rev) != vclib.DIR:  # does auth-check
      raise vclib.Error("Path '%s' is not a directory." % path)
    rev = self._getrev(rev)
    entries = [ ]
    dirents, locks = self._get_dirents(path, rev)
    for name in dirents.keys():
      entry = dirents[name]
      if entry.kind == core.svn_node_dir:
        kind = vclib.DIR
      elif entry.kind == core.svn_node_file:
        kind = vclib.FILE
      if vclib.check_path_access(self, path_parts + [name], kind, rev):
        entries.append(vclib.DirEntry(name, kind))
    return entries

  def dirlogs(self, path_parts, rev, entries, options):
    path = self._getpath(path_parts)
    if self.itemtype(path_parts, rev) != vclib.DIR:  # does auth-check
      raise vclib.Error("Path '%s' is not a directory." % path)
    rev = self._getrev(rev)
    dirents, locks = self._get_dirents(path, rev)
    for entry in entries:
      entry_path_parts = path_parts + [entry.name]
      if not vclib.check_path_access(self, entry_path_parts, entry.kind, rev):
        continue
      dirent = dirents[entry.name]
      entry.date, entry.author, entry.log, changes = \
                  self.revinfo(dirent.created_rev)
      entry.rev = dirent.created_rev
      entry.size = dirent.size
      entry.lockinfo = None
      if locks.has_key(entry.name):
        entry.lockinfo = locks[entry.name].owner

  def itemlog(self, path_parts, rev, sortby, first, limit, options):
    assert sortby == vclib.SORTBY_DEFAULT or sortby == vclib.SORTBY_REV
    path_type = self.itemtype(path_parts, rev)  # does auth-check
    path = self._getpath(path_parts)
    rev = self._getrev(rev)
    url = self._geturl(path)

    # Use ls3 to fetch the lock status for this item.
    lockinfo = None
    basename = path_parts and path_parts[-1] or ""
    dirents, locks = list_directory(url, _rev2optrev(rev),
                                    _rev2optrev(rev), 0, self.ctx)
    if locks.has_key(basename):
      lockinfo = locks[basename].owner

    # It's okay if we're told to not show all logs on a file -- all
    # the revisions should match correctly anyway.
    lc = LogCollector(path, options.get('svn_show_all_dir_logs', 0), lockinfo)

    cross_copies = options.get('svn_cross_copies', 0)
    log_limit = 0
    if limit:
      log_limit = first + limit
    client.svn_client_log2([url], _rev2optrev(rev), _rev2optrev(1),
                           log_limit, 1, not cross_copies,
                           lc.add_log, self.ctx)
    revs = lc.logs
    revs.sort()
    prev = None
    for rev in revs:
      rev.prev = prev
      prev = rev
    revs.reverse()

    if len(revs) < first:
      return []
    if limit:
      return revs[first:first+limit]
    return revs

  def itemprops(self, path_parts, rev):
    path = self._getpath(path_parts)
    path_type = self.itemtype(path_parts, rev)  # does auth-check
    rev = self._getrev(rev)
    url = self._geturl(path)
    pairs = client.svn_client_proplist2(url, _rev2optrev(rev),
                                        _rev2optrev(rev), 0, self.ctx)
    return pairs and pairs[0][1] or {}

  def annotate(self, path_parts, rev):
    path = self._getpath(path_parts)
    if self.itemtype(path_parts, rev) != vclib.FILE:  # does auth-check
      raise vclib.Error("Path '%s' is not a file." % path)
    rev = self._getrev(rev)
    url = self._geturl(path)

    blame_data = []

    def _blame_cb(line_no, revision, author, date,
                  line, pool, blame_data=blame_data):
      prev_rev = None
      if revision > 1:
        prev_rev = revision - 1
      blame_data.append(vclib.Annotation(line, line_no+1, revision, prev_rev,
                                         author, None))

    client.svn_client_blame(url, _rev2optrev(1), _rev2optrev(rev),
                            _blame_cb, self.ctx)

    return blame_data, rev

  def revinfo(self, rev):
    rev = self._getrev(rev)
    cached_info = self._revinfo_cache.get(rev)
    if not cached_info:
      cached_info = self._revinfo_raw(rev)
      self._revinfo_cache[rev] = cached_info
    return cached_info[0], cached_info[1], cached_info[2], cached_info[3]

  def rawdiff(self, path_parts1, rev1, path_parts2, rev2, type, options={}):
    p1 = self._getpath(path_parts1)
    p2 = self._getpath(path_parts2)
    r1 = self._getrev(rev1)
    r2 = self._getrev(rev2)
    if not vclib.check_path_access(self, path_parts1, vclib.FILE, rev1):
      raise vclib.ItemNotFound(path_parts1)
    if not vclib.check_path_access(self, path_parts2, vclib.FILE, rev2):
      raise vclib.ItemNotFound(path_parts2)

    args = vclib._diff_args(type, options)

    def _date_from_rev(rev):
      date, author, msg, changes = self.revinfo(rev)
      return date

    try:
      temp1 = temp_checkout(self, p1, r1)
      temp2 = temp_checkout(self, p2, r2)
      info1 = p1, _date_from_rev(r1), r1
      info2 = p2, _date_from_rev(r2), r2
      return vclib._diff_fp(temp1, temp2, info1, info2, self.diff_cmd, args)
    except core.SubversionException, e:
      if e.apr_err == vclib.svn.core.SVN_ERR_FS_NOT_FOUND:
        raise vclib.InvalidRevision
      raise

  def isexecutable(self, path_parts, rev):
    props = self.itemprops(path_parts, rev)  # does authz-check
    return props.has_key(core.SVN_PROP_EXECUTABLE)

  def _getpath(self, path_parts):
    return string.join(path_parts, '/')

  def _getrev(self, rev):
    if rev is None or rev == 'HEAD':
      return self.youngest
    try:
      rev = int(rev)
    except ValueError:
      raise vclib.InvalidRevision(rev)
    if (rev < 0) or (rev > self.youngest):
      raise vclib.InvalidRevision(rev)
    return rev

  def _geturl(self, path=None):
    if not path:
      return self.rootpath
    return self.rootpath + '/' + urllib.quote(path, "/*~")

  def _get_dirents(self, path, rev):
    """Return a 2-tuple of dirents and locks, possibly reading/writing
    from a local cache of that information."""

    dir_url = self._geturl(path)
    if path:
      key = str(rev) + '/' + path
    else:
      key = str(rev)
    dirents_locks = self._dirent_cache.get(key)
    if not dirents_locks:
      dirents, locks = list_directory(dir_url, _rev2optrev(rev),
                                      _rev2optrev(rev), 0, self.ctx)
      dirents_locks = [dirents, locks]
      self._dirent_cache[key] = dirents_locks
    return dirents_locks[0], dirents_locks[1]

  def _get_last_history_rev(self, path_parts, rev):
    url = self._geturl(self._getpath(path_parts))
    optrev = _rev2optrev(rev)
    revisions = []
    def _info_cb(path, info, pool, retval=revisions):
      revisions.append(info.last_changed_rev)
    client.svn_client_info(url, optrev, optrev, _info_cb, 0, self.ctx)
    return revisions[0]

  def _revinfo_raw(self, rev):
    # return a 4-tuple (date, author, message, changes)
    optrev = _rev2optrev(rev)
    revs = []

    def _log_cb(changed_paths, revision, author,
                datestr, message, pool, retval=revs):
      date = _datestr_to_date(datestr)
      action_map = { 'D' : vclib.DELETED,
                     'A' : vclib.ADDED,
                     'R' : vclib.REPLACED,
                     'M' : vclib.MODIFIED,
                     }
      paths = (changed_paths or {}).keys()
      paths.sort(lambda a, b: _compare_paths(a, b))
      changes = []
      found_readable = found_unreadable = 0
      for path in paths:
        pathtype = None
        change = changed_paths[path]
        action = action_map.get(change.action, vclib.MODIFIED)
        ### Wrong, diddily wrong wrong wrong.  Can you say,
        ### "Manufacturing data left and right because it hurts to
        ### figure out the right stuff?"
        if change.copyfrom_path and change.copyfrom_rev:
          is_copy = 1
          base_path = change.copyfrom_path
          base_rev = change.copyfrom_rev
        elif action == vclib.ADDED or action == vclib.REPLACED:
          is_copy = 0
          base_path = base_rev = None
        else:
          is_copy = 0
          base_path = path
          base_rev = revision - 1

        ### Check authz rules (we lie about the path type)
        parts = _path_parts(path)
        if vclib.check_path_access(self, parts, vclib.FILE, revision):
          # If the copyfrom source is unreadable, obscure it.
          if is_copy and base_path and (base_path != path):
            parts = _path_parts(base_path)
            if not vclib.check_path_access(self, parts, vclib.FILE, base_rev):
              is_copy = 0
              base_path = None
              base_rev = None
          changes.append(SVNChangedPath(path, revision, pathtype, base_path,
                                        base_rev, action, is_copy, 0, 0))
          found_readable = 1
        else:
          found_unreadable = 1

      if found_unreadable:
        message = None
      if not found_readable:
        author = None
        date = None
      revs.append([date, author, message, changes])

    client.svn_client_log([self.rootpath], optrev, optrev,
                          1, 0, _log_cb, self.ctx)
    return revs[0][0], revs[0][1], revs[0][2], revs[0][3]

  ##--- custom --##

  def get_youngest_revision(self):
    return self.youngest

  def get_location(self, path, rev, old_rev):
    try:
      results = ra.get_locations(self.ra_session, path, rev, [old_rev])
    except core.SubversionException, e:
      if e.apr_err == core.SVN_ERR_FS_NOT_FOUND:
        raise vclib.ItemNotFound(path)
      raise
    try:
      old_path = results[old_rev]
    except KeyError:
      raise vclib.ItemNotFound(path)

    return _cleanup_path(old_path)

  def created_rev(self, path, rev):
    # NOTE: We can't use svn_client_propget here because the
    # interfaces in that layer strip out the properties not meant for
    # human consumption (such as svn:entry:committed-rev, which we are
    # using here to get the created revision of PATH@REV).
    kind = ra.svn_ra_check_path(self.ra_session, path, rev)
    if kind == core.svn_node_none:
      raise vclib.ItemNotFound(_path_parts(path))
    elif kind == core.svn_node_dir:
      props = get_directory_props(self.ra_session, path, rev)
    elif kind == core.svn_node_file:
      fetched_rev, props = ra.svn_ra_get_file(self.ra_session, path, rev, None)
    return int(props.get(core.SVN_PROP_ENTRY_COMMITTED_REV,
                         SVN_INVALID_REVNUM))

  def last_rev(self, path, peg_revision, limit_revision=None):
    """Given PATH, known to exist in PEG_REVISION, find the youngest
    revision older than, or equal to, LIMIT_REVISION in which path
    exists.  Return that revision, and the path at which PATH exists in
    that revision."""

    # Here's the plan, man.  In the trivial case (where PEG_REVISION is
    # the same as LIMIT_REVISION), this is a no-brainer.  If
    # LIMIT_REVISION is older than PEG_REVISION, we can use Subversion's
    # history tracing code to find the right location.  If, however,
    # LIMIT_REVISION is younger than PEG_REVISION, we suffer from
    # Subversion's lack of forward history searching.  Our workaround,
    # ugly as it may be, involves a binary search through the revisions
    # between PEG_REVISION and LIMIT_REVISION to find our last live
    # revision.
    peg_revision = self._getrev(peg_revision)
    limit_revision = self._getrev(limit_revision)
    if peg_revision == limit_revision:
      return peg_revision, path
    elif peg_revision > limit_revision:
      path = self.get_location(path, peg_revision, limit_revision)
      return limit_revision, path
    else:
      direction = 1
      while peg_revision != limit_revision:
        mid = (peg_revision + 1 + limit_revision) / 2
        try:
          path = self.get_location(path, peg_revision, mid)
        except vclib.ItemNotFound:
          limit_revision = mid - 1
        else:
          peg_revision = mid
      return peg_revision, path
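
# A minimal, self-contained sketch of the binary-search workaround used
# by last_rev() above.  The exists_at(rev) callable is a hypothetical
# stand-in for the get_location() probe and is not part of this module.
def _last_live_rev_sketch(exists_at, peg_rev, limit_rev):
  # Narrow (peg_rev, limit_rev] until we find the youngest revision, no
  # greater than limit_rev, in which the item still exists.
  while peg_rev != limit_rev:
    mid = (peg_rev + 1 + limit_rev) // 2
    if exists_at(mid):
      peg_rev = mid          # still alive at MID; search the younger half
    else:
      limit_rev = mid - 1    # already gone by MID; search the older half
  return peg_rev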

@@ -0,0 +1,778 @@
# -*-python-*-
#
# Copyright (C) 1999-2008 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------

"Version Control lib driver for locally accessible Subversion repositories"

import vclib
import os
import os.path
import sys
import stat
import string
import cStringIO
import signal
import shutil
import time
import tempfile
import popen
import re
from svn import fs, repos, core, client, delta


### Require Subversion 1.3.1 or better.
if (core.SVN_VER_MAJOR, core.SVN_VER_MINOR, core.SVN_VER_PATCH) < (1, 3, 1):
  raise Exception, "Version requirement not met (needs 1.3.1 or better)"


def _allow_all(root, path, pool):
  """Generic authz_read_func that permits access to all paths"""
  return 1


def _path_parts(path):
  return filter(None, string.split(path, '/'))


def _cleanup_path(path):
  """Return a cleaned-up Subversion filesystem path"""
  return string.join(_path_parts(path), '/')


def _fs_path_join(base, relative):
  return _cleanup_path(base + '/' + relative)


def _compare_paths(path1, path2):
  path1_len = len(path1)
  path2_len = len(path2)
  min_len = min(path1_len, path2_len)
  i = 0

  # Are the paths exactly the same?
  if path1 == path2:
    return 0

  # Skip past common prefix
  while (i < min_len) and (path1[i] == path2[i]):
    i = i + 1

  # Children of paths are greater than their parents, but less than
  # greater siblings of their parents
  char1 = '\0'
  char2 = '\0'
  if (i < path1_len):
    char1 = path1[i]
  if (i < path2_len):
    char2 = path2[i]

  if (char1 == '/') and (i == path2_len):
    return 1
  if (char2 == '/') and (i == path1_len):
    return -1
  if (i < path1_len) and (char1 == '/'):
    return -1
  if (i < path2_len) and (char2 == '/'):
    return 1

  # Common prefix was skipped above, next character is compared to
  # determine order
  return cmp(char1, char2)
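
# Illustrative only: sorting paths by their component tuples yields the
# same order _compare_paths() defines, because a parent's component tuple
# is a proper prefix of its children's (so children land directly after
# their parent, ahead of siblings like 'trunk-2').  _svn_path_key is a
# hypothetical helper, not part of this module.
def _svn_path_key(path):
  return tuple(path.split('/'))

# e.g. sorted(['trunk/zeta', 'trunk-2', 'trunk', 'trunk/alpha'],
#             key=_svn_path_key) keeps 'trunk' and its children together.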


def _rev2optrev(rev):
  assert type(rev) is int
  rt = core.svn_opt_revision_t()
  rt.kind = core.svn_opt_revision_number
  rt.value.number = rev
  return rt


def _rootpath2url(rootpath, path):
  rootpath = os.path.abspath(rootpath)
  if rootpath and rootpath[0] != '/':
    rootpath = '/' + rootpath
  if os.sep != '/':
    rootpath = string.replace(rootpath, os.sep, '/')
  return 'file://' + string.join([rootpath, path], "/")
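
# A hedged sketch of the file:// URL construction above, with SEP standing
# in for os.sep so the Windows branch can be exercised on any platform.
# The abspath() normalization is omitted; _sketch_rootpath2url is a
# hypothetical helper, not part of this module.
def _sketch_rootpath2url(rootpath, path, sep='/'):
  if rootpath and rootpath[0] != '/':
    rootpath = '/' + rootpath        # e.g. Windows drive-letter paths
  if sep != '/':
    rootpath = rootpath.replace(sep, '/')
  return 'file://' + '/'.join([rootpath, path])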


def _datestr_to_date(datestr):
  try:
    return core.svn_time_from_cstring(datestr) / 1000000
  except:
    return None


class Revision(vclib.Revision):
  "Hold state for each revision's log entry."
  def __init__(self, rev, date, author, msg, size, lockinfo,
               filename, copy_path, copy_rev):
    vclib.Revision.__init__(self, rev, str(rev), date, author, None,
                            msg, size, lockinfo)
    self.filename = filename
    self.copy_path = copy_path
    self.copy_rev = copy_rev


class NodeHistory:
  """An iterable object that returns 2-tuples of (revision, path)
  locations along a node's change history, ordered from youngest to
  oldest."""

  def __init__(self, fs_ptr, show_all_logs, limit=0):
    self.histories = []
    self.fs_ptr = fs_ptr
    self.show_all_logs = show_all_logs
    self.oldest_rev = None
    self.limit = limit

  def add_history(self, path, revision, pool):
    # If filtering, only add the path and revision to the histories
    # list if they were actually changed in this revision (where
    # change means the path itself was changed, or one of its parents
    # was copied).  This is useful for omitting bubble-up directory
    # changes.
    if not self.oldest_rev:
      self.oldest_rev = revision
    else:
      assert(revision < self.oldest_rev)

    if not self.show_all_logs:
      rev_root = fs.revision_root(self.fs_ptr, revision)
      changed_paths = fs.paths_changed(rev_root)
      paths = changed_paths.keys()
      if path not in paths:
        # Look for a copied parent
        test_path = path
        found = 0
        while 1:
          off = string.rfind(test_path, '/')
          if off < 0:
            break
          test_path = test_path[0:off]
          if test_path in paths:
            copyfrom_rev, copyfrom_path = fs.copied_from(rev_root, test_path)
            if copyfrom_rev >= 0 and copyfrom_path:
              found = 1
              break
        if not found:
          return
    self.histories.append([revision, _cleanup_path(path)])
    if self.limit and len(self.histories) == self.limit:
      raise core.SubversionException("", core.SVN_ERR_CEASE_INVOCATION)

  def __getitem__(self, idx):
    return self.histories[idx]


def _get_history(svnrepos, path, rev, path_type, limit=0, options={}):
  rev_paths = []
  fsroot = svnrepos._getroot(rev)
  show_all_logs = options.get('svn_show_all_dir_logs', 0)
  if not show_all_logs:
    # See if the path is a file or directory.
    kind = fs.check_path(fsroot, path)
    if kind is core.svn_node_file:
      show_all_logs = 1

  # Instantiate a NodeHistory collector object, and use it to collect
  # history items for PATH@REV.
  history = NodeHistory(svnrepos.fs_ptr, show_all_logs, limit)
  try:
    repos.svn_repos_history(svnrepos.fs_ptr, path, history.add_history,
                            1, rev, options.get('svn_cross_copies', 0))
  except core.SubversionException, e:
    if e.apr_err != core.SVN_ERR_CEASE_INVOCATION:
      raise

  # Now, iterate over those history items, checking for changes of
  # location, pruning as necessitated by authz rules.
  for hist_rev, hist_path in history:
    path_parts = _path_parts(hist_path)
    if not vclib.check_path_access(svnrepos, path_parts, path_type, hist_rev):
      break
    rev_paths.append([hist_rev, hist_path])
  return rev_paths


def _log_helper(svnrepos, path, rev, lockinfo):
  rev_root = fs.revision_root(svnrepos.fs_ptr, rev)

  # Was this path@rev the target of a copy?
  copyfrom_rev, copyfrom_path = fs.copied_from(rev_root, path)

  # Assemble our LogEntry
  date, author, msg, changes = svnrepos.revinfo(rev)
  if fs.is_file(rev_root, path):
    size = fs.file_length(rev_root, path)
  else:
    size = None
  entry = Revision(rev, date, author, msg, size, lockinfo, path,
                   copyfrom_path and _cleanup_path(copyfrom_path),
                   copyfrom_rev)
  return entry


def _get_last_history_rev(fsroot, path):
  history = fs.node_history(fsroot, path)
  history = fs.history_prev(history, 0)
  history_path, history_rev = fs.history_location(history)
  return history_rev


def temp_checkout(svnrepos, path, rev):
  """Check out file revision to temporary file"""
  temp = tempfile.mktemp()
  fp = open(temp, 'wb')
  try:
    root = svnrepos._getroot(rev)
    stream = fs.file_contents(root, path)
    try:
      while 1:
        chunk = core.svn_stream_read(stream, core.SVN_STREAM_CHUNK_SIZE)
        if not chunk:
          break
        fp.write(chunk)
    finally:
      core.svn_stream_close(stream)
  finally:
    fp.close()
  return temp


class FileContentsPipe:
  def __init__(self, root, path):
    self._stream = fs.file_contents(root, path)
    self._eof = 0

  def read(self, len=None):
    chunk = None
    if not self._eof:
      if len is None:
        buffer = cStringIO.StringIO()
        try:
          while 1:
            hunk = core.svn_stream_read(self._stream, 8192)
            if not hunk:
              break
            buffer.write(hunk)
          chunk = buffer.getvalue()
        finally:
          buffer.close()
      else:
        chunk = core.svn_stream_read(self._stream, len)
    if not chunk:
      self._eof = 1
    return chunk

  def readline(self):
    chunk = None
    if not self._eof:
      chunk, self._eof = core.svn_stream_readline(self._stream, '\n')
      if not self._eof:
        chunk = chunk + '\n'
    if not chunk:
      self._eof = 1
    return chunk

  def readlines(self):
    lines = []
    while True:
      line = self.readline()
      if not line:
        break
      lines.append(line)
    return lines

  def close(self):
    return core.svn_stream_close(self._stream)

  def eof(self):
    return self._eof


class BlameSource:
  def __init__(self, local_url, rev, first_rev):
    self.idx = -1
    self.first_rev = first_rev
    self.blame_data = []

    ctx = client.ctx_t()
    core.svn_config_ensure(None)
    ctx.config = core.svn_config_get_config(None)
    ctx.auth_baton = core.svn_auth_open([])
    try:
      ### TODO: Is this use of FIRST_REV always what we want?  Should we
      ### pass 1 here instead and do filtering later?
      client.blame2(local_url, _rev2optrev(rev), _rev2optrev(first_rev),
                    _rev2optrev(rev), self._blame_cb, ctx)
    except core.SubversionException, e:
      if e.apr_err == core.SVN_ERR_CLIENT_IS_BINARY_FILE:
        raise vclib.NonTextualFileContents
      raise

  def _blame_cb(self, line_no, rev, author, date, text, pool):
    prev_rev = None
    if rev > self.first_rev:
      prev_rev = rev - 1
    self.blame_data.append(vclib.Annotation(text, line_no + 1, rev,
                                            prev_rev, author, None))

  def __getitem__(self, idx):
    if idx != self.idx + 1:
      raise BlameSequencingError()
    self.idx = idx
    return self.blame_data[idx]


class BlameSequencingError(Exception):
  pass


class SVNChangedPath(vclib.ChangedPath):
  """Wrapper around vclib.ChangedPath which handles path splitting."""

  def __init__(self, path, rev, pathtype, base_path, base_rev,
               action, copied, text_changed, props_changed):
    path_parts = _path_parts(path or '')
    base_path_parts = _path_parts(base_path or '')
    vclib.ChangedPath.__init__(self, path_parts, rev, pathtype,
                               base_path_parts, base_rev, action,
                               copied, text_changed, props_changed)


class LocalSubversionRepository(vclib.Repository):
  def __init__(self, name, rootpath, authorizer, utilities, config_dir):
    if not (os.path.isdir(rootpath) \
            and os.path.isfile(os.path.join(rootpath, 'format'))):
      raise vclib.ReposNotFound(name)

    # Initialize some stuff.
    self.rootpath = rootpath
    self.name = name
    self.auth = authorizer
    self.svn_client_path = utilities.svn or 'svn'
    self.diff_cmd = utilities.diff or 'diff'
    self.config_dir = config_dir

    # See if this repository is even viewable, authz-wise.
    if not vclib.check_root_access(self):
      raise vclib.ReposNotFound(name)

  def open(self):
    # Register a handler for SIGTERM so we can have a chance to
    # cleanup.  If ViewVC takes too long to start generating CGI
    # output, Apache will grow impatient and SIGTERM it.  While we
    # don't mind getting told to bail, we want to gracefully close the
    # repository before we bail.
    def _sigterm_handler(signum, frame, self=self):
      sys.exit(-1)
    try:
      signal.signal(signal.SIGTERM, _sigterm_handler)
    except ValueError:
      # This is probably "ValueError: signal only works in main
      # thread", which will get thrown by the likes of mod_python
      # when trying to install a signal handler from a thread that
      # isn't the main one.  We'll just not care.
      pass

    # Open the repository and init some other variables.
    self.repos = repos.svn_repos_open(self.rootpath)
    self.fs_ptr = repos.svn_repos_fs(self.repos)
    self.youngest = fs.youngest_rev(self.fs_ptr)
    self._fsroots = {}
    self._revinfo_cache = {}

  def rootname(self):
    return self.name

  def rootpath(self):
    return self.rootpath

  def roottype(self):
    return vclib.SVN

  def authorizer(self):
    return self.auth

  def itemtype(self, path_parts, rev):
    rev = self._getrev(rev)
    basepath = self._getpath(path_parts)
    kind = fs.check_path(self._getroot(rev), basepath)
    pathtype = None
    if kind == core.svn_node_dir:
      pathtype = vclib.DIR
    elif kind == core.svn_node_file:
      pathtype = vclib.FILE
    else:
      raise vclib.ItemNotFound(path_parts)
    if not vclib.check_path_access(self, path_parts, pathtype, rev):
      raise vclib.ItemNotFound(path_parts)
    return pathtype

  def openfile(self, path_parts, rev):
    path = self._getpath(path_parts)
    if self.itemtype(path_parts, rev) != vclib.FILE:  # does auth-check
      raise vclib.Error("Path '%s' is not a file." % path)
    rev = self._getrev(rev)
    fsroot = self._getroot(rev)
    revision = str(_get_last_history_rev(fsroot, path))
    fp = FileContentsPipe(fsroot, path)
    return fp, revision

  def listdir(self, path_parts, rev, options):
    path = self._getpath(path_parts)
    if self.itemtype(path_parts, rev) != vclib.DIR:  # does auth-check
      raise vclib.Error("Path '%s' is not a directory." % path)
    rev = self._getrev(rev)
    fsroot = self._getroot(rev)
    dirents = fs.dir_entries(fsroot, path)
    entries = [ ]
    for entry in dirents.values():
      if entry.kind == core.svn_node_dir:
        kind = vclib.DIR
      elif entry.kind == core.svn_node_file:
        kind = vclib.FILE
      if vclib.check_path_access(self, path_parts + [entry.name], kind, rev):
        entries.append(vclib.DirEntry(entry.name, kind))
    return entries

  def dirlogs(self, path_parts, rev, entries, options):
    path = self._getpath(path_parts)
    if self.itemtype(path_parts, rev) != vclib.DIR:  # does auth-check
      raise vclib.Error("Path '%s' is not a directory." % path)
    fsroot = self._getroot(self._getrev(rev))
    rev = self._getrev(rev)
    for entry in entries:
      entry_path_parts = path_parts + [entry.name]
      if not vclib.check_path_access(self, entry_path_parts, entry.kind, rev):
        continue
      path = self._getpath(entry_path_parts)
      entry_rev = _get_last_history_rev(fsroot, path)
      date, author, msg, changes = self.revinfo(entry_rev)
      entry.rev = str(entry_rev)
      entry.date = date
      entry.author = author
      entry.log = msg
      if entry.kind == vclib.FILE:
        entry.size = fs.file_length(fsroot, path)
      lock = fs.get_lock(self.fs_ptr, path)
      entry.lockinfo = lock and lock.owner or None

  def itemlog(self, path_parts, rev, sortby, first, limit, options):
    """see vclib.Repository.itemlog docstring

    Option values recognized by this implementation

      svn_show_all_dir_logs
        boolean, default false.  if set for a directory path, will include
        revisions where files underneath the directory have changed

      svn_cross_copies
        boolean, default false.  if set for a path created by a copy, will
        include revisions from before the copy

      svn_latest_log
        boolean, default false.  if set will return only newest single log
        entry
    """
    assert sortby == vclib.SORTBY_DEFAULT or sortby == vclib.SORTBY_REV

    path = self._getpath(path_parts)
    path_type = self.itemtype(path_parts, rev)  # does auth-check
    rev = self._getrev(rev)
    revs = []
    lockinfo = None

    # See if this path is locked.
    try:
      lock = fs.get_lock(self.fs_ptr, path)
      if lock:
        lockinfo = lock.owner
    except NameError:
      pass

    # If our caller only wants the latest log, we'll invoke
    # _log_helper for just the one revision.  Otherwise, we go off
    # into history-fetching mode.  ### TODO: we could stand to have a
    # 'limit' parameter here as numeric cut-off for the depth of our
    # history search.
    if options.get('svn_latest_log', 0):
      revision = _log_helper(self, path, rev, lockinfo)
      if revision:
        revision.prev = None
        revs.append(revision)
    else:
      history = _get_history(self, path, rev, path_type,
                             first + limit, options)
      if len(history) < first:
        history = []
      if limit:
        history = history[first:first+limit]

      for hist_rev, hist_path in history:
        revision = _log_helper(self, hist_path, hist_rev, lockinfo)
        if revision:
          # If we have unreadable copyfrom data, obscure it.
          if revision.copy_path is not None:
            cp_parts = _path_parts(revision.copy_path)
            if not vclib.check_path_access(self, cp_parts, path_type,
                                           revision.copy_rev):
              revision.copy_path = revision.copy_rev = None
          revision.prev = None
          if len(revs):
            revs[-1].prev = revision
          revs.append(revision)
    return revs

  def itemprops(self, path_parts, rev):
    path = self._getpath(path_parts)
    path_type = self.itemtype(path_parts, rev)  # does auth-check
    rev = self._getrev(rev)
    fsroot = self._getroot(rev)
    return fs.node_proplist(fsroot, path)

  def annotate(self, path_parts, rev):
    path = self._getpath(path_parts)
    path_type = self.itemtype(path_parts, rev)  # does auth-check
    if path_type != vclib.FILE:
      raise vclib.Error("Path '%s' is not a file." % path)
    rev = self._getrev(rev)
    fsroot = self._getroot(rev)
    history = _get_history(self, path, rev, path_type, 0,
                           {'svn_cross_copies': 1})
    youngest_rev, youngest_path = history[0]
    oldest_rev, oldest_path = history[-1]
    source = BlameSource(_rootpath2url(self.rootpath, path),
                         youngest_rev, oldest_rev)
    return source, youngest_rev

def _revinfo_raw(self, rev):
|
||||
fsroot = self._getroot(rev)
|
||||
|
||||
# Get the changes for the revision
|
||||
editor = repos.ChangeCollector(self.fs_ptr, fsroot)
|
||||
e_ptr, e_baton = delta.make_editor(editor)
|
||||
repos.svn_repos_replay(fsroot, e_ptr, e_baton)
|
||||
changes = editor.get_changes()
|
||||
changedpaths = {}
|
||||
|
||||
# Now get the revision property info. Would use
|
||||
# editor.get_root_props(), but something is broken there...
|
||||
revprops = fs.revision_proplist(self.fs_ptr, rev)
|
||||
msg = revprops.get(core.SVN_PROP_REVISION_LOG)
|
||||
author = revprops.get(core.SVN_PROP_REVISION_AUTHOR)
|
||||
datestr = revprops.get(core.SVN_PROP_REVISION_DATE)
|
||||
|
||||
# Copy the Subversion changes into a new hash, converting them into
|
||||
# ChangedPath objects.
|
||||
found_readable = found_unreadable = 0
|
||||
for path in changes.keys():
|
||||
change = changes[path]
|
||||
if change.path:
|
||||
change.path = _cleanup_path(change.path)
|
||||
if change.base_path:
|
||||
change.base_path = _cleanup_path(change.base_path)
|
||||
is_copy = 0
|
||||
if not hasattr(change, 'action'): # new to subversion 1.4.0
|
||||
action = vclib.MODIFIED
|
||||
if not change.path:
|
||||
action = vclib.DELETED
|
||||
elif change.added:
|
||||
action = vclib.ADDED
|
||||
replace_check_path = path
|
||||
if change.base_path and change.base_rev:
|
||||
replace_check_path = change.base_path
|
||||
if changedpaths.has_key(replace_check_path) \
|
||||
and changedpaths[replace_check_path].action == vclib.DELETED:
|
||||
action = vclib.REPLACED
|
||||
else:
|
||||
if change.action == repos.CHANGE_ACTION_ADD:
|
||||
action = vclib.ADDED
|
||||
elif change.action == repos.CHANGE_ACTION_DELETE:
|
||||
action = vclib.DELETED
|
||||
elif change.action == repos.CHANGE_ACTION_REPLACE:
|
||||
action = vclib.REPLACED
|
||||
else:
|
||||
action = vclib.MODIFIED
|
||||
if (action == vclib.ADDED or action == vclib.REPLACED) \
|
||||
and change.base_path \
|
||||
and change.base_rev:
|
||||
is_copy = 1
|
||||
if change.item_kind == core.svn_node_dir:
|
||||
pathtype = vclib.DIR
|
||||
elif change.item_kind == core.svn_node_file:
|
||||
pathtype = vclib.FILE
|
||||
else:
|
||||
pathtype = None
|
||||
|
||||
parts = _path_parts(path)
|
||||
if vclib.check_path_access(self, parts, pathtype, rev):
|
||||
if is_copy and change.base_path and (change.base_path != path):
|
||||
parts = _path_parts(change.base_path)
|
||||
if not vclib.check_path_access(self, parts, pathtype, change.base_rev):
|
||||
is_copy = 0
|
||||
change.base_path = None
|
||||
change.base_rev = None
|
||||
changedpaths[path] = SVNChangedPath(path, rev, pathtype,
|
||||
change.base_path,
|
||||
change.base_rev, action,
|
||||
is_copy, change.text_changed,
|
||||
change.prop_changes)
|
||||
found_readable = 1
|
||||
else:
|
||||
found_unreadable = 1
|
||||
|
||||
# Return our tuple, auth-filtered: date, author, msg, changes
|
||||
if found_unreadable:
|
||||
msg = None
|
||||
if not found_readable:
|
||||
author = None
|
||||
datestr = None
|
||||
|
||||
date = _datestr_to_date(datestr)
|
||||
return date, author, msg, changedpaths.values()
|
||||
|
||||
def revinfo(self, rev):
|
||||
rev = self._getrev(rev)
|
||||
cached_info = self._revinfo_cache.get(rev)
|
||||
if not cached_info:
|
||||
cached_info = self._revinfo_raw(rev)
|
||||
self._revinfo_cache[rev] = cached_info
|
||||
return cached_info[0], cached_info[1], cached_info[2], cached_info[3]
|
||||
|
||||
def rawdiff(self, path_parts1, rev1, path_parts2, rev2, type, options={}):
|
||||
p1 = self._getpath(path_parts1)
|
||||
p2 = self._getpath(path_parts2)
|
||||
r1 = self._getrev(rev1)
|
||||
r2 = self._getrev(rev2)
|
||||
if not vclib.check_path_access(self, path_parts1, vclib.FILE, rev1):
|
||||
raise vclib.ItemNotFound(path_parts1)
|
||||
if not vclib.check_path_access(self, path_parts2, vclib.FILE, rev2):
|
||||
raise vclib.ItemNotFound(path_parts2)
|
||||
|
||||
args = vclib._diff_args(type, options)
|
||||
|
||||
def _date_from_rev(rev):
|
||||
date, author, msg, changes = self.revinfo(rev)
|
||||
return date
|
||||
|
||||
try:
|
||||
temp1 = temp_checkout(self, p1, r1)
|
||||
temp2 = temp_checkout(self, p2, r2)
|
||||
info1 = p1, _date_from_rev(r1), r1
|
||||
info2 = p2, _date_from_rev(r2), r2
|
||||
return vclib._diff_fp(temp1, temp2, info1, info2, self.diff_cmd, args)
|
||||
except core.SubversionException, e:
|
||||
if e.apr_err == core.SVN_ERR_FS_NOT_FOUND:
|
||||
raise vclib.InvalidRevision
|
||||
raise
|
||||
|
||||
def isexecutable(self, path_parts, rev):
|
||||
props = self.itemprops(path_parts, rev) # does authz-check
|
||||
return props.has_key(core.SVN_PROP_EXECUTABLE)
|
||||
|
||||
def _getpath(self, path_parts):
|
||||
return string.join(path_parts, '/')
|
||||
|
||||
def _getrev(self, rev):
|
||||
if rev is None or rev == 'HEAD':
|
||||
return self.youngest
|
||||
try:
|
||||
rev = int(rev)
|
||||
except ValueError:
|
||||
raise vclib.InvalidRevision(rev)
|
||||
if (rev < 0) or (rev > self.youngest):
|
||||
raise vclib.InvalidRevision(rev)
|
||||
return rev
|
||||
|
||||
def _getroot(self, rev):
|
||||
try:
|
||||
return self._fsroots[rev]
|
||||
except KeyError:
|
||||
r = self._fsroots[rev] = fs.revision_root(self.fs_ptr, rev)
|
||||
return r
|
||||
|
||||
##--- custom --##
|
||||
|
||||
def get_youngest_revision(self):
|
||||
return self.youngest
|
||||
|
||||
def get_location(self, path, rev, old_rev):
|
||||
try:
|
||||
results = repos.svn_repos_trace_node_locations(self.fs_ptr, path,
|
||||
rev, [old_rev], _allow_all)
|
||||
except core.SubversionException, e:
|
||||
if e.apr_err == core.SVN_ERR_FS_NOT_FOUND:
|
||||
raise vclib.ItemNotFound(path)
|
||||
raise
|
||||
try:
|
||||
old_path = results[old_rev]
|
||||
except KeyError:
|
||||
raise vclib.ItemNotFound(path)
|
||||
|
||||
return _cleanup_path(old_path)
|
||||
|
||||
def created_rev(self, full_name, rev):
|
||||
return fs.node_created_rev(self._getroot(rev), full_name)
|
||||
|
||||
def last_rev(self, path, peg_revision, limit_revision=None):
|
||||
"""Given PATH, known to exist in PEG_REVISION, find the youngest
|
||||
revision older than, or equal to, LIMIT_REVISION in which path
|
||||
exists. Return that revision, and the path at which PATH exists in
|
||||
that revision."""
|
||||
|
||||
# Here's the plan, man. In the trivial case (where PEG_REVISION is
|
||||
# the same as LIMIT_REVISION), this is a no-brainer. If
|
||||
# LIMIT_REVISION is older than PEG_REVISION, we can use Subversion's
|
||||
# history tracing code to find the right location. If, however,
|
||||
# LIMIT_REVISION is younger than PEG_REVISION, we suffer from
|
||||
# Subversion's lack of forward history searching. Our workaround,
|
||||
# ugly as it may be, involves a binary search through the revisions
|
||||
# between PEG_REVISION and LIMIT_REVISION to find our last live
|
||||
# revision.
|
||||
peg_revision = self._getrev(peg_revision)
|
||||
limit_revision = self._getrev(limit_revision)
|
||||
try:
|
||||
if peg_revision == limit_revision:
|
||||
return peg_revision, path
|
||||
elif peg_revision > limit_revision:
|
||||
fsroot = self._getroot(peg_revision)
|
||||
history = fs.node_history(fsroot, path)
|
||||
while history:
|
||||
path, peg_revision = fs.history_location(history)
|
||||
if peg_revision <= limit_revision:
|
||||
return max(peg_revision, limit_revision), _cleanup_path(path)
|
||||
history = fs.history_prev(history, 1)
|
||||
return peg_revision, _cleanup_path(path)
|
||||
else:
|
||||
orig_id = fs.node_id(self._getroot(peg_revision), path)
|
||||
while peg_revision != limit_revision:
|
||||
mid = (peg_revision + 1 + limit_revision) / 2
|
||||
try:
|
||||
mid_id = fs.node_id(self._getroot(mid), path)
|
||||
except core.SubversionException, e:
|
||||
if e.apr_err == core.SVN_ERR_FS_NOT_FOUND:
|
||||
cmp = -1
|
||||
else:
|
||||
raise
|
||||
else:
|
||||
### Not quite right. Need a comparison function that only returns
|
||||
### true when the two nodes are the same copy, not just related.
|
||||
cmp = fs.compare_ids(orig_id, mid_id)
|
||||
|
||||
if cmp in (0, 1):
|
||||
peg_revision = mid
|
||||
else:
|
||||
limit_revision = mid - 1
|
||||
|
||||
return peg_revision, path
|
||||
finally:
|
||||
pass
|
|
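The forward-search workaround that the `last_rev` comment describes — a binary search between PEG_REVISION and LIMIT_REVISION for the youngest revision in which the node is still the same copy — can be sketched in isolation. This is an illustrative Python 3 sketch with a hypothetical `same_node` predicate standing in for the `fs.node_id`/`fs.compare_ids` check; it is not part of the ViewVC source:

```python
def last_live_rev(peg, limit, same_node):
    # Binary search mirroring last_rev's else-branch: same_node(rev) is
    # assumed monotone (true for a prefix of [peg, limit], false after
    # the node is replaced), so we can halve the range each step.
    while peg != limit:
        mid = (peg + 1 + limit) // 2
        if same_node(mid):
            peg = mid          # node survives at mid; search younger half
        else:
            limit = mid - 1    # node gone by mid; search older half
    return peg

# Hypothetical history: the node keeps its identity through r7,
# then is replaced in r8.
assert last_live_rev(5, 10, lambda r: r <= 7) == 7
```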
@ -0,0 +1,235 @@
# -*-python-*-
#
# Copyright (C) 1999-2007 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# Utilities for controlling processes and pipes on win32
#
# -----------------------------------------------------------------------

import os, sys, traceback, string, thread
try:
  import win32api
except ImportError, e:
  raise ImportError, str(e) + """

Did you install the Python for Windows Extensions?

    http://sourceforge.net/projects/pywin32/
"""

import win32process, win32pipe, win32con
import win32event, win32file, winerror
import pywintypes, msvcrt

# Buffer size for spooling
SPOOL_BYTES = 4096

# File object to write error messages
SPOOL_ERROR = sys.stderr
#SPOOL_ERROR = open("m:/temp/error.txt", "wt")

def CommandLine(command, args):
  """Convert an executable path and a sequence of arguments into a command
  line that can be passed to CreateProcess"""

  cmd = "\"" + string.replace(command, "\"", "\"\"") + "\""
  for arg in args:
    cmd = cmd + " \"" + string.replace(arg, "\"", "\"\"") + "\""
  return cmd

def CreateProcess(cmd, hStdInput, hStdOutput, hStdError):
  """Creates a new process which uses the specified handles for its standard
  input, output, and error. The handles must be inheritable. 0 can be passed
  as a special handle indicating that the process should inherit the current
  process's input, output, or error streams, and None can be passed to
  discard the child process's output or to prevent it from reading any
  input."""

  # initialize new process's startup info
  si = win32process.STARTUPINFO()
  si.dwFlags = win32process.STARTF_USESTDHANDLES

  if hStdInput == 0:
    si.hStdInput = win32api.GetStdHandle(win32api.STD_INPUT_HANDLE)
  else:
    si.hStdInput = hStdInput

  if hStdOutput == 0:
    si.hStdOutput = win32api.GetStdHandle(win32api.STD_OUTPUT_HANDLE)
  else:
    si.hStdOutput = hStdOutput

  if hStdError == 0:
    si.hStdError = win32api.GetStdHandle(win32api.STD_ERROR_HANDLE)
  else:
    si.hStdError = hStdError

  # create the process
  phandle, pid, thandle, tid = win32process.CreateProcess \
    ( None,                            # appName
      cmd,                             # commandLine
      None,                            # processAttributes
      None,                            # threadAttributes
      1,                               # bInheritHandles
      win32con.NORMAL_PRIORITY_CLASS,  # dwCreationFlags
      None,                            # newEnvironment
      None,                            # currentDirectory
      si                               # startupinfo
    )

  if hStdInput and hasattr(hStdInput, 'Close'):
    hStdInput.Close()

  if hStdOutput and hasattr(hStdOutput, 'Close'):
    hStdOutput.Close()

  if hStdError and hasattr(hStdError, 'Close'):
    hStdError.Close()

  return phandle, pid, thandle, tid

def CreatePipe(readInheritable, writeInheritable):
  """Create a new pipe specifying whether the read and write ends are
  inheritable and whether they should be created for blocking or
  nonblocking I/O."""

  r, w = win32pipe.CreatePipe(None, SPOOL_BYTES)
  if readInheritable:
    r = MakeInheritedHandle(r)
  if writeInheritable:
    w = MakeInheritedHandle(w)
  return r, w

def File2FileObject(pipe, mode):
  """Make a C stdio file object out of a win32 file handle"""
  if string.find(mode, 'r') >= 0:
    wmode = os.O_RDONLY
  elif string.find(mode, 'w') >= 0:
    wmode = os.O_WRONLY
  if string.find(mode, 'b') >= 0:
    wmode = wmode | os.O_BINARY
  if string.find(mode, 't') >= 0:
    wmode = wmode | os.O_TEXT
  return os.fdopen(msvcrt.open_osfhandle(pipe.Detach(), wmode), mode)

def FileObject2File(fileObject):
  """Get the win32 file handle from a C stdio file object"""
  return win32file._get_osfhandle(fileObject.fileno())

def DuplicateHandle(handle):
  """Duplicates a win32 handle."""
  proc = win32api.GetCurrentProcess()
  return win32api.DuplicateHandle(proc, handle, proc, 0, 0,
                                  win32con.DUPLICATE_SAME_ACCESS)

def MakePrivateHandle(handle, replace=1):
  """Turn an inherited handle into a non-inherited one. This avoids the
  handle duplication that occurs on CreateProcess calls which can create
  uncloseable pipes."""

  ### Could change implementation to use SetHandleInformation()...

  flags = win32con.DUPLICATE_SAME_ACCESS
  proc = win32api.GetCurrentProcess()
  if replace:
    flags = flags | win32con.DUPLICATE_CLOSE_SOURCE
  newhandle = win32api.DuplicateHandle(proc, handle, proc, 0, 0, flags)
  if replace:
    handle.Detach()  # handle was already deleted by the last call
  return newhandle

def MakeInheritedHandle(handle, replace=1):
  """Turn a private handle into an inherited one."""

  ### Could change implementation to use SetHandleInformation()...

  flags = win32con.DUPLICATE_SAME_ACCESS
  proc = win32api.GetCurrentProcess()
  if replace:
    flags = flags | win32con.DUPLICATE_CLOSE_SOURCE
  newhandle = win32api.DuplicateHandle(proc, handle, proc, 0, 1, flags)
  if replace:
    handle.Detach()  # handle was deleted by the last call
  return newhandle

def MakeSpyPipe(readInheritable, writeInheritable, outFiles=None,
                doneEvent=None):
  """Return read and write handles to a pipe that asynchronously writes all
  of its input to the files in the outFiles sequence. doneEvent can be None,
  or a win32 event handle that will be set when the write end of the pipe is
  closed."""

  if outFiles is None:
    return CreatePipe(readInheritable, writeInheritable)

  r, writeHandle = CreatePipe(0, writeInheritable)
  if readInheritable is None:
    readHandle, w = None, None
  else:
    readHandle, w = CreatePipe(readInheritable, 0)

  thread.start_new_thread(SpoolWorker, (r, w, outFiles, doneEvent))

  return readHandle, writeHandle

def SpoolWorker(srcHandle, destHandle, outFiles, doneEvent):
  """Thread entry point for implementation of MakeSpyPipe"""
  try:
    buffer = win32file.AllocateReadBuffer(SPOOL_BYTES)

    while 1:
      try:
        #print >> SPOOL_ERROR, "Calling ReadFile..."; SPOOL_ERROR.flush()
        hr, data = win32file.ReadFile(srcHandle, buffer)
        #print >> SPOOL_ERROR, "ReadFile returned '%s', '%s'" % (str(hr), str(data)); SPOOL_ERROR.flush()
        if hr != 0:
          raise "win32file.ReadFile returned %i, '%s'" % (hr, data)
        elif len(data) == 0:
          break
      except pywintypes.error, e:
        #print >> SPOOL_ERROR, "ReadFile threw '%s'" % str(e); SPOOL_ERROR.flush()
        if e.args[0] == winerror.ERROR_BROKEN_PIPE:
          break
        else:
          raise e

      for f in outFiles:
        f.write(data)

      if destHandle:
        hr, bytes = win32file.WriteFile(destHandle, data)
        if hr != 0 or bytes != len(data):
          raise "win32file.WriteFile() passed %i bytes and returned %i, %i" \
                % (len(data), hr, bytes)

    srcHandle.Close()

    if doneEvent:
      win32event.SetEvent(doneEvent)

    if destHandle:
      destHandle.Close()

  except:
    info = sys.exc_info()
    SPOOL_ERROR.writelines(apply(traceback.format_exception, info), '')
    SPOOL_ERROR.flush()
    del info

def NullFile(inheritable):
  """Create a null file handle."""
  if inheritable:
    sa = pywintypes.SECURITY_ATTRIBUTES()
    sa.bInheritHandle = 1
  else:
    sa = None

  return win32file.CreateFile("nul",
                              win32file.GENERIC_READ
                              | win32file.GENERIC_WRITE,
                              win32file.FILE_SHARE_READ
                              | win32file.FILE_SHARE_WRITE,
                              sa, win32file.OPEN_EXISTING, 0, None)
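The quoting rule that `CommandLine` implements — wrap each token in double quotes and double any embedded double-quote characters — can be exercised without the win32 imports. This is an illustrative, portable Python 3 re-implementation (the `command_line` name is mine, not part of the ViewVC source):

```python
def command_line(command, args):
    # Same rule as win32popen's CommandLine: quote every token for
    # CreateProcess, doubling embedded double quotes.
    def quote(s):
        return '"' + s.replace('"', '""') + '"'
    return ' '.join([quote(command)] + [quote(a) for a in args])

print(command_line('diff', ['-u', 'a "quoted" name.txt']))
```

Note that this is the CRT's legacy `""` escaping convention, not the backslash convention most modern argument parsers expect; the real module targets CreateProcess consumers that honor it.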
@ -0,0 +1,5 @@
This directory contains ViewVC template sets contributed by their
respective authors and expected to work against ViewVC 1.0.x.  They
are not necessarily supported by the ViewVC development community, and
do not necessarily carry the same license or copyright as ViewVC
itself.
@ -0,0 +1,7 @@
Template Set:  newvc
Author(s):     C. Michael Pilato <cmpilato@red-bean.com>
Compatibility: ViewVC 1.1

The "newvc" template set uses top navigation tabs to flip between
various views of a file or directory, and aims for a clean-yet-modern
look and feel.
@ -0,0 +1,128 @@
[# Setup page definitions]
[define page_title]Diff of:[end]
[define help_href][docroot]/help_rootview.html[end]
[# end]
[include "include/header.ezt" "diff"]

<form method="get" action="[diff_format_action]" style="display: inline;">
  <div>
    [for diff_format_hidden_values]<input type="hidden" name="[diff_format_hidden_values.name]" value="[diff_format_hidden_values.value]"/>[end]
    <select name="diff_format" onchange="submit()">
      <option value="h" [is diff_format "h"]selected="selected"[end]>Colored Diff</option>
      <option value="l" [is diff_format "l"]selected="selected"[end]>Long Colored Diff</option>
      <option value="f" [is diff_format "f"]selected="selected"[end]>Full Colored Diff</option>
      <option value="u" [is diff_format "u"]selected="selected"[end]>Unidiff</option>
      <option value="c" [is diff_format "c"]selected="selected"[end]>Context Diff</option>
      <option value="s" [is diff_format "s"]selected="selected"[end]>Side by Side</option>
    </select>
    <input type="submit" value="Show" />
    (<a href="[patch_href]">Generate patch</a>)
  </div>
</form>

<div id="vc_main_body">
<!-- ************************************************************** -->

[if-any raw_diff]
<pre>[raw_diff]</pre>
[else]

[define change_right][end]
[define last_change_type][end]

[# these should live in stylesheets]

<table cellpadding="0" cellspacing="0" style="width: 100%;">
[for changes]
  [is changes.type "change"][else]
    [if-any change_right][change_right][define change_right][end][end]
  [end]
  [is changes.type "header"]
    <tr>
      <th class="vc_header" style="width:6%;"><strong>#</strong></th>
      <th colspan="2" class="vc_header">
        <strong>Line [changes.line_info_left]</strong> |
        <strong>Line [changes.line_info_right]</strong>
      </th>
    </tr>
  [else]
    [is changes.type "add"]
    <tr>
      <td id="l[changes.line_number]">[if-any right.annotate_href]<a href="[right.annotate_href]#l[changes.line_number]">[changes.line_number]</a>[else][changes.line_number][end]</td>
      <td class="vc_diff_plusminus"><strong style="color: green;">+</strong></td>
      <td class="vc_diff_add">[changes.right]</td>
    </tr>
    [else]
      [is changes.type "remove"]
    <tr>
      <td style="text-decoration: line-through">[changes.line_number]</td>
      <td class="vc_diff_plusminus"><strong style="color: red;">&#8211;</strong></td>
      <td class="vc_diff_remove">[changes.left]</td>
    </tr>
      [else]
        [is changes.type "change"]
          [if-any changes.have_left]
    <tr>
      <td style="text-decoration: line-through">[changes.line_number]</td>
      <td class="vc_diff_plusminus"><strong style="color: yellow;">&lt;</strong></td>
      <td class="vc_diff_changes1">[changes.left]</td>
    </tr>
          [end]
          [define change_right][change_right]
          [if-any changes.have_right]
    <tr>
      <td id="l[changes.line_number]">[if-any right.annotate_href]<a href="[right.annotate_href]#l[changes.line_number]">[changes.line_number]</a>[else][changes.line_number][end]</td>
      <td class="vc_diff_plusminus"><strong style="color: yellow;">&gt;</strong></td>
      <td class="vc_diff_changes2">[changes.right]</td>
    </tr>[end]
          [end]
        [else]
          [is changes.type "no-changes"]
    <tr><td colspan="3" class="vc_diff_nochange"><strong>- No changes -</strong></td></tr>
          [else]
            [is changes.type "binary-diff"]
    <tr><td colspan="3" class="vc_diff_binary"><strong>- Binary file revisions differ -</strong></td></tr>
            [else]
              [is changes.type "error"]
    <tr><td colspan="3" class="vc_diff_error"><strong>- ViewVC depends on rcsdiff and GNU diff
      to create this page.  ViewVC cannot find GNU diff.  Even if you
      have GNU diff installed, the rcsdiff program must be configured
      and compiled with the GNU diff location. -</strong></td></tr>
              [else][# a line of context]
    <tr>
      <td id="l[changes.line_number]">[if-any right.annotate_href]<a href="[right.annotate_href]#l[changes.line_number]">[changes.line_number]</a>[else][changes.line_number][end]</td>
      <td class="vc_diff_plusminus">&nbsp;</td>
      <td style="font-family: monospace; white-space: pre;">[changes.right]</td>
    </tr>
  [end][end][end][end][end][end][end]
  [define last_change_type][changes.type][end]
[end]
[if-any change_right][change_right][end]
</table>

<h3>Diff Legend</h3>
<table class="auto" cellspacing="0">
  <tr>
    <td class="vc_diff_plusminus"><strong style="color: red;">&#8211;</strong></td>
    <td class="vc_diff_remove">Removed lines</td>
  </tr>
  <tr>
    <td class="vc_diff_plusminus"><strong style="color: green;">+</strong></td>
    <td class="vc_diff_add">Added lines</td>
  </tr>
  <tr>
    <td class="vc_diff_plusminus"><strong style="color: yellow;">&lt;</strong></td>
    <td class="vc_diff_changes1">Changed lines</td>
  </tr>
  <tr>
    <td class="vc_diff_plusminus"><strong style="color: yellow;">&gt;</strong></td>
    <td class="vc_diff_changes2">Changed lines</td>
  </tr>
</table>

[end]

<!-- ************************************************************** -->
</div>

[include "include/footer.ezt"]
@ -0,0 +1,139 @@
[# setup page definitions]
[define page_title]Index of:[end]
[define help_href][docroot]/help_[if-any where]dir[else]root[end]view.html[end]
[# end]
[include "include/header.ezt" "directory"]

[if-any where][else]
<!-- you may insert repository access instructions here -->
[end]

<table class="auto">
[is cfg.options.use_pagesize "0"][else][is picklist_len "1"][else]
  <tr>
    <td>Jump to page:</td>
    <td><form method="get" action="[dir_paging_action]">
      [for dir_paging_hidden_values]<input type="hidden" name="[dir_paging_hidden_values.name]" value="[dir_paging_hidden_values.value]"/>[end]
      <select name="dir_pagestart" onchange="submit()">
        [for picklist]
        <option [is picklist.count dir_pagestart]selected="selected"[end] value="[picklist.count]">Page [picklist.page]: [picklist.start] to [picklist.end]</option>
        [end]
      </select>
      <input type="submit" value="Go" />
      </form>
    </td>
  </tr>
[end][end]
</table>

<div id="vc_main_body">
<!-- ************************************************************** -->

<div id="vc_togglables">
  [is roottype "svn"]
    [if-any rev]r<a href="[revision_href]">[rev]</a>[end]
  [else]
    [is num_dead "0"]
    [else]
      [if-any attic_showing]
        <a href="[hide_attic_href]">Hide
      [else]
        <a href="[show_attic_href]">Show
      [end]
      dead files</a>
    [end]
  [end]
</div>

<table cellspacing="2" class="fixed" id="dirlist">
  <thead>
    <tr>
      <th style="width: 200px" class="vc_header[is sortby "file"]_sort[end]">
        <a href="[sortby_file_href]#dirlist">File
        [is sortby "file"]
          <img class="vc_sortarrow" alt="[is sortdir "down"](rev)[end]"
               width="13" height="13"
               src="[docroot]/images/[is sortdir "up"]up[else]down[end].png" />
        [end]
        </a>
      </th>
      <th class="vc_header[is sortby "rev"]_sort[end]">
        <a href="[sortby_rev_href]#dirlist">Last Change
        [is sortby "rev"]
          <img class="vc_sortarrow" alt="[is sortdir "down"](rev)[end]"
               width="13" height="13"
               src="[docroot]/images/[is sortdir "up"]up[else]down[end].png" />
        [end]
        </a>
      </th>
    </tr>
  </thead>

  <tbody>
    [if-any up_href]
    <tr class="vc_row_odd">
      <td colspan="2">
        <a href="[up_href]">
          <img src="[docroot]/images/back_small.png" alt="" class="vc_icon"
          /> ../</a>
      </td>
    </tr>
    [end]
    [for entries]
    [define click_href][is entries.pathtype "dir"][entries.view_href][else][if-any entries.prefer_markup][entries.view_href][else][entries.download_href][end][end][end]

    <tr class="vc_row_[if-index entries even]even[else]odd[end]">
      <td style="width: 200px" onclick="jumpTo('[click_href]')">
        <a name="[entries.anchor]" href="[click_href]" title="[is entries.pathtype "dir"]View Directory Contents[else][if-any entries.prefer_markup]View[else]Download[end] File Contents[end]">
          <img src="[docroot]/images/[is entries.pathtype "dir"]dir[else][is entries.state "dead"]broken[else]text[end][end].png" alt="" class="vc_icon" />
          [entries.name][is entries.pathtype "dir"]/[end]</a>
        [is entries.state "dead"](dead)[end]
      </td>

      <td [if-any entries.log_href]onclick="jumpTo('[entries.log_href]')"[end]>
        [if-any entries.rev]
          <strong>[if-any entries.log_href]<a href="[entries.log_href]" title="Revision [entries.rev]">[entries.rev]</a>[else][entries.rev][end]</strong>
          ([entries.ago] ago)
          by <em>[entries.author]</em>:
          [entries.log]
          [is entries.pathtype "dir"][is roottype "cvs"]
            <em>(from [entries.log_file]/[entries.log_rev])</em>
          [end][end]
        [end]
      </td>
    </tr>
    [end]
  </tbody>

</table>

<div id="vc_view_summary">
[if-any search_re_form]
  <form class="inline" method="get" action="[search_re_action]">
    <div class="inline">
      [for search_re_hidden_values]<input type="hidden" name="[search_re_hidden_values.name]" value="[search_re_hidden_values.value]"/>[end]
      <input type="text" name="search" value="[search_re]" />
      <input type="submit" value="Search Files" />
    </div>
  </form>
  [if-any search_re]
  <form class="inline" method="get" action="[search_re_action]">
    <div class="inline">
      [for search_re_hidden_values]<input type="hidden" name="[search_re_hidden_values.name]" value="[search_re_hidden_values.value]"/>[end]
      <input type="submit" value="Show all files" />
    </div>
  </form>
  [end]

[end]
[include "include/pathrev_form.ezt"]

[files_shown] file[is files_shown "1"][else]s[end] shown
</div>

[include "include/props.ezt"]

<!-- ************************************************************** -->
</div>

[include "include/footer.ezt"]
@ -0,0 +1,8 @@
/************************************/
/***  ViewVC Help CSS Stylesheet  ***/
/************************************/
body { margin: 0.5em; }
img { border: none; }
table { width: 100%; }
td { vertical-align: top; }
col.menu { width: 12em; }
@ -0,0 +1,126 @@
|
|||
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
|
||||
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
|
||||
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
|
||||
<head>
|
||||
<title>ViewVC Help: Directory View</title>
|
||||
<link rel="stylesheet" href="help.css" type="text/css" />
|
||||
</head>
|
||||
<body>
|
||||
<table>
|
||||
<col class="menu" />
|
||||
<col />
|
||||
<tr>
|
||||
<td colspan="2">
|
||||
<h1>ViewVC Help: Directory View</h1>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td>
|
||||
<h3>Help</h3>
|
||||
<a href="help_rootview.html">General</a><br />
|
||||
<strong>Directory View</strong><br />
|
||||
<a href="help_log.html">Log View</a><br />
|
||||
|
||||
<h3>Internet</h3>
|
||||
<a href="http://viewvc.org/index.html">Home</a><br />
|
||||
<a href="http://viewvc.org/upgrading.html">Upgrading</a><br />
|
||||
<a href="http://viewvc.org/contributing.html">Contributing</a><br />
|
||||
<a href="http://viewvc.org/license-1.html">License</a><br />
|
||||
</td><td colspan="2">
|
||||
|
||||
<p>The directory listing view should be a familiar sight to any
|
||||
computer user. It shows the path of the current directory being viewed
|
||||
at the top of the page. Below that is a table summarizing the
|
||||
directory contents, and then comes actual contents, a sortable list of
|
||||
all files and subdirectories inside the current directory.</p>
|
||||
|
||||
<p><a name="summary"></a>The summary table is made up of some or all
|
||||
of the following rows:</p>
|
||||
<ul>
|
||||
<li><a name="summary-files-shown"><strong>Files Shown</strong></a>
|
||||
- Number of files shown in the directory listing. This might be less
|
||||
than the actual number of files in the directory if a
|
||||
<a href="#option-search">regular expression search</a> is in place,
|
||||
hiding files which don't meet the search criteria. In CVS directory
|
||||
listings, this row will also have a link to toggle display of
|
||||
<a href="help_rootview.html#dead-files">dead files</a>, if any are
|
||||
present.</li>
|
||||
|
||||
<li><a name="summary-revision"><strong>Directory
|
||||
Revision</strong></a> - For Subversion directories only.
|
||||
Shown as "# of #" where the first number is the most recent
|
||||
repository revision where the directory (or a path underneath it)
|
||||
was modified. The second number is just the latest repository
|
||||
revision. Both numbers are links to
|
||||
<a href="help_rootview.html#view-rev">revision views</a></li>
|
||||
|
||||
<li><a name="summary-sticky-revision-tag"><strong>Sticky
|
||||
Revision/Tag</strong></a> - Shows the current
|
||||
<a href="help_rootview.html#sticky-revision-tag">sticky revision or
|
||||
tag</a> and contains form fields to set or clear it.</li>
|
||||
|
||||
<li><a name="summary-search"><strong>Current Search</strong></a> -
|
||||
If a <a href="#option-search">regular expression search</a> is in place,
|
||||
shows the search string.</li>
|
||||
|
||||
<li><a name="summary-query"><strong>Query</strong></a> - Provides
|
||||
a link to a <a href="help_rootview.html#view-query">query form</a>
|
||||
for the directory</li>
|
||||
</ul>
|
||||
|
||||
<p><a name="list"></a>The actual directory list is a table with
|
||||
filenames and directory names in one column and information about the
|
||||
most recent revisions where each file or directory was modified in the
|
||||
other columns. Column headers can be clicked to sort the directory
|
||||
entries in order by a column, and clicked again to reverse the sort
|
||||
order.</p>
|
||||
|
||||
<p>
|
||||
<!-- If using directory.ezt template -->
|
||||
File names are links to <a href="help_log.html">log views</a>
|
||||
showing a list of revisions where a file was modified. Revision
|
||||
numbers are links to either
|
||||
<a href="help_rootview.html#view-markup">view</a>
|
||||
or <a href="help_rootview.html#view-checkout">download</a> a file
|
||||
(depending on its file type). The links are reversed for directories.
|
||||
Directory revision numbers are links to <a href="help_log.html">log
|
||||
views</a>, while directory names are links showing the contents of those
|
||||
directories.
|
||||
|
||||
<!-- If using dir_alt.ezt template -->
|
||||
<!--
|
||||
File and directory names are links to retrieve their contents.
|
||||
File links may be either
|
||||
<a href="help_rootview.html#view-markup">view</a>
|
||||
or <a href="help_rootview.html#view-download">download</a> links
|
||||
depending on the file type. Directory links go to directory
|
||||
listings. Revision numbers are links to <a href="help_log.html">log
|
||||
views</a> showing lists of revisions where a file or directory was
|
||||
modified.
|
||||
-->
|
||||
|
||||
Also, in CVS repositories with the <a
|
||||
href="help_rootview.html#view-graph">graph view</a> enabled, there
|
||||
will be small icons next to file names which are links to revision
|
||||
graphs.</p>
|
||||
|
||||
<p>Depending on how ViewVC is configured, there may be more options
|
||||
at the bottom of directory pages:</p>
|
||||
|
||||
<ul>
|
||||
<li><a name="option-search"><strong>Regular expression
|
||||
search</strong></a> - If enabled, will show a form field accepting
|
||||
a search string (a
|
||||
<a href="http://doc.python.org/lib/re-syntax.html">Python regular
|
||||
expression</a>). Once submitted, only files that have at least
|
||||
one occurrence of the expression will show up in directory listings.
|
||||
</li>
|
||||
<li><a name="option-tarball"><strong>Tarball download</strong></a> -
|
||||
If enabled, will show a link to download a gzipped tar archive of
|
||||
the directory contents.</li>
|
||||
</ul>
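The regular-expression filter described in the options above can be sketched in a few lines. This is an illustrative sketch only, not ViewVC's actual implementation; the `filter_files` helper and its dict-of-contents input are hypothetical:

```python
import re

def filter_files(filenames_to_contents, pattern):
    """Keep only files whose contents contain at least one match
    for the given regular expression (illustrative sketch)."""
    rx = re.compile(pattern)
    return [name for name, text in filenames_to_contents.items()
            if rx.search(text)]

# Example: only files containing the word "import" survive the filter.
files = {"a.py": "import os\n", "b.txt": "hello world\n", "c.py": "print(1)\n"}
filter_files(files, r"import")  # -> ["a.py"]
```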
|
||||
|
||||
</td></tr></table>
|
||||
<hr />
|
||||
<address><a href="mailto:users@viewvc.tigris.org">ViewVC Users Mailinglist</a></address>
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,71 @@
|
|||
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
|
||||
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
|
||||
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
|
||||
<head>
|
||||
<title>ViewVC Help: Log View</title>
|
||||
<link rel="stylesheet" href="help.css" type="text/css" />
|
||||
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
|
||||
</head>
|
||||
<body>
|
||||
<table>
|
||||
<col class="menu" />
|
||||
<col />
|
||||
<tr>
|
||||
<td colspan="2">
|
||||
<h1>ViewVC Help: Log View</h1>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td>
|
||||
<h3>Help</h3>
|
||||
<a href="help_rootview.html">General</a><br />
|
||||
<a href="help_dirview.html">Directory View</a><br />
|
||||
<strong>Log View</strong><br />
|
||||
|
||||
<h3>Internet</h3>
|
||||
<a href="http://viewvc.org/index.html">Home</a><br />
|
||||
<a href="http://viewvc.org/upgrading.html">Upgrading</a><br />
|
||||
<a href="http://viewvc.org/contributing.html">Contributing</a><br />
|
||||
<a href="http://viewvc.org/license-1.html">License</a><br />
|
||||
</td><td colspan="2">
|
||||
<p>
|
||||
The log view displays the revision history of the selected source
|
||||
file or directory. For each revision the following information is
|
||||
displayed:</p>
|
||||
|
||||
<ul>
|
||||
<li>The revision number. In Subversion repositories, this is a
|
||||
link to the <a href="help_rootview.html#view-rev">revision
|
||||
view</a></li>
|
||||
<li>For files, links to
|
||||
<a href="help_rootview.html#view-markup">view</a>,
|
||||
<a href="help_rootview.html#view-checkout">download</a>, and
|
||||
<a href="help_rootview.html#view-annotate">annotate</a> the
|
||||
revision. For directories, a link to
|
||||
<a href="help_dirview.html">list directory contents</a></li>
|
||||
<li>A link to select the revision for diffs (see below)</li>
|
||||
<li>The date and age of the change</li>
|
||||
<li>The author of the modification</li>
|
||||
<li>The CVS branch (usually <em>MAIN</em>, if not on a branch)</li>
|
||||
<li>Possibly a list of CVS tags bound to the revision (if any)</li>
|
||||
<li>The size of the change measured in added and removed lines of
|
||||
code. (CVS only)</li>
|
||||
<li>The size of the file in bytes at the time of the revision
|
||||
(Subversion only)</li>
|
||||
<li>Links to view diffs to the previous revision or possibly to
|
||||
an arbitrary selected revision (if any, see above)</li>
|
||||
<li>If the revision is the result of a copy, the path and revision
|
||||
copied from</li>
|
||||
<li>If the revision precedes a copy or rename, the path at the
|
||||
time of the revision</li>
|
||||
<li>And last but not least, the commit log message, which should tell
|
||||
about the reason for the change.</li>
|
||||
</ul>
|
||||
<p>
|
||||
At the bottom of the page you will find a form that allows you
|
||||
to request diffs between arbitrary revisions.
|
||||
</p>
|
||||
</td></tr></table>
|
||||
<hr />
|
||||
<address><a href="mailto:users@viewvc.tigris.org">ViewVC Users Mailinglist</a></address>
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,66 @@
|
|||
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
|
||||
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
|
||||
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
|
||||
<head>
|
||||
<title>ViewVC Help: Query The Commit Database</title>
|
||||
<link rel="stylesheet" href="help.css" type="text/css" />
|
||||
</head>
|
||||
<body>
|
||||
<table>
|
||||
<col class="menu" />
|
||||
<col />
|
||||
<tr>
|
||||
<td colspan="2">
|
||||
<h1>ViewVC Help: Query The Commit Database</h1>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td>
|
||||
<h3>Other Help:</h3>
|
||||
<a href="help_rootview.html">General</a><br />
|
||||
<a href="help_dirview.html">Directory View</a><br />
|
||||
<a href="help_log.html">Classic Log View</a><br />
|
||||
<a href="help_logtable.html">Alternative Log View</a><br />
|
||||
<strong>Query Database</strong>
|
||||
|
||||
<h3>Internet</h3>
|
||||
<a href="http://viewvc.org/index.html">Home</a><br />
|
||||
<a href="http://viewvc.org/upgrading.html">Upgrading</a><br />
|
||||
<a href="http://viewvc.org/contributing.html">Contributing</a><br />
|
||||
<a href="http://viewvc.org/license-1.html">License</a><br />
|
||||
</td><td colspan="2">
|
||||
|
||||
<p>
|
||||
Select your parameters for querying the CVS commit database in the
|
||||
form at the top of the page. You
|
||||
can search for multiple matches by typing a comma-separated list
|
||||
into the text fields. Regular expressions and wildcards are also
|
||||
supported. Blank text input fields are treated as wildcards.
|
||||
</p>
|
||||
<p>
|
||||
Any of the text entry fields can take a comma-separated list of
|
||||
search arguments. For example, to search for all commits from
|
||||
authors <em>jpaint</em> and <em>gstein</em>, just type: <code>jpaint,
|
||||
gstein</code> in the <em>Author</em> input box. If you are searching
|
||||
for items containing spaces or quotes, you will need to quote your
|
||||
request. For example, the same search above with quotes is:
|
||||
<code>"jpaint", "gstein"</code>.
|
||||
</p>
|
||||
<p>
|
||||
Wildcard and regular expression searches are entered in a similar
|
||||
way to the quoted requests. You must quote any wildcard or
|
||||
regular expression request, and a command character precedes the
|
||||
first quote. The command character <code>l</code> (lowercase L) is for wildcard
|
||||
searches, and the wildcard character is a percent (<code>%</code>). The
|
||||
command character for regular expressions is <code>r</code>, and is
|
||||
passed directly to MySQL, so you'll need to refer to the MySQL
|
||||
manual for the exact regex syntax. It is very similar to Perl. A
|
||||
wildcard search for all files with a <em>.py</em> extension is:
|
||||
<code>l"%.py"</code> in the <em>File</em> input box. The same search done
|
||||
with a regular expression is: <code>r".*\.py"</code>.
|
||||
</p>
|
||||
<p>
|
||||
All search types can be mixed, as long as they are separated by
|
||||
commas.
|
||||
</p>
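The term syntax described above (plain terms, <code>l"..."</code> wildcards, and <code>r"..."</code> regular expressions, combined with commas) can be sketched as follows. This is an illustrative sketch of the matching rules, not ViewVC's actual query code; the function names are hypothetical:

```python
import re

def term_matches(term, value):
    """Interpret one query term as the help text describes (sketch):
    r"..." -> regular-expression match
    l"..." -> wildcard match, where % matches any run of characters
    plain  -> exact match
    """
    if term.startswith('r"') and term.endswith('"'):
        return re.search(term[2:-1], value) is not None
    if term.startswith('l"') and term.endswith('"'):
        # Translate the SQL-style % wildcard into a regular expression.
        pattern = ".*".join(re.escape(part) for part in term[2:-1].split("%"))
        return re.fullmatch(pattern, value) is not None
    return term == value

def query_matches(terms, value):
    """A comma-separated list matches if any single term matches."""
    return any(term_matches(t.strip(), value) for t in terms.split(","))
```

With this sketch, `query_matches('l"%.py"', "setup.py")` and `query_matches('r".*\\.py"', "setup.py")` are both true, mirroring the two example searches above.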
|
||||
</td></tr></table>
|
||||
</body></html>
|
|
@ -0,0 +1,166 @@
|
|||
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
|
||||
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
|
||||
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
|
||||
<head>
|
||||
<title>ViewVC Help: General</title>
|
||||
<link rel="stylesheet" href="help.css" type="text/css" />
|
||||
</head>
|
||||
<body>
|
||||
<table>
|
||||
<col class="menu" />
|
||||
<col />
|
||||
<tr>
|
||||
<td colspan="2">
|
||||
<h1>ViewVC Help: General</h1>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td>
|
||||
<h3>Help</h3>
|
||||
<strong>General</strong><br />
|
||||
<a href="help_dirview.html">Directory View</a><br />
|
||||
<a href="help_log.html">Log View</a><br />
|
||||
|
||||
<h3>Internet</h3>
|
||||
<a href="http://viewvc.org/index.html">Home</a><br />
|
||||
<a href="http://viewvc.org/upgrading.html">Upgrading</a><br />
|
||||
<a href="http://viewvc.org/contributing.html">Contributing</a><br />
|
||||
<a href="http://viewvc.org/license-1.html">License</a><br />
|
||||
</td><td colspan="2">
|
||||
|
||||
<p><em>ViewVC</em> is a WWW interface for CVS and Subversion
|
||||
repositories. It allows you to browse the files and directories in a
|
||||
repository while showing you metadata from the repository history: log
|
||||
messages, modification dates, author names, revision numbers, copy
|
||||
history, and so on. It provides several different views of repository
|
||||
data to help you find the information you are looking for:</p>
|
||||
|
||||
<ul>
|
||||
<li><a name="view-dir" href="help_dirview.html"><strong>Directory
|
||||
View</strong></a> - Shows a list of files and subdirectories in a
|
||||
directory of the repository, along with metadata like author names and
|
||||
log entries.</li>
|
||||
|
||||
<li><a name="view-log" href="help_log.html"><strong>Log
|
||||
View</strong></a> - Shows a revision by revision list of all the
|
||||
changes that have been made to a file or directory in the repository, with
|
||||
metadata and links to views of each revision.</li>
|
||||
|
||||
<li><a name="view-markup"><strong>File Contents View (Markup
|
||||
View)</strong></a> - Shows the contents of a file at a particular
|
||||
revision, with revision information at the top of the page. File
|
||||
revisions which are GIF, PNG, or JPEG images are displayed inline on
|
||||
the page. Other file types are displayed as marked up text. The markup
|
||||
may be limited to turning URLs and email addresses into links, or
|
||||
configured to show colorized source code.</li>
|
||||
|
||||
<li><a name="view-checkout"><strong>File Download (Checkout
|
||||
View)</strong></a> - Retrieves the unaltered contents of a file
|
||||
revision. Browsers may try to display the file, or just save it to
|
||||
disk.</li>
|
||||
|
||||
<li><a name="view-annotate"><strong>File Annotate View</strong></a> -
|
||||
Shows the contents of a file revision and breaks it down line by line,
|
||||
showing the revision number where each one was last modified, along
|
||||
with links and other information. <em>This view is disabled in some
|
||||
ViewVC configurations.</em></li>
|
||||
|
||||
<li><a name="view-diff"><strong>File Diff View</strong></a> - Shows
|
||||
the changes made between two revisions of a file.</li>
|
||||
|
||||
<li><a name="view-tarball"><strong>Directory Tarball View</strong></a> -
|
||||
Retrieves a gzipped tar archive containing the contents of a
|
||||
directory. <em>This view is disabled in the default ViewVC
|
||||
configuration.</em></li>
|
||||
|
||||
<li><a name="view-query"><strong>Directory Query View</strong></a> -
|
||||
Shows information about changes made to all subdirectories and files
|
||||
under a parent directory, sorted and filtered by criteria you specify.
|
||||
<em>This view is disabled in the default ViewVC configuration.</em>
|
||||
</li>
|
||||
|
||||
<li><a name="view-rev"><strong>Revision View</strong></a> - Shows
|
||||
information about a revision including log message, author, and a list
|
||||
of changed paths. <em>For Subversion repositories only.</em></li>
|
||||
|
||||
<li><a name="view-graph"><strong>Graph View</strong></a> - Shows a
|
||||
graphical representation of a file's revisions and branches complete
|
||||
with tag and author names and links to markup and diff pages.
|
||||
<em>For CVS repositories only, and disabled in the default
|
||||
configuration.</em></li>
|
||||
</ul>
|
||||
|
||||
<h3><a name="multiple-repositories">Multiple Repositories</a></h3>
|
||||
|
||||
<p>A single installation of ViewVC is often used to provide access to
|
||||
more than one repository. In these installations, ViewVC shows a
|
||||
<em>Project Root</em> drop down box in the top right corner of every
|
||||
generated page to allow for quick access to any repository.</p>
|
||||
|
||||
<h3><a name="sticky-revision-tag">Sticky Revision and Tag</a></h3>
|
||||
|
||||
<p>By default, ViewVC will show the files, directories, and revisions
|
||||
that currently exist in the repository. But it's also possible to browse
|
||||
the contents of a repository at a point in its past history by choosing
|
||||
a "sticky tag" (in CVS) or a "sticky revision" (in Subversion) from the
|
||||
forms at the top of directory and log pages. They're called sticky
|
||||
because once they're chosen, they stick around when you navigate to
|
||||
other pages, until you reset them. When they're set, directory and log
|
||||
pages only show revisions preceding the specified point in history. In
|
||||
CVS, when a tag refers to a branch or a revision on a branch, only
|
||||
revisions from the branch history are shown, including branch points and
|
||||
their preceding revisions.</p>
|
||||
|
||||
<h3><a name="dead-files">Dead Files</a></h3>
|
||||
|
||||
<p>In CVS directory listings, ViewVC can optionally display dead files.
|
||||
Dead files are files which used to be in a directory but are currently
|
||||
deleted, or files which just don't exist in the currently selected
|
||||
<a href="#sticky-revision-tag">sticky tag</a>. Dead files cannot be
|
||||
shown in Subversion repositories. The only way to see a deleted file in
|
||||
a Subversion directory is to navigate to a sticky revision where the
|
||||
file previously existed.</p>
|
||||
|
||||
<h3><a name="artificial-tags">Artificial Tags</a></h3>
|
||||
|
||||
<p>In CVS repositories, ViewVC adds artificial tags <em>HEAD</em> and
|
||||
<em>MAIN</em> to tag listings and accepts them in place of revision
|
||||
numbers and real tag names in all URLs. <em>MAIN</em> acts like a branch
|
||||
tag pointing at the default branch, while <em>HEAD</em> acts like a
|
||||
revision tag pointing to the latest revision on the default branch. The
|
||||
default branch is usually just the trunk, but may be set to other
|
||||
branches inside individual repository files. CVS will always check out
|
||||
revisions from a file's default branch when no other branch is specified
|
||||
on the command line.</p>
|
||||
|
||||
<h3><a name="more-information">More Information</a></h3>
|
||||
|
||||
<p>More information about <em>ViewVC</em> is available from
|
||||
<a href="http://viewvc.org/">viewvc.org</a>.
|
||||
See the links below for guides to CVS and Subversion.</p>
|
||||
|
||||
<h4>Documentation about CVS</h4>
|
||||
<blockquote>
|
||||
<p>
|
||||
<a href="http://cvsbook.red-bean.com/"><em>Open Source
|
||||
Development with CVS</em></a><br />
|
||||
<a href="http://www.loria.fr/~molli/cvs/doc/cvs_toc.html">CVS
|
||||
User's Guide</a><br />
|
||||
<a href="http://cellworks.washington.edu/pub/docs/cvs/tutorial/cvs_tutorial_1.html">Another CVS tutorial</a><br />
|
||||
<a href="http://www.csc.calpoly.edu/~dbutler/tutorials/winter96/cvs/">Yet another CVS tutorial (a little old, but nice)</a><br />
|
||||
<a href="http://www.cs.utah.edu/dept/old/texinfo/cvs/FAQ.txt">An old but very useful FAQ about CVS</a>
|
||||
</p>
|
||||
</blockquote>
|
||||
|
||||
<h4>Documentation about Subversion</h4>
|
||||
<blockquote>
|
||||
<p>
|
||||
<a href="http://svnbook.red-bean.com/"><em>Version Control with
|
||||
Subversion</em></a><br />
|
||||
</p>
|
||||
</blockquote>
|
||||
|
||||
</td></tr></table>
|
||||
<hr />
|
||||
<address><a href="mailto:users@viewvc.tigris.org">ViewVC Users Mailinglist</a></address>
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,4 @@
|
|||
// Navigate the current window to the given URL.
function jumpTo(url)
|
||||
{
|
||||
window.location = url;
|
||||
}
|
|
@ -0,0 +1,332 @@
|
|||
/*******************************/
|
||||
/*** ViewVC CSS Stylesheet ***/
|
||||
/*******************************/
|
||||
|
||||
/*** Standard Tags ***/
|
||||
html, body {
|
||||
background-color: white;
|
||||
color: black;
|
||||
font-family: sans-serif;
|
||||
font-size: 100%;
|
||||
margin: 5px;
|
||||
}
|
||||
|
||||
a {
|
||||
text-decoration: none;
|
||||
color: rgb(30%,30%,60%);
|
||||
}
|
||||
img { border: none; }
|
||||
table {
|
||||
width: 100%;
|
||||
margin: 0;
|
||||
border: none;
|
||||
}
|
||||
td, th {
|
||||
vertical-align: top;
|
||||
}
|
||||
th { white-space: nowrap; }
|
||||
table.auto {
|
||||
width: auto;
|
||||
}
|
||||
table.fixed {
|
||||
width: 100%;
|
||||
table-layout: fixed;
|
||||
}
|
||||
table.fixed td {
|
||||
overflow: hidden;
|
||||
text-overflow: ellipsis;
|
||||
white-space: nowrap;
|
||||
}
|
||||
form { margin: 0; }
|
||||
address { font-style: normal; display: inline; }
|
||||
.inline { display: inline; }
|
||||
|
||||
/*** Icons ***/
|
||||
.vc_icon {
|
||||
width: 16px;
|
||||
height: 16px;
|
||||
border: none;
|
||||
padding: 0 1px;
|
||||
}
|
||||
|
||||
#vc_header {
|
||||
padding: 0 0 10px 0;
|
||||
border-bottom: 10px solid #94bd5e;
|
||||
}
|
||||
|
||||
#vc_footer {
|
||||
text-align: right;
|
||||
font-size: 85%;
|
||||
padding: 10px 0 0 0;
|
||||
border-top: 10px solid #94bd5e;
|
||||
}
|
||||
|
||||
#vc_topmatter {
|
||||
float: right;
|
||||
text-align: right;
|
||||
font-size: 85%;
|
||||
}
|
||||
|
||||
#vc_current_path {
|
||||
color: rgb(50%,50%,50%);
|
||||
padding: 10px 0;
|
||||
font-size: 140%;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
#vc_current_path a {
|
||||
color: rgb(60%,60%,60%);
|
||||
}
|
||||
|
||||
#vc_current_path a:hover {
|
||||
background-color: rgb(90%,90%,90%);
|
||||
}
|
||||
|
||||
#vc_current_path .thisitem {
|
||||
color: #94bd5e;
|
||||
}
|
||||
|
||||
#vc_current_path .pathdiv {
|
||||
padding: 0 0.1em;
|
||||
}
|
||||
|
||||
#vc_view_selection_group {
|
||||
background: black;
|
||||
color: white;
|
||||
margin: 0 0 5px 0;
|
||||
padding: 5px;
|
||||
text-align: right;
|
||||
}
|
||||
|
||||
#vc_view_selection_group a {
|
||||
padding: 5px;
|
||||
font-size: 100%;
|
||||
color: white;
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
#vc_view_selection_group a.vc_view_link_this, #vc_view_selection_group a.vc_view_link:hover {
|
||||
color: #94bd5e;
|
||||
}
|
||||
|
||||
#vc_view_selection_group a:hover {
|
||||
background-color: black;
|
||||
}
|
||||
|
||||
#vc_view_main {
|
||||
border-top: 1px solid black;
|
||||
border-bottom: 1px solid black;
|
||||
}
|
||||
|
||||
#vc_togglables {
|
||||
text-align: right;
|
||||
font-size: 85%;
|
||||
}
|
||||
|
||||
#vc_main_body {
|
||||
background: white;
|
||||
padding: 5px 0 20px 0;
|
||||
}
|
||||
|
||||
#vc_view_summary {
|
||||
font-size: 85%;
|
||||
text-align: right;
|
||||
margin-top: 5px;
|
||||
}
|
||||
|
||||
|
||||
/*** Table Headers ***/
|
||||
.vc_header, .vc_header_sort {
|
||||
text-align: left;
|
||||
vertical-align: top;
|
||||
border-bottom: 1px solid black;
|
||||
background-color: rgb(80%,80%,80%);
|
||||
}
|
||||
.vc_header_sort {
|
||||
background-color: rgb(85%,85%,85%);
|
||||
}
|
||||
|
||||
|
||||
/*** Table Rows ***/
|
||||
.vc_row_even {
|
||||
background-color: rgb(95%,95%,95%);
|
||||
}
|
||||
.vc_row_odd {
|
||||
background-color: rgb(90%,90%,90%);
|
||||
}
|
||||
|
||||
|
||||
/*** Directory View ***/
|
||||
#dirlist td, #dirlist th {
|
||||
padding: 0.2em;
|
||||
vertical-align: middle;
|
||||
}
|
||||
#dirlist tr:hover {
|
||||
background-color: white;
|
||||
}
|
||||
|
||||
|
||||
/*** Log messages ***/
|
||||
.vc_log {
|
||||
/* unfortunately, white-space: pre-wrap isn't widely supported ... */
|
||||
white-space: -moz-pre-wrap; /* Mozilla based browsers */
|
||||
white-space: -pre-wrap; /* Opera 4 - 6 */
|
||||
white-space: -o-pre-wrap; /* Opera >= 7 */
|
||||
white-space: pre-wrap; /* CSS3 */
|
||||
word-wrap: break-word; /* IE 5.5+ */
|
||||
}
|
||||
|
||||
|
||||
/*** Properties Listing ***/
|
||||
.vc_properties {
|
||||
margin: 1em 0;
|
||||
}
|
||||
.vc_properties h2 {
|
||||
font-size: 115%;
|
||||
}
|
||||
.vc_properties td, .vc_properties th {
|
||||
padding: 0.2em;
|
||||
}
|
||||
|
||||
|
||||
/*** File Content Markup Styles ***/
|
||||
.vc_summary {
|
||||
background-color: #eeeeee;
|
||||
}
|
||||
#vc_file td {
|
||||
border-right-style: solid;
|
||||
border-right-color: #505050;
|
||||
text-decoration: none;
|
||||
font-weight: normal;
|
||||
font-style: normal;
|
||||
padding: 1px 5px;
|
||||
}
|
||||
.vc_file_line_number {
|
||||
border-right-width: 1px;
|
||||
background-color: #eeeeee;
|
||||
color: #505050;
|
||||
text-align: right;
|
||||
}
|
||||
.vc_file_line_author, .vc_file_line_rev {
|
||||
border-right-width: 1px;
|
||||
text-align: right;
|
||||
}
|
||||
.vc_file_line_text {
|
||||
border-right-width: 0px;
|
||||
background-color: white;
|
||||
font-family: monospace;
|
||||
text-align: left;
|
||||
white-space: pre;
|
||||
width: 100%;
|
||||
}
|
||||
.pygments-c { color: #408080; font-style: italic } /* Comment */
|
||||
.pygments-err { border: 1px solid #FF0000 } /* Error */
|
||||
.pygments-k { color: #008000; font-weight: bold } /* Keyword */
|
||||
.pygments-o { color: #666666 } /* Operator */
|
||||
.pygments-cm { color: #408080; font-style: italic } /* Comment.Multiline */
|
||||
.pygments-cp { color: #BC7A00 } /* Comment.Preproc */
|
||||
.pygments-c1 { color: #408080; font-style: italic } /* Comment.Single */
|
||||
.pygments-cs { color: #408080; font-style: italic } /* Comment.Special */
|
||||
.pygments-gd { color: #A00000 } /* Generic.Deleted */
|
||||
.pygments-ge { font-style: italic } /* Generic.Emph */
|
||||
.pygments-gr { color: #FF0000 } /* Generic.Error */
|
||||
.pygments-gh { color: #000080; font-weight: bold } /* Generic.Heading */
|
||||
.pygments-gi { color: #00A000 } /* Generic.Inserted */
|
||||
.pygments-go { color: #808080 } /* Generic.Output */
|
||||
.pygments-gp { color: #000080; font-weight: bold } /* Generic.Prompt */
|
||||
.pygments-gs { font-weight: bold } /* Generic.Strong */
|
||||
.pygments-gu { color: #800080; font-weight: bold } /* Generic.Subheading */
|
||||
.pygments-gt { color: #0040D0 } /* Generic.Traceback */
|
||||
.pygments-kc { color: #008000; font-weight: bold } /* Keyword.Constant */
|
||||
.pygments-kd { color: #008000; font-weight: bold } /* Keyword.Declaration */
|
||||
.pygments-kp { color: #008000 } /* Keyword.Pseudo */
|
||||
.pygments-kr { color: #008000; font-weight: bold } /* Keyword.Reserved */
|
||||
.pygments-kt { color: #B00040 } /* Keyword.Type */
|
||||
.pygments-m { color: #666666 } /* Literal.Number */
|
||||
.pygments-s { color: #BA2121 } /* Literal.String */
|
||||
.pygments-na { color: #7D9029 } /* Name.Attribute */
|
||||
.pygments-nb { color: #008000 } /* Name.Builtin */
|
||||
.pygments-nc { color: #0000FF; font-weight: bold } /* Name.Class */
|
||||
.pygments-no { color: #880000 } /* Name.Constant */
|
||||
.pygments-nd { color: #AA22FF } /* Name.Decorator */
|
||||
.pygments-ni { color: #999999; font-weight: bold } /* Name.Entity */
|
||||
.pygments-ne { color: #D2413A; font-weight: bold } /* Name.Exception */
|
||||
.pygments-nf { color: #0000FF } /* Name.Function */
|
||||
.pygments-nl { color: #A0A000 } /* Name.Label */
|
||||
.pygments-nn { color: #0000FF; font-weight: bold } /* Name.Namespace */
|
||||
.pygments-nt { color: #008000; font-weight: bold } /* Name.Tag */
|
||||
.pygments-nv { color: #19177C } /* Name.Variable */
|
||||
.pygments-ow { color: #AA22FF; font-weight: bold } /* Operator.Word */
|
||||
.pygments-w { color: #bbbbbb } /* Text.Whitespace */
|
||||
.pygments-mf { color: #666666 } /* Literal.Number.Float */
|
||||
.pygments-mh { color: #666666 } /* Literal.Number.Hex */
|
||||
.pygments-mi { color: #666666 } /* Literal.Number.Integer */
|
||||
.pygments-mo { color: #666666 } /* Literal.Number.Oct */
|
||||
.pygments-sb { color: #BA2121 } /* Literal.String.Backtick */
|
||||
.pygments-sc { color: #BA2121 } /* Literal.String.Char */
|
||||
.pygments-sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */
|
||||
.pygments-s2 { color: #BA2121 } /* Literal.String.Double */
|
||||
.pygments-se { color: #BB6622; font-weight: bold } /* Literal.String.Escape */
|
||||
.pygments-sh { color: #BA2121 } /* Literal.String.Heredoc */
|
||||
.pygments-si { color: #BB6688; font-weight: bold } /* Literal.String.Interpol */
|
||||
.pygments-sx { color: #008000 } /* Literal.String.Other */
|
||||
.pygments-sr { color: #BB6688 } /* Literal.String.Regex */
|
||||
.pygments-s1 { color: #BA2121 } /* Literal.String.Single */
|
||||
.pygments-ss { color: #19177C } /* Literal.String.Symbol */
|
||||
.pygments-bp { color: #008000 } /* Name.Builtin.Pseudo */
|
||||
.pygments-vc { color: #19177C } /* Name.Variable.Class */
|
||||
.pygments-vg { color: #19177C } /* Name.Variable.Global */
|
||||
.pygments-vi { color: #19177C } /* Name.Variable.Instance */
|
||||
.pygments-il { color: #666666 } /* Literal.Number.Integer.Long */
|
||||
|
||||
|
||||
/*** Diff Styles ***/
|
||||
.vc_diff_plusminus { width: 1em; }
|
||||
.vc_diff_remove, .vc_diff_add, .vc_diff_changes1, .vc_diff_changes2 {
|
||||
font-family: monospace;
|
||||
white-space: pre;
|
||||
}
|
||||
.vc_diff_remove { background: rgb(100%,60%,60%); }
|
||||
.vc_diff_add { background: rgb(60%,100%,60%); }
|
||||
.vc_diff_changes1 { background: rgb(100%,100%,70%); color: rgb(50%,50%,50%); text-decoration: line-through; }
|
||||
.vc_diff_changes2 { background: rgb(100%,100%,0%); }
|
||||
.vc_diff_nochange, .vc_diff_binary, .vc_diff_error {
|
||||
font-family: sans-serif;
|
||||
font-size: smaller;
|
||||
}
|
||||
|
||||
/*** Intraline Diff Styles ***/
|
||||
.vc_idiff_add {
|
||||
background-color: #aaffaa;
|
||||
}
|
||||
.vc_idiff_change {
|
||||
background-color:#ffff77;
|
||||
}
|
||||
.vc_idiff_remove {
|
||||
background-color:#ffaaaa;
|
||||
}
|
||||
.vc_idiff_empty {
|
||||
background-color:#e0e0e0;
|
||||
}
|
||||
|
||||
table.vc_idiff col.content {
|
||||
width: 50%;
|
||||
}
|
||||
table.vc_idiff tbody {
|
||||
font-family: monospace;
|
||||
/* unfortunately, white-space: pre-wrap isn't widely supported ... */
|
||||
white-space: -moz-pre-wrap; /* Mozilla based browsers */
|
||||
white-space: -pre-wrap; /* Opera 4 - 6 */
|
||||
white-space: -o-pre-wrap; /* Opera >= 7 */
|
||||
white-space: pre-wrap; /* CSS3 */
|
||||
word-wrap: break-word; /* IE 5.5+ */
|
||||
}
|
||||
table.vc_idiff tbody th {
|
||||
background-color:#e0e0e0;
|
||||
text-align:right;
|
||||
}
|
||||
|
||||
|
||||
/*** Query Form ***/
|
||||
.vc_query_form {
|
||||
}
|
|
@ -0,0 +1,51 @@
|
|||
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
|
||||
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
|
||||
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
|
||||
<!-- ViewVC :: http://www.viewvc.org/ -->
|
||||
<head>
|
||||
<title>ViewVC Exception</title>
|
||||
</head>
|
||||
<body>
|
||||
<h3>An Exception Has Occurred</h3>
|
||||
|
||||
[if-any msg]
|
||||
<p>[msg]</p>
|
||||
[end]
|
||||
|
||||
[if-any status]
|
||||
<h4>HTTP Response Status</h4>
|
||||
<p><pre>[status]</pre></p>
|
||||
<hr />
|
||||
[end]
|
||||
|
||||
[if-any msg][else]
|
||||
<h4>Python Traceback</h4>
|
||||
<p><pre>
|
||||
[stacktrace]
|
||||
</pre></p>
|
||||
[end]
|
||||
|
||||
[# Here follows a bunch of space characters, present to ensure that
|
||||
our error message is larger than 512 bytes so that IE's "Friendly
|
||||
Error Message" won't show. For more information, see
|
||||
http://oreillynet.com/onjava/blog/2002/09/internet_explorer_subverts_err.html]
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,61 @@
[# setup page definitions]
[define page_title]Annotation of:[end]
[define help_href][docroot]/help_rootview.html[end]
[# end]
[include "include/header.ezt" "annotate"]
[include "include/fileview.ezt"]

<div id="vc_main_body">
<!-- ************************************************************** -->

[define last_rev]0[end]
[define rowclass]vc_row_odd[end]

[if-any lines]
<div id="vc_file">
<table cellspacing="0" cellpadding="0">
<tr>
<th class="vc_header">Line</th>
[is annotation "annotated"]
<th class="vc_header">User</th>
<th class="vc_header">Rev</th>
[end]
<th class="vc_header">File contents</th>
</tr>
[for lines]
[is lines.rev last_rev]
[else]
[is rowclass "vc_row_even"]
[define rowclass]vc_row_odd[end]
[else]
[define rowclass]vc_row_even[end]
[end]
[end]

<tr class="[rowclass]" id="l[lines.line_number]">
<td class="vc_file_line_number">[lines.line_number]</td>
[is annotation "annotated"]
<td class="vc_file_line_author">[is lines.rev last_rev]&nbsp;[else][lines.author][end]</td>
<td class="vc_file_line_rev">[is lines.rev last_rev]&nbsp;[else][if-any lines.diff_href]<a href="[lines.diff_href]">[end][lines.rev][if-any lines.diff_href]</a>[end][end]</td>
[end]
<td class="vc_file_line_text">[lines.text]</td>
</tr>
[define last_rev][lines.rev][end]
[end]
</table>
</div>

[else]
[if-any image_src_href]
<div id="vc_file_image">
<img src="[image_src_href]" alt="" />
</div>
[end]
[end]

[include "include/props.ezt"]

<!-- ************************************************************** -->
</div>

[include "include/footer.ezt"]
@ -0,0 +1,15 @@
[# setup page definitions]
[define page_title]Graph of:[end]
[define help_href][docroot]/help_rootview.html[end]
[# end]

[include "include/header.ezt" "graph"]

<div style="text-align:center;">
[imagemap]
<img usemap="#MyMapName"
     src="[imagesrc]"
     alt="Revisions of [where]" />
</div>

[include "include/footer.ezt"]
@ -0,0 +1,70 @@

<div style="border-bottom: solid 1px black;">
<p id="diff">This form allows you to request diffs between any two
revisions of this file.
For each of the two "sides" of the diff,
[if-any tags]
select a symbolic revision name using the selection box, or choose
'Use Text Field' and enter a numeric revision.
[else]
enter a numeric revision.
[end]
</p>

<form method="get" action="[diff_select_action]" id="diff_select">

<table cellpadding="2" cellspacing="0" class="auto">
<tr>
<td>&nbsp;</td>
<td>
[for diff_select_hidden_values]<input type="hidden" name="[diff_select_hidden_values.name]" value="[diff_select_hidden_values.value]"/>[end]
Diffs between
[if-any tags]
<select name="r1">
<option value="text" selected="selected">Use Text Field</option>
[for tags]
<option value="[tags.rev]:[tags.name]">[tags.name]</option>
[end]
</select>
<input type="text" size="12" name="tr1"
       value="[if-any rev_selected][rev_selected][else][first_revision][end]"
       onchange="document.getElementById('diff_select').r1.selectedIndex=0" />
[else]
<input type="text" size="12" name="r1"
       value="[if-any rev_selected][rev_selected][else][first_revision][end]" />
[end]

and
[if-any tags]
<select name="r2">
<option value="text" selected="selected">Use Text Field</option>
[for tags]
<option value="[tags.rev]:[tags.name]">[tags.name]</option>
[end]
</select>
<input type="text" size="12" name="tr2"
       value="[last_revision]"
       onchange="document.getElementById('diff_select').r2.selectedIndex=0" />
[else]
<input type="text" size="12" name="r2" value="[last_revision]" />
[end]
</td>
</tr>
<tr>
<td>&nbsp;</td>
<td>
Type of Diff should be a
<select name="diff_format" onchange="submit()">
<option value="h" [is diff_format "h"]selected="selected"[end]>Colored Diff</option>
<option value="l" [is diff_format "l"]selected="selected"[end]>Long Colored Diff</option>
<option value="f" [is diff_format "f"]selected="selected"[end]>Full Colored Diff</option>
<option value="u" [is diff_format "u"]selected="selected"[end]>Unidiff</option>
<option value="c" [is diff_format "c"]selected="selected"[end]>Context Diff</option>
<option value="s" [is diff_format "s"]selected="selected"[end]>Side by Side</option>
</select>
<input type="submit" value=" Get Diffs " />
</td>
</tr>
</table>
</form>
</div>
@ -0,0 +1,74 @@
<table class="auto">
<tr>
<td>Revision:</td>
<td><strong>[if-any revision_href]<a href="[revision_href]">[rev]</a>[else][rev][end]</strong> [if-any vendor_branch] <em>(vendor branch)</em>[end]</td>
</tr>
<tr>
<td>Committed:</td>
<td>[if-any date]<em>[date]</em> [end][if-any ago]([ago] ago) [end][if-any author]by <em>[author]</em>[end]</td>
</tr>
[if-any orig_path]
<tr>
<td>Original Path:</td>
<td><strong><a href="[orig_href]"><em>[orig_path]</em></a></strong></td>
</tr>
[end]
[if-any branches]
<tr>
<td>Branch:</td>
<td><strong>[branches]</strong></td>
</tr>
[end]
[if-any tags]
<tr>
<td>CVS Tags:</td>
<td><strong>[tags]</strong></td>
</tr>
[end]
[if-any branch_points]
<tr>
<td>Branch point for:</td>
<td><strong>[branch_points]</strong></td>
</tr>
[end]
[is roottype "cvs"][if-any changed]
<tr>
<td>Changes since <strong>[prev]</strong>:</td>
<td><strong>[changed] lines</strong></td>
</tr>
[end][end]
[is roottype "svn"][if-any size]
<tr>
<td>File size:</td>
<td>[size] byte(s)</td>
</tr>
[end][end]
[if-any lockinfo]
<tr>
<td>Lock status:</td>
<td>[lockinfo]</td>
</tr>
[end]
[is state "dead"]
<tr>
<td>State:</td>
<td><strong><em>FILE REMOVED</em></strong></td>
</tr>
[end]
[if-any annotation]
[is annotation "binary"]
<tr>
<td colspan="2"><strong>Unable to calculate annotation data on binary file contents.</strong></td>
</tr>
[end]
[is annotation "error"]
<tr>
<td colspan="2"><strong>Error occurred while calculating annotation data.</strong></td>
</tr>
[end]
[end]
[if-any log]
<tr>
<td>Log Message:</td>
<td><pre class="vc_log">[log]</pre></td>
</tr>
[end]
</table>
@ -0,0 +1,10 @@
</div> <!-- vc_view_main -->

<div id="vc_footer">
[if-any cfg.general.address]Administered by <address><a href="mailto:[cfg.general.address]">[cfg.general.address]</a></address><br/>[end]
Powered by <a href="http://viewvc.tigris.org/">ViewVC [vsn]</a>
[if-any rss_href]<br/><a href="[rss_href]" title="RSS 2.0 feed"><img src="[docroot]/images/feed-icon-16x16.jpg" class="vc_icon" alt="RSS 2.0 feed" /></a>[else]&nbsp;[end]
</div>

</body>
</html>
@ -0,0 +1,63 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">

<!-- ViewVC :: http://www.viewvc.org/ -->

<head>
<title>[[]ViewVC] [page_title] [if-any rootname][rootname][if-any where]/[where][end][end]</title>
<meta name="generator" content="ViewVC [vsn]" />
<link rel="stylesheet" href="[docroot]/styles.css" type="text/css" />
<script src="[docroot]/scripts.js"></script>
[if-any rss_href]
<link rel="alternate" type="application/rss+xml" href="[rss_href]" title="ViewVC RSS: [if-any rootname][rootname][if-any where]/[where][end][end]" />
[end]
</head>

<body>

<div id="vc_header">

<div id="vc_topmatter">
[if-any username]Logged in as: <strong>[username]</strong> |[end]
<a href="[help_href]">ViewVC Help</a>
</div>

<div id="vc_logo">
<a href="http://www.viewvc.org/"><img src="[docroot]/images/viewvc-logo.png" alt="ViewVC logotype" width="240" height="70" /></a>
</div>

<div id="vc_view_selection_group">
[is pathtype "dir"]
<a class="vc_view_link[is view "dir"]_this[end]" href="[view_href]">View Directory</a>
[if-any log_href]
| <a class="vc_view_link[is view "log"]_this[end]" href="[log_href]">Revision Log</a>
[end]
[if-any queryform_href]
| <a class="vc_view_link[is view "queryform"]_this[end]" href="[queryform_href]">Commit Query</a>
[end]
[if-any tarball_href]
| <a class="vc_view_link" href="[tarball_href]">Download Tarball</a>
[end]
[end]
[is pathtype "file"]
<a class="vc_view_link[is view "markup"]_this[end]" href="[view_href]">View File</a>
| <a class="vc_view_link[is view "log"]_this[end]" href="[log_href]">Revision Log</a>
| <a class="vc_view_link[is view "annotate"]_this[end]" href="[annotate_href]">Show Annotations</a>
[if-any graph_href]
| <a class="vc_view_link[is view "graph"]_this[end]" href="[graph_href]">Revision Graph</a>
[end]
| <a class="vc_view_link" href="[download_href]">Download File</a>
[end]
[if-any revision_href]
| <a class="vc_view_link[is view "revision"]_this[end]" href="[revision_href]">View Changeset</a>
[end]
| <a class="vc_view_link[is view "roots"]_this[end]" href="[roots_href]">Root Listing</a>
</div>

<div id="vc_current_path">
[if-any roots_href]<a href="[roots_href]">root</a>[end][if-any nav_path]<span class="pathdiv">/</span>[for nav_path][if-any nav_path.href]<a href="[nav_path.href]">[end][if-index nav_path last]<span class="thisitem">[end][nav_path.name][if-index nav_path last]</span>[end][if-any nav_path.href]</a>[end][if-index nav_path last][else]<span class="pathdiv">/</span>[end][end][end]
</div>

</div> <!-- vc_header -->

<div id="vc_view_main">
@ -0,0 +1,53 @@
<form method="get" action="[pathrev_action]" style="display: inline">
<div style="display: inline">
[for pathrev_hidden_values]<input type="hidden" name="[pathrev_hidden_values.name]" value="[pathrev_hidden_values.value]"/>[end]
[is roottype "cvs"]
[define pathrev_selected][pathrev][end]
<select name="pathrev" onchange="submit()">
<option value=""></option>
[if-any branch_tags]
<optgroup label="Branches">
[for branch_tags]
[is branch_tags pathrev]
<option selected="selected">[branch_tags]</option>
[define pathrev_selected][end]
[else]
<option>[branch_tags]</option>
[end]
[end]
</optgroup>
[end]
<optgroup label="Non-branch tags">
[for plain_tags]
[is plain_tags pathrev]
<option selected="selected">[plain_tags]</option>
[define pathrev_selected][end]
[else]
<option>[plain_tags]</option>
[end]
[end]
</optgroup>
[if-any pathrev_selected]
<option selected="selected">[pathrev_selected]</option>
[end]
</select>
[else]
<input type="text" name="pathrev" value="[pathrev]" size="6"/>
[end]
<input type="submit" value="Set Sticky [is roottype "cvs"]Tag[else]Revision[end]" />
</div>
</form>

[if-any pathrev]
<form method="get" action="[pathrev_clear_action]" style="display: inline">
<div style="display: inline">
[for pathrev_clear_hidden_values]<input type="hidden" name="[pathrev_clear_hidden_values.name]" value="[pathrev_clear_hidden_values.value]"/>[end]
[if-any lastrev]
[is pathrev lastrev][else]<input type="submit" value="Set to [lastrev]" />[end]
(<i>Current path doesn't exist after revision <strong>[lastrev]</strong></i>)
[else]
<input type="submit" value="Clear" />
[end]
</div>
</form>
[end]
@ -0,0 +1,26 @@
[if-any properties]
<hr/>
<div class="vc_properties">
<h2>Properties</h2>
<table cellspacing="2" class="fixed">
<thead>
<tr>
<th class="vc_header_sort" style="width: 200px;">Name</th>
<th class="vc_header">Value</th>
</tr>
</thead>
<tbody>
[for properties]
<tr class="vc_row_[if-index properties even]even[else]odd[end]">
<td><strong>[properties.name]</strong></td>
[if-any properties.undisplayable]
<td><em>Property value is undisplayable.</em></td>
[else]
<td style="white-space: pre;">[properties.value]</td>
[end]
</tr>
[end]
</tbody>
</table>
</div>
[end]
@ -0,0 +1,247 @@
[# setup page definitions]
[define page_title]Log of:[end]
[define help_href][docroot]/help_log.html[end]
[# end]
[include "include/header.ezt" "log"]

<table class="auto">

[if-any default_branch]
<tr>
<td>Default branch:</td>
<td>[for default_branch]<a href="[default_branch.href]">[default_branch.name]</a>[if-index default_branch last][else], [end]
[end]</td>
</tr>
[end]

[is pathtype "file"]
[if-any view_href]
<tr>
<td>Links to HEAD:</td>
<td>
(<a href="[view_href]">view</a>)
[if-any download_href](<a href="[download_href]">download</a>)[end]
[if-any download_text_href](<a href="[download_text_href]">as text</a>)[end]
[if-any annotate_href](<a href="[annotate_href]">annotate</a>)[end]
</td>
</tr>
[end]

[if-any tag_view_href]
<tr>
<td>Links to [pathrev]:</td>
<td>
(<a href="[tag_view_href]">view</a>)
[if-any tag_download_href](<a href="[tag_download_href]">download</a>)[end]
[if-any tag_download_text_href](<a href="[tag_download_text_href]">as text</a>)[end]
[if-any tag_annotate_href](<a href="[tag_annotate_href]">annotate</a>)[end]
</td>
</tr>
[end]
[end]

<tr>
<td>Sticky [is roottype "cvs"]Tag[else]Revision[end]:</td>
<td>[include "include/pathrev_form.ezt"]</td>
</tr>

[is cfg.options.use_pagesize "0"][else][is picklist_len "1"][else]
<tr>
<td>Jump to page:</td>
<td><form method="get" action="[log_paging_action]">
[for log_paging_hidden_values]<input type="hidden" name="[log_paging_hidden_values.name]" value="[log_paging_hidden_values.value]"/>[end]
<select name="log_pagestart" onchange="submit()">
[for picklist]
[if-any picklist.more]
<option [is picklist.count log_pagestart]selected="selected"[end] value="[picklist.count]">Page [picklist.page]: [picklist.start] ...</option>
[else]
<option [is picklist.count log_pagestart]selected="selected"[end] value="[picklist.count]">Page [picklist.page]: [picklist.start] - [picklist.end]</option>
[end]
[end]
</select>
<input type="submit" value="Go" />
</form>
</td>
</tr>
[end][end]

<tr>
<td>Sort logs by:</td>
<td><form method="get" action="[logsort_action]">
<div>
<a name="logsort"></a>
[for logsort_hidden_values]<input type="hidden" name="[logsort_hidden_values.name]" value="[logsort_hidden_values.value]"/>[end]
<select name="logsort" onchange="submit()">
<option value="cvs" [is logsort "cvs"]selected="selected"[end]>Not sorted</option>
<option value="date" [is logsort "date"]selected="selected"[end]>Commit date</option>
<option value="rev" [is logsort "rev"]selected="selected"[end]>Revision</option>
</select>
<input type="submit" value=" Sort " />
</div>
</form>
</td>
</tr>

</table>

<div id="vc_main_body">
<!-- ************************************************************** -->

[define first_revision][end]
[define last_revision][end]

[for entries]

[if-index entries first][define first_revision][entries.rev][end][end]
[if-index entries last]
[define last_revision][entries.rev][end]
<div>
[else]
<div style="border-bottom: 1px dotted black">
[end]

[is entries.state "dead"]
Revision <strong>[entries.rev]</strong>
[else]
<a name="rev[entries.rev]"></a>
[for entries.tag_names]<a name="[entries.tag_names]"></a>
[end]
[for entries.branch_names]<a name="[entries.branch_names]"></a>
[end]

Revision [is roottype "svn"]<a href="[entries.revision_href]"><strong>[entries.rev]</strong></a>[else]<strong>[entries.rev]</strong>[end] -
[if-any entries.view_href]
[is pathtype "file"]
(<a href="[entries.view_href]">view</a>)
[else]
<a href="[entries.view_href]">Directory Listing</a>
[end]
[end]
[if-any entries.download_href](<a href="[entries.download_href]">download</a>)[end]
[if-any entries.download_text_href](<a href="[entries.download_text_href]">as text</a>)[end]
[if-any entries.annotate_href](<a href="[entries.annotate_href]">annotate</a>)[end]

[is pathtype "file"]
[# if you don't want to allow select for diffs then remove this section]
[is entries.rev rev_selected]
- <strong>[[]selected]</strong>
[else]
- <a href="[entries.sel_for_diff_href]">[[]select for diffs]</a>
[end]
[end]
[end]

[if-any entries.vendor_branch]
<em>(vendor branch)</em>
[end]

<br />

[is roottype "svn"]
[if-index entries last]Added[else]Modified[end]
[end]

<em>[if-any entries.date][entries.date][else](unknown date)[end]</em>
[if-any entries.ago]([entries.ago] ago)[end]
by <em>[if-any entries.author][entries.author][else](unknown author)[end]</em>

[if-any entries.orig_path]
<br />Original Path: <a href="[entries.orig_href]"><em>[entries.orig_path]</em></a>
[end]

[if-any entries.branches]
<br />Branch:
[for entries.branches]
<a href="[entries.branches.href]"><strong>[entries.branches.name]</strong></a>[if-index entries.branches last][else],[end]
[end]
[end]

[if-any entries.tags]
<br />CVS Tags:
[for entries.tags]
<a href="[entries.tags.href]"><strong>[entries.tags.name]</strong></a>[if-index entries.tags last][else],[end]
[end]
[end]

[if-any entries.branch_points]
<br />Branch point for:
[for entries.branch_points]
<a href="[entries.branch_points.href]"><strong>[entries.branch_points.name]</strong></a>[if-index entries.branch_points last][else],[end]
[end]
[end]

[if-any entries.prev]
[if-any entries.changed]
[is roottype "cvs"]
<br />Changes since <strong>[entries.prev]: [entries.changed] lines</strong>
[end]
[end]
[end]

[if-any entries.lockinfo]
<br />Lock status: [entries.lockinfo]
[end]

[is roottype "svn"]
[if-any entries.size]
<br />File length: [entries.size] byte(s)
[end]

[if-any entries.copy_path]
<br />Copied from: <a href="[entries.copy_href]"><em>[entries.copy_path]</em></a> revision [entries.copy_rev]
[end]
[end]

[is entries.state "dead"]
<br /><strong><em>FILE REMOVED</em></strong>
[else]
[is pathtype "file"]
[if-any entries.prev]
<br />Diff to <a href="[entries.diff_to_prev_href]">previous [entries.prev]</a>
[if-any human_readable]
[else]
(<a href="[entries.diff_to_prev_href]&amp;diff_format=h">colored</a>)
[end]
[end]

[is roottype "cvs"]
[if-any entries.branch_point]
, to <a href="[entries.diff_to_branch_href]">branch point [entries.branch_point]</a>
[if-any human_readable]
[else]
(<a href="[entries.diff_to_branch_href]&amp;diff_format=h">colored</a>)
[end]
[end]

[if-any entries.next_main]
, to <a href="[entries.diff_to_main_href]">next main [entries.next_main]</a>
[if-any human_readable]
[else]
(<a href="[entries.diff_to_main_href]&amp;diff_format=h">colored</a>)
[end]
[end]
[end]

[if-any entries.diff_to_sel_href]
[if-any entries.prev], [else]<br />Diff[end]
to <a href="[entries.diff_to_sel_href]">selected [rev_selected]</a>
[if-any human_readable]
[else]
(<a href="[entries.diff_to_sel_href]&amp;diff_format=h">colored</a>)
[end]
[end]
[end]
[end]

<pre class="vc_log">[entries.log]</pre>
</div>
[end]

<!-- ************************************************************** -->
</div>

[is pathtype "file"]
[include "include/diff_form.ezt"]
[end]

[include "include/footer.ezt"]
@ -0,0 +1,18 @@
[# setup page definitions]
[define page_title]View of:[end]
[define help_href][docroot]/help_rootview.html[end]
[# end]
[include "include/header.ezt" "markup"]
[include "include/fileview.ezt"]

<div id="vc_main_body">
<!-- ************************************************************** -->

<div id="vc_markup"><pre>[markup]</pre></div>

[include "include/props.ezt"]

<!-- ************************************************************** -->
</div>

[include "include/footer.ezt"]
@ -0,0 +1,241 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<!-- ViewVC :: http://www.viewvc.org/ -->
<head>
<title>Checkin Database Query</title>
<link rel="stylesheet" href="[docroot]/styles.css" type="text/css" />
</head>

<body>

[# setup page definitions]
[define help_href][docroot]/help_query.html[end]
[# end]

<p>
Select your parameters for querying the CVS commit database. You
can search for multiple matches by typing a comma-separated list
into the text fields. Regular expressions and wildcards are also
supported. Blank text input fields are treated as wildcards.
</p>
<p>
Any of the text entry fields can take a comma-separated list of
search arguments. For example, to search for all commits from
authors <em>jpaint</em> and <em>gstein</em>, just type: <strong>jpaint,
gstein</strong> in the <em>Author</em> input box. If you are searching
for items containing spaces or quotes, you will need to quote your
request. For example, the same search above with quotes is:
<strong>"jpaint", "gstein"</strong>.
</p>
<p>
Wildcard and regular expression searches are entered in a similar
way to the quoted requests. You must quote any wildcard or
regular expression request, and a command character precedes the
first quote. The command character <strong>l</strong> is for wildcard
searches, and the wildcard character is a percent (<strong>%</strong>). The
command character for regular expressions is <strong>r</strong>, and is
passed directly to MySQL, so you'll need to refer to the MySQL
manual for the exact regex syntax. It is very similar to Perl. A
wildcard search for all files with a <em>.py</em> extension is:
<strong>l"%.py"</strong> in the <em>File</em> input box. The same search done
with a regular expression is: <strong>r".*\.py"</strong>.
</p>
<p>
All search types can be mixed, as long as they are separated by
commas.
</p>

<form method="get" action="">

<div class="vc_query_form">
<table cellspacing="0" cellpadding="2" class="auto">
<tr>
<td>
<table>
<tr>
<td style="vertical-align:top;">

<table>
<tr>
<td align="right">CVS Repository:</td>
<td>
<input type="text" name="repository" size="40" value="[repository]" />
</td>
</tr>
<tr>
<td align="right">CVS Branch:</td>
<td>
<input type="text" name="branch" size="40" value="[branch]" />
</td>
</tr>
<tr>
<td align="right">Directory:</td>
<td>
<input type="text" name="directory" size="40" value="[directory]" />
</td>
</tr>
<tr>
<td align="right">File:</td>
<td>
<input type="text" name="file" size="40" value="[file]" />
</td>
</tr>
<tr>
<td align="right">Author:</td>
<td>
<input type="text" name="who" size="40" value="[who]" />
</td>
</tr>
</table>

</td>
<td style="vertical-align:top;">

<table>
<tr>
<td align="left">Sort By:</td>
<td>
<select name="sortby">
<option value="date" [is sortby "date"]selected="selected"[end]>Date</option>
<option value="author" [is sortby "author"]selected="selected"[end]>Author</option>
<option value="file" [is sortby "file"]selected="selected"[end]>File</option>
</select>
</td>
</tr>
<tr>
<td colspan="2">
<table cellspacing="0" cellpadding="0">
<tr>
<td>Date:</td>
</tr>
<tr>
<td><input type="radio" name="date" value="hours"
           [is date "hours"]checked="checked"[end] /></td>
<td>In the last
<input type="text" name="hours" value="[hours]" size="4" />hours
</td>
</tr>
<tr>
<td><input type="radio" name="date" value="day"
           [is date "day"]checked="checked"[end] /></td>
<td>In the last day</td>
</tr>
<tr>
<td><input type="radio" name="date" value="week"
           [is date "week"]checked="checked"[end] /></td>
<td>In the last week</td>
</tr>
<tr>
<td><input type="radio" name="date" value="month"
           [is date "month"]checked="checked"[end] /></td>
<td>In the last month</td>
</tr>
<tr>
<td><input type="radio" name="date" value="all"
           [is date "all"]checked="checked"[end] /></td>
<td>Since the beginning of time</td>
</tr>
</table>
</td>
</tr>
</table>

</td>
</tr>
</table>
</td>
<td>
<input type="submit" value="Search" />
</td>
</tr>
</table>
</div>

</form>

[is query "skipped"]
[else]
<p><strong>[num_commits]</strong> matches found.</p>

[if-any commits]
<table cellspacing="0" cellpadding="2">
<thead>
<tr class="vc_header">
<th>Revision</th>
<th>File</th>
<th>Branch</th>
<th>+/-</th>
<th>Date</th>
<th>Author</th>
[# uncomment, if you want a separate Description column: (also see below)
<th>Description</th>
]
</tr>
</thead>
[for commits]
<tbody>
[for commits.files]
<tr class="vc_row_[if-index commits even]even[else]odd[end]">
<td style="vertical-align:top;">
[if-any commits.files.rev][commits.files.rev][else]&nbsp;[end]
</td>
<td style="vertical-align:top;">[commits.files.link]</td>
<td style="vertical-align:top;">
[if-any commits.files.branch][commits.files.branch][else]&nbsp;[end]
</td>
<td style="vertical-align:top;">
[is commits.files.type "Add"]<ins>[end]
[is commits.files.type "Change"]<a href="[commits.files.difflink]">[end]
[is commits.files.type "Remove"]<del>[end]
[commits.files.plus]/[commits.files.minus]
[is commits.files.type "Add"]</ins>[end]
[is commits.files.type "Change"]</a>[end]
[is commits.files.type "Remove"]</del>[end]
</td>
<td style="vertical-align:top;">
[if-any commits.files.date][commits.files.date][else]&nbsp;[end]
</td>
<td style="vertical-align:top;">
[if-any commits.files.author][commits.files.author][else]&nbsp;[end]
</td>

[# uncomment, if you want a separate Description column:
{if-index commits.files first}
<td style="vertical-align:top;" rowspan="{commits.num_files}">
{commits.log}
</td>
{end}

(substitute brackets for the braces)
]
</tr>
[# and also take the following out in the "Description column"-case:]
[if-index commits.files last]
<tr class="vc_row_[if-index commits even]even[else]odd[end]">
<td>&nbsp;</td>
<td colspan="5"><strong>Log:</strong><br />
<pre class="vc_log">[commits.log]</pre></td>
</tr>
[end]
[# ---]
[end]
</tbody>
[end]

<tr class="vc_header">
<th style="text-align:left;vertical-align:top;">&nbsp;</th>
<th style="text-align:left;vertical-align:top;">&nbsp;</th>
<th style="text-align:left;vertical-align:top;">&nbsp;</th>
<th style="text-align:left;vertical-align:top;">&nbsp;</th>
<th style="text-align:left;vertical-align:top;">&nbsp;</th>
<th style="text-align:left;vertical-align:top;">&nbsp;</th>
[# uncomment, if you want a separate Description column:
<th style="text-align:left;vertical-align:top;">&nbsp;</th>
]
</tr>
</table>
[end]
[end]
[include "include/footer.ezt"]
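The search syntax described in the query page's help text above (comma-separated terms, quoted terms, `l"..."` wildcard searches, and `r"..."` regular-expression searches) can be sketched as a small tokenizer. This is an illustrative sketch only, not ViewVC's actual query parser; the function name `parse_terms` and the `'exact'`/`'like'`/`'regex'` kind labels are hypothetical.

```python
import re

# One term per comma: plain or quoted text is an exact match, l"..." is a
# SQL LIKE wildcard term (% matches anything), r"..." is a regex term.
_TERM_RE = re.compile(r'([lr])?"([^"]*)"|([^,\s]+)')

def parse_terms(field):
    """Tokenize a comma-separated query field into (kind, value) pairs."""
    terms = []
    for match in _TERM_RE.finditer(field):
        cmd, quoted, bare = match.groups()
        if bare is not None:
            terms.append(('exact', bare))       # unquoted term
        elif cmd == 'l':
            terms.append(('like', quoted))      # wildcard search
        elif cmd == 'r':
            terms.append(('regex', quoted))     # handed to MySQL REGEXP
        else:
            terms.append(('exact', quoted))     # quoted exact term
    return terms
```

For example, `parse_terms('jpaint, gstein')` yields two exact terms, while `parse_terms('l"%.py"')` yields a single wildcard term.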
@ -0,0 +1,201 @@
|
|||
[# setup page definitions]
|
||||
[define page_title]Query on:[end]
|
||||
[define help_href][docroot]/help_rootview.html[end]
|
||||
[# end]
|
||||
[include "include/header.ezt" "query"]
|
||||
|
||||
<form action="[query_action]" method="get">
|
||||
|
||||
<div class="vc_query_form">
|
||||
[for query_hidden_values]<input type="hidden" name="[query_hidden_values.name]" value="[query_hidden_values.value]"/>[end]
|
||||
<table cellspacing="0" cellpadding="5" class="auto">
|
||||
[is roottype "cvs"]
|
||||
[# For subversion, the branch field is not used ]
|
||||
<tr>
|
||||
<th style="text-align:right;vertical-align:top;">Branch:</th>
|
||||
<td>
|
||||
<input type="text" name="branch" value="[branch]" />
<label for="branch_match_exact">
  <input type="radio" name="branch_match" id="branch_match_exact"
         value="exact" [is branch_match "exact"]checked="checked"[end] />
  exact
</label>
<label for="branch_match_glob">
  <input type="radio" name="branch_match" id="branch_match_glob"
         value="glob" [is branch_match "glob"]checked="checked"[end] />
  glob pattern
</label>
<label for="branch_match_regex">
  <input type="radio" name="branch_match" id="branch_match_regex"
         value="regex" [is branch_match "regex"]checked="checked"[end] />
  regex
</label>
<label for="branch_match_notregex">
  <input type="radio" name="branch_match" id="branch_match_notregex"
         value="notregex" [is branch_match "notregex"]checked="checked"[end] />
  <em>not</em> regex
</label>
</td>
</tr>
[end]
<tr>
  <th style="text-align:right;vertical-align:top;">Subdirectory:</th>
  <td>
    <input type="text" name="dir" value="[dir]" />
    <em>(You can list multiple directories separated by commas.)</em>
  </td>
</tr>
<tr>
  <th style="text-align:right;vertical-align:top;">File:</th>
  <td>
    <input type="text" name="file" value="[file]" />
    <label for="file_match_exact">
      <input type="radio" name="file_match" id="file_match_exact"
             value="exact" [is file_match "exact"]checked="checked"[end] />
      exact
    </label>
    <label for="file_match_glob">
      <input type="radio" name="file_match" id="file_match_glob"
             value="glob" [is file_match "glob"]checked="checked"[end] />
      glob pattern
    </label>
    <label for="file_match_regex">
      <input type="radio" name="file_match" id="file_match_regex"
             value="regex" [is file_match "regex"]checked="checked"[end] />
      regex
    </label>
    <label for="file_match_notregex">
      <input type="radio" name="file_match" id="file_match_notregex"
             value="notregex" [is file_match "notregex"]checked="checked"[end] />
      <em>not</em> regex
    </label>
  </td>
</tr>
<tr>
  <th style="text-align:right;vertical-align:top;">Who:</th>
  <td>
    <input type="text" name="who" value="[who]" />
    <label for="who_match_exact">
      <input type="radio" name="who_match" id="who_match_exact"
             value="exact" [is who_match "exact"]checked="checked"[end] />
      exact
    </label>
    <label for="who_match_glob">
      <input type="radio" name="who_match" id="who_match_glob"
             value="glob" [is who_match "glob"]checked="checked"[end] />
      glob pattern
    </label>
    <label for="who_match_regex">
      <input type="radio" name="who_match" id="who_match_regex"
             value="regex" [is who_match "regex"]checked="checked"[end] />
      regex
    </label>
    <label for="who_match_notregex">
      <input type="radio" name="who_match" id="who_match_notregex"
             value="notregex" [is who_match "notregex"]checked="checked"[end] />
      <em>not</em> regex
    </label>
  </td>
</tr>
<tr>
  <th style="text-align:right;vertical-align:top;">Comment:</th>
  <td>
    <input type="text" name="comment" value="[comment]" />
|
||||
    <label for="comment_match_exact">
      <input type="radio" name="comment_match" id="comment_match_exact"
             value="exact" [is comment_match "exact"]checked="checked"[end] />
      exact
    </label>
    <label for="comment_match_glob">
      <input type="radio" name="comment_match" id="comment_match_glob"
             value="glob" [is comment_match "glob"]checked="checked"[end] />
      glob pattern
    </label>
    <label for="comment_match_regex">
      <input type="radio" name="comment_match" id="comment_match_regex"
             value="regex" [is comment_match "regex"]checked="checked"[end] />
      regex
    </label>
    <label for="comment_match_notregex">
      <input type="radio" name="comment_match" id="comment_match_notregex"
             value="notregex" [is comment_match "notregex"]checked="checked"[end] />
      <em>not</em> regex
    </label>
  </td>
</tr>
<tr>
  <th style="text-align:right;vertical-align:top;">Sort By:</th>
  <td>
    <select name="querysort">
      <option value="date" [is querysort "date"]selected="selected"[end]>Date</option>
      <option value="author" [is querysort "author"]selected="selected"[end]>Author</option>
      <option value="file" [is querysort "file"]selected="selected"[end]>File</option>
    </select>
  </td>
</tr>
<tr>
  <th style="text-align:right;vertical-align:top;">Date:</th>
  <td>
    <table cellspacing="0" cellpadding="0">
      <tr>
        <td><input type="radio" name="date" id="date_hours"
                   value="hours" [is date "hours"]checked="checked"[end] /></td>
        <td>
          <label for="date_hours">In the last</label>
          <input type="text" name="hours" value="[hours]" size="4" />
          hours
        </td>
      </tr>
      <tr>
        <td><input type="radio" name="date" id="date_day"
                   value="day" [is date "day"]checked="checked"[end] /></td>
        <td><label for="date_day">In the last day</label></td>
      </tr>
      <tr>
        <td><input type="radio" name="date" id="date_week"
                   value="week" [is date "week"]checked="checked"[end] /></td>
        <td><label for="date_week">In the last week</label></td>
      </tr>
      <tr>
        <td><input type="radio" name="date" id="date_month"
                   value="month" [is date "month"]checked="checked"[end] /></td>
        <td><label for="date_month">In the last month</label></td>
      </tr>
      <tr>
        <td><input type="radio" name="date" id="date_all"
                   value="all" [is date "all"]checked="checked"[end] /></td>
        <td><label for="date_all">Since the beginning of time</label></td>
      </tr>
      <tr>
        <td><input type="radio" name="date" id="date_explicit"
                   value="explicit" [is date "explicit"]checked="checked"[end] /></td>
        <td>
          <label for="date_explicit">Between</label>
          <input type="text" name="mindate" value="[mindate]" size="20" />
          and
          <input type="text" name="maxdate" value="[maxdate]" size="20" />
          <br />
          (use the form <strong>yyyy-mm-dd hh:mm:ss</strong>)
        </td>
      </tr>
    </table>
  </td>
</tr>
<tr>
  <th style="text-align:right;vertical-align:top;">Limit:</th>
  <td>
    Show at most
    <input type="text" name="limit_changes" value="[limit_changes]" size="5" />
    changed files per commit. <em>(Use 0 to show all files.)</em>
  </td>
</tr>
<tr>
  <td></td>
  <td><input type="submit" value="Search" /></td>
</tr>
</table>
</div>

</form>

[include "include/footer.ezt"]