mirror of https://github.com/vitalif/viewvc-4intranet synced 2019-04-16 04:14:59 +03:00

Compare commits


1 Commit

Author:  (no author)
SHA1:    cf7c087fbd
Message: This commit was manufactured by cvs2svn to create tag 'V0_9_1'.
         git-svn-id: http://viewvc.tigris.org/svn/viewvc/tags/V0_9_1@431 8cb11bc2-c004-0410-86c3-e597b4017df7
Date:    2001-12-27 05:18:35 +00:00
471 changed files with 11480 additions and 49574 deletions

CHANGES (459 lines changed)

@@ -1,461 +1,3 @@
4Intra.net/CUSTIS improvements
* Support for full-text search over file contents, including binary
documents (*.doc and so on), using Sphinx Search and an Apache Tika
server. A patched Tika with fixes for #TIKA709 and #TIKA964 is highly
recommended:
http://wiki.4intra.net/public/tika-app-1.2-fix-TIKA709-TIKA964.jar
(SHA1 efef722a5e2322f7c2616d096552a48134dc5faa)
* Access right checks in query results.
* Access right checks for repository root directories.
* New query parameters: repository, repo type, revision number.
* Authorizer for CVSnt ACLs.
* InnoDB, additional database indexes and some search query
optimisations.
* Support for specifying path to MySQL UNIX socket.
* Asynchronous hook examples for updating SVN and CVS repos.
* Slightly more correct charset guessing, especially for Russian.
* Support for diffing added/removed files.
* File lists in RSS feeds for 'classic' template.
* Path configuration via a single 'viewvcinstallpath.py' file,
not via editing multiple bin/* files.
* Link to repository list instead of viewvc.org from the logo.
* "rcsfile service" support used to work around command execution
problems (fork/signal problems) from Apache mod_python.
* Some new fields in query form: revision number, repository,
repository type; the branch field is now visible not only for
CVS roots, but also when no repository is selected.
* Correct per-root overriding of configuration options.
Version 1.2.0 (released ??-???-????)
* bumped minimum supported Python version to 2.4
* implemented support for property diffs (issue #383)
* allow user-configurable cvsgraph display (issue #336)
* allow rNNNN syntax for Subversion revision numbers (issue #441)
Version 1.1.20 (released 24-Apr-2013)
* fix tab-to-space handling regression in markup view
* fix regression in root lookup handling (issue #526)
Version 1.1.19 (released 22-Apr-2013)
* improve root lookup performance (issue #523)
* new 'max_filesize_kbytes' config option and handling (issue #524)
* tarball generation improvements:
- preserve Subversion symlinks in generated tarballs (issue #487)
- reduce memory usage of tarball generation logic
- fix double compression of generated tarballs (issue #525)
* file content handling improvements:
- expanded support for encoding detection and transcoding (issue #11)
- fix tab-to-space conversion bugs in markup, annotate, and diff views
- fix handling of trailing whitespace in diff view
* add support for timestamp display in ISO8601 format (issue #46)
Version 1.1.18 (released 28-Feb-2013)
* fix exception raised by BDB-backed SVN repositories (issue #519)
* hide revision-less files when rcsparse is in use
* include branchpoints in branch views using rcsparse (issue #347)
* miscellaneous cvsdb improvements:
- add --port option to make-database (issue #521)
- explicitly name columns in queries (issue #522)
- update MySQL syntax to avoid discontinued "TYPE=" terms
Version 1.1.17 (released 25-Oct-2012)
* fix exception caused by uninitialized variable usage (issue #516)
Version 1.1.16 (released 24-Oct-2012)
* security fix: escape "extra" diff info to avoid XSS attack (issue #515)
* add 'binary_mime_types' configuration option and handling (issue #510)
* fix 'select for diffs' persistence across log pages (issue #512)
* remove lock status and filesize check on directories in remote SVN views
* fix bogus 'Annotation of' page title for non-annotated view (issue #514)
Version 1.1.15 (released 22-Jun-2012)
* security fix: complete authz support for remote SVN views (issue #353)
* security fix: log msg leak in SVN revision view with unreadable copy source
* fix several instances of incorrect information in remote SVN views
* increase performance of some revision metadata lookups in remote SVN views
* fix RSS feed regression introduced in 1.1.14
Version 1.1.14 (released 12-Jun-2012)
* fix annotation of svn files with non-URI-safe paths (issue #504)
* handle file:/// Subversion rootpaths as local roots (issue #446)
* fix bug caused by trying to case-normalize anon usernames (issue #505)
* speed up log handling by reusing tokenization results (issue #506)
* add support for custom revision log markup rules (issue #246)
Version 1.1.13 (released 23-Jan-2012)
* fix svndbadmin failure on deleted paths under Subversion 1.7 (issue #499)
* fix annotation of files in svn roots with non-URI-safe paths
* fix stray annotation warning in markup display of images
* more gracefully handle attempts to display binary content (issue #501)
Version 1.1.12 (released 03-Nov-2011)
* fix path display in patch and certain diff views (issue #485)
* fix broken cvsdb glob searching (issue #486)
* allow svn revision specifiers to have leading r's (issue #441, #448)
* allow environmental override of configuration location (issue #494)
* fix exception HTML-escaping non-string data under WSGI (issue #454)
* add links to root logs from roots view (issue #470)
* use Pygments lexer-guessing functionality (issue #495)
Version 1.1.11 (released 17-May-2011)
* security fix: remove user-reachable override of cvsdb row limit
* fix broken standalone.py -c and -d options handling
* add --help option to standalone.py
* fix stack trace when asked to checkout a directory (issue #478)
* improve memory usage and speed of revision log markup (issue #477)
* fix broken annotation view in CVS keyword-bearing files (issue #479)
* warn users when query results are incomplete (issue #433)
* avoid parsing errors on RCS newphrases in the admin section (issue #483)
* make rlog parsing code more robust in certain error cases (issue #444)
Version 1.1.10 (released 15-Mar-2011)
* fix stack trace in Subversion revision info logic (issue #475, issue #476)
Version 1.1.9 (released 18-Feb-2011)
* vcauth universal access determinations (issue #425)
* rework svn revision info cache for performance
* make revision log "extra pages" count configurable
* fix Subversion 1.4.x revision log compatibility code regression
* display sanitized error when authzfile is malformed
* restore markup of URLs in file contents (issue #455)
* optionally display last-committed metadata in roots view (issue #457)
Version 1.1.8 (released 02-Dec-2010)
* fix slowness triggered by allow_compress=1 configuration (issue #467)
* allow use of 'fcrypt' for Windows standalone.py authn support (issue #471)
* yield more useful error on directory markup/annotate request (issue #472)
Version 1.1.7 (released 09-Sep-2010)
* display Subversion revision properties in the revision view (issue #453)
* fix exception in 'standalone.py -r REPOS' when run without a config file
* fix standalone.py server root deployments (--script-alias='')
* add rudimentary Basic authentication support to standalone.py (issue #49)
* fix obscure "unexpected NULL parent pool" Subversion bindings error
* enable path info / link display in remote Subversion root revision view
* fix vhost name case handling inconsistency (issue #466)
* use svn:mime-type property charset param as encoding hint
* markup Subversion revision references in log messages (issue #313)
* add rudimentary support for FastCGI-based deployments (issue #464)
* fix query script WSGI deployment
* add configuration to fix query script cross-linking to ViewVC
Version 1.1.6 (released 02-Jun-2010)
* add rudimentary support for WSGI-based deployments (issue #397)
* fix exception caused by trying to HTML-escape non-string data (issue #454)
* fix incorrect RSS feed Content-Type header (issue #449)
* fix RSS <title> encoding problem (issue #451)
* allow 'svndbadmin purge' to work on missing repositories (issue #452)
Version 1.1.5 (released 29-Mar-2010)
* security fix: escape user-provided search_re input to avoid XSS attack
Version 1.1.4 (released 10-Mar-2010)
* security fix: escape user-provided query form input to avoid XSS attack
* fix standalone.py failure (when per-root options aren't used) (issue #445)
* fix annotate failure caused by ignored svn_config_dir (issue #447)
Version 1.1.3 (released 22-Dec-2009)
* security fix: add root listing support of per-root authz config
* security fix: query.py requires 'forbidden' authorizer (or none) in config
* fix URL-ification of truncated log messages (issue #3)
* fix regexp input validation (issue #426, #427, #440)
* add support for configurable tab-to-spaces conversion
* fix not-a-sequence error in diff view
* allow viewvc-install to work when templates-contrib is absent
* minor template improvements/corrections
* expose revision metadata in diff view (issue #431)
* markup file/directory item property URLs and email addresses (issue #434)
* make ViewVC cross copies in Subversion history by default
* fix bug that caused standalone.py failure under Python 1.5.2 (issue #442)
* fix support for per-vhost overrides of authorizer parameters (issue #411)
* fix root name identification in query.py interface
Version 1.1.2 (released 11-Aug-2009)
* security fix: validate the 'view' parameter to avoid XSS attack
* security fix: avoid printing illegal parameter names and values
* add optional support for character encoding detection (issue #400)
* fix username case handling in svnauthz module (issue #419)
* fix cvsdbadmin/svnadmin rebuild error on missing repos (issue #420)
* don't drop leading blank lines from colorized file contents (issue #422)
* add file.ezt template logic for optionally hiding binary file contents
Version 1.1.1 (released 03-Jun-2009)
* fix broken query form (missing required template variables) (issue #416)
* fix bug in cvsdb which caused rebuild operations to lose data (issue #417)
* fix cvsdb purge/rebuild repos lookup to error on missing repos
* fix misleading file contents view page title
Version 1.1.0 (released 13-May-2009)
* add support for full content diffs (issue #153)
* make many more data dictionary items available to all views
* various rcsparse and tparse module fixes
* add daemon mode to standalone.py (issue #235)
* rework helper application configuration options (issues #229, #62)
* teach standalone.py to recognize Subversion repositories via -r option
* now interpret relative paths in "viewvc.conf" as relative to that file
* add 'purge' subcommand to cvsdbadmin and svndbadmin (issue #271)
* fix orphaned data bug in cvsdbadmin/svndbadmin rebuild (issue #271)
* add support for query by log message (issues #22, #121)
* fix bug parsing 'svn blame' output with too-long author names (issue #221)
* fix default standalone.py port to be within private IANA range (issue #234)
* add unified configury of allowed views; checkout view disabled by default
* add support for ranges of revisions to svndbadmin (issue #224)
* make the query handling more forgiving of malformatted subdirs (issue #244)
* add support for per-root configuration overrides (issue #371)
* add support for optional email address mangling (issue #290)
* extensible path-based authorization subsystem (issue #268), supporting:
- Subversion authz files (new)
- regexp-based path hiding (for compat with 1.0.x)
- file glob top-level directory hiding (for compat with 1.0.x)
* allow default file view to be "markup" (issue #305)
* add support for displaying file/directory properties (issue #39)
* pagination improvements
* add gzip output encoding support for template-driven pages
* fix cache control bugs (issue #259)
* add RSS feed URL generation for file history
* add support for remote creation of ViewVC checkins database
* add integration with Pygments for syntax highlighting
* preserve executability of Subversion files in tarballs (issue #233)
* add ability to set Subversion runtime config dir (issue #351, issue #339)
* show RSS/query links only for roots found in commits database (issue #357)
* recognize Subversion svn:mime-type property values (issue #364)
* hide CVS files when viewing tags/branches on which they don't exist
* allow hiding of errorful entries from the directory view (issue #105)
* fix directory view sorting UI
* tolerate malformed Accept-Language headers (issue #396)
* allow MIME type mapping overrides in ViewVC configuration (issue #401)
* fix exception in rev-sorted remote Subversion directory views (issue #409)
* allow setting of page sizes for log and dir views individually (issue #402)
Version 1.0.13 (released 24-Oct-2012)
* security fix: escape "extra" diff info to avoid XSS attack (issue #515)
* security fix: remove user-reachable override of cvsdb row limit
* fix obscure "unexpected NULL parent pool" Subversion bindings error
* fix svndbadmin failure on deleted paths under Subversion 1.7 (issue #499)
Version 1.0.12 (released 02-Jun-2010)
* fix exception caused by trying to HTML-escape non-string data (issue #454)
Version 1.0.11 (released 29-Mar-2010)
* security fix: escape user-provided search_re input to avoid XSS attack
Version 1.0.10 (released 10-Mar-2010)
* security fix: escape user-provided query form input to avoid XSS attack
* fix errors viewing remote Subversion paths with URI-unsafe characters
* fix regexp input validation (issue #426, #427, #440)
Version 1.0.9 (released 11-Aug-2009)
* security fix: validate the 'view' parameter to avoid XSS attack
* security fix: avoid printing illegal parameter names and values
Version 1.0.8 (released 05-May-2009)
* fix directory view sorting UI
* tolerate malformed Accept-Language headers (issue #396)
* fix directory log views in revision-less Subversion repositories
* fix exception in rev-sorted remote Subversion directory views (issue #409)
Version 1.0.7 (released 14-Oct-2008)
* fix regression in the 'as text' download view (issue #373)
Version 1.0.6 (released 16-Sep-2008)
* security fix: ignore arbitrary user-provided MIME types (issue #354)
* fix bug in regexp search filter when used with sticky tag (issue #346)
* fix bug in handling of certain 'co' output (issue #348)
* fix regexp search filter template bug
* fix annotate code syntax error
* fix mod_python import cycle (issue #369)
Version 1.0.5 (released 28-Feb-2008)
* security fix: omit commits of all-forbidden files from query results
* security fix: disallow direct URL navigation to hidden CVSROOT folder
* security fix: strip forbidden paths from revision view
* security fix: don't traverse log history thru forbidden locations
* security fix: honor forbiddenness via diff view path parameters
* new 'forbiddenre' regexp-based path authorization feature
* fix root name conflict resolution inconsistencies (issue #287)
* fix an oversight in the CVS 1.12.9 loginfo-handler support
* fix RSS feed content type to be more specific (issue #306)
* fix entity escaping problems in RSS feed data (issue #238)
* fix bug in tarball generation for remote Subversion repositories
* fix query interface file-count-limiting logic
* fix query results plus/minus count to ignore forbidden files
* fix blame error caused by 'svn' unable to create runtime config dir
Version 1.0.4 (released 10-Apr-2007)
* fix some markup bugs in query views (issue #266)
* fix loginfo-handler's support for CVS 1.12.9 (issues #151, #257)
* make viewvc-install able to run from an arbitrary location
* update viewvc-install's output for readability
* fix bug writing commits to non-MyISAM databases (issue #262)
* allow long paths in generated tarballs (issue #12)
* fix bug interpreting EZT substitute patterns
* fix broken markup view disablement
* fix broken directory view link generation in directory log view
* fix Windows-specific viewvc-install bugs
* fix broke query result links for Subversion deleted items (issue #296)
* fix some output XHTML validation buglets
* fix database query cache staleness problems (issue #180)
Version 1.0.3 (released 13-Oct-2006)
* fix bug in path shown for Subversion deleted-under-copy items (issue #265)
* security fix: declare charset for views to avoid IE UTF7 XSS attack
Version 1.0.2 (released 29-Sep-2006)
* minor documentation fixes
* fix Subversion annotate functionality on Windows (issue #18)
* fix annotate assertions on uncanonicalized #include paths (issue #208)
* make RSS URL method match the method used to generate it (issue #245)
* fix Subversion annotation to run non-interactively, preventing hangs
* fix bug in custom syntax highlighter fallback logic
* fix bug in PHP CGI hack to avoid force-cgi-redirect errors
Version 1.0.1 (released 20-Jul-2006)
* fix exception on log page when use_pagesize is enabled
* fix an XHTML validation bug in the footer template (issue #239)
* fix handling of single-component CVS revision numbers (issue #237)
* fix bug in download-as-text URL link generation (issue #241)
* fix query.cgi bug, missing 'rss_href' template data item (issue #249)
* no longer omit empty Subversion directories from tarballs (issue #250)
* use actual modification time for Subversion directories in tarballs
Version 1.0 (released 01-May-2006)
* add support for viewing Subversion repositories
* add support for running on MS Windows
* generate strict XHTML output
* add support for caching by sending "Last-Modified", "Expires",
"ETag", and "Cache-Control" headers
* add support for Mod_Python on Apache 2.x and ASP on IIS
* Several changes to standalone.py:
- -h commandline option to specify the hostname for non-local use.
- -r commandline option may be repeated to use more than one repository
before actually installing ViewCVS.
- New GUI field to test paging.
* add new, better-integrated query interface
* add integrated RSS feeds
* add new "root_as_url_component" option to embed root names as
path components in ViewCVS URLs for a more natural URL scheme
in ViewCVS configurations with multiple repositories.
* add new "use_localtime" option to display local times instead of UTC times
* add new "root_parents" option to make it possible to add and
remove repositories without modifying the ViewCVS configuration
* add new "template_dir" option to facilitate switching between sets of
templates
* add new "sort_group_dirs" option to disable grouping of
directories in directory listings
* add new "port" option to connect to a MySQL database on a nonstandard port
* make "default_root" option optional. When no root is specified,
show a page listing all available repositories
* add "default_file_view" option to make it possible for relative
links and image paths in checked out HTML files to work without
the need for special /*checkout*/ prefixes in URLs. Deprecate
"checkout_magic" option and disable by default
* add "limit_changes" option to limit number of changed files shown per
commit by default in query results and in the Subversion revision view
* hide CVS "Attic" directories and add simple toggle for showing
dead files in directory listings
* show Unified, Context and Side-by-side diffs in HTML instead of
in bare text pages
* make View/Download links work the same for all file types
* add links to tip of selected branch on log page
* allow use of "Highlight" program for colorizing
* enable enscript colorizing for more file types
* add sorting arrows for directory views
* get rid of popup windows for checkout links
* obfuscate email addresses in html output by encoding @ symbol
with an HTML character reference
* add paging capability
* Improvements to templates
- add new template authoring guide
- increase coverage, use templates to produce HTML for diff pages,
markup pages, annotate pages, and error pages
- move more common page elements into includes
- add new template variables providing ViewCVS URLs for more
links between related pages and less URL generation inside
templates
* add new [define] EZT directive for assigning variables within templates
* add command line argument parsing to install script to allow
non-interactive installs
* add stricter parameter validation to lower likelihood of cross-site
scripting vulnerabilities
* add support for cvsweb's "mime_type=text/x-cvsweb-markup" URLs
* fix incompatibility with enscript 1.6.3
* fix bug in parsing FreeBSD rlog output
* work around rlog's assumption that all two-digit years in RCS files
are relative to the year 1900.
* change loginfo-handler to cope with spaces in filenames and
support a simpler command line invocation from CVS
* make cvsdbadmin work properly when invoked on CVS subdirectory
paths instead of top-level CVS root paths
* show diff error when comparing two binary files
* make regular expression search skip binary files
* make regular expression search skip nonversioned files in CVS
directories instead of choking on them
* fix tarball generator so it doesn't include forbidden modules
* output "404 Not Found" errors instead of "403 Forbidden" errors
to not reveal whether forbidden paths exist
* fix sorting bug in directory view
* reset log and directory page numbers when leaving those pages
* reset sort direction in directory listing when clicking new columns
* fix "Accept-Language" handling for Netscape 4.x browsers
* fix file descriptor leak in standalone server
* clean up zombie processes from running enscript
* fix mysql "Too many connections" error in cvsdbadmin
* get rid of mxDateTime dependency for query database
* store query database times in UTC instead of local time
* fix daylight saving time bugs in various parts of the code
Version 0.9.4 (released 17-Aug-2005)
* security fix: omit forbidden/hidden modules from query results.
Version 0.9.3 (released 17-May-2005)
* security fix: disallow bad "content-type" input [CAN-2004-1062]
* security fix: disallow bad "sortby" and "cvsroot" input [CAN-2002-0771]
* security fix: omit forbidden/hidden modules from tarballs [CAN-2004-0915]
Version 0.9.2 (released 15-Jan-2002)
* fix redirects to Attic for diffs
* fix diffs that have no changes (causing an infinite loop)
Version 0.9.1 (released 26-Dec-2001)
* fix a problem with some syntax in ndiff.py which isn't compatible
@@ -486,6 +28,7 @@ Version 0.9 (released 23-Dec-2001)
* create dir_alternate.ezt for the flipped rev/name links
* various UI tweaks for the directory pages
Version 0.8 (released 10-Dec-2001)
* add EZT templating mechanism for generating output pages


@@ -1,28 +0,0 @@
The following people have commit access to the ViewVC sources.
Note that this is not a full list of ViewVC's authors, however --
for that, you'd need to look over the log messages to see all the
patch contributors.
If you have a question or comment, it's probably best to mail
dev@viewvc.tigris.org, rather than mailing any of these people
directly.
gstein Greg Stein <gstein@lyra.org>
jpaint Jay Painter <???>
akr Tanaka Akira <???>
timcera Tim Cera <???>
pefu Peter Funk <???>
lbruand Lucas Bruand <???>
cmpilato C. Michael Pilato <cmpilato@collab.net>
rey4 Russell Yanofsky <rey4@columbia.edu>
mharig Mark Harig <???>
northeye Takuo Kitame <???>
jamesh James Henstridge <???>
maxb Max Bowsher <maxb1@ukf.net>
eh Erik Hülsmann <e.huelsmann@gmx.net>
mhagger Michael Haggerty <mhagger@alum.mit.edu>
## Local Variables:
## coding:utf-8
## End:
## vim:encoding=utf8

INSTALL (759 lines changed)

@@ -1,12 +1,10 @@
CONTENTS
--------
TO THE IMPATIENT
SECURITY INFORMATION
INSTALLING VIEWVC
APACHE CONFIGURATION
UPGRADING VIEWVC
INSTALLING VIEWCVS
UPGRADING VIEWCVS
SQL CHECKIN DATABASE
ENABLING SYNTAX COLORATION
ENSCRIPT CONFIGURATION
CVSGRAPH CONFIGURATION
IF YOU HAVE PROBLEMS...
@@ -15,508 +13,227 @@ TO THE IMPATIENT
----------------
Congratulations on getting this far. :-)
Required Software And Configuration Needed To Run ViewVC:
Prerequisites: Python 1.5 or later
(http://www.python.org/)
RCS, Revision Control System
(http://www.cs.purdue.edu/homes/trinkle/RCS/)
read-only, physical access to a CVS repository
(See http://www.cvshome.org/ for more information)
In General:
Optional: a web server capable of running CGI programs
(for example, Apache at http://httpd.apache.org/)
GNU-diff to replace broken diff implementations
(http://www.gnu.org/software/diffutils/diffutils.html)
MySQL to create and query a commit database
(http://www.mysql.com/)
(http://sourceforge.net/projects/mysql-python)
(and Python 1.5.2 or later)
Enscript to colorize code displayed from the CVS repository
(http://people.ssh.com/mtr/genscript/)
CvsGraph for a graphical representation of the CVS revisions
(http://www.akhphd.au.dk/~bertho/cvsgraph/)
* Python 2, version 2.4 or later (sorry, no 3.x support yet)
(http://www.python.org/)
GUI Operation:
For CVS Support:
* RCS, Revision Control System
(http://www.cs.purdue.edu/homes/trinkle/RCS/)
* GNU-diff to replace diff implementations without the -u option
(http://www.gnu.org/software/diffutils/diffutils.html)
* read-only, physical access to a CVS repository
(See http://www.cvshome.org/ for more information)
For Subversion Support:
* Subversion, Version Control System, 1.3.1 or later
(binary installation and Python bindings)
(http://subversion.apache.org/)
Optional:
* a web server capable of running CGI programs
(for example, Apache at http://httpd.apache.org/)
* MySQL 3.22 and MySQLdb 0.9.0 or later to create a commit database
(http://www.mysql.com/)
(http://sourceforge.net/projects/mysql-python)
* Pygments 0.9 or later, syntax highlighting engine
(http://pygments.org)
* CvsGraph 1.5.0 or later, graphical CVS revision tree generator
(http://www.akhphd.au.dk/~bertho/cvsgraph/)
Quick sanity check:
If you just want to see what your repository looks like when seen
through ViewVC, type:
$ bin/standalone.py -r /PATH/TO/REPOSITORY
This will start a tiny ViewVC server at http://localhost:49152/viewvc/,
to which you can connect with your browser.
If you just want to see what your CVS repository looks like with
ViewCVS, type "./standalone.py -g -r /PATH/TO/CVS/ROOT". This
will start a tiny webserver serving at http://localhost:7467/.
Standard operation:
To start installing right away (on UNIX): type "./viewvc-install"
To start installing right away (on UNIX): type "./viewcvs-install"
in the current directory and answer the prompts. When it
finishes, edit the file viewvc.conf in the installation directory
to tell ViewVC the paths to your CVS and Subversion repositories.
Next, configure your web server (in the way appropriate to that server)
to run <VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/viewvc.cgi. The section
`INSTALLING VIEWVC' below is still recommended reading.
finishes, edit the file viewcvs.conf in the installation directory
to tell viewcvs the paths to your CVS repositories. Next,
configure your web server to run <INSTALL>/cgi/viewcvs.cgi, as
appropriate for your web server. The section `INSTALLING VIEWCVS'
below is still recommended reading.
SECURITY INFORMATION
--------------------
ViewVC provides a feature which allows version controlled content to
be served to web browsers just like static web server content. So, if
you have a directory full of interrelated HTML files that is housed in
your version control repository, ViewVC can serve those files as HTML.
You'll see in your web browser what you'd see if the files were part
of your website, with working references to stylesheets and images and
links to other pages.
It is important to realize, however, that as useful as that feature
is, there is some risk security-wise in its use. Essentially, anyone
with commit access to the CVS or Subversion repositories served by
ViewVC has the ability to affect site content. If a discontented or
ignorant user commits malicious HTML to a version controlled file
(perhaps just by way of documenting examples of such), that malicious
HTML is effectively published and live on your ViewVC instance.
Visitors viewing those version controlled documents get the
malicious code, too, which might not be what the original author
intended.
For this reason, ViewVC's "checkout" view is disabled by default. If
you wish to enable it, simply add "co" to the list of views enabled in
the allowed_views configuration option.
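For example, the relevant viewvc.conf line might end up looking like
this (a sketch only; the other view names shown are the commonly
enabled ones and may differ from your configuration):
allowed_views = annotate, diff, markup, roots, co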
INSTALLING VIEWVC
INSTALLING VIEWCVS
------------------
NOTE: Windows users can refer to windows/README for Windows-specific
installation instructions.
1) To get viewcvs.cgi to work, make sure that you have Python 1.5 or
greater installed and a webserver which is capable of executing
CGI scripts (either based on the .cgi extension, or by placing the
script within a specific directory).
1) To get viewvc.cgi to work, make sure that you have Python installed
and a webserver which is capable of executing CGI scripts (either
based on the .cgi extension, or by placing the script within a specific
directory).
You need to have RCS installed. Specifically, "rlog", "rcsdiff",
and "co". This script was tested against RedHat's rcs-5.7-10.rpm
Note that to browse CVS repositories, the viewvc.cgi script needs to
have READ-ONLY, physical access to the repository (or a copy of it).
Therefore, rsh/ssh or pserver access to the repository will not work.
And you need to have the RCS utilities installed, specifically "rlog",
"rcsdiff", and "co".
Note that the viewcvs.cgi script needs to have READ-ONLY, physical
access to the CVS repository (or a copy of it). Therefore, rsh/ssh or
pserver access to the repository will not work.
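As a quick sanity check (a hedged example; adjust for your system),
you can confirm that the RCS utilities are on the PATH of the account
that will run the CGI script:
$ which rlog rcsdiff co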
2) Installation is handled by the ./viewvc-install script. Run this
For the more human readable diff formats you need a modern diff utility.
If you are using Linux, this is no problem. But on commercial unices
you might want to install GNU-diff to be able to use unified or
side-by-side diffs.
If you want to use cvsgraph, you have to obtain and install this
separately. See below. This was tested with cvsgraph-1.1.2.
For the checkin database to work, you will need MySQL >= 3.22,
and the Python DBAPI 2.0 module, MySQLdb. This was tested with
MySQLdb 0.9.1.
2) Installation is handled by the ./viewcvs-install script. Run this
script and you will be prompted for an installation root path.
The default is /usr/local/viewvc-VERSION, where VERSION is
the version of this ViewVC release. The installer sets the install
path in some of the files, and ViewVC cannot be moved to a
The default is /usr/local/viewcvs-VERSION, where VERSION is
the version of this ViewCVS release. The installer sets the install
path in some of the files, and ViewCVS cannot be moved to a
different path after the install.
Note: while 'root' is usually required to create /usr/local/viewvc,
ViewVC does not have to be installed as root, nor does it run as root.
It is just as valid to place ViewVC in a home directory, too.
Note: while 'root' is usually required to create /usr/local/viewcvs,
ViewCVS does not have to be installed as root, nor does it run as root.
It is just as valid to place ViewCVS in a home directory, too.
Note: viewvc-install will create directories if needed. It will
Note: viewcvs-install will create directories if needed. It will
prompt before overwriting files that may have been modified (such
as viewvc.conf), thus making it safe to install over the top of
as viewcvs.conf), thus making it safe to install over the top of
a previous installation. It will always overwrite program files,
however.
3) Edit <VIEWVC_INSTALLATION_DIRECTORY>/viewvc.conf for your specific
3) Edit <VIEWCVS_INSTALLATION_DIRECTORY>/viewcvs.conf for your specific
configuration. In particular, examine the following configuration options:
cvs_roots (for CVS)
svn_roots (for Subversion)
root_parents (for CVS or Subversion)
cvs_roots
default_root
root_as_url_component
rcs_dir
mime_types_files
rcs_path
mime_types_file
There are some other options that are usually nice to change. See
viewvc.conf for more information. ViewVC provides a working,
default look. However, if you want to customize the look of ViewVC
then edit the files in <VIEWVC_INSTALLATION_DIRECTORY>/templates.
viewcvs.conf for more information. ViewCVS provides a working,
default look. However, if you want to customize the look of ViewCVS
then edit the files in <VIEWCVS_INSTALLATION_DIRECTORY>/templates.
You need knowledge about HTML to edit the templates.
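For illustration, the root-related settings might end up looking
something like this (the root names and paths below are hypothetical,
and each option goes in its usual section of viewvc.conf):
cvs_roots = cvs: /home/cvsroot
svn_roots = svn: /home/svnroot
root_parents = /var/repositories : svn
default_root = svn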
4) The CGI programs are in <VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/. You can
4) The CGI programs are in <VIEWCVS_INSTALLATION_DIRECTORY>/cgi/. You can
symlink to this directory from somewhere in your published HTTP server
path if your webserver is configured to follow symbolic links. You can
also copy the installed <VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/*.cgi
scripts after the install (unlike the other files in ViewVC, the scripts
under bin/ can be moved).
also copy the installed <VIEWCVS_INSTALLATION_DIRECTORY>/cgi/*.cgi scripts
after the install (unlike the other files in ViewCVS, the CGI scripts can
be moved).
If you are using Apache, then see below at the section titled
APACHE CONFIGURATION.
If you are using Apache, then the ScriptAlias directive is very
useful for pointing directly to the viewcvs.cgi script.
NOTE: for security reasons, it is not advisable to install ViewVC
NOTE: for security reasons, it is not advisable to install ViewCVS
directly into your published HTTP directory tree (due to the MySQL
passwords in viewvc.conf).
passwords in viewcvs.conf).
That's it for repository browsing. Instructions for getting the SQL
checkin database working are below.
5) That's it for repository browsing. Instructions for getting the
SQL checkin database working are below.
APACHE CONFIGURATION
--------------------
1) Locate your Apache configuration file(s).
Typical locations are /etc/httpd/httpd.conf,
/etc/httpd/conf/httpd.conf, and /etc/apache/httpd.conf. Depending
on how Apache was installed, you may also look under /usr/local/etc
or /etc/local. Use the vendor documentation or the find utility if
in doubt.
2) Depending on how your Apache configuration is setup by default, you
might need to explicitly allow high-level access to the ViewVC
install location.
<Directory <VIEWVC_INSTALLATION_DIRECTORY>>
Order allow,deny
Allow from all
</Directory>
For example, if ViewVC is installed in /usr/local/viewvc-1.0 on
your system:
<Directory /usr/local/viewvc-1.0>
Order allow,deny
Allow from all
</Directory>
3) Configure Apache to expose ViewVC to users at the URL of your choice.
ViewVC provides several different ways to do this. Choose one of
the following methods:
-----------------------------------
METHOD A: CGI mode via ScriptAlias
-----------------------------------
The ScriptAlias directive is very useful for pointing
directly to the viewvc.cgi script. Simply insert a line containing
ScriptAlias /viewvc <VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/viewvc.cgi
into your httpd.conf file. Choose the location in httpd.conf where
also the other ScriptAlias lines reside. Some examples:
ScriptAlias /viewvc /usr/local/viewvc-1.0/bin/cgi/viewvc.cgi
ScriptAlias /query /usr/local/viewvc-1.0/bin/cgi/query.cgi
----------------------------------------
METHOD B: CGI mode in cgi-bin directory
----------------------------------------
Copy the CGI scripts from
<VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/*.cgi
to the /cgi-bin/ directory configured in your httpd.conf file.
You can override configuration file location using:
SetEnv VIEWVC_CONF_PATHNAME /etc/viewvc.conf
------------------------------------------
METHOD C: CGI mode in ExecCGI'd directory
------------------------------------------
Copy the CGI scripts from
<VIEWVC_INSTALLATION_DIRECTORY>/bin/cgi/*.cgi
to the directory of your choosing in the Document Root adding the following
Apache directives for the directory in httpd.conf or an .htaccess file:
Options +ExecCGI
AddHandler cgi-script .cgi
Note: For this to work mod_cgi has to be loaded. And for the .htaccess file
to be effective, "AllowOverride All" or "AllowOverride Options FileInfo"
needs to have been specified for the directory.
------------------------------------------
METHOD D: Using mod_python (if installed)
------------------------------------------
Copy the Python scripts and .htaccess file from
<VIEWVC_INSTALLATION_DIRECTORY>/bin/mod_python/
to a directory being served by Apache.
In httpd.conf, make sure that "AllowOverride All" or at least
"AllowOverride FileInfo Options" are enabled for the directory
you copied the files to.
You can override configuration file location using:
SetEnv VIEWVC_CONF_PATHNAME /etc/viewvc.conf
Note: If you are using Mod_Python under Apache 1.3 the tarball generation
feature may not work because it uses multithreading. This works fine
under Apache 2.
----------------------------------------
METHOD E: Using mod_wsgi (if installed)
----------------------------------------
Copy the Python scripts file from
<VIEWVC_INSTALLATION_DIRECTORY>/bin/wsgi/
to the directory of your choosing. Modify httpd.conf with the
following directives:
WSGIScriptAlias /viewvc <VIEWVC_INSTALLATION_DIRECTORY>/bin/wsgi/viewvc.wsgi
WSGIScriptAlias /query <VIEWVC_INSTALLATION_DIRECTORY>/bin/wsgi/query.wsgi
You'll probably also need the following directive because of the
not-quite-sanctioned way that ViewVC manipulates Python objects.
WSGIApplicationGroup %{GLOBAL}
Note: WSGI support in ViewVC is at this time quite rudimentary,
bordering on downright experimental. Your mileage may vary.
-----------------------------------------
METHOD F: Using mod_fcgid (if installed)
-----------------------------------------
This uses ViewVC's WSGI support (from above), but supports using FastCGI,
and is a somewhat hybrid approach of several of the above methods.
Especially if FastCGI is already being used for other purposes (e.g. PHP),
using it for ViewVC as well avoids loading additional modules
(e.g. mod_python or mod_wsgi) into Apache, which may help lessen Apache's
memory usage and/or help improve performance.
This depends on mod_fcgid:
http://httpd.apache.org/mod_fcgid/
as well as the fcgi server from Python's flup package:
http://pypi.python.org/pypi/flup
http://trac.saddi.com/flup
The following are some example httpd.conf fragments you can use to
support this configuration:
ScriptAlias /viewvc /usr/local/viewvc/bin/wsgi/viewvc.fcgi
ScriptAlias /query /usr/local/viewvc/bin/wsgi/query.fcgi
4) [Optional] Provide direct access to icons, stylesheets, etc.
ViewVC's HTML templates reference various stylesheets and icons
provided by ViewVC itself. By default, ViewVC generates URLs to
those artifacts which point back into ViewVC (using a magic
syntax); ViewVC in turn handles such magic URL requests by
streaming back the contents of the requested icon or stylesheet
file. While this simplifies the configuration and initial
deployment of ViewVC, it's not the most efficient approach to
deliver what is essentially static content.
To improve performance, consider carving out a URL space in your
webserver's configuration solely for this static content and
instruct ViewVC to use that space when generating URLs for that
content. For example, you might add an Alias such as the following
to your httpd.conf:
Alias /viewvc-docroot /usr/local/viewvc/templates/default/docroot
And then, in viewvc.conf, set the 'docroot' option to the same
location:
docroot = /viewvc-docroot
WARNING: As always when using Alias directives, be careful that you
have them in the correct order. For example, if you use an
ordering such as the following, Apache will hand requests for your
static documents off to ViewVC as if they were versioned resources:
ScriptAlias /viewvc /usr/local/viewvc/bin/wsgi/viewvc.fcgi
Alias /viewvc/static /usr/local/viewvc/templates/default/docroot
The correct order would be:
Alias /viewvc/static /usr/local/viewvc/templates/default/docroot
ScriptAlias /viewvc /usr/local/viewvc/bin/wsgi/viewvc.fcgi
(That said, it's best to avoid such namespace nesting altogether if
you can.)
5) [Optional] Add access control.
In your httpd.conf you can control access to certain modules by
adding directives like this:
<Location "<url to viewvc.cgi>/<modname_you_wish_to_access_ctl>">
AllowOverride None
AuthUserFile /path/to/passwd/file
AuthName "Client Access"
AuthType Basic
require valid-user
</Location>
WARNING: If you enable the "checkout_magic" or "allow_tar" options, you
will need to add additional location directives to prevent people
from sneaking in with URLs like:
http://<server_name>/viewvc/*checkout*/<module_name>
http://<server_name>/viewvc/~checkout~/<module_name>
http://<server_name>/viewvc/<module_name>.tar.gz?view=tar
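One possible shape for such an additional directive is sketched below,
reusing the hypothetical module name from above. Note that the
"?view=tar" form carries the view in the query string, which a
Location-based match cannot see:
<LocationMatch "^/viewvc/(\*checkout\*|~checkout~)/<modname_you_wish_to_access_ctl>">
AuthUserFile /path/to/passwd/file
AuthName "Client Access"
AuthType Basic
require valid-user
</LocationMatch>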
6) Restart Apache.
The commands to do this vary. "httpd -k restart" and "apache -k
restart" are two common variants. On RedHat Linux it is done using
the command "/sbin/service httpd restart" and on SuSE Linux it is
done with "rcapache restart". Other systems use "apachectl restart".
7) [Optional] Protect your ViewVC instance from server-whacking webcrawlers.
As ViewVC is a web-based application with each page containing various
links to other pages and views, you can expect your server's performance
to suffer if a webcrawler finds your ViewVC instance and begins
traversing those links. We highly recommend that you add your ViewVC
location to a site-wide robots.txt file. Visit the Wikipedia page
for Robots.txt (http://en.wikipedia.org/wiki/Robots.txt) for more
information.
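A minimal robots.txt entry for a ViewVC instance served under /viewvc
(the URL prefix is an assumption) might look like:
User-agent: *
Disallow: /viewvc/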
WARNING: ViewCVS has not been tested on web servers operating on the
Win32 platform.
UPGRADING VIEWVC
UPGRADING VIEWCVS
-----------------
Please read the file upgrading-howto.html in the docs/ subdirectory.
Please read the file upgrading.html in the website subdirectory or
at <http://viewcvs.sourceforge.net/upgrading.html>.
SQL CHECKIN DATABASE
--------------------
This feature is a clone of the Mozilla Project's Bonsai database. It
catalogs every commit in the CVS or Subversion repository into a SQL
database. In fact, the databases are 100% compatible.
catalogs every commit in the CVS repository into a SQL database. In fact,
the databases are 100% compatible.
Various queries can be performed on the database. After installing ViewVC,
Various queries can be performed on the database. After installing ViewCVS,
there are some additional steps required to get the database working.
1) You need MySQL and MySQLdb (a Python DBAPI 2.0 module) installed.
1) You need MySQL >= 3.22, and the Python module MySQLdb 0.9.0 installed.
Python 1.5.2 is REQUIRED by MySQLdb, therefore to use this part of
ViewCVS you must be using Python 1.5.2. Additionally you will need the
mxDateTime extension. I've tested with version 1.3.0
2) You need to create a MySQL user who has permission to create databases.
Optionally, you can create a second user with read-only access to the
database.
3) Run the <VIEWVC_INSTALLATION_DIRECTORY>/bin/make-database script. It will
3) Run the <VIEWCVS_INSTALLATION_DIRECTORY>/make-database script. It will
prompt you for your MySQL user, password, and the name of database you
want to create. The database name defaults to "ViewVC". This script
want to create. The database name defaults to "ViewCVS". This script
creates the database and sets up the empty tables. If you run this on an
existing ViewVC database, you will lose all your data!
existing ViewCVS database, you will lose all your data!
4) Edit your <VIEWVC_INSTALLATION_DIRECTORY>/viewvc.conf file.
4) Edit your <VIEWCVS_INSTALLATION_DIRECTORY>/viewcvs.conf file.
There is a [cvsdb] section. You will need to set:
enabled = 1 # Whether to enable query support in viewvc.cgi
host = # MySQL database server host
port = # MySQL database server port (default is 3306)
database_name = # name of database you created with make-database
user = # read/write database user
passwd = # password for read/write database user
readonly_user = # read-only database user
readonly_passwd = # password for the read-only user
Note that it's pretty safe in this instance for your read-only user
and your read-write user to be the same.
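A filled-in example of that section (every value below is hypothetical):
[cvsdb]
enabled = 1
host = localhost
port = 3306
database_name = ViewVC
user = viewvc
passwd = somepassword
readonly_user = viewvc
readonly_passwd = somepassword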
5) At this point, you need to tell your version control system(s) to
publish their commit information to the database. This is done
using utilities that ViewVC provides.
To publish CVS commits into the database:
Two programs are provided for updating the checkin database from
a CVS repository, cvsdbadmin and loginfo-handler. They serve
two different purposes. The cvsdbadmin program walks through
your CVS repository and adds every commit in every file. This
is commonly used for initializing the database from a repository
which has been in use. The loginfo-handler script is executed
by the CVS server's CVSROOT/loginfo system upon each commit. It
makes real-time updates to the checkin database as commits are
made to the repository.
To build a database of all the commits in the CVS repository
/home/cvs, invoke: "./cvsdbadmin rebuild /home/cvs". If you
want to update the checkin database, invoke: "./cvsdbadmin
update /home/cvs". The update mode checks to see if a commit is
already in the database, and only adds it if it is absent.
To get real-time updates, you'll want to checkout the CVSROOT
module from your CVS repository and edit CVSROOT/loginfo. For
folks running CVS 1.12 or better, add this line:
ALL <VIEWVC_INSTALLATION_DIRECTORY>/bin/loginfo-handler %p %{sVv}
If you are running CVS 1.11 or earlier, you'll want a slightly
different command line in CVSROOT/loginfo:
ALL <VIEWVC_INSTALLATION_DIRECTORY>/bin/loginfo-handler %{sVv}
If you have other scripts invoked by CVSROOT/loginfo, you will
want to make sure to change any running under the "DEFAULT"
keyword to "ALL" like the loginfo handler, and probably
carefully read the execution rules for CVSROOT/loginfo from the
CVS manual.
If you are running the Unix port of CVS-NT, the handler script
needs to know about it. CVS-NT delivers commit information to
loginfo scripts differently than the way mainstream CVS does.
Your command line should look like this:
ALL <VIEWVC_INSTALLATION_DIRECTORY>/bin/loginfo-handler %{sVv} cvsnt
host = # MySQL database server host
database_name = # the name of the database you created with
# make-database
user = # the read/write database user
passwd = # password for read/write database user
readonly_user = # the readonly database user -- it's pretty
# safe to use the read/write user here
readonly_passwd = # password for the readonly user
To publish Subversion commits into the database:
To build a database of all the commits in the Subversion
repository /home/svn, invoke: "./svndbadmin rebuild /home/svn".
If you want to update the checkin database, invoke:
"./svndbadmin update /home/svn".
To get real time updates, you will need to add a post-commit
hook (for the repository example above, the script should go in
/home/svn/hooks/post-commit). The script should look something
like this:
#!/bin/sh
REPOS="$1"
REV="$2"
<VIEWVC_INSTALLATION_DIRECTORY>/bin/svndbadmin update \
"$REPOS" "$REV"
If you allow revision property changes in your repository,
create a post-revprop-change hook script which uses the same
'svndbadmin update' command as the post-commit script, except
with the addition of the --force option:
#!/bin/sh
REPOS="$1"
REV="$2"
<VIEWVC_INSTALLATION_DIRECTORY>/bin/svndbadmin update --force \
"$REPOS" "$REV"
This will make sure that the checkin database stays consistent
when you change the svn:log, svn:author or svn:date revision
properties.
5) Two programs are provided for updating the checkin database,
cvsdbadmin and loginfo-handler. They serve two different purposes.
The cvsdbadmin program walks through your CVS repository and adds
every commit in every file. This is commonly used for initializing
the database from a repository which has been in use. The
loginfo-handler script is executed by the CVS server's CVSROOT/loginfo
system upon each commit. It makes real-time updates to the checkin
database as commits are made to the repository.
You should be ready to go. Click one of the "Query revision history"
links in ViewVC directory listings and give it a try.
To build a database of all the commits in the CVS repository /home/cvs,
invoke: "./cvsdbadmin rebuild /home/cvs". If you want to update
the checkin database, invoke: "./cvsdbadmin update /home/cvs". The
update mode checks to see if a commit is already in the database,
and only adds it if it is absent.
To get real-time updates, you'll want to checkout the CVSROOT module
from your CVS repository and edit CVSROOT/loginfo. Add the line:
ALL (echo %{sVv}; cat) | <VIEWCVS_INSTALLATION_DIRECTORY>/loginfo-handler
If you have other scripts invoked by CVSROOT/loginfo, you will want
to make sure to change any running under the "DEFAULT" keyword to
"ALL" like the loginfo handler, and probably carefully read the
execution rules for CVSROOT/loginfo from the CVS manual.
6) You may want to modify the HTML template file:
<VIEWCVS_INSTALLATION_DIRECTORY>/templates/query.ezt
This is used by the query.cgi script to generate part of its HTML output.
At some point the currently hardcoded table output will also vanish.
7) You should be ready to go. Load up the query.cgi script and give
it a try.
ENABLING SYNTAX COLORATION
--------------------------
ENSCRIPT CONFIGURATION
----------------------
ViewVC uses Pygments (http://pygments.org) for syntax coloration. You
need only install a suitable version of that module, and if ViewVC
finds it in your Python module path, it will use it (unless you
specifically disable the feature by setting use_pygments = 0 in your
viewvc.conf file).
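A quick way to check that Pygments is visible to the Python
interpreter that runs ViewVC (a hedged example, using the Python 2
syntax this release targets):
$ python -c "import pygments; print pygments.__version__"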
Enscript is a program that can colorize source code in a lot of languages.
Linux distributions (for example, SuSE Linux from at least 7.0
up to the recently released 7.3) already contain a precompiled and
configured enscript 1.6.2 package.
1) Download genscript from http://people.ssh.com/mtr/genscript/
2) Configure and compile per instructions with enscript.
(I've not done this, since I'm using the precompiled package
delivered with SuSE Linux)
3) Set the 'use_enscript' option in viewcvs.conf to 1.
4) That's it!
5) If you want to colorize exotic languages, you might have to
patch 'lib/viewcvs.py' and add a new highlighting file to enscript.
I've done this for Modula-2 and submitted the file to the
enscript maintainer long ago. If interested in this patch for
enscript mailto:pefu@sourceforge.net
CVSGRAPH CONFIGURATION
@@ -526,116 +243,88 @@ CvsGraph is a program that can display a clickable, graphical tree
of files in a CVS repository.
WARNING: Under certain circumstances (many revisions of a file
or many branches or both) CvsGraph can generate very large images.
or many branches or both) cvsgraph can generate very large images.
Especially on thin clients these images may crash the web browser.
Currently there is no known way to avoid this behavior of CvsGraph.
Currently there is no known way to avoid this behavior of cvsgraph.
So you have been warned!
Nevertheless, CvsGraph can be quite helpful on repositories with
Nevertheless cvsgraph can be quite helpful on repositories with
a reasonable number of revisions and branches.
1) Install CvsGraph using your system's package manager or downloading
from the project home page.
1) Install viewcvs according to instructions in 'INSTALLING
VIEWCVS' section above. The installation directory is where
the 'viewcvs-install' script copied and configured the viewcvs
programs.
2) Set the 'use_cvsgraph' option in viewvc.conf to 1.
2) Download CvsGraph from http://www.akhphd.au.dk/~bertho/cvsgraph/
3) You may also need to set the 'cvsgraph_path' option if the
CvsGraph executable is not located on the system PATH.
3) Configure and compile per instructions with CvsGraph. I had
problems with 'configure' finding the gd library. Had to create
a link from libgd.so to libgd.so.4.0.0. On Solaris you might
want to edit the link command line and add the option -R if
you have your libraries at a non-standard location.
4) There is a file <VIEWVC_INSTALLATION_DIRECTORY>/cvsgraph.conf that
4) Place the 'cvsgraph' executable into a directory readable by the
userid running the web server. (default is '/usr/local/bin' if
you simply type 'make install' in the cvsgraph directory).
5) Check the setting of the 'cvsgraph_path' option in viewcvs.conf:
/usr/local/bin/ is most often NOT contained in $PATH of the
webserver process (e.g. Apache), so you will have to edit this.
Set the 'use_cvsgraph' option in viewcvs.conf to 1.
6) That's it!
7) There is a file <VIEWCVS_INSTALLATION_DIRECTORY>/cvsgraph.conf that
you may want to edit if desired to set color and font characteristics.
See the cvsgraph.conf documentation. No edits are required in
cvsgraph.conf for operation with ViewVC.
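For reference, the corresponding viewvc.conf settings might look like
this (the path below is an assumption about where the executable was
installed):
use_cvsgraph = 1
cvsgraph_path = /usr/local/bin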
SUBVERSION INTEGRATION
----------------------
Unlike the CVS integration, which simply wraps the RCS and CVS utility
programs, the Subversion integration requires additional Python
libraries. To use ViewVC with Subversion, make sure you have both
Subversion itself and the Subversion Python bindings installed. These
can be obtained through typical package distribution mechanisms, or
may be built from source. (See the files 'INSTALL' and
'subversion/bindings/swig/INSTALL' in the Subversion source tree for
more details on how to build and install Subversion and its Python
bindings.)
Generally speaking, you'll know when your installation of Subversion's
bindings has been successful if you can import the 'svn.core' module
from within your Python interpreter. Here's an example of doing so
which doubles as a quick way to check what version of the Subversion
Python binding you have:
% python
Python 2.2.2 (#1, Oct 29 2002, 02:47:30)
[GCC 2.96 20000731 (Red Hat Linux 7.2 2.96-108.7.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from svn.core import *
>>> "%s.%s.%s" % (SVN_VER_MAJOR, SVN_VER_MINOR, SVN_VER_PATCH)
'1.3.1'
>>>
Note that by default, Subversion installs its bindings in a location
that is not in Python's default module search path (for example, on
Linux systems the default is usually /usr/local/lib/svn-python). You
need to remedy this, either by adding this path to Python's module
search path, or by relocating the bindings to some place in that
search path.
For example, you might want to create a .pth file in your Python
installation directory's site-packages area which tells Python where
to find additional modules (in this case, your Subversion Python
bindings). You would do this as follows (and as root):
$ echo "/path/to/svn/bindings" > /path/to/python/site-packages/svn.pth
(Though, obviously, with the correct paths specified.)
Configuration of the Subversion repositories happens in much the same
way as with CVS repositories, only with the 'svn_roots' configuration
variable instead of the 'cvs_roots' one.
cvsgraph.conf for operation with viewcvs.
IF YOU HAVE PROBLEMS ...
------------------------
If nothing seems to work:
If you have trouble making viewcvs.cgi work:
* Check if you can execute CGI-scripts (Apache needs to have a
ScriptAlias /cgi-bin or cgi-script Handler defined). Try to
execute a simple CGI-script that often comes with the distribution
of the webserver; locate the logfiles and try to find hints which
explain the malfunction
=== If nothing seems to work:
* View the entries in the webserver's error.log
o check if you can execute CGI-scripts (Apache needs to have a
ScriptAlias /cgi-bin or cgi-script Handler defined). Try to
execute a simple CGI-script that often comes with the distribution
of the webserver; locate the logfiles and try to find hints
which explain the malfunction
o view the entries in the webserver's error.log
If ViewVC seems to work but doesn't show the expected result (Typical
error: you can't see any files)
* Check whether the CGI-script has read-permissions to your
CVS-Repository. The CGI-script generally runs as the same user
that the web server does, often user 'nobody' or 'httpd'.
* Does ViewVC find your RCS utilities? (edit rcs_dir)
If something else happens or you can't get it to work:
* Check the ViewVC home page:
http://viewvc.org/
* Review the ViewVC mailing list archive to see if somebody else had
the same problem, and it was solved:
http://viewvc.tigris.org/servlets/SummarizeList?listName=users
* Check the ViewVC issue database to see if the problem you are
seeing is the result of a known bug:
http://viewvc.tigris.org/issues/query.cgi
o make sure there is a trailing slash on the URL. For example:
* Send mail to the ViewVC mailing list, users@viewvc.tigris.org.
NOTE: make sure you provide an accurate description of the problem
-- including the version of ViewVC you are using -- and any
relevant tracebacks or error logs.
http://www.example.com/cgi-bin/viewcvs.cgi/
(ViewCVS should perform a redirection to ensure this, but a report
has indicated that it doesn't always do this... please send more
of these bug reports if you run into this)
=== If viewcvs seems to work but doesn't show the expected result
(Typical error: you can't see any files)
o check whether the CGI-script has read-permissions to your
CVS-Repository. The CGI-script often runs as the user 'nobody'
or 'httpd'.
o does viewcvs find your RCS utilities? (edit rcs_path)
=== If something else happens or you can't get it to work:
o check the ViewCVS home page:
http://viewcvs.sourceforge.net/
o review the ViewCVS mailing list archive to see if somebody else had
the same problem, and it was solved:
http://mailman.lyra.org/pipermail/viewcvs/
o send mail to the ViewCVS mailing list: viewcvs@lyra.org
NOTE: make sure you provide an accurate description of the problem
and any relevant tracebacks or error logs.


@@ -1,70 +0,0 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>ViewVC: License v1</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
</head>
<body>
<p>The following text constitutes the license agreement for the <a
href="http://www.viewvc.org/">ViewVC</a> software (formerly known
as ViewCVS). It is an agreement between <a
href="http://www.viewvc.org/who.html#sec-viewcvs-group">The ViewCVS
Group</a> and the users of ViewVC.</p>
<blockquote>
<p><strong>Copyright &copy; 1999-2013 The ViewCVS Group. All rights
reserved.</strong></p>
<p>By using ViewVC, you agree to the terms and conditions set forth
below:</p>
<p>Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:</p>
<ol>
<li>Redistributions of source code must retain the above copyright
notice, this list of conditions and the following
disclaimer.</li>
<li>Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.</li>
</ol>
<p>THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.</p>
</blockquote>
<hr />
<p>The following changes have occurred to this license over time:</p>
<ul>
<li>May 12, 2001 &mdash; copyright years updated</li>
<li>September 5, 2002 &mdash; copyright years updated</li>
<li>March 17, 2006 &mdash; software renamed from "ViewCVS"</li>
<li>April 10, 2007 &mdash; copyright years updated</li>
<li>February 22, 2008 &mdash; copyright years updated</li>
<li>March 18, 2009 &mdash; copyright years updated</li>
<li>March 29, 2010 &mdash; copyright years updated</li>
<li>February 18, 2011 &mdash; copyright years updated</li>
<li>January 23, 2012 &mdash; copyright years updated</li>
<li>January 04, 2013 &mdash; copyright years updated</li>
</ul>
</body>
</html>

README

@@ -1,9 +1,3 @@
ViewVC -- Viewing the content of CVS/SVN repositories with a web browser.
This is the 4Intra.net patched version with some extra features.
ViewCVS -- Viewing the content of CVS repositories with a web browser.
Please read the file INSTALL for more information.
And see windows/README for more information on running ViewVC on
Microsoft Windows.


@@ -32,7 +32,7 @@ TODO ITEMS
*) include a ConfigParser.py to help older Python installations
*) add a check for the rcs programs/paths to viewvc-install. clarify the
*) add a check for the rcs programs/paths to viewcvs-install. clarify the
dependency on RCS in the docs.
*) have a "check" mode that verifies binaries are available on rcs_path
@@ -46,8 +46,8 @@ KNOWN BUGS
is non-existent.
*) With old repositories containing many branches, tags or thousands
of revisions, the cvsgraph feature becomes unusable (see INSTALL).
ViewVC can't do much about this, but it might be possible to
of revisions, the cvsgraph feature becomes unusable (see INSTALL).
ViewCVS can't do much about this, but it might be possible to
investigate the number of branches, tags and revisions in advance
and disable the cvsgraph links, if the numbers exceed a certain
threshold.


@@ -1,64 +0,0 @@
<%@ LANGUAGE = Python %>
<%
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# query.asp: View CVS/SVN commit database by web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewVC app. It checks the load
# average, then loads the (precompiled) query.py file and runs it.
#
# -----------------------------------------------------------------------
#
#########################################################################
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, there will be no 'viewvcinstallpath.py'
#
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
#########################################################################
#
# Adjust sys.path to include our library directory
#
import sys
if LIBRARY_DIR:
if not LIBRARY_DIR in sys.path:
sys.path.insert(0, LIBRARY_DIR)
#########################################################################
import sapi
import viewvc
import query
server = sapi.AspServer(Server, Request, Response, Application)
try:
cfg = viewvc.load_config(CONF_PATHNAME, server)
viewvc_base_url = cfg.query.viewvc_base_url
if viewvc_base_url is None:
viewvc_base_url = "viewvc.asp"
query.main(server, cfg, viewvc_base_url)
finally:
server.close()
%>


@@ -1,74 +0,0 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# query.cgi: View CVS/SVN commit database by web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewVC app. It checks the load
# average, then loads the (precompiled) viewvc.py file and runs it.
#
# -----------------------------------------------------------------------
#
#########################################################################
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, there will be no 'viewvcinstallpath.py'
#
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
#########################################################################
#
# Adjust sys.path to include our library directory.
#
import sys
import os
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0],
"../../../lib")))
#########################################################################
#
# If admins want nicer processes, here's the place to get them.
#
#try:
# os.nice(20) # bump the nice level of this process
#except:
# pass
#########################################################################
#
# Go do the work.
#
import sapi
import viewvc
import query
server = sapi.CgiServer()
cfg = viewvc.load_config(CONF_PATHNAME, server)
viewvc_base_url = cfg.query.viewvc_base_url
if viewvc_base_url is None:
viewvc_base_url = "viewvc.cgi"
query.main(server, cfg, viewvc_base_url)


@@ -1,8 +0,0 @@
#!/bin/sh
#
# Set this script up with something like:
#
# ScriptAlias /viewvc-strace /home/gstein/src/viewvc/cgi/viewvc-strace.sh
#
thisdir="`dirname $0`"
exec strace -q -r -o /tmp/v-strace.log "${thisdir}/viewvc.cgi"


@@ -1,70 +0,0 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# viewvc: View CVS/SVN repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewVC app. It checks the load
# average, then loads the (precompiled) viewvc.py file and runs it.
#
# -----------------------------------------------------------------------
#
#########################################################################
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, there will be no 'viewvcinstallpath.py'
#
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
#########################################################################
#
# Adjust sys.path to include our library directory.
#
import sys
import os
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0],
"../../../lib")))
#########################################################################
#
# If admins want nicer processes, here's the place to get them.
#
#try:
# os.nice(20) # bump the nice level of this process
#except:
# pass
#########################################################################
#
# Go do the work.
#
import sapi
import viewvc
server = sapi.CgiServer()
cfg = viewvc.load_config(CONF_PATHNAME, server)
viewvc.main(server, cfg)


@@ -1,216 +0,0 @@
#!/bin/sh
# ViewVC installation script for CustIS
if [ -f custis-install-config ]; then
. custis-install-config
else
cat >custis-install-config <<EOF
CVS_ASYNC=
SVN_ASYNC=
CVS_USER=www-data
CVS_GROUP=cvs.users
CVSROOTS=
SVNROOT=
SVN_FORBIDDENRE=
DB_HOST=
DB_PORT=
DB_SOCKET=/var/run/mysqld/mysqld.sock
DB=
DB_USER=
DB_PASSWD=
VIEWVC_DIR=
VIEWVC_URI=/
VIEWVC_URI_SLASH=/
VIEWVC_STATIC_URI=/static
http_proxy=
no_proxy=
EOF
echo Empty 'custis-install-config' initialized, please edit it before installation.
exit
fi
if [ ! "$DB" -o ! "$VIEWVC_DIR" ]; then
echo Please set up 'custis-install-config' before installation.
exit
fi
################################################################################
export http_proxy
export no_proxy
DEPS="python libapache2-mod-python rcs diff cvsnt subversion subversion-tools python-setuptools python-subversion python-mysqldb cvsgraph"
echo "*** Installing dependency packages: $DEPS"
apt-get install $DEPS
echo "*** Installing Pygments for Python"
wget http://pypi.python.org/packages/source/P/Pygments/Pygments-0.11.1.tar.gz
tar -zxf Pygments-0.11.1.tar.gz
cd Pygments-0.11.1
python setup.py install
cd ..
echo "*** Installing ViewVC into $VIEWVC_DIR"
if [ ! -e $VIEWVC_DIR/viewvc.conf ]; then
../viewvc-install <<EOF
$VIEWVC_DIR
EOF
fi
echo "*** Writing ViewVC configuration into $VIEWVC_DIR/viewvc.conf"
for CVSROOT in $CVSROOTS; do
if [ "$vccvsroots" ]; then
vccvsroots="$vccvsroots, "
fi
vcrootname=`basename $CVSROOT`
vccvsroots="$vccvsroots$vcrootname: $CVSROOT"
done;
cat >"$VIEWVC_DIR/viewvc.conf" <<EOF
[general]
cvs_roots = $vccvsroots
root_parents = $SVNROOT : svn
cvsnt_exe_path = /usr/bin/cvsnt
mime_types_file = /etc/mime.types
address = Admin address: stas [gav-gav] custis [ru]
kv_files =
languages = en-us, ru-ru
[utilities]
rcs_dir = /usr/bin
cvsnt = /usr/bin/cvsnt
svn = /usr/bin/svn
diff = /usr/bin/diff
cvsgraph = /usr/bin/cvsgraph
[options]
allowed_views = markup, annotate, roots
authorizer = forbiddenre
checkout_magic = 0
cross_copies = 1
cvsgraph_conf = $VIEWVC_DIR/cvsgraph.conf
default_file_view = log
diff_format = h
docroot = $VIEWVC_STATIC_URI
enable_syntax_coloration = 1
generate_etags = 1
hide_attic = 1
hide_cvsroot = 1
hide_errorful_entries = 0
hr_breakable = 1
hr_funout = 0
hr_ignore_keyword_subst = 1
hr_ignore_white = 1
hr_intraline = 0
http_expiration_time = 600
limit_changes = 100
log_sort = date
mangle_email_addresses = 1
root_as_url_component = 1
short_log_len = 80
show_log_in_markup = 1
show_logs = 1
show_subdir_lastmod = 0
sort_by = file
sort_group_dirs = 1
svn_config_dir =
template_dir = templates
use_cvsgraph = 1
use_localtime = 1
use_pagesize = 0
use_rcsparse = 0
use_re_search = 0
[templates]
[cvsdb]
enabled = 1
host = $DB_HOST
socket = $DB_SOCKET
database_name = $DB
user = $DB_USER
passwd = $DB_PASSWD
readonly_user = $DB_USER
readonly_passwd = $DB_PASSWD
[vhosts]
[authz-forbidden]
forbidden =
[authz-forbiddenre]
forbiddenre = $SVN_FORBIDDENRE
[authz-svnauthz]
authzfile =
EOF
echo "*** Initializing database: $DB using $DB_USER@$DB_HOST"
$VIEWVC_DIR/bin/make-database <<EOF
$DB_HOST
$DB_USER
$DB_PASSWD
$DB
EOF
echo "*** Configuring Apache"
cat >/etc/apache2/sites-available/viewvc <<EOF
<VirtualHost *:80>
ServerName viewvc.office.custis.ru
ServerAlias viewvc
ServerAdmin sysadmins@custis.ru
ServerSignature Off
ErrorLog /var/log/apache2/viewvc-error.log
TransferLog /var/log/apache2/viewvc-access.log
LogLevel warn
# ViewVC at $VIEWVC_URI installed at $VIEWVC_DIR
DocumentRoot $VIEWVC_DIR/bin/mod_python
Alias $VIEWVC_STATIC_URI $VIEWVC_DIR/templates/docroot
Alias $VIEWVC_URI $VIEWVC_DIR/bin/mod_python/
<Location ~ ^${VIEWVC_URI_SLASH}*(\?.*)?$>
Options -Indexes
RewriteEngine On
RewriteRule .* ${VIEWVC_URI_SLASH}viewvc.py%{REQUEST_URI} [R,L]
</Location>
<Directory $VIEWVC_DIR/bin/mod_python>
Options +ExecCGI
AddHandler python-program .py
PythonHandler handler
PythonDebug Off
</Directory>
</VirtualHost>
EOF
a2enmod python
a2enmod mod_python
a2enmod rewrite
a2ensite viewvc
echo "*** Restarting Apache"
apache2ctl stop
sleep 1
apache2ctl start
echo "*** Building commit database for CVS"
for CVSROOT in $CVSROOTS; do
$VIEWVC_DIR/bin/cvsdbadmin rebuild $CVSROOT
done;
echo "*** Building commit database for Subversion repositories"
for i in `ls $SVNROOT`; do
if [ -d "$SVNROOT/$i" ]; then
$VIEWVC_DIR/bin/svndbadmin -v rebuild "$SVNROOT/$i"
fi
done;
# setup hooks for CVS
./setup-cvs-hooks "$CVSROOTS" "$VIEWVC_DIR" "$CVS_USER" "$CVS_GROUP" "$CVS_ASYNC"
# setup hooks for Subversion
./setup-svn-hooks "$SVNROOT" "$VIEWVC_DIR" "$SVN_ASYNC"


@@ -1,211 +0,0 @@
#!/usr/bin/python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# administrative program for CVSdb; this is primarily
# used to add/rebuild CVS repositories to the database
#
# -----------------------------------------------------------------------
#
#########################################################################
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, there will be no 'viewvcinstallpath.py'
#
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
# Adjust sys.path to include our library directory
import sys
import os
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0], "../../lib")))
#########################################################################
import os
import cvsdb
import viewvc
import vclib.ccvs
from stat import *
def UpdateFile(db, repository, path, update, latest_checkin, quiet_level, encoding = None):
try:
if update:
mtime = os.stat(repository.rcsfile(path, 1))[ST_MTIME]
if mtime < latest_checkin:
return
commit_list = cvsdb.GetUnrecordedCommitList(repository, path, db)
else:
commit_list = cvsdb.GetCommitListFromRCSFile(repository, path)
except cvsdb.error, e:
print '[ERROR] %s' % (e)
return
except vclib.ItemNotFound, e:
return
file = '/'.join(path)
printing = 0
if update:
if quiet_level < 1 or (quiet_level < 2 and len(commit_list)):
printing = 1
print '[%s [%d new commits]]' % (file, len(commit_list)),
else:
if quiet_level < 2:
printing = 1
print '[%s [%d commits]]' % (file, len(commit_list)),
## add the commits into the database
for commit in commit_list:
if encoding:
commit.SetFile(commit.GetFile().decode(encoding).encode('utf-8'))
db.AddCommit(commit)
if printing:
sys.stdout.write('.')
sys.stdout.flush()
if printing:
print
def RecurseUpdate(db, repository, directory, update, latest_checkin,
quiet_level, encoding = None):
for entry in repository.listdir(directory, None, {}):
path = directory + [entry.name]
if entry.errors:
continue
if entry.kind is vclib.DIR:
RecurseUpdate(db, repository, path, update, latest_checkin,
quiet_level, encoding)
continue
if entry.kind is vclib.FILE:
UpdateFile(db, repository, path, update, latest_checkin,
quiet_level, encoding)
def RootPath(path, quiet_level):
"""Break os path into cvs root path and other parts"""
root = os.path.abspath(path)
path_parts = []
p = root
while 1:
if os.path.exists(os.path.join(p, 'CVSROOT')):
root = p
if quiet_level < 2:
print "Using repository root `%s'" % root
break
p, pdir = os.path.split(p)
if not pdir:
del path_parts[:]
if quiet_level < 1:
print "Using repository root `%s'" % root
print "Warning: CVSROOT directory not found."
break
path_parts.append(pdir)
root = cvsdb.CleanRepository(root)
path_parts.reverse()
return root, path_parts
def usage():
cmd = os.path.basename(sys.argv[0])
sys.stderr.write(
"""Administer the ViewVC checkins database data for the CVS repository
located at REPOS-PATH.
Usage: 1. %s [[-q] -q] rebuild REPOS-PATH
2. %s [[-q] -q] update REPOS-PATH
3. %s [[-q] -q] purge REPOS-PATH
1. Rebuild the commit database information for the repository located
at REPOS-PATH, after first purging information specific to that
repository (if any).
2. Update the commit database information for all unrecorded commits
in the repository located at REPOS-PATH.
3. Purge information specific to the repository located at REPOS-PATH
from the database.
Use the -q flag to cause this script to be less verbose; use it twice to
invoke a peaceful state of noiselessness.
""" % (cmd, cmd, cmd))
sys.exit(1)
## main
if __name__ == '__main__':
args = sys.argv
# check the quietness level (0 = verbose, 1 = new commits, 2 = silent)
quiet_level = 0
while 1:
try:
index = args.index('-q')
quiet_level = quiet_level + 1
del args[index]
except ValueError:
break
# validate the command
if len(args) <= 2:
usage()
command = args[1].lower()
if command not in ('rebuild', 'update', 'purge'):
sys.stderr.write('ERROR: unknown command %s\n' % command)
usage()
# get repository and path, and do the work
root, path_parts = RootPath(args[2], quiet_level)
rootpath = vclib.ccvs.canonicalize_rootpath(root)
try:
cfg = viewvc.load_config(CONF_PATHNAME)
db = cvsdb.ConnectDatabase(cfg)
if command in ('rebuild', 'purge'):
if quiet_level < 2:
print "Purging existing data for repository root `%s'" % root
try:
db.PurgeRepository(root, len(path_parts) and '/'.join(path_parts) or '')
except cvsdb.UnknownRepositoryError, e:
if command == 'purge':
sys.stderr.write("ERROR: " + str(e) + "\n")
sys.exit(1)
if command in ('rebuild', 'update'):
repository = vclib.ccvs.CVSRepository(None, rootpath, None,
cfg.utilities, 0, cfg.guesser())
latest_checkin = db.GetLatestCheckinTime(repository)
if latest_checkin is None:
command = 'rebuild'
RecurseUpdate(db, repository, path_parts,
command == 'update', latest_checkin, quiet_level,
cfg.options.cvs_ondisk_charset)
except KeyboardInterrupt:
print
print '** break **'
sys.exit(0)


@@ -1,57 +0,0 @@
#!/usr/bin/perl
use strict;
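# Descriptive note: this script converts simple ACL definitions of the form
#     type:path:branch:user!perms
# (read from stdin or from files given as arguments; '#' comment lines are
# skipped) into 'cvs rchacl' commands for CVSNT.  For every entry only the
# strongest permission character (r/t/w/c/a/p) is kept and mapped to the
# corresponding CVSNT ACL string.  A value of 'ALL' for user or branch omits
# the corresponding -u/-r option, and 'ALL' for path applies the ACL to every
# top-level module in $CVSROOT.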
my ($type, $path, $branch, $user, $perm);
my $R = ''; # -R for recursive
my $lvl = {
r => 1,
t => 2,
w => 3,
c => 4,
a => 4,
p => 5,
};
while(<>)
{
chomp;
next if /^\s*#/so;
($type, $path, $branch, $user, $perm) = split /:/, $_;
($user, $perm) = split /!/, $user;
next unless $perm;
$perm = [ sort { $lvl->{$b} cmp $lvl->{$a} } split //, $perm ];
$perm = $perm->[0];
if ($perm eq 't')
{
$perm = 'read,tag,nowrite,nocreate,nocontrol';
}
elsif ($perm eq 'r')
{
$perm = 'read,notag,nowrite,nocreate,nocontrol';
}
elsif ($perm eq 'w')
{
$perm = 'read,tag,write,nocreate,nocontrol';
}
elsif ($perm eq 'c' || $perm eq 'a')
{
$perm = 'read,tag,write,create,nocontrol';
}
elsif ($perm eq 'p')
{
$perm = 'read,tag,write,create,control';
}
print "cvs rchacl$R -a $perm";
print " -u '$user'" if $user ne 'ALL';
print " -r '$branch'" if $branch ne 'ALL';
if ($path ne 'ALL')
{
print " '$path'";
}
else
{
print ' `ls $CVSROOT | grep -v '."'#cvs'`";
}
print "\n";
}


@@ -1,21 +0,0 @@
#!/usr/bin/perl
# Very simple inetd/xinetd "cvsnt rcsfile" service
# Useful for ViewVC as there is an unpleasant non-stable bug, probably somewhere
# inside mod_python or Apache, which SOMETIMES causes cvsnt subprocesses forked
# from mod_python to die. This gives empty diff outputs or different errors like
# "Error: Rlog output ended early. Expected RCS file ..."
# This script removes forking from mod_python code and solves the issue.
# An additional benefit of this script is that you can probably browse REMOTE
# cvs repositories if you expose this service to ViewVC, although this is not
# tested.
# USAGE: (local) create an inetd/xinetd service with this script as the server,
# listening on some port of 127.0.0.1, and put "rcsfile_socket = 127.0.0.1:port"
# into the "utilities" section of viewvc.conf.
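#
# For illustration only, an xinetd entry for this script might look roughly
# like the following (a sketch: the service name, port, user and script path
# are assumptions -- adjust them to your installation):
#
#   service viewvc-rcsfile
#   {
#       type        = UNLISTED
#       port        = 14001
#       bind        = 127.0.0.1
#       socket_type = stream
#       protocol    = tcp
#       wait        = no
#       user        = nobody
#       server      = /usr/local/viewvc/bin/rcsfile-service.pl
#   }
#
# with the matching viewvc.conf setting:
#
#   [utilities]
#   rcsfile_socket = 127.0.0.1:14001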
$args = <STDIN>;
$args =~ s/\s+$//so;
@args = $args =~ /\'([^\']+)\'/giso;
# We don't execute shell, so this is mostly safe, not a backdoor :)
exec('/usr/bin/cvsnt', 'rcsfile', @args);


@@ -1,317 +0,0 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# updates SQL database with new commit records
#
# -----------------------------------------------------------------------
#
#########################################################################
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, there will be no 'viewvcinstallpath.py'
#
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
# Adjust sys.path to include our library directory
import sys
import os
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0], "../../lib")))
#########################################################################
import os
import getopt
import re
import cvsdb
import viewvc
import vclib.ccvs
DEBUG_FLAG = 0
## output functions
def debug(text):
if DEBUG_FLAG:
if type(text) != (type([])):
text = [text]
for line in text:
line = line.rstrip('\n\r')
print 'DEBUG(viewvc-loginfo):', line
def warning(text):
print 'WARNING(viewvc-loginfo):', text
def error(text):
print 'ERROR(viewvc-loginfo):', text
sys.exit(1)
_re_revisions = re.compile(
r",(?P<old>(?:\d+\.\d+)(?:\.\d+\.\d+)*|NONE)" # comma and first revision
r",(?P<new>(?:\d+\.\d+)(?:\.\d+\.\d+)*|NONE)" # comma and second revision
r"(?:$| )" # space or end of string
)
def Cvs1Dot12ArgParse(args):
"""CVS 1.12 introduced a new loginfo format while provides the various
pieces of interesting version information to the handler script as
individual arguments instead of as a single string."""
if args[1] == '- New directory':
return None, None
elif args[1] == '- Imported sources':
return None, None
else:
directory = args.pop(0)
files = []
while len(args) >= 3:
files.append(args[0:3])
args = args[3:]
return directory, files
def HeuristicArgParse(s, repository):
"""Older versions of CVS (except for CVSNT) do not escape spaces in file
and directory names that are passed to the loginfo handler. Since the input
to loginfo is a space separated string, this can lead to ambiguities. This
function attempts to guess intelligently which spaces are separators and
which are part of file or directory names. It disambiguates spaces in
filenames from the separator spaces between files by assuming that every
space which is preceded by two well-formed revision numbers is in fact a
separator. It disambiguates the first separator space from spaces in the
directory name by choosing the longest possible directory name that
actually exists in the repository"""
if (s[-16:] == ' - New directory'
or s[:26] == ' - New directory,NONE,NONE'):
return None, None
if (s[-19:] == ' - Imported sources'
or s[-29:] == ' - Imported sources,NONE,NONE'):
return None, None
file_data_list = []
start = 0
while 1:
m = _re_revisions.search(s, start)
if start == 0:
if m is None:
error('Argument "%s" does not contain any revision numbers' \
% s)
directory, filename = FindLongestDirectory(s[:m.start()],
repository)
if directory is None:
error('Argument "%s" does not start with a valid directory' \
% s)
debug('Directory name is "%s"' % directory)
else:
if m is None:
warning('Failed to interpret past position %i in the loginfo '
'argument, leftover string is "%s"' \
% (start, s[start:]))
filename = s[start:m.start()]
old_version, new_version = m.group('old', 'new')
file_data_list.append((filename, old_version, new_version))
debug('File "%s", old revision %s, new revision %s'
% (filename, old_version, new_version))
start = m.end()
if start == len(s): break
return directory, file_data_list
def FindLongestDirectory(s, repository):
"""Splits the first part of the argument string into a directory name
and a file name, either of which may contain spaces. Returns the longest
possible directory name that actually exists"""
parts = s.split()
for i in range(len(parts)-1, 0, -1):
directory = ' '.join(parts[:i])
filename = ' '.join(parts[i:])
if os.path.isdir(os.path.join(repository, directory)):
return directory, filename
return None, None
_re_cvsnt_revisions = re.compile(
r"(?P<filename>.*)" # comma and first revision
r",(?P<old>(?:\d+\.\d+)(?:\.\d+\.\d+)*|NONE)" # comma and first revision
r",(?P<new>(?:\d+\.\d+)(?:\.\d+\.\d+)*|NONE)" # comma and second revision
r"$" # end of string
)
def CvsNtArgParse(s, repository):
"""CVSNT escapes all spaces in filenames and directory names with
backslashes"""
if s[-18:] == r' -\ New\ directory':
return None, None
if s[-21:] == r' -\ Imported\ sources':
return None, None
file_data_list = []
directory, pos = NextFile(s)
debug('Directory name is "%s"' % directory)
while 1:
fileinfo, pos = NextFile(s, pos)
if fileinfo is None:
break
m = _re_cvsnt_revisions.match(fileinfo)
if m is None:
warning('Can\'t parse file information in "%s"' % fileinfo)
continue
file_data = m.group('filename', 'old', 'new')
file_data_list.append(file_data)
debug('File "%s", old revision %s, new revision %s' % file_data)
return directory, file_data_list
def NextFile(s, pos = 0):
escaped = 0
ret = ''
i = pos
while i < len(s):
c = s[i]
if escaped:
ret += c
escaped = 0
elif c == '\\':
escaped = 1
elif c == ' ':
return ret, i + 1
else:
ret += c
i += 1
return ret or None, i
def ProcessLoginfo(rootpath, directory, files):
cfg = viewvc.load_config(CONF_PATHNAME)
db = cvsdb.ConnectDatabase(cfg)
repository = vclib.ccvs.CVSRepository(None, rootpath, None,
cfg.utilities, 0)
# split up the directory components
dirpath = filter(None, os.path.normpath(directory).split(os.sep))
## build a list of Commit objects
commit_list = []
for filename, old_version, new_version in files:
filepath = dirpath + [filename]
## XXX: this is nasty: in the case of a removed file, we are not
## given enough information to find it in the rlog output!
## So instead, we rlog everything in the removed file, and
## add any commits not already in the database
if new_version == 'NONE':
commits = cvsdb.GetUnrecordedCommitList(repository, filepath, db)
else:
commits = cvsdb.GetCommitListFromRCSFile(repository, filepath,
new_version)
commit_list.extend(commits)
## add to the database
db.AddCommitList(commit_list)
## MAIN
if __name__ == '__main__':
## get the repository from the environment
try:
repository = os.environ['CVSROOT']
except KeyError:
error('CVSROOT not in environment')
debug('Repository name is "%s"' % repository)
## parse arguments
argc = len(sys.argv)
debug('Got %d arguments:' % (argc))
debug(map(lambda x: ' ' + x, sys.argv))
# if we have more than 3 arguments, we are likely using the
# newer loginfo format introduced in CVS 1.12:
#
# ALL <path>/bin/loginfo-handler %p %{sVv}
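#
# For older CVS versions and for CVSNT the version information instead
# arrives as a single argument (handled by the else-branch below); the
# corresponding loginfo lines look roughly like this (a sketch -- see the
# ViewVC installation documentation for the exact syntax):
#
# ALL <path>/bin/loginfo-handler %{sVv}
# ALL <path>/bin/loginfo-handler %{sVv} cvsnt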
if argc > 3:
directory, files = Cvs1Dot12ArgParse(sys.argv[1:])
else:
if len(sys.argv) > 1:
# the first argument should contain file version information
arg = sys.argv[1]
else:
# if there are no arguments, read version information from
# first line of input like old versions of ViewCVS did
arg = sys.stdin.readline().rstrip()
if len(sys.argv) > 2:
# if there is a second argument it indicates which parser
# should be used to interpret the version information
if sys.argv[2] == 'cvs':
fun = HeuristicArgParse
elif sys.argv[2] == 'cvsnt':
fun = CvsNtArgParse
else:
error('Bad arguments')
else:
# if there is no second argument, guess which parser to use based
# on the operating system. Since CVSNT now runs on Windows and
# Linux, the guess isn't necessarily correct
if sys.platform == "win32":
fun = CvsNtArgParse
else:
fun = HeuristicArgParse
directory, files = fun(arg, repository)
debug('Discarded from stdin:')
debug(map(lambda x: ' ' + x, sys.stdin.readlines())) # consume stdin
repository = cvsdb.CleanRepository(repository)
debug('Repository: %s' % (repository))
debug('Directory: %s' % (directory))
debug('Files: %s' % (str(files)))
if files is None:
debug('Not a checkin, nothing to do')
else:
ProcessLoginfo(repository, directory, files)
sys.exit(0)


@@ -1,363 +0,0 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2009 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# administrative program for CVSdb; creates a clean database in
# MySQL 3.22 or later
#
# -----------------------------------------------------------------------
import os, sys, string
import popen2
import getopt
## ------------------------------------------------------------------------
## Stuff common to all schemas
##
DATABASE_SCRIPT_COMMON="""\
DROP DATABASE IF EXISTS <dbname>;
CREATE DATABASE <dbname>;
USE <dbname>;
"""
## ------------------------------------------------------------------------
## Version 0: The original, Bonsai-compatible schema.
##
DATABASE_SCRIPT_VERSION_0="""\
DROP TABLE IF EXISTS branches;
CREATE TABLE branches (
id mediumint(9) NOT NULL auto_increment,
branch varchar(64) binary DEFAULT '' NOT NULL,
PRIMARY KEY (id),
UNIQUE branch (branch)
) TYPE=InnoDB;
DROP TABLE IF EXISTS checkins;
CREATE TABLE checkins (
id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
type enum('Change','Add','Remove'),
ci_when datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,
whoid mediumint(9) DEFAULT '0' NOT NULL,
repositoryid mediumint(9) DEFAULT '0' NOT NULL,
dirid mediumint(9) DEFAULT '0' NOT NULL,
fileid mediumint(9) DEFAULT '0' NOT NULL,
revision varchar(32) binary DEFAULT '' NOT NULL,
stickytag varchar(255) binary DEFAULT '' NOT NULL,
branchid mediumint(9) DEFAULT '0' NOT NULL,
addedlines int(11) DEFAULT '0' NOT NULL,
removedlines int(11) DEFAULT '0' NOT NULL,
descid mediumint(9),
UNIQUE repositoryid (repositoryid,dirid,fileid,revision),
KEY repositoryid_when (repositoryid,ci_when),
KEY ci_when (ci_when),
KEY whoid (whoid,ci_when),
KEY dirid (dirid),
KEY fileid (fileid),
KEY branchid (branchid),
KEY descid (descid)
) TYPE=InnoDB;
DROP TABLE IF EXISTS descs;
CREATE TABLE descs (
id mediumint(9) NOT NULL auto_increment,
description text,
hash bigint(20) DEFAULT '0' NOT NULL,
PRIMARY KEY (id),
KEY hash (hash),
FULLTEXT KEY description (description)
) TYPE=MyISAM;
DROP TABLE IF EXISTS dirs;
CREATE TABLE dirs (
id mediumint(9) NOT NULL auto_increment,
dir varchar(255) binary DEFAULT '' NOT NULL,
PRIMARY KEY (id),
UNIQUE dir (dir)
) TYPE=InnoDB;
DROP TABLE IF EXISTS files;
CREATE TABLE files (
id mediumint(9) NOT NULL auto_increment,
file varchar(255) binary DEFAULT '' NOT NULL,
PRIMARY KEY (id),
UNIQUE file (file)
) TYPE=InnoDB;
DROP TABLE IF EXISTS people;
CREATE TABLE people (
id mediumint(9) NOT NULL auto_increment,
who varchar(128) binary DEFAULT '' NOT NULL,
PRIMARY KEY (id),
UNIQUE who (who)
) TYPE=InnoDB;
DROP TABLE IF EXISTS repositories;
CREATE TABLE repositories (
id mediumint(9) NOT NULL auto_increment,
repository varchar(64) binary DEFAULT '' NOT NULL,
PRIMARY KEY (id),
UNIQUE repository (repository)
) TYPE=InnoDB;
DROP TABLE IF EXISTS tags;
CREATE TABLE tags (
repositoryid mediumint(9) DEFAULT '0' NOT NULL,
branchid mediumint(9) DEFAULT '0' NOT NULL,
dirid mediumint(9) DEFAULT '0' NOT NULL,
fileid mediumint(9) DEFAULT '0' NOT NULL,
revision varchar(32) binary DEFAULT '' NOT NULL,
UNIQUE repositoryid (repositoryid,dirid,fileid,branchid,revision),
KEY repositoryid_2 (repositoryid),
KEY dirid (dirid),
KEY fileid (fileid),
KEY branchid (branchid)
) TYPE=InnoDB;
DROP TABLE IF EXISTS contents;
CREATE TABLE contents (
id int NOT NULL PRIMARY KEY,
content MEDIUMTEXT NOT NULL DEFAULT ''
) TYPE=MyISAM;
"""
## ------------------------------------------------------------------------
## Version 1: Adds the 'metadata' table. Adds 'descid' index to
## 'checkins' table, and renames that table to 'commits'.
##
DATABASE_SCRIPT_VERSION_1="""\
DROP TABLE IF EXISTS branches;
CREATE TABLE branches (
id mediumint(9) NOT NULL auto_increment,
branch varchar(64) binary DEFAULT '' NOT NULL,
PRIMARY KEY (id),
UNIQUE branch (branch)
) TYPE=InnoDB;
DROP TABLE IF EXISTS commits;
CREATE TABLE commits (
id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
type enum('Change','Add','Remove'),
ci_when datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,
whoid mediumint(9) DEFAULT '0' NOT NULL,
repositoryid mediumint(9) DEFAULT '0' NOT NULL,
dirid mediumint(9) DEFAULT '0' NOT NULL,
fileid mediumint(9) DEFAULT '0' NOT NULL,
revision varchar(32) binary DEFAULT '' NOT NULL,
stickytag varchar(255) binary DEFAULT '' NOT NULL,
branchid mediumint(9) DEFAULT '0' NOT NULL,
addedlines int(11) DEFAULT '0' NOT NULL,
removedlines int(11) DEFAULT '0' NOT NULL,
descid mediumint(9),
UNIQUE repositoryid (repositoryid,dirid,fileid,revision),
KEY repositoryid_when (repositoryid,ci_when),
KEY ci_when (ci_when),
KEY whoid (whoid,ci_when),
KEY dirid (dirid),
KEY fileid (fileid),
KEY branchid (branchid),
KEY descid (descid)
) TYPE=InnoDB;
DROP TABLE IF EXISTS descs;
CREATE TABLE descs (
id mediumint(9) NOT NULL auto_increment,
description text,
hash bigint(20) DEFAULT '0' NOT NULL,
PRIMARY KEY (id),
KEY hash (hash),
FULLTEXT KEY description (description)
) TYPE=MyISAM;
DROP TABLE IF EXISTS dirs;
CREATE TABLE dirs (
id mediumint(9) NOT NULL auto_increment,
dir varchar(255) binary DEFAULT '' NOT NULL,
PRIMARY KEY (id),
UNIQUE dir (dir)
) TYPE=InnoDB;
DROP TABLE IF EXISTS files;
CREATE TABLE files (
id mediumint(9) NOT NULL auto_increment,
file varchar(255) binary DEFAULT '' NOT NULL,
PRIMARY KEY (id),
UNIQUE file (file)
) TYPE=InnoDB;
DROP TABLE IF EXISTS people;
CREATE TABLE people (
id mediumint(9) NOT NULL auto_increment,
who varchar(128) binary DEFAULT '' NOT NULL,
PRIMARY KEY (id),
UNIQUE who (who)
) TYPE=InnoDB;
DROP TABLE IF EXISTS repositories;
CREATE TABLE repositories (
id mediumint(9) NOT NULL auto_increment,
repository varchar(64) binary DEFAULT '' NOT NULL,
PRIMARY KEY (id),
UNIQUE repository (repository)
) TYPE=InnoDB;
DROP TABLE IF EXISTS tags;
CREATE TABLE tags (
repositoryid mediumint(9) DEFAULT '0' NOT NULL,
branchid mediumint(9) DEFAULT '0' NOT NULL,
dirid mediumint(9) DEFAULT '0' NOT NULL,
fileid mediumint(9) DEFAULT '0' NOT NULL,
revision varchar(32) binary DEFAULT '' NOT NULL,
UNIQUE repositoryid (repositoryid,dirid,fileid,branchid,revision),
KEY repositoryid_2 (repositoryid),
KEY dirid (dirid),
KEY fileid (fileid),
KEY branchid (branchid)
) TYPE=InnoDB;
DROP TABLE IF EXISTS metadata;
CREATE TABLE metadata (
name varchar(255) binary DEFAULT '' NOT NULL,
value text,
PRIMARY KEY (name),
UNIQUE name (name)
) TYPE=InnoDB;
INSERT INTO metadata (name, value) VALUES ('version', '1');
DROP TABLE IF EXISTS contents;
CREATE TABLE contents (
id int NOT NULL PRIMARY KEY,
content MEDIUMTEXT NOT NULL DEFAULT ''
) TYPE=MyISAM;
"""
BONSAI_COMPAT="""
WARNING: Creating Bonsai-compatible legacy database version. Some ViewVC
features may not be available, or may not perform especially well.
"""
## ------------------------------------------------------------------------
def usage_and_exit(errmsg=None):
stream = errmsg is None and sys.stdout or sys.stderr
stream.write("""\
Usage: %s [OPTIONS]
This script creates the database and tables in MySQL used by the
ViewVC checkin database. In order to operate correctly, it needs to
know the following: your database server hostname, database user,
database user password, and database name. (You will be prompted for
any of this information that you do not provide via command-line
options.) This script will use the 'mysql' program to create the
database for you. You will then need to set the appropriate
parameters in the [cvsdb] section of your viewvc.conf file.
Options:
--dbname=ARG Use ARG as the ViewVC database name to create.
[Default: ViewVC]
--help Show this usage message.
--hostname=ARG Use ARG as the hostname for the MySQL connection.
[Default: localhost]
--password=ARG Use ARG as the password for the MySQL connection.
--username=ARG Use ARG as the username for the MySQL connection.
--version=ARG Create the database using the schema employed by
version ARG of ViewVC. Valid values are:
[ "1.0" ]
""" % (os.path.basename(sys.argv[0])))
if errmsg is not None:
stream.write("[ERROR] %s.\n" % (errmsg))
sys.exit(1)
sys.exit(0)
if __name__ == "__main__":
try:
# Parse the command-line options, if any.
dbname = version = hostname = username = password = None
opts, args = getopt.getopt(sys.argv[1:], '', [ 'dbname=',
'help',
'hostname=',
'password=',
'username=',
'version=',
])
if len(args) > 0:
usage_and_exit("Unexpected command-line parameters")
for name, value in opts:
if name == '--help':
usage_and_exit()
elif name == '--dbname':
dbname = value
elif name == '--hostname':
hostname = value
elif name == '--username':
username = value
elif name == '--password':
password = value
elif name == '--version':
if value in ["1.0"]:
version = value
else:
usage_and_exit("Invalid version specified")
# Prompt for information not provided via command-line options.
if hostname is None:
hostname = raw_input("MySQL Hostname [default: localhost]: ") or ""
if username is None:
username = raw_input("MySQL User: ")
if password is None:
password = raw_input("MySQL Password: ")
if dbname is None:
dbname = raw_input("ViewVC Database Name [default: ViewVC]: ") or "ViewVC"
# Create the database
dscript = string.replace(DATABASE_SCRIPT_COMMON, "<dbname>", dbname)
if version == "1.0":
print BONSAI_COMPAT
dscript = dscript + DATABASE_SCRIPT_VERSION_0
else:
dscript = dscript + DATABASE_SCRIPT_VERSION_1
host_option = hostname and "--host=%s" % (hostname) or ""
if sys.platform == "win32":
cmd = "mysql --user=%s --password=%s %s "\
% (username, password, host_option)
mysql = os.popen(cmd, "w") # popen2.Popen3 is not provided on windows
mysql.write(dscript)
status = mysql.close()
else:
cmd = "{ mysql --user=%s --password=%s %s ; } 2>&1" \
% (username, password, host_option)
pipes = popen2.Popen3(cmd)
pipes.tochild.write(dscript)
pipes.tochild.close()
print pipes.fromchild.read()
status = pipes.wait()
if status:
print "[ERROR] The database did not create sucessfully."
sys.exit(1)
print "Database created successfully. Don't forget to configure the "
print "[cvsdb] section of your viewvc.conf file."
except KeyboardInterrupt:
pass
sys.exit(0)


@@ -1,3 +0,0 @@
AddHandler python-program .py
PythonHandler handler
PythonDebug Off


@@ -1,29 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# Mod_Python handler based on mod_python.publisher
#
# -----------------------------------------------------------------------
from mod_python import apache
import os.path
def handler(req):
path, module_name = os.path.split(req.filename)
module_name, module_ext = os.path.splitext(module_name)
# Let it be 500 Internal Server Error in case of import error
module = apache.import_module(module_name, path=[path])
req.add_common_vars()
module.index(req)
return apache.OK


@@ -1,75 +0,0 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# ViewVC: View CVS/SVN repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewVC app. It checks the load
# average, then loads the (precompiled) viewvc.py file and runs it.
#
# -----------------------------------------------------------------------
#
#########################################################################
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, there will be no 'viewvcinstallpath.py'
#
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
#########################################################################
#
# Adjust sys.path to include our library directory
#
import sys
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
import sapi
import imp
# Import real ViewVC module
fp, pathname, description = imp.find_module('viewvc', [LIBRARY_DIR])
try:
viewvc = imp.load_module('viewvc', fp, pathname, description)
finally:
if fp:
fp.close()
# Import real ViewVC Query modules
fp, pathname, description = imp.find_module('query', [LIBRARY_DIR])
try:
query = imp.load_module('query', fp, pathname, description)
finally:
if fp:
fp.close()
cfg = viewvc.load_config(CONF_PATHNAME)
def index(req):
server = sapi.ModPythonServer(req)
try:
viewvc_base_url = cfg.query.viewvc_base_url
if viewvc_base_url is None:
viewvc_base_url = "viewvc.py"
query.main(server, cfg, viewvc_base_url)
finally:
server.close()


@@ -1,69 +0,0 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# viewvc: View CVS/SVN repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewVC app. It checks the load
# average, then loads the (precompiled) viewvc.py file and runs it.
#
# -----------------------------------------------------------------------
#
#########################################################################
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, there will be no 'viewvcinstallpath.py'
#
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
#########################################################################
#
# Adjust sys.path to include our library directory
#
import sys
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
import sapi
import imp
import signal
# Totally ignore SIGPIPE - needed for overcoming popen() errors
def sigpipe(signum, frame):
pass
signal.signal(signal.SIGPIPE, sigpipe)
# Import real ViewVC module
fp, pathname, description = imp.find_module('viewvc', [LIBRARY_DIR])
try:
viewvc = imp.load_module('viewvc', fp, pathname, description)
finally:
if fp:
fp.close()
def index(req):
server = sapi.ModPythonServer(req)
cfg = viewvc.load_config(CONF_PATHNAME, server)
try:
viewvc.main(server, cfg)
finally:
server.close()


@@ -1,57 +0,0 @@
#!/bin/sh
CVSROOTS=$1
VIEWVC_DIR=$2
CVS_USER=$3
CVS_GROUP=$4
CVS_ASYNC=$5
if [ ! "$CVSROOTS" -o ! "$VIEWVC_DIR" -o ! "$CVS_USER" -o ! "$CVS_GROUP" ]; then
echo "USAGE: $0 <CVS_ROOTS> <VIEWVC_DIR> <CVS_USER> <CVS_GROUP> [CVS_ASYNC]"
exit
fi
echo "*** Setting up commit hooks for CVS repositories (job=$CVS_ASYNC)"
chgrp $CVS_GROUP $VIEWVC_DIR
chmod 775 $VIEWVC_DIR
chmod 1777 .
for CVSROOT in $CVSROOTS; do
vcrootname=`basename $CVSROOT`
if [ "$CVS_ASYNC" ]; then
VHOOK="touch $VIEWVC_DIR/.cvs-updated-$vcrootname"
else
VHOOK="$VIEWVC_DIR/bin/cvsdbadmin update $CVSROOT >/dev/null"
fi;
su $CVS_USER -s /bin/sh <<EOSH
CVSROOT=$CVSROOT cvs co CVSROOT
if [ $? -eq 0 ]; then
cd CVSROOT
mv loginfo loginfo.bak
grep -v '/.cvs-updated' <loginfo.bak | grep -v '/bin/cvsdbadmin update' >loginfo
cat >>loginfo <<EOF
ALL $VHOOK
EOF
cvs ci -m 'CVS commit hook for ViewVC' loginfo
cd ..
rm -Rf CVSROOT
fi
EOSH
if [ "$CVS_ASYNC" ]; then
cat >$VIEWVC_DIR/cvshook-$vcrootname <<EOF
#!/bin/sh
if [ -e "$VIEWVC_DIR/.cvs-updated-$vcrootname" ]; then
rm "$VIEWVC_DIR/.cvs-updated-$vcrootname"
$VIEWVC_DIR/bin/cvsdbadmin update $CVSROOT >/dev/null
fi
EOF
chmod 755 $VIEWVC_DIR/cvshook-$vcrootname
cat >/etc/cron.d/viewvc-cvs-$vcrootname <<EOF
*/5 * * * * root $VIEWVC_DIR/cvshook-$vcrootname
EOF
fi
done;
if [ "$CVS_ASYNC" ]; then
echo "*** Reloading cron daemon"
/etc/init.d/cron reload
fi


@@ -1,36 +0,0 @@
#!/bin/sh
SVNROOT=$1
VIEWVC_DIR=$2
SVN_ASYNC=$3
if [ ! "$SVNROOT" -o ! "$VIEWVC_DIR" ]; then
echo "USAGE: $0 <SVNROOT> <VIEWVC_DIR>"
exit
fi
echo "*** Setting up commit hooks for Subversion"
for i in `ls $SVNROOT`; do
if [ -d "$SVNROOT/$i" ]; then
if [ "$SVN_ASYNC" ]; then
cat >"$SVNROOT/$i/hooks/post-commit.tmp" <<EOF
#!/bin/sh
echo "\$1 \$2" >> $VIEWVC_DIR/.svn-updated
EOF
else
cat >"$SVNROOT/$i/hooks/post-commit.tmp" <<EOF
#!/bin/sh
$VIEWVC_DIR/bin/svndbadmin update \$1 \$2
EOF
fi
chmod 755 "$SVNROOT/$i/hooks/post-commit.tmp"
mv "$SVNROOT/$i/hooks/post-commit.tmp" "$SVNROOT/$i/hooks/post-commit"
fi
done;
if [ "$SVN_ASYNC" ]; then
cat >/etc/cron.d/viewvc-svn <<EOF
*/10 * * * * root $VIEWVC_DIR/bin/svnupdate-async.sh "$VIEWVC_DIR"
EOF
/etc/init.d/cron reload
fi


@@ -1,588 +0,0 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# This program originally written by Peter Funk <pf@artcom-gmbh.de>, with
# contributions by Ka-Ping Yee.
#
# -----------------------------------------------------------------------
"""Run "standalone.py -p <port>" to start an HTTP server on a given port
on the local machine to generate ViewVC web pages.
"""
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, there will be no 'viewvcinstallpath.py'
#
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
import sys
import os
import os.path
import stat
import string
import urllib
import rfc822
import socket
import select
import base64
import BaseHTTPServer
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0], "../../lib")))
import sapi
import viewvc
# The 'crypt' module is only available on Unix platforms. We'll try
# to use 'fcrypt' if it's available (for more information, see
# http://carey.geek.nz/code/python-fcrypt/).
has_crypt = False
try:
import crypt
has_crypt = True
def _check_passwd(user_passwd, real_passwd):
return real_passwd == crypt.crypt(user_passwd, real_passwd[:2])
except ImportError:
try:
import fcrypt
has_crypt = True
def _check_passwd(user_passwd, real_passwd):
return real_passwd == fcrypt.crypt(user_passwd, real_passwd[:2])
except ImportError:
def _check_passwd(user_passwd, real_passwd):
return False
class Options:
port = 49152 # default TCP/IP port used for the server
repositories = {} # use default repositories specified in config
host = sys.platform == 'mac' and '127.0.0.1' or 'localhost'
script_alias = 'viewvc'
config_file = None
htpasswd_file = None
class StandaloneServer(sapi.CgiServer):
"""Custom sapi interface that uses a BaseHTTPRequestHandler HANDLER
to generate output."""
def __init__(self, handler):
sapi.CgiServer.__init__(self, inheritableOut = sys.platform != "win32")
self.handler = handler
def header(self, content_type='text/html', status=None):
if not self.headerSent:
self.headerSent = 1
if status is None:
statusCode = 200
statusText = 'OK'
else:
p = status.find(' ')
if p < 0:
statusCode = int(status)
statusText = ''
else:
statusCode = int(status[:p])
statusText = status[p+1:]
self.handler.send_response(statusCode, statusText)
self.handler.send_header("Content-type", content_type)
for (name, value) in self.headers:
self.handler.send_header(name, value)
self.handler.end_headers()
class NotViewVCLocationException(Exception):
"""The request location was not aimed at ViewVC."""
pass
class AuthenticationException(Exception):
"""Authentication requirements have not been met."""
pass
class ViewVCHTTPRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
"""Custom HTTP request handler for ViewVC."""
def do_GET(self):
"""Serve a GET request."""
self.handle_request('GET')
def do_POST(self):
"""Serve a POST request."""
self.handle_request('POST')
def handle_request(self, method):
"""Handle a request of type METHOD."""
try:
self.run_viewvc()
except NotViewVCLocationException:
# If the request was aimed at the server root, but there's a
# non-empty script_alias, automatically redirect to the
# script_alias. Otherwise, just return a 404 and shrug.
if (not self.path or self.path == "/") and options.script_alias:
new_url = self.server.url + options.script_alias + '/'
self.send_response(301, "Moved Permanently")
self.send_header("Content-type", "text/html")
self.send_header("Location", new_url)
self.end_headers()
self.wfile.write("""<html>
<head>
<meta http-equiv="refresh" content="10; url=%s" />
<title>Moved Permanently</title>
</head>
<body>
<h1>Redirecting to ViewVC</h1>
<p>You will be automatically redirected to <a href="%s">ViewVC</a>.
If this doesn't work, please click on the link above.</p>
</body>
</html>
""" % (new_url, new_url))
else:
self.send_error(404)
except IOError: # ignore IOError: [Errno 32] Broken pipe
pass
except AuthenticationException:
self.send_response(401, "Unauthorized")
self.send_header("WWW-Authenticate", 'Basic realm="ViewVC"')
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write("""<html>
<head>
<title>Authentication failed</title>
</head>
<body>
<h1>Authentication failed</h1>
<p>Authentication has failed. Please retry with the correct username
and password.</p>
</body>
</html>""")
def is_viewvc(self):
"""Check whether self.path is, or is a child of, the ScriptAlias"""
if not options.script_alias:
return 1
if self.path == '/' + options.script_alias:
return 1
alias_len = len(options.script_alias)
if self.path[:alias_len+2] == '/' + options.script_alias + '/':
return 1
if self.path[:alias_len+2] == '/' + options.script_alias + '?':
return 1
return 0
def validate_password(self, htpasswd_file, username, password):
"""Compare USERNAME and PASSWORD against HTPASSWD_FILE."""
try:
lines = open(htpasswd_file, 'r').readlines()
for line in lines:
file_user, file_pass = line.rstrip().split(':', 1)
if username == file_user:
return _check_passwd(password, file_pass)
except:
pass
return False
def run_viewvc(self):
"""Run ViewVC to field a single request."""
### Much of this is adapted from Python's standard library
### module CGIHTTPServer.
# Is this request even aimed at ViewVC? If not, complain.
if not self.is_viewvc():
raise NotViewVCLocationException()
# If htpasswd authentication is enabled, try to authenticate the user.
self.username = None
if options.htpasswd_file:
authn = self.headers.get('authorization')
if not authn:
raise AuthenticationException()
try:
kind, data = authn.split(' ', 1)
if kind == 'Basic':
data = base64.b64decode(data)
username, password = data.split(':', 1)
except:
raise AuthenticationException()
if not self.validate_password(options.htpasswd_file, username, password):
raise AuthenticationException()
self.username = username
# Setup the environment in preparation of executing ViewVC's core code.
env = os.environ
scriptname = options.script_alias and '/' + options.script_alias or ''
viewvc_url = self.server.url[:-1] + scriptname
rest = self.path[len(scriptname):]
i = rest.rfind('?')
if i >= 0:
rest, query = rest[:i], rest[i+1:]
else:
query = ''
# Since we're going to modify the env in the parent, provide empty
# values to override previously set values
for k in env.keys():
if k[:5] == 'HTTP_':
del env[k]
for k in ('QUERY_STRING', 'REMOTE_HOST', 'CONTENT_LENGTH',
'HTTP_USER_AGENT', 'HTTP_COOKIE'):
if env.has_key(k):
env[k] = ""
# XXX Much of the following could be prepared ahead of time!
env['SERVER_SOFTWARE'] = self.version_string()
env['SERVER_NAME'] = self.server.server_name
env['GATEWAY_INTERFACE'] = 'CGI/1.1'
env['SERVER_PROTOCOL'] = self.protocol_version
env['SERVER_PORT'] = str(self.server.server_port)
env['REQUEST_METHOD'] = self.command
uqrest = urllib.unquote(rest)
env['PATH_INFO'] = uqrest
env['SCRIPT_NAME'] = scriptname
if query:
env['QUERY_STRING'] = query
env['HTTP_HOST'] = self.server.address[0]
host = self.address_string()
if host != self.client_address[0]:
env['REMOTE_HOST'] = host
env['REMOTE_ADDR'] = self.client_address[0]
if self.username:
env['REMOTE_USER'] = self.username
if self.headers.typeheader is None:
env['CONTENT_TYPE'] = self.headers.type
else:
env['CONTENT_TYPE'] = self.headers.typeheader
length = self.headers.getheader('content-length')
if length:
env['CONTENT_LENGTH'] = length
accept = []
for line in self.headers.getallmatchingheaders('accept'):
if line[:1] in string.whitespace:
accept.append(line.strip())
else:
accept = accept + line[7:].split(',')
env['HTTP_ACCEPT'] = ','.join(accept)
ua = self.headers.getheader('user-agent')
if ua:
env['HTTP_USER_AGENT'] = ua
modified = self.headers.getheader('if-modified-since')
if modified:
env['HTTP_IF_MODIFIED_SINCE'] = modified
etag = self.headers.getheader('if-none-match')
if etag:
env['HTTP_IF_NONE_MATCH'] = etag
# AUTH_TYPE
# REMOTE_IDENT
# XXX Other HTTP_* headers
# Preserve state, because we execute script in current process:
save_argv = sys.argv
save_stdin = sys.stdin
save_stdout = sys.stdout
save_stderr = sys.stderr
# For external tools like enscript we also need to redirect
# the real stdout file descriptor.
#
# FIXME: This code used to carry the following comment:
#
# (On windows, reassigning the sys.stdout variable is sufficient
# because pipe_cmds makes it the standard output for child
# processes.)
#
# But we no longer use pipe_cmds. So at the very least, the
# comment is stale. Is the code okay, though?
if sys.platform != "win32":
save_realstdout = os.dup(1)
try:
try:
sys.stdout = self.wfile
if sys.platform != "win32":
os.dup2(self.wfile.fileno(), 1)
sys.stdin = self.rfile
viewvc.main(StandaloneServer(self), cfg)
finally:
sys.argv = save_argv
sys.stdin = save_stdin
sys.stdout.flush()
if sys.platform != "win32":
os.dup2(save_realstdout, 1)
os.close(save_realstdout)
sys.stdout = save_stdout
sys.stderr = save_stderr
except SystemExit, status:
self.log_error("ViewVC exit status %s", str(status))
else:
self.log_error("ViewVC exited ok")
class ViewVCHTTPServer(BaseHTTPServer.HTTPServer):
"""Customized HTTP server for ViewVC."""
def __init__(self, host, port, callback):
self.address = (host, port)
self.url = 'http://%s:%d/' % (host, port)
self.callback = callback
BaseHTTPServer.HTTPServer.__init__(self, self.address, self.handler)
def serve_until_quit(self):
self.quit = 0
while not self.quit:
rd, wr, ex = select.select([self.socket.fileno()], [], [], 1)
if rd:
self.handle_request()
def server_activate(self):
BaseHTTPServer.HTTPServer.server_activate(self)
if self.callback:
self.callback(self)
def server_bind(self):
# set SO_REUSEADDR (if available on this platform)
if hasattr(socket, 'SOL_SOCKET') and hasattr(socket, 'SO_REUSEADDR'):
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
BaseHTTPServer.HTTPServer.server_bind(self)
def serve(host, port, callback=None):
"""Start an HTTP server for HOST on PORT. Call CALLBACK function
when the server is ready to serve."""
ViewVCHTTPServer.handler = ViewVCHTTPRequestHandler
try:
# XXX Move this code out of this function.
# Early loading of configuration here. Used to allow tinkering
# with some configuration settings:
handle_config(options.config_file)
if options.repositories:
cfg.general.default_root = "Development"
for repo_name in options.repositories.keys():
repo_path = os.path.normpath(options.repositories[repo_name])
if os.path.exists(os.path.join(repo_path, "CVSROOT", "config")):
cfg.general.cvs_roots[repo_name] = repo_path
elif os.path.exists(os.path.join(repo_path, "format")):
cfg.general.svn_roots[repo_name] = repo_path
elif cfg.general.cvs_roots.has_key("Development") and \
not os.path.isdir(cfg.general.cvs_roots["Development"]):
sys.stderr.write("*** No repository found. Please use the -r option.\n")
sys.stderr.write(" Use --help for more info.\n")
raise KeyboardInterrupt # Hack!
os.close(0) # To avoid problems with shell job control
# always use default docroot location
cfg.options.docroot = None
# if cvsnt isn't found, fall back to rcs
if (cfg.conf_path is None and cfg.utilities.cvsnt):
import popen
cvsnt_works = 0
try:
fp = popen.popen(cfg.utilities.cvsnt, ['--version'], 'rt')
try:
while 1:
line = fp.readline()
if not line:
break
if line.find("Concurrent Versions System (CVSNT)") >= 0:
cvsnt_works = 1
while fp.read(4096):
pass
break
finally:
fp.close()
except:
pass
if not cvsnt_works:
cfg.utilities.cvsnt = None
ViewVCHTTPServer(host, port, callback).serve_until_quit()
except (KeyboardInterrupt, select.error):
pass
print 'server stopped'
def handle_config(config_file):
global cfg
cfg = viewvc.load_config(config_file or CONF_PATHNAME)
def usage():
clean_options = Options()
cmd = os.path.basename(sys.argv[0])
port = clean_options.port
host = clean_options.host
script_alias = clean_options.script_alias
sys.stderr.write("""Usage: %(cmd)s [OPTIONS]
Run a simple, standalone HTTP server configured to serve up ViewVC requests.
Options:
--config-file=FILE (-c) Read configuration options from FILE. If not
specified, ViewVC will look for a configuration
file in its installation tree, falling back to
built-in default values.
--daemon (-d) Background the server process.
--help Show this usage message and exit.
--host=HOSTNAME (-h) Listen on HOSTNAME. Required for access from a
remote machine. [default: %(host)s]
--htpasswd-file=FILE Authenticate incoming requests, validating them
against FILE, which is an Apache HTTP Server
htpasswd file. (CRYPT only; no DIGEST support.)
--port=PORT (-p) Listen on PORT. [default: %(port)d]
--repository=PATH (-r) Serve the Subversion or CVS repository located
at PATH. This option may be used more than once.
--script-alias=PATH (-s) Use PATH as the virtual script location (similar
to Apache HTTP Server's ScriptAlias directive).
For example, "--script-alias=repo/view" will serve
ViewVC at "http://HOSTNAME:PORT/repo/view".
[default: %(script_alias)s]
""" % locals())
sys.exit(0)
def badusage(errstr):
cmd = os.path.basename(sys.argv[0])
sys.stderr.write("ERROR: %s\n\n"
"Try '%s --help' for detailed usage information.\n"
% (errstr, cmd))
sys.exit(1)
def main(argv):
"""Command-line interface (looks at argv to decide what to do)."""
import getopt
short_opts = ''.join(['c:',
'd',
'h:',
'p:',
'r:',
's:',
])
long_opts = ['daemon',
'config-file=',
'help',
'host=',
'htpasswd-file=',
'port=',
'repository=',
'script-alias=',
]
opt_daemon = False
opt_host = None
opt_port = None
opt_htpasswd_file = None
opt_config_file = None
opt_script_alias = None
opt_repositories = []
# Parse command-line options.
try:
opts, args = getopt.getopt(argv[1:], short_opts, long_opts)
for opt, val in opts:
if opt in ['--help']:
usage()
elif opt in ['-r', '--repository']: # may be used more than once
opt_repositories.append(val)
elif opt in ['-d', '--daemon']:
opt_daemon = 1
elif opt in ['-p', '--port']:
opt_port = val
elif opt in ['-h', '--host']:
opt_host = val
elif opt in ['-s', '--script-alias']:
opt_script_alias = val
elif opt in ['-c', '--config-file']:
opt_config_file = val
elif opt in ['--htpasswd-file']:
opt_htpasswd_file = val
except getopt.error, err:
badusage(str(err))
# Validate options that need validating.
class BadUsage(Exception): pass
try:
if opt_port is not None:
try:
options.port = int(opt_port)
except ValueError:
raise BadUsage("Port '%s' is not a valid port number" % (opt_port))
if not options.port:
raise BadUsage("You must supply a valid port.")
if opt_htpasswd_file is not None:
if not os.path.isfile(opt_htpasswd_file):
raise BadUsage("'%s' does not appear to be a valid htpasswd file."
% (opt_htpasswd_file))
if not has_crypt:
raise BadUsage("Unable to locate suitable `crypt' module for use "
"with --htpasswd-file option. If your Python "
"distribution does not include this module (as is "
"the case on many non-Unix platforms), consider "
"installing the `fcrypt' module instead (see "
"http://carey.geek.nz/code/python-fcrypt/).")
options.htpasswd_file = opt_htpasswd_file
if opt_config_file is not None:
if not os.path.isfile(opt_config_file):
raise BadUsage("'%s' does not appear to be a valid configuration file."
% (opt_config_file))
options.config_file = opt_config_file
if opt_host is not None:
options.host = opt_host
if opt_script_alias is not None:
options.script_alias = '/'.join(filter(None, opt_script_alias.split('/')))
for repository in opt_repositories:
if not options.repositories.has_key('Development'):
rootname = 'Development'
else:
rootname = 'Repository%d' % (len(options.repositories.keys()) + 1)
options.repositories[rootname] = repository
except BadUsage, err:
badusage(str(err))
# Fork if we're in daemon mode.
if opt_daemon:
pid = os.fork()
if pid != 0:
sys.exit()
# Finally, start the server.
def ready(server):
print 'server ready at %s%s' % (server.url, options.script_alias)
serve(options.host, options.port, ready)
if __name__ == '__main__':
options = Options()
main(sys.argv)


@@ -1,545 +0,0 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 2004-2013 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2004-2007 James Henstridge
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# administrative program for loading Subversion revision information
# into the checkin database. It can be used to add a single revision
# to the database, or rebuild/update all revisions.
#
# To add all the checkins from a Subversion repository to the checkin
# database, run the following:
# /path/to/svndbadmin rebuild /path/to/repo
#
# This script can also be called from the Subversion post-commit hook,
# something like this:
# REPOS="$1"
# REV="$2"
# /path/to/svndbadmin update "$REPOS" "$REV"
#
# If you allow changes to revision properties in your repository, you
# might also want to set up something similar in the
# post-revprop-change hook using "update" with the --force option to
# keep the checkin database consistent with the repository.
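#
# For example, a hypothetical post-revprop-change hook (its first two
# arguments follow the standard Subversion hook interface) might contain:
#   REPOS="$1"
#   REV="$2"
#   /path/to/svndbadmin update "$REPOS" "$REV" --force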
#
# -----------------------------------------------------------------------
#
#########################################################################
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, there will be no 'viewvcinstallpath.py'
#
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
# Adjust sys.path to include our library directory
import sys
import os
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0], "../../lib")))
#########################################################################
import os
import string
import socket
import select
import re
import mimetypes
import time
import svn.core
import svn.repos
import svn.fs
import svn.delta
import cvsdb
import viewvc
import vclib
from viewvcmagic import ContentMagic
class SvnRepo:
"""Class used to manage a connection to a SVN repository."""
def __init__(self, path, index_content = None, tika_client = None, guesser = None,
svn_ignore_mimetype = False, verbose = False):
self.path = path
self.repo = svn.repos.svn_repos_open(path)
self.fs = svn.repos.svn_repos_fs(self.repo)
self.rev_max = svn.fs.youngest_rev(self.fs)
self.index_content = index_content
self.tika_client = tika_client
self.guesser = guesser
self.verbose = verbose
self.svn_ignore_mimetype = svn_ignore_mimetype
def __getitem__(self, rev):
if rev is None:
rev = self.rev_max
elif rev < 0:
rev = rev + self.rev_max + 1
assert 0 <= rev <= self.rev_max
rev = SvnRev(self, rev)
return rev
_re_diff_change_command = re.compile('^(\d+)(?:,(\d+))?([acd])(\d+)(?:,(\d+))?')
class StupidBufferedReader:
def __init__(self, fp, buffer = 262144):
self.fp = fp
self.bufsize = buffer
self.buffer = ''
self.eof = False
def __iter__(self):
return self
def next(self):
if self.eof:
raise StopIteration
return self.readline()
def readline(self):
if self.eof:
return ''
p = self.buffer.find('\n')
while p < 0:
b = self.fp.read(self.bufsize)
if not len(b):
r = self.buffer
self.buffer = ''
self.eof = True
return r
self.buffer = self.buffer + b
p = self.buffer.find('\n')
r = self.buffer[0:p+1]
self.buffer = self.buffer[p+1:]
return r
def _get_diff_counts(diff_fp):
"""Calculate the plus/minus counts by parsing the output of a
normal diff. The reasons for choosing Normal diff format are:
- the output is short, so should be quicker to parse.
- only the change commands need be parsed to calculate the counts.
- All file data is prefixed, so won't be mistaken for a change
command.
This code is based on the description of the format found in the
GNU diff manual."""
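# Illustrative example (not from the source): the change command "5,7c5,8"
# replaces lines 5-7 of the first file with lines 5-8 of the second file,
# so it contributes count1 = 3 to the minus total and count2 = 4 to the
# plus total.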
plus, minus = 0, 0
for line in diff_fp:
match = re.match(_re_diff_change_command, line)
if match:
# size of first range
if match.group(2):
count1 = int(match.group(2)) - int(match.group(1)) + 1
else:
count1 = 1
cmd = match.group(3)
# size of second range
if match.group(5):
count2 = int(match.group(5)) - int(match.group(4)) + 1
else:
count2 = 1
if cmd == 'a':
# LaR - insert after line L of file1 range R of file2
plus = plus + count2
elif cmd == 'c':
# FcT - replace range F of file1 with range T of file2
minus = minus + count1
plus = plus + count2
elif cmd == 'd':
# RdL - remove range R of file1, which would have been
# at line L of file2
minus = minus + count1
return plus, minus
class TikaClient:
# Create tika client
def __init__(self, tika_server, mime_types, verbose):
self.tika_server = tika_server
self.mime_types = mime_types
self.verbose = verbose
self.addr = tika_server.split(':')
# Split address
if len(self.addr) != 2:
raise Exception('tika_server value is incorrect: \''+tika_server+'\', please use \'host:port\' format')
self.addr = (self.addr[0], int(self.addr[1]))
# Build regexp for MIME types
m = re.split('\s+', mime_types.strip())
self.mime_regexp = re.compile('|'.join('^'+re.escape(i).replace('\\*', '.*')+'$' for i in m))
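# Illustrative example (hypothetical setting): mime_types set to
# "application/pdf application/*word*" produces a pattern equivalent to
# ^application/pdf$|^application/.*word.*$, so both exact MIME types and
# glob-style wildcards are matched.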
# Extract text content from file using Tika which runs in server mode
def get_text(self, filename, mime_type, log_filename):
if not self.mime_regexp.match(mime_type):
# Tika can't handle this mime type, return nothing
return ''
fd = None
s = None
text = ''
fsize = 0
try:
# Read original file
fd = open(filename, 'rb')
data = fd.read()
fsize = len(data)
if not fsize:
return ''
# Connect to Tika
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(self.addr)
s.setblocking(0)
sockfd = s.fileno()
# Tika is somewhat delicate about network IO, so:
# Read and write using poll(2) system call
p = select.poll()
p.register(sockfd)
while 1:
fds = p.poll()
if not fds:
break
(pollfd, event) = fds[0]
if event & select.POLLIN:
# Exception or empty data means EOF...
try: part = os.read(sockfd, 65536)
except: break
if not part: break
text += part
if event & select.POLLOUT:
if not len(data):
# Shutdown output and forget about POLLOUT
s.shutdown(socket.SHUT_WR)
p.modify(sockfd, select.POLLIN)
else:
# Write and consume some data
l = os.write(sockfd, data)
data = data[l:]
if len(text) == 0:
raise Exception('Empty response from Tika server')
if self.verbose:
print "Extracted %d bytes from %s (%s) of size %d" % (len(text), log_filename, mime_type, fsize)
except Exception, e:
if self.verbose:
print "Error extracting text from %s (%s) of size %d: %s" % (log_filename, mime_type, fsize, str(e))
finally:
if fd: fd.close()
if s: s.close()
return text
class SvnRev:
"""Class used to hold information about a particular revision of
the repository."""
def __init__(self, repo, rev):
self.repo = repo
self.rev = rev
self.rev_roots = {} # cache of revision roots
# revision properties ...
revprops = svn.fs.revision_proplist(repo.fs, rev)
self.author = str(revprops.get(svn.core.SVN_PROP_REVISION_AUTHOR,''))
self.date = str(revprops.get(svn.core.SVN_PROP_REVISION_DATE, ''))
self.log = str(revprops.get(svn.core.SVN_PROP_REVISION_LOG, ''))
# convert the date string to seconds since epoch ...
try:
self.date = svn.core.svn_time_from_cstring(self.date) / 1000000
except:
self.date = None
# get a root for the current revisions
fsroot = self._get_root_for_rev(rev)
# find changes in the revision
editor = svn.repos.ChangeCollector(repo.fs, fsroot)
e_ptr, e_baton = svn.delta.make_editor(editor)
svn.repos.svn_repos_replay(fsroot, e_ptr, e_baton)
self.changes = []
changes_hash = {}
for path, change in editor.changes.items():
# skip non-file changes
if change.item_kind != svn.core.svn_node_file:
continue
# deal with the change types we handle
action = None
base_root = None
base_path = change.base_path
if change.base_path:
base_root = self._get_root_for_rev(change.base_rev)
# figure out what kind of change this is, and get a diff
# object for it. note that prior to 1.4 Subversion's
# bindings didn't give us change.action, but that's okay
# because back then deleted paths always had a change.path
# of None.
if hasattr(change, 'action') \
and change.action == svn.repos.CHANGE_ACTION_DELETE:
action = 'remove'
elif not change.path:
action = 'remove'
elif change.added:
action = 'add'
else:
action = 'change'
if action == 'remove':
diffobj = svn.fs.FileDiff(base_root, change.base_path, None, None, None, ['-b', '-B'])
else:
diffobj = svn.fs.FileDiff(base_root, change.base_path, fsroot, change.path, None, ['-b', '-B'])
diff_fp = diffobj.get_pipe()
diff_fp = StupidBufferedReader(diff_fp)
plus, minus = _get_diff_counts(diff_fp)
# CustIS Bug 50473: a workaround for svnlib behaviour in file movements (FILE1 -> FILE2 + FILE1 -> null)
if change.base_path:
if not change.path and change.base_path in changes_hash:
minus = 0
elif change.path:
changes_hash[change.base_path] = change.path
content = ''
mime = ''
# need to check if binary file's content changed when copying,
# if not, don't extract it, just get it from previous revision later
if repo.index_content and action != 'remove' and change.path and (not change.base_path
or svn.fs.contents_changed(
base_root and base_root or None,
base_root and change.base_path or None,
fsroot, change.path
)):
props = svn.fs.node_proplist(fsroot, change.path)
if not repo.svn_ignore_mimetype:
mime = props.get('svn:mime-type', None)
else:
mime = None
mime = repo.guesser.guess_mime(
mime,
os.path.basename(change.path),
diffobj.tempfile2
)
# Read and guess charset by ourselves for text files
if mime and (mime.startswith('text/') or (mime.startswith('application/') and mime.endswith('xml'))):
try:
fd = open(diffobj.tempfile2, 'rb')
content = fd.read()
fd.close()
except: pass
# Guess charset
if content:
content, charset = repo.guesser.guess_charset(content)
if charset:
content = content.encode('utf-8')
if repo.verbose:
print 'Guessed %s for %s' % (charset, change.path)
elif repo.verbose:
print 'Failed to guess charset for %s, not indexing' % (change.path, )
# Try to extract content using Tika from binary documents
elif repo.tika_client:
content = repo.tika_client.get_text(diffobj.tempfile2, mime, change.path)
self.changes.append((path, action, plus, minus, content, mime))
def _get_root_for_rev(self, rev):
"""Fetch a revision root from a cache of such, or a fresh root
(which is then cached for later use)."""
if not self.rev_roots.has_key(rev):
self.rev_roots[rev] = svn.fs.revision_root(self.repo.fs, rev)
return self.rev_roots[rev]
def handle_revision(db, command, repo, rev, verbose, force=0):
"""Adds a particular revision of the repository to the checkin database."""
revision = repo[rev]
committed = 0
if verbose: print "Building commit info for revision %d..." % (rev),
if not revision.changes:
if verbose: print "skipped (no changes)."
return
for (path, action, plus, minus, content, mime) in revision.changes:
directory, file = os.path.split(path)
commit = cvsdb.CreateCommit()
commit.SetRepository(repo.path)
commit.SetDirectory(directory)
commit.SetFile(file)
commit.SetRevision(str(rev))
commit.SetAuthor(revision.author)
commit.SetDescription(revision.log)
commit.SetTime(revision.date)
commit.SetPlusCount(plus)
commit.SetMinusCount(minus)
commit.SetBranch(None)
commit.SetContent(content)
commit.SetMimeType(mime)
if action == 'add':
commit.SetTypeAdd()
elif action == 'remove':
commit.SetTypeRemove()
elif action == 'change':
commit.SetTypeChange()
if command == 'update':
result = db.CheckCommit(commit)
if result and not force:
continue # already recorded
# commit to database
db.AddCommit(commit)
committed = 1
if verbose:
if committed:
print "done."
else:
print "skipped (already recorded)."
def main(command, repository, revs=[], verbose=0, force=0):
cfg = viewvc.load_config(CONF_PATHNAME)
db = cvsdb.ConnectDatabase(cfg)
repository = os.path.realpath(repository)
# Purge what must be purged.
if command in ('rebuild', 'purge'):
if verbose:
print "Purging commit info for repository root `%s'" % repository
try:
db.PurgeRepository(repository)
except cvsdb.UnknownRepositoryError, e:
if command == 'purge':
sys.stderr.write("ERROR: " + str(e) + "\n")
sys.exit(1)
tika_client = None
if cfg.utilities.tika_server:
tika_client = TikaClient(cfg.utilities.tika_server, cfg.utilities.tika_mime_types, verbose)
repo = SvnRepo(
path = repository,
index_content = cfg.cvsdb.index_content,
tika_client = tika_client,
guesser = cfg.guesser(),
svn_ignore_mimetype = cfg.options.svn_ignore_mimetype,
verbose = verbose,
)
# Record what must be recorded.
if command == 'rebuild' or (command == 'update' and not revs):
for rev in range(repo.rev_max+1):
handle_revision(db, command, repo, rev, verbose, force)
elif command == 'update':
if revs[0] is None:
revs[0] = repo.rev_max
if revs[1] is None:
revs[1] = repo.rev_max
revs.sort()
for rev in range(revs[0], revs[1]+1):
handle_revision(db, command, repo, rev, verbose, force)
def _rev2int(r):
if r == 'HEAD':
r = None
else:
r = int(r)
if r < 0:
raise ValueError, "invalid revision '%d'" % (r)
return r
def usage():
cmd = os.path.basename(sys.argv[0])
sys.stderr.write(
"""Administer the ViewVC checkins database data for the Subversion repository
located at REPOS-PATH.
Usage: 1. %s [-v] rebuild REPOS-PATH
2. %s [-v] update REPOS-PATH [REV[:REV2]] [--force]
3. %s [-v] purge REPOS-PATH
1. Rebuild the commit database information for the repository located
at REPOS-PATH across all revisions, after first purging
information specific to that repository (if any).
2. Update the commit database information for the repository located
at REPOS-PATH across all revisions or, optionally, only for the
specified revision REV (or revision range REV:REV2). This is just
like rebuilding, except that, unless --force is specified, no
commit information will be stored for commits already present in
the database. If a range is specified, the revisions will be
processed in ascending order, and you may specify "HEAD" to
indicate "the youngest revision currently in the repository".
3. Purge information specific to the repository located at REPOS-PATH
from the database.
Use the -v flag to cause this script to give progress information as it works.
""" % (cmd, cmd, cmd))
sys.exit(1)
if __name__ == '__main__':
verbose = 0
force = 0
args = sys.argv
try:
index = args.index('-v')
verbose = 1
del args[index]
except ValueError:
pass
try:
index = args.index('--force')
force = 1
del args[index]
except ValueError:
pass
if len(args) < 3:
usage()
command = args[1].lower()
if command not in ('rebuild', 'update', 'purge'):
sys.stderr.write('ERROR: unknown command %s\n' % command)
usage()
revs = []
if len(sys.argv) > 3:
if command == 'rebuild':
sys.stderr.write('ERROR: rebuild no longer accepts a revision '
'number argument. Use "update --force" instead.\n')
usage()
elif command != 'update':
usage()
try:
revs = map(lambda x: _rev2int(x), sys.argv[3].split(':'))
if len(revs) > 2:
raise ValueError, "too many revisions in range"
if len(revs) == 1:
revs.append(revs[0])
except ValueError:
sys.stderr.write('ERROR: invalid revision specification "%s"\n' \
% sys.argv[3])
usage()
else:
rev = None
try:
repository = vclib.svn.canonicalize_rootpath(args[2])
repository = cvsdb.CleanRepository(os.path.abspath(repository))
main(command, repository, revs, verbose, force)
except KeyboardInterrupt:
print
print '** break **'
sys.exit(0)


@@ -1,50 +0,0 @@
#!/usr/bin/perl
# Script for updating SVN repositories with svndbadmin.
# It takes revision numbers and repository names from the listed files or STDIN,
# groups them by number, and prints the list of commands needed for the update.
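#
# Illustrative example: an input line has the form "<repository> <revision>",
# e.g. "/var/svn/myrepo 1234" (hypothetical values), and results in the command
#   /path/to/svndbadmin update /var/svn/myrepo 1234
# Consecutive revision numbers are collapsed into REV1:REV2 ranges.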
use strict;
# the first argument is the path to svndbadmin
my $svndbadmin = shift @ARGV
|| die "USAGE: $0 <path_to_svndbadmin> FILES...";
# read repository names and revision numbers from the input
my $tou = {};
my ($repos, $rev);
while (<>)
{
s/^\s+//so;
s/\s+$//so;
($repos, $rev) = split /\s+/, $_;
$tou->{$repos}->{$rev} = 1;
}
# turn the revision numbers into revision ranges
my ($i, $j, $r, $nr);
foreach $repos (keys %$tou)
{
$rev = [ sort keys %{$tou->{$repos}} ];
$nr = [];
$j = 0;
for $i (1..@$rev)
{
if ($i > $#$rev || $rev->[$i]-$rev->[$j] > $i-$j)
{
$r = $rev->[$j];
$r .= ":".$rev->[$i-1] if $i-$j > 1;
push @$nr, $r;
$j = $i;
}
}
$tou->{$repos} = $nr;
}
# print the list of commands to execute
foreach $repos (keys %$tou)
{
foreach (@{$tou->{$repos}})
{
print "$svndbadmin update $repos $_\n";
}
}


@@ -1,8 +0,0 @@
#!/bin/sh
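# Asynchronous SVN update driver: exit if an update is already in progress
# (.svn-updating exists) or nothing is pending (.svn-updated is missing);
# otherwise rename the pending-revisions file and run it through
# svnupdate-async, which emits the svndbadmin commands that are then piped
# to sh (output is logged to /tmp/svnupdate-async.log).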
VIEWVC_DIR=$1
test -f "$VIEWVC_DIR/.svn-updating" -o ! -f "$VIEWVC_DIR/.svn-updated" && exit 0
mv "$VIEWVC_DIR/.svn-updated" "$VIEWVC_DIR/.svn-updating"
"$VIEWVC_DIR/bin/svnupdate-async" "$VIEWVC_DIR/bin/svndbadmin" "$VIEWVC_DIR/.svn-updating" | sh &> /tmp/svnupdate-async.log
rm "$VIEWVC_DIR/.svn-updating"


@@ -1,54 +0,0 @@
#!/usr/bin/python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# viewvc: View CVS/SVN repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is an fcgi entry point for the query ViewVC app. It's appropriate
# for use with mod_fcgid and flup. It defines a single application function
# that is a valid fcgi entry point.
#
# mod_fcgid: http://httpd.apache.org/mod_fcgid/
# flup:
# http://pypi.python.org/pypi/flup
# http://trac.saddi.com/flup
#
# -----------------------------------------------------------------------
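#
# A minimal Apache mod_fcgid mapping might look like the following sketch
# (an assumption, not part of this file; the install path, URL prefix and
# script name are placeholders to adjust for your site):
#
#   AddHandler fcgid-script .fcgi
#   Alias /viewvc-query /usr/local/viewvc/bin/viewvc-query.fcgi
#   <Directory /usr/local/viewvc/bin>
#     Options +ExecCGI
#   </Directory>
#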
import sys, os
LIBRARY_DIR = None
CONF_PATHNAME = None
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0],
"../../../lib")))
import sapi
import viewvc
import query
from flup.server import fcgi
def application(environ, start_response):
server = sapi.WsgiServer(environ, start_response)
cfg = viewvc.load_config(CONF_PATHNAME, server)
viewvc_base_url = cfg.query.viewvc_base_url
if viewvc_base_url is None:
viewvc_base_url = "viewvc.fcgi"
query.main(server, cfg, viewvc_base_url)
return []
fcgi.WSGIServer(application).run()


@@ -1,45 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# viewvc: View CVS/SVN repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is a wsgi entry point for the query ViewVC app. It's appropriate
# for use with mod_wsgi. It defines a single application function that
# is a valid wsgi entry point.
#
# -----------------------------------------------------------------------
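#
# A minimal mod_wsgi mapping might look like the following (an assumption,
# not part of this file; adjust the URL prefix and path for your
# installation):
#
#   WSGIScriptAlias /viewvc-query /usr/local/viewvc/bin/wsgi/query.wsgi
#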
import sys, os
LIBRARY_DIR = None
CONF_PATHNAME = None
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0],
"../../../lib")))
import sapi
import viewvc
import query
def application(environ, start_response):
server = sapi.WsgiServer(environ, start_response)
cfg = viewvc.load_config(CONF_PATHNAME, server)
viewvc_base_url = cfg.query.viewvc_base_url
if viewvc_base_url is None:
viewvc_base_url = "viewvc.wsgi"
query.main(server, cfg, viewvc_base_url)
return []


@@ -1,50 +0,0 @@
#!/usr/bin/python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# viewvc: View CVS/SVN repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is an fcgi entry point for the main ViewVC app. It's appropriate
# for use with mod_fcgid and flup. It defines a single application function
# that is a valid fcgi entry point.
#
# mod_fcgid: http://httpd.apache.org/mod_fcgid/
# flup:
# http://pypi.python.org/pypi/flup
# http://trac.saddi.com/flup
#
# -----------------------------------------------------------------------
import sys, os
LIBRARY_DIR = None
CONF_PATHNAME = None
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0],
"../../../lib")))
import sapi
import viewvc
from flup.server import fcgi
def application(environ, start_response):
server = sapi.WsgiServer(environ, start_response)
cfg = viewvc.load_config(CONF_PATHNAME, server)
viewvc.main(server, cfg)
return []
fcgi.WSGIServer(application).run()


@@ -1,41 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# viewvc: View CVS/SVN repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is a wsgi entry point for the main ViewVC app. It's appropriate
# for use with mod_wsgi. It defines a single application function that
# is a valid wsgi entry point.
#
# -----------------------------------------------------------------------
import sys, os
LIBRARY_DIR = None
CONF_PATHNAME = None
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(sys.argv[0],
"../../../lib")))
import sapi
import viewvc
def application(environ, start_response):
server = sapi.WsgiServer(environ, start_response)
cfg = viewvc.load_config(CONF_PATHNAME, server)
viewvc.main(server, cfg)
return []


@@ -1,41 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# viewvc: View CVS/SVN repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is a wsgi entry point for the main ViewVC app. It's appropriate
# for use with mod_wsgi. It defines a single application function that
# is a valid wsgi entry point.
#
# -----------------------------------------------------------------------
import sys, os
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path.insert(0, os.path.abspath(os.path.join(
os.path.dirname(__file__), "../../lib")))
import sapi
import viewvc
def application(environ, start_response):
server = sapi.WsgiServer(environ, start_response)
cfg = viewvc.load_config(CONF_PATHNAME, server)
viewvc.main(server, cfg)
return []

192
cgi/cvsgraph.conf.dist Normal file

@@ -0,0 +1,192 @@
# CvsGraph configuration
#
# - Empty lines and whitespace are ignored.
#
# - Comments start with '#' and everything until
# end of line is ignored.
#
# - Strings are C-style strings in which characters
# may be escaped with '\' and written in octal
# and hex escapes. Note that '\' must be escaped
# if it is to be entered as a character.
#
# - Some strings are expanded with printf like
# conversions which start with '%'. Not all
# are applicable at all times, in which case they
# will expand to nothing.
# %c = cvsroot (with trailing '/')
# %C = cvsroot (*without* trailing '/')
# %m = module (with trailing '/')
# %M = module (*without* trailing '/')
# %f = filename without path
# %F = filename without path and with ",v" stripped
# %p = path part of filename (with trailing '/')
# %r = number of revisions
# %b = number of branches
# %% = '%'
# %R = the revision number (e.g. '1.2.4.4')
# %P = previous revision number
# %B = the branch number (e.g. '1.2.4')
# %d = date of revision
# %a = author of revision
# %s = state of revision
# %t = current tag of branch or revision
# %0..%9 = command-line argument -0 .. -9
# ViewCVS currently uses the following two command-line arguments to
# pass URL information to cvsgraph:
# -6 request.amp_query (the query preceded with '&')
# -7 request.qmark_query (the query preceded with '?')
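#   For example (illustrative values): with a cvsroot of "/home/cvsroot" and
#   the file "module/foo.c,v", the default title below,
#   "%C: %p%F\nRevisions: %r, Branches: %b", expands to
#   "/home/cvsroot: module/foo.c" followed by a second line such as
#   "Revisions: 12, Branches: 2".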
#
# - Numbers may be entered as octal, decimal or
# hex as in 0117, 79 and 0x4f respectively.
#
# - Fonts are numbered 0..4 (defined as in libgd)
# 0 = tiny
# 1 = small
# 2 = medium (bold)
# 3 = large
# 4 = giant
#
# - Colors are a string like html-type colors in
# the form "#rrggbb" with parts written in hex
# rr = red (00..ff)
# gg = green (00-ff)
# bb = blue (00-ff)
#
# - There are several reserved words besides of the
# feature-keywords. These additional reserved words
# expand to numerical values:
# * false = 0
# * true = 1
# * left = 0
# * center = 1
# * right = 2
# * gif = 0
# * png = 1
# * jpeg = 2
# * tiny = 0
# * small = 1
# * medium = 2
# * large = 3
# * giant = 4
# cvsroot <string>
# The *absolute* base directory where the
# CVS/RCS repository can be found
# cvsmodule <string>
#
cvsroot = "--unused--"; # unused with ViewCVS, will be overridden
cvsmodule = ""; # unused with ViewCVS -- please leave it blank
# color_bg <color>
# The background color of the image
color_bg = "#ffffff";
# date_format <string>
# The strftime(3) format string for date and time
date_format = "%d-%b-%Y %H:%M:%S";
box_shadow = true;
tag_font = medium;
tag_color = "#007000";
rev_font = giant;
rev_color = "#000000";
rev_bgcolor = "#f0f0f0";
rev_separator = 1;
rev_minline = 15;
rev_maxline = 30;
rev_lspace = 5;
rev_rspace = 5;
rev_tspace = 3;
rev_bspace = 3;
rev_text = "%d"; # or "%d\n%a, %s" for author and state too
rev_text_font = tiny;
rev_text_color = "#500020";
# branch_font <number>
# The font of the number and tags
# branch_color <color>
# All branch element's color
# branch_[lrtb]space <number>
# Interior spacing (margin)
# branch_margin <number>
# Exterior spacing
# branch_connect <number>
# Length of the vertical connector
branch_font = medium;
branch_color = "#0000c0";
branch_bgcolor = "#ffffc0";
branch_lspace = 5;
branch_rspace = 5;
branch_tspace = 3;
branch_bspace = 3;
branch_margin = 15;
branch_connect = 8;
# title <string>
# The title string is expanded (see above for details)
# title_[xy] <number>
# Position of the title
# title_font <number>
# The font
# title_align <number>
# 0 = left
# 1 = center
# 2 = right
# title_color <color>
title = "%C: %p%F\nRevisions: %r, Branches: %b";
title_x = 10;
title_y = 5;
title_font = small;
title_align = left;
title_color = "#800000";
# Margins of the image
# Note: the title is outside the margin
margin_top = 35;
margin_bottom = 10;
margin_left = 10;
margin_right = 10;
# Image format(s)
# image_type <number|{gif,jpeg,png}>
# gif (0) = Create gif image
# png (1) = Create png image
# jpeg (2) = Create jpeg image
# Image types are available if they can be found in
# the gd library. Newer versions of gd do not have
# gif anymore. CvsGraph will automatically generate
# png images instead.
# image_quality <number>
# The quality of a jpeg image (1..100)
image_type = png;
image_quality = 75;
# HTML ImageMap generation
# map_name <string>
# The name= attribute in <map name="mapname">...</map>
# map_branch_href <string>
# map_branch_alt <string>
# map_rev_href <string>
# map_rev_alt <string>
# map_diff_href <string>
# map_diff_alt <string>
# These are the href= and alt= attributes in the <area>
# tags of html. The strings are expanded (see above).
map_name = "MyMapName";
map_branch_href = "href=\"%m%F?only_with_tag=%t%8%6\"";
map_branch_alt = "alt=\"%0 %t (%B)\"";
# You might want to experiment with the following setting:
# 1. The default setting will take you to a ViewCVS generated page displaying
# that revision of the file, if you click into a revision box:
map_rev_href = "href=\"%m%F?rev=%R&content-type=text/vnd.viewcvs-markup%6\"";
# 2. This alternative setting will take you to the anchor representing this
# revision on a ViewCVS generated Log page for that file:
# map_rev_href = "href=\"%m%F%7#rev%R\"";
#
map_rev_alt = "alt=\"%1 %t (%R)\"";
map_diff_href = "href=\"%m%F.diff?r1=%P&r2=%R%8%6\"";
map_diff_alt = "alt=\"%2 %P &lt;-&gt; %R\"";

529
cgi/granny.cgi Executable file

@@ -0,0 +1,529 @@
#!/usr/bin/python
# -*- Mode: python -*-
"""
Granny.py - display CVS annotations in HTML
with lines colored by code age in days.
Original Perl version by J. Gabriel Foster (gabe@sgrail.com)
Posted to info-cvs on 02/27/1997
You can still get the original shell archive here:
http://www.geocrawler.com/archives/3/382/1997/2/0/2103824/
Perl modifications for NT by Marc Paquette (marcpa@cam.org)
Python port and CGI modifications by Brian Lenihan (brianl@real.com)
From the original granny.pl README:
--
What is Granny? Why do I care?
Granny is a tool for viewing cvs annotation information graphically. Using
Netscape, granny indicates the age of lines by color. Red lines are new,
blue lines are old. This information can be very useful in determining
what lines are 'hot'. New lines are more likely to contain bugs, and this
is an easy way to visualize that information.
Requirements:
Netscape (version 2.0 or better)
Perl5
CVS 1.9 (1.8 should work, but I have not tried it.)
Installation:
Put granny somewhere in your path. You may need to edit the
first line to point to your perl5 binary and libraries.
What to do:
Run granny just like you would 'cvs annotate' for a single file.
granny thefile.C
To find out who is to blame for that new 'feature'.
granny -U thefile.C
For all your options:
granny -h
Questions, Comments, Assertions?
send e-mail to the author: Gabe Foster (gabe@sgrail.com)
Notes:
I'm not the first person to try this sort of display, I just read about it
in a magazine somewhere and decided that cvs had all the information
I needed. To whomever first had this idea, it's a great one.
Granny is free, please use it as you see fit. I give no warranties.
As a courtesy, I ask that you tell me about any modifications you have made,
and also ask that you acknowledge my work if you use Granny in your own
software.
--
Granny.py:
granny.py [-h][-d days][-i input][-o output][-DUV] file
-h: Get this help display.
-i: Specify the input file. (Use - for stdin.)
-o: Specify the output file. (Use - for stdout.)
-d: Specify the day range for the coloring.
-r: Specify the cvs revision of the file.
-D: Display the date the line was last edited.
-U Display the user that last edited the line.
-V: Display the version the line was last edited.
By default, granny.py executes cvs annotate on a FILE and
runs netscape to display the graphic.
It is assumed that cvs and Netscape (for command line version) are
in your path.
If granny.py is placed in the cgi-bin directory of your Web
server, it will act as a CGI script. The working directory
defaults to /usr/tmp, but it can be overridden in the class
constructor:
A = CGIAnnotate(tempdir='/tmp')
Required fields:
root The full path to the cvs root directory.
Name The module/filename of the annotated file.
Optional fields:
rev The cvs revision number to use. (default HEAD).
Set the following fields to display extra info:
showUser Display the user that last edited the line.
showVersion Display version that the line was last edited in.
showDate Display the date the line was last edited.
http://yourserver.yourdomain.com/cgi-bin/granny.py?root=/cvsroot&Name=module/file
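
A request that also enables the optional display fields might look like
(hypothetical host, paths and revision):

http://yourserver.yourdomain.com/cgi-bin/granny.py?root=/cvsroot&Name=module/file&rev=1.4&showUser=1&showDate=1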
TODO:
Add support for determining the MIME type of files and/or a binary check.
- easily done by parsing Apache (mime.conf) or Roxen (extensions) MIME
files.
Consider adding buttons to HTML for optional display fields.
Add support for launching other browsers.
"""
import os
import sys
import string
import re
import time
import getopt
import cStringIO
import tempfile
import traceback
month_num = {
'Jan' : 1, 'Feb' : 2, 'Mar' : 3, 'Apr' : 4, 'May' : 5, 'Jun' : 6,
'Jul' : 7, 'Aug' : 8, 'Sep' : 9, 'Oct' : 10, 'Nov' : 11, 'Dec' : 12
}
class Annotate:
def __init__(self):
self.day_range = 365
self.counter = 0
self.color_table = {}
self.user = {}
self.version = {}
self.rtime = {}
self.source = {}
self.tmp = None
self.tmpfile = None
self.revision = ''
self.showUser = 0
self.showDate = 0
self.showVersion = 0
self.set_today()
def run(self):
try:
self.process_args()
self.parse_raw_annotated_file()
self.write_annotated_html_file()
if self.tmp:
self.display_annotated_html_file()
finally:
if sys.exc_info()[0] is not None:
traceback.print_exc()
self.cleanup()
def cleanup(self):
if self.tmp:
self.tmp.close()
os.unlink(self.tmpfile)
sys.exit(0)
def getoutput(self, cmd):
"""
Get stdin and stderr from cmd and
return exit status and captured output
"""
if os.name == 'nt':
# os.popen is broken on win32, but seems to work so far...
pipe = os.popen('%s 2>&1' % cmd, 'r')
else:
pipe = os.popen('{ %s ; } 2>&1' % cmd, 'r')
text = pipe.read()
sts = pipe.close()
if sts == None:
sts = 0
if text[-1:] == '\n':
text = text[:-1]
return sts, text
def set_today(self):
"""
compute the start of this day
"""
(year,mon,day,hour,min,sec,dow,doy,dst) = time.gmtime(time.time())
self.today = time.mktime((year,mon,day,0,0,0,0,0,0))
def get_today(self):
return self.today
# entify stuff which breaks HTML display
# this was lifted from some Zope Code in
# StructuredText.py
#
# XXX try it with string.replace and run it in the profiler
def html_quote(self,v,
character_entities=((re.compile('&'), '&amp;'),
(re.compile("<"), '&lt;' ),
(re.compile(">"), '&gt;' ),
(re.compile('"'), '&quot;'))):
gsub = re.sub
text=str(v)
for regexp,name in character_entities:
text=gsub(regexp,name,text)
return text
def display_annotated_html_file(self):
if os.name == 'nt':
path = '"C:\\Program Files\\Netscape\\Communicator'\
'\\Program\\Netscape"'
if os.system('%s %s' % (path, self.tmpfile)) != 0:
sys.stderr.write('%s: Unable to start Netscape' % sys.argv[0])
sys.exit(1)
else:
if os.system('netscape -remote openFile\(%s\)' %
self.tmpfile) != 0:
sys.stderr.write('%s: Trying to run netscape, please wait\n' %
sys.argv[0])
if os.system('netscape &') == 0:
for i in range(10):
time.sleep(1)
if os.system('netscape -remote openFile\(%s\)' %
self.tmpfile) == 0:
break
if i == 10:
sys.stderr.write('%s:Unable to start netscape\n' %
sys.argv[0])
else:
sys.stderr.write('%s:Unable to start netscape\n' %
sys.argv[0])
# give Netscape time to read the file
# XXX big files may raise an OSError exception on NT
# if the sleep is too short.
time.sleep(5)
def get_opts(self):
opt_dict = {}
if not len(sys.argv[1:]) > 0:
self.usage()
opts, args = getopt.getopt(sys.argv[1:], 'DUVhi:o:d:r:')
for k,v in opts:
opt_dict[k] = v
return opt_dict
def process_args(self):
opts = self.get_opts()
if opts.has_key('-r'):
self.revision = '-r%s' % opts['-r']
if opts.has_key('-h'):
self.usage(help=1)
if opts.has_key('-i'):
if opts['-i'] != '-':
self.filename = opts['-i']
infile = open(self.filename, 'r')
self.file = infile
else:
self.file = sys.stdin
else:
self.filename = sys.argv[len(sys.argv) - 1]
cmd = 'cvs annotate %s %s' % (self.revision, self.filename)
status, text = self.getoutput(cmd)
if status != 0 or text == '':
sys.stderr.write("Can't run cvs annotate on %s\n" %
self.filename)
sys.stderr.write('%s\n' % text)
sys.exit(1)
self.file = cStringIO.StringIO(text)
if opts.has_key('-o'):
if opts['-o'] != '-':
outfile = open(opts['-o'], 'w')
sys.stdout = outfile
else:
# this could be done without a temp file
target = sys.argv[len(sys.argv) -1]
self.tmpfile = tempfile.mktemp()
self.tmp = open(self.tmpfile, 'w')
sys.stdout = self.tmp
if opts.has_key('-d'):
if opts['-d'] > 0:
self.day_range = opts['-d']
if opts.has_key('-D'):
self.showDate = 1
if opts.has_key('-U'):
self.showUser = 1
if opts.has_key('-V'):
self.showVersion = 1
def parse_raw_annotated_file(self):
ann = re.compile('((\d+\.?)+)\s+\((\w+)\s+(\d\d)-'\
'(\w{3})-(\d\d)\): (.*)')
text = self.file.read()
lines = string.split(text, '\n')
for line in lines:
# Parse an annotate string
m = ann.search(line)
if m:
self.version[self.counter] = m.group(1)
self.user[self.counter] = m.group(3)
oldtime = self.today - time.mktime((
int(m.group(6)),
int(month_num[m.group(5)]),
int(m.group(4)),0,0,0,0,0,0))
self.rtime[self.counter] = oldtime / 86400
self.source[self.counter] = self.html_quote(m.group(7))
else:
self.source[self.counter] = self.html_quote(line)
pass
self.counter = self.counter + 1
def write_annotated_html_file(self):
if os.environ.has_key('SCRIPT_NAME'):
print 'Status: 200 OK\r\n'
print 'Content-type: text/html\r\n\r\n'
print ('<html><head><title>%s</title></head>\n' \
'<body bgcolor="#000000">\n' \
'<font color="#FFFFFF"><H1>File %s</H1>\n' \
'<H3>Code age in days</H3>' % (self.filename, self.filename))
for i in range(self.day_range + 1):
self.color_table[i] = \
self.hsvtorgb(240.0 * i / self.day_range, 1.0, 1.0)
step = self.day_range / 40
if step < 5:
step = 1
while self.day_range/step > 40:
step = step + 1
if step >= 5:
if step != 5:
step = step + 5 - (step % 5)
while self.day_range / step > 20:
step = step + 5
# print a color-legend entry every `step` days
for i in range(0, self.day_range + 1, step):
print '<font color=%s>%s ' % (self.color_table[i], i),
print '<pre><code>'
for i in range(self.counter):
if self.showUser and self.user.has_key(i):
print '%s%s ' % ('<font color=#FFFFFF>',
string.ljust(self.user[i],10)),
if self.showVersion and self.version.has_key(i):
print '%s%s ' % ('<font color=#FFFFFF>',
string.ljust(self.version[i],6)),
if self.showDate and self.rtime.has_key(i):
(year,mon,day,hour,min,sec,dow,doy,dst) = time.gmtime(
self.today - self.rtime[i] * 86400)
print '<font color=#FFFFFF>%02d/%02d/%4d ' % (mon, day, year),
if self.rtime.get(i, self.day_range) < self.day_range:
fcolor = self.color_table.get(
self.rtime[i],
self.color_table[self.day_range])
else:
fcolor = self.color_table[self.day_range]
print '<font color=%s> %s' % (fcolor, self.source[i])
print ('</code></pre>\n' \
'<font color=#FFFFFF>\n' \
'<H5>Granny original Perl version by' \
'<I>J. Gabriel Foster</I>\n' \
'<ADDRESS><A HREF=\"mailto:gabe@sgrail.com\">'\
'gabe@sgrail.com</A></ADDRESS>\n' \
'Python version by <I>Brian Lenihan</I>\n' \
'<ADDRESS><A HREF=\"mailto:brianl@real.com\">' \
'brianl@real.com</A></ADDRESS>\n' \
'</body></html>')
sys.stdout.flush()
def hsvtorgb(self,h,s,v):
"""
a veritable technicolor spew
"""
if s == 0.0:
r = v; g = v; b = v
else:
if h < 0:
h = h + 360.0
elif h >= 360.0:
h = h - 360.0
h = h / 60.0
i = int(h)
f = h - i
if s > 1.0:
s = 1.0
p = v * (1.0 - s)
q = v * (1.0 - (s * f))
t = v * (1.0 - (s * (1.0 - f)))
if i == 0: r = v; g = t; b = p
elif i == 1: r = q; g = v; b = p
elif i == 2: r = p; g = v; b = t
elif i == 3: r = p; g = q; b = v
elif i == 4: r = t; g = p; b = v
elif i == 5: r = v; g = p; b = q
return '#%02X%02X%02X' % (r * 255 + 0.5, g * 255 + 0.5, b * 255 + 0.5)
def usage(self, help=None):
sys.stderr.write('\nusage: %s %s\n\n' % (
sys.argv[0],
'[-hDUV][-d days][-i input][-o output][-r rev] FILE')
)
if help is not None:
sys.stderr.write(
'-h: Get this help display.\n' \
'-i: Specify the input file. (Use - for stdin.)\n' \
'-o: Specify the output file. (Use - for stdout.)\n' \
'-d: Specify the day range for the coloring.\n' \
'-r: Specify the cvs revision of the file.\n' \
'-D: Display the date the line was last edited.\n' \
'-U Display the user that last edited the line.\n' \
'-V: Display the version the line was last edited.\n\n' \
'By default, %s executes a cvs annotate on a FILE and\n' \
'runs netscape to display the graphical ' \
'annotation\n' % sys.argv[0]
)
sys.exit(0)
class CGIAnnotate(Annotate):
def __init__(self,tempdir='/usr/tmp'):
Annotate.__init__(self)
if os.name == 'nt':
self.tempdir = os.environ.get('TEMP') or os.environ.get('TMP')
else:
# XXX need a sanity check here
self.tempdir = tempdir
os.chdir(self.tempdir)
def process_args(self):
f = cgi.FieldStorage()
cvsroot = f['root'].value
if f.has_key('showUser'):
self.showUser = 1
if f.has_key('showDate'):
self.showDate = 1
if f.has_key('showVersion'):
self.showVersion = 1
if f.has_key('rev'):
self.revision = '-r%s' % f['rev'].value
path = f['Name'].value
module = os.path.dirname(path)
self.workingdir = 'ann.%s' % os.getpid()
self.filename = os.path.basename(path)
os.mkdir(self.workingdir)
os.chdir(os.path.join(self.tempdir, self.workingdir))
os.system('cvs -d %s co %s' % (cvsroot, path))
os.chdir(module)
cmd = 'cvs annotate %s %s' % (self.revision, self.filename)
status, text = self.getoutput(cmd)
if status != 0 or text == '':
text = "Can't run cvs annotate on %s\n" % path
self.file = cStringIO.StringIO(text)
def cleanup(self):
os.chdir(self.tempdir)
os.system('rm -rf %s' % self.workingdir)
def display_annotated_html_file(self):
pass
def usage(self):
pass
if __name__ == '__main__':
if os.environ.has_key('SCRIPT_NAME'):
import cgi
A = CGIAnnotate()
else:
A = Annotate()
A.run()

51
cgi/query.cgi Executable file

@@ -0,0 +1,51 @@
#!/usr/bin/python
# -*-python-*-
#
# Copyright (C) 1999-2001 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
# query.cgi: View CVS commit database by web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewCVS app. It checks the load
# average, then loads the (precompiled) viewcvs.py file and runs it.
#
# -----------------------------------------------------------------------
#
#########################################################################
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, they will remain None.
#
LIBRARY_DIR = None
#########################################################################
#
# Adjust sys.path to include our library directory
#
import sys
if LIBRARY_DIR:
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path[:0] = ['../lib'] # any other places to look?
#########################################################################
import query
query.run_cgi()

45
bin/asp/viewvc.asp → cgi/viewcvs.cgi Normal file → Executable file

@@ -1,38 +1,37 @@
<%@ LANGUAGE = Python %>
<%
#!/usr/bin/python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 1999-2001 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# For more information, visit http://viewvc.org/
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
# viewvc: View CVS/SVN repositories via a web browser
# viewcvs: View CVS repositories via a web browser
#
# -----------------------------------------------------------------------
#
# This is a teeny stub to launch the main ViewVC app. It checks the load
# average, then loads the (precompiled) viewvc.py file and runs it.
# This is a teeny stub to launch the main ViewCVS app. It checks the load
# average, then loads the (precompiled) viewcvs.py file and runs it.
#
# -----------------------------------------------------------------------
#
#########################################################################
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, there will be no 'viewvcinstallpath.py'
# development, they will remain None.
#
import viewvcinstallpath
LIBRARY_DIR = viewvcinstallpath.LIBRARY_DIR
CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
LIBRARY_DIR = None
#########################################################################
#
@@ -42,8 +41,9 @@ CONF_PATHNAME = viewvcinstallpath.CONF_PATHNAME
import sys
if LIBRARY_DIR:
if not LIBRARY_DIR in sys.path:
sys.path.insert(0, LIBRARY_DIR)
sys.path.insert(0, LIBRARY_DIR)
else:
sys.path[:0] = ['../lib'] # any other places to look?
#########################################################################
@@ -52,14 +52,5 @@ if LIBRARY_DIR:
#########################################################################
# go do the work
import sapi
import viewvc
server = sapi.AspServer(Server, Request, Response, Application)
try:
cfg = viewvc.load_config(CONF_PATHNAME, server)
viewvc.main(server, cfg)
finally:
s.close()
%>
import viewcvs
viewcvs.run_cgi()

460
cgi/viewcvs.conf.dist Normal file

@@ -0,0 +1,460 @@
#---------------------------------------------------------------------------
#
# Configuration file for ViewCVS
#
# Information on ViewCVS is located at the following web site:
# http://viewcvs.sourceforge.net/
#
#---------------------------------------------------------------------------
#
# BASIC CONFIGURATION
#
# For correct operation, you will probably need to change the following
# configuration variables:
#
# cvs_roots
# default_root
# rcs_path
# mime_types_file
#
# It is usually desirable to change the following variables:
#
# address
# main_title
# forbidden
#
# use_enscript
# use_cvsgraph
#
# For Python source colorization:
#
# py2html_path
#
# If your icons are in a special location:
#
# icons
#
# Also, review the .ezt templates in the templates/ directory to adjust them
# for your particular site.
#
#
# FORMAT INFORMATION
#
# This file is delineated by sections, specified in [brackets]. Within each
# section are a number of configuration settings. These settings take the
# form of: name = value. Values may be continued on the following line by
# indenting the continued line.
#
# WARNING: indentation *always* means continuation. name=value lines should
# always start in column zero.
#
# Comments should always start in column zero, and are identified with "#".
#
# Certain configuration settings may have multiple values. These should be
# separated by a comma. The settings where this is allowed are noted below.
#
# Any other setting that requires special syntax is noted at that setting.
#
#---------------------------------------------------------------------------
[general]
#
# This setting specifies each of the CVS roots on your system and assigns
# names to them. Each root should be given by a "name: path" value. Multiple
# roots should be separated by commas.
#
cvs_roots =
Development : /home/cvsroot
# this is the name of the default CVS root.
default_root = Development
# uncomment if the RCS binaries are not on the standard path
#rcs_path = /usr/bin/
#
# This is a pathname to a MIME types file to help viewcvs guess the
# correct MIME type on checkout.
#
# If you are having problems with the default guess on the MIME type, then
# uncomment this option and point it at a MIME type file.
#
# For example, you can use the mime.types from apache here:
#mime_types_file = /usr/local/apache/conf/mime.types
# This address is shown in the footer of the generated pages.
# It must be replaced with the address of the local CVS maintainer.
address = <a href="mailto:cvs-admin@insert.your.domain.here">No CVS admin address has been configured</a>
# this title is used on the main entry page
main_title = CVS Repository
#
# This should contain a list of modules in the repository that should not be
# displayed (by default or by explicit path specification).
#
# This configuration can be a simple list of modules, or it can get quite
# complex:
#
# *) The "!" can be used before a module to explicitly state that it
# is NOT forbidden. Whenever this form is seen, then all modules will
# be forbidden unless one of the "!" modules match.
#
# *) Shell-style "glob" expressions may be used. "*" will match any
# sequence of zero or more characters, "?" will match any single
# character, "[seq]" will match any character in seq, and "[!seq]"
# will match any character not in seq.
#
# *) Tests are performed in sequence. The first match will terminate the
# testing. This allows for more complex allow/deny patterns.
#
# Tests are case-sensitive.
#
forbidden =
# Some examples:
#
# Disallow "example" but allow all others:
# forbidden = example
#
# Disallow "example1" and "example2" but allow all others:
# forbidden = example1, example2
#
# Allow *only* "example1" and "example2":
# forbidden = !example1, !example2
#
# Forbid modules starting with "x":
# forbidden = x*
#
# Allow modules starting with "x" but no others:
# forbidden = !x*
#
# Allow "xml", forbid other modules starting with "x", and allow the rest:
# forbidden = !xml, x*, !*
#
#
# This option provides a mechanism for custom key/value pairs to be
# available to templates. These are stored in key/value files (KV files).
#
# Pathnames to the KV files are listed here, specified as absolute paths
# or relative to this configuration file. The KV files follow the same
# format as this configuration file. Each may have multiple, user-defined
# sections, and user-defined options in those sections. These are all
# placed into a structure available to the templates as:
#
# kv.SECTION.OPTION
#
# Note that an option name can be dotted. For example:
#
# [my_images]
# logos.small = /images/small-logo.png
# logos.big = /images/big-logo.png
#
# Templates can use these with a directive like: [kv.my_images.logos.small]
#
# Note that sections across multiple files will be merged. If two files
# have a [my_images] section, then the options will be merged together.
# If two files have the same option name in a section, then one will
# overwrite the other (it is unspecified regarding which "wins").
#
# To further categorize the KV files, and how the values are provided to
# the templates, a KV file name may be annotated with an additional level
# of dotted naming. For example:
#
# kv_files = [asf]kv/images.conf
#
# Assuming the same section as above, the template would refer to an image
# using [kv.asf.my_images.logos.small]
#
# Lastly, it is possible to use %lang% in the filenames to specify a
# substitution of the selected language-tag.
#
kv_files =
# example:
# kv_files = kv/file1.conf, kv/file2.conf, [i18n]kv/%lang%_data.conf
#
#
# The languages available to ViewCVS. There are several i18n mechanisms
# available:
#
# 1) using the key/value extension system and reading KV files based on
# the selected language
# 2) GNU gettext to substitute text in the templates
# 3) using different templates, based on the selected language
#
# ### NOTE: at the moment, the GNU gettext style is not implemented
#
# This option is a comma-separated list of language-tag values. The first
# language-tag listed is the default language, and will be used if an
# Accept-Language header is not present in the request, or none of the
# user's requested languages are available. If there are ties on the
# selection of a language, then the first to appear in the list is chosen.
#
languages = en-us
# other examples:
#
# languages = en-us, de
# languages = en-us, en-gb, de
# languages = de, fr, en-us
#
#---------------------------------------------------------------------------
[templates]
#
# The templates are specified relative to the configuration file. Absolute
# paths may be used, if you want to keep these elsewhere.
#
# If %lang% occurs in the pathname, then the selected language will be
# substituted.
#
# Note: the selected language is defined by the "languages" item in the
# [general] section, and based on the request's Accept-Language
# header.
#
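# For example, a per-language template tree could be configured like this
# (the path below is purely illustrative):
#
# directory = templates/%lang%/directory.ezt
#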
query = templates/query.ezt
footer = templates/footer.ezt
diff = templates/diff.ezt
graph = templates/graph.ezt
annotate = templates/annotate.ezt
markup = templates/markup.ezt
directory = templates/directory.ezt
# For an alternate form, where the first column displays a revision number
# and brings you to the log view (and the filename displays the HEAD),
# you may use this template:
# directory = templates/dir_alternate.ezt
log = templates/log.ezt
# For a log view where the revisions are displayed in a table, you may
# want to try this template:
# log = templates/log_table.ezt
#---------------------------------------------------------------------------
[cvsdb]
#host = localhost
#database_name = ViewCVS
#user =
#passwd =
#readonly_user =
#readonly_passwd =
#row_limit = 1000
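#
# As an illustration only (the values below are not defaults and must be
# adjusted to match your own MySQL setup), a filled-in section might look
# like:
#
# host = localhost
# database_name = ViewCVS
# user = viewcvs
# passwd = some-password
# readonly_user = viewcvs_ro
# readonly_passwd = some-password
# row_limit = 1000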
#---------------------------------------------------------------------------
[options]
### DOC
# sort_by: File sort order
# file Sort by filename
# rev Sort by revision number
# date Sort by commit date
# author Sort by author
# log Sort by log message
sort_by = file
# hide_attic: Hide or show the contents of the Attic subdirectory
# 1 Hide dead files inside Attic subdir
# 0 Show the files which are inside the Attic subdir
hide_attic = 1
# log_sort: Sort order for CVS logs
# date Sort revisions by date
# rev Sort revisions by revision number
# cvs Don't sort them. Same order as CVS/RCS shows them.
log_sort = date
# diff_format: Default diff format
# h Human readable
# u Unified diff
# c Context diff
# s Side by side
# l Long human readable (more context)
diff_format = h
# hide_cvsroot: Don't show the CVSROOT directory
# 1 Hide CVSROOT directory
# 0 Show CVSROOT directory
hide_cvsroot = 1
# set to 1 to make lines break at spaces,
# set to 0 to disable line breaking,
# or set to a positive integer to cut lines at that length
hr_breakable = 1
# show function names in human readable diffs
# (this only really makes sense for C files, since diff's
# heuristic doesn't work well otherwise)
# ( '-p' option to diff)
hr_funout = 0
# ignore whitespace in human readable diffs
# (indentation changes and the like)
# ( '-w' option to diff)
hr_ignore_white = 1
# ignore diffs which are caused by
# keyword substitution (e.g. the $Id keyword)
# ( '-kk' option to rcsdiff)
hr_ignore_keyword_subst = 1
# allow annotation of files.
allow_annotate = 1
# allow pretty-printed version of files
allow_markup = 1
# allow gzip compression of the output if the browser accepts it
# (HTTP_ACCEPT_ENCODING=gzip)
# [make sure gzip is on the path]
allow_compress = 1
# If you have files which automatically refer to other files
# (such as HTML), this allows you to browse the checked-out
# files as if they were outside CVS.
checkout_magic = 1
# Show the last changelog message for subdirectories.
# The current implementation makes many assumptions and may sometimes show
# the wrong file. The main assumption is that the last modified file has
# the newest file date, but some CVS operations touch the file even when
# no new version is checked in, and tag-based browsing essentially puts
# this out of order, unless the last checkin was on the same tag as you
# are viewing.
# Enable this if you like the feature, but don't rely on correct results.
show_subdir_lastmod = 0
# show a portion of the most recent log entry in directory listings
show_logs = 1
# Show CVS log when viewing file contents
show_log_in_markup = 1
# == Configuration defaults ==
# Defaults for configuration variables that shouldn't need
# to be configured..
#
# If you want to use Marc-André Lemburg's py2html (and Just van Rossum's
# PyFontify) to colorize Python files, then you may need to change this
# variable to point to their directory location.
#
# This directory AND the standard Python path will be searched.
#
py2html_path = .
#py2html_path = /usr/local/lib/python1.5/site-python
# the length to which the most recent log entry should be truncated when
# shown in the directory view
short_log_len = 80
diff_font_face = Helvetica,Arial
diff_font_size = -1
# should we use 'enscript' for syntax coloring?
use_enscript = 0
#
# if the enscript program is not on the path, set this value
#
enscript_path =
# enscript_path = /usr/bin/
#
# ViewCVS has its own set of mappings from filename extensions and filenames
# to languages. If the language is not supported by enscript, then it can
# be listed here to disable the use of enscript.
#
disable_enscript_lang =
# disable_enscript_lang = perl, cpp
#
# ViewCVS can generate a tarball from a repository on the fly.
#
allow_tar = 0
# allow_tar = 1
#
# Use CvsGraph. See http://www.akhphd.au.dk/~bertho/cvsgraph/ for
# documentation and download.
#
use_cvsgraph = 0
# use_cvsgraph = 1
#
# if the cvsgraph program is not on the path, set this value
#
cvsgraph_path =
# cvsgraph_path = /usr/local/bin/
#
# Location of the customized cvsgraph configuration file.
# You will need an absolute pathname here:
#
cvsgraph_conf = <VIEWCVS_INSTALL_DIRECTORY>/cvsgraph.conf
#
# Set to 1 to enable regular expression search of all files in a directory
#
# WARNING:
#
# Enabling this option can consume HUGE amounts of server time. A
# "checkout" must be performed on *each* file in a directory, and
# the result needs to be searched for a match against the regular
# expression.
#
#
# SECURITY WARNING: Denial Of Service
#
# Since a user can enter the regular expression, it is possible for
# them to enter an expression with many alternatives and a lot of
# backtracking. Executing that search over thousands of lines over
# dozens of files can easily tie up a server for a long period of
# time.
#
# This option should only be used on sites with trusted users. It is
# highly inadvisable to use this on a public site.
#
use_re_search = 0
# use_re_search = 1
#---------------------------------------------------------------------------
[vhosts]
### DOC
# vhost1 = glob1, glob2
# vhost2 = glob3, glob4
# [vhost1-section]
# option = value
# [vhost1-othersection]
# option = value
# [vhost2-section]
# option = value
#
# Here is an example:
#
# [vhosts]
# lyra = *lyra.org
#
# [lyra-general]
# forbidden = hideme
#
# [lyra-options]
# show_logs = 0
#
# Note that "lyra" is the "canonical" name for all hosts in the lyra.org
# domain. This canonical name is then used within the additional, vhost-
# specific sections to override specific values in the common sections.
#
#---------------------------------------------------------------------------

View File

@@ -1,394 +0,0 @@
# CvsGraph configuration
#
# - Empty lines and whitespace are ignored.
#
# - Comments start with '#' and everything until
# end of line is ignored.
#
# - Strings are C-style strings in which characters
# may be escaped with '\' and written in octal
# and hex escapes. Note that '\' must be escaped
# if it is to be entered as a character.
#
# - Some strings are expanded with printf like
# conversions which start with '%'. Not all
# are applicable at all times, in which case they
# will expand to nothing.
# %c = cvsroot (with trailing '/')
# %C = cvsroot (*without* trailing '/')
# %m = module (with trailing '/')
# %M = module (*without* trailing '/')
# %f = filename without path
# %F = filename without path and with ",v" stripped
# %p = path part of filename (with trailing '/')
# %r = number of revisions
# %b = number of branches
# %% = '%'
# %R = the revision number (e.g. '1.2.4.4')
# %P = previous revision number
# %B = the branch number (e.g. '1.2.4')
# %d = date of revision
# %a = author of revision
# %s = state of revision
# %t = current tag of branch or revision
# %0..%9 = command-line argument -0 .. -9
# %l = HTMLized log entry of the revision
# NOTE: %l is obsolete. See %(%) and cvsgraph.conf(5) for
# more details.
# %L = log entry of revision
# The log entry expansion takes an optional argument to
# specify maximum length of the expansion like %L[25].
# %(...%) = HTMLize the string within the parenthesis.
# ViewVC currently uses the following four command-line arguments to
# pass URL information to cvsgraph:
# -3 link to current file's log page
# -4 link to current file's checkout page minus "rev" parameter
# -5 link to current file's diff page minus "r1" and "r2" parameters
# -6 link to current directory page minus "pathrev" parameter
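#
#   As an illustration (the concrete values below are invented), with a
#   cvsroot of "/home/cvsroot/", a path of "lib/" and the file
#   "accept.py,v", the title string "%c%p%f\nRevisions: %r, Branches: %b"
#   used further below would expand to something like:
#       /home/cvsroot/lib/accept.py,v
#       Revisions: 12, Branches: 2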
#
# - Numbers may be entered as octal, decimal or
# hex as in 0117, 79 and 0x4f respectively.
#
# - Fonts are numbered 0..4 (defined as in libgd)
# 0 = tiny
# 1 = small
# 2 = medium (bold)
# 3 = large
# 4 = giant
#
# - Colors are a string like HTML type colors in
# the form "#rrggbb" with parts written in hex
# rr = red (00..ff)
# gg = green (00-ff)
# bb = blue (00-ff)
#
# - There are several reserved words besides the
# feature-keywords. These additional reserved words
# expand to numerical values:
# * false = 0
# * true = 1
# * not = -1
# * left = 0
# * center = 1
# * right = 2
# * gif = 0
# * png = 1
# * jpeg = 2
# * tiny = 0
# * small = 1
# * medium = 2
# * large = 3
# * giant = 4
#
# - Booleans have three possible arguments: true, false
# and not. `Not' means inverse of what it was (logical
# negation) and is represented by the value -1.
# For the configuration file that means that the default
# value is negated.
#
# cvsroot <string>
# The *absolute* base directory where the
# CVS/RCS repository can be found
# cvsmodule <string>
# The module within the repository
#
cvsroot = "--unused--"; # unused with ViewVC, will be overridden
cvsmodule = ""; # unused with ViewVC -- please leave it blank
# color_bg <color>
# The background color of the image
# transparent_bg <boolean>
# Make color_bg the transparent color (only useful with PNG)
color_bg = "#ffffff";
transparent_bg = false;
# date_format <string>
# The strftime(3) format string for date and time
date_format = "%d-%b-%Y %H:%M:%S";
# box_shadow <boolean>
# Add a shadow around the boxes
# upside_down <boolean>
# Reverse the order of the revisions
# left_right <boolean>
# Draw the image left to right instead of top down,
# or right to left if upside_down is set simultaneously.
# strip_untagged <boolean>
# Remove all untagged revisions except the first, last and tagged ones
# strip_first_rev <boolean>
# Also remove the first revision if untagged
# auto_stretch <boolean>
# Try to reformat the tree to minimize image size
# use_ttf <boolean>
# Use TrueType fonts for text
# anti_alias <boolean>
# Enable pretty TrueType anti-alias drawing
# thick_lines <number>
# Draw all connector lines thicker (range: 1..11)
box_shadow = true;
upside_down = false;
left_right = false;
strip_untagged = false;
strip_first_rev = false;
#auto_stretch = true; # not yet stable.
use_ttf = false;
anti_alias = true;
thick_lines = 1;
# msg_color <color>
# Sets the error/warning message color
# msg_font <number>
# msg_ttfont <string>
# msg_ttsize <float>
# Sets the error/warning message font
msg_color = "#800000";
msg_font = medium;
msg_ttfont = "/dos/windows/fonts/ariali.ttf";
msg_ttsize = 11.0;
# parse_logs <boolean>
# Enable the parsing of the *entire* ,v file to read the
# log-entries between revisions. This is necessary for
# the %L expansion to work, but slows down parsing by
# a very large factor. You're warned.
parse_logs = false;
# tag_font <number>
# The font of the tag text
# tag_color <color>
# The color of the tag text
# tag_ignore <string>
# An extended regular expression to exclude certain tags from view.
# See regex(7) for details on the format.
# Note 1: tags matched in merge_from/merge_to are always displayed unless
# tag_ignore_merge is set to true.
# Note 2: normal string rules apply and special characters must be
# escaped.
# tag_ignore_merge <boolean>
# If set to true, allows tag_ignore to also hide merge_from and merge_to
# tags.
# tag_nocase <boolean>
# Ignore case in tag_ignore expressions
# tag_negate <boolean>
# Negate the matching criteria of tag_ignore. When true, only matching
# tags will be shown.
# Note: tags matched with merge_from/merge_to will still be displayed.
tag_font = medium;
#tag_ttfont = "/dos/windows/fonts/ariali.ttf";
#tag_ttsize = 11.0;
tag_color = "#007000";
#tag_ignore = "(test|alpha)_release";
#tag_ignore_merge = false;
#tag_nocase = false;
#tag_negate = false;
# rev_hidenumber <boolean>
# If set to true no revision numbers will be printed in the graph.
#rev_hidenumber = false;
rev_font = giant;
#rev_ttfont = "/dos/windows/fonts/arial.ttf";
#rev_ttsize = 12.0;
rev_color = "#000000";
rev_bgcolor = "#f0f0f0";
rev_separator = 1;
rev_minline = 15;
rev_maxline = 75;
rev_lspace = 5;
rev_rspace = 5;
rev_tspace = 3;
rev_bspace = 3;
rev_text = "%d"; # or "%d\n%a, %s" for author and state too
rev_text_font = tiny;
#rev_text_ttfont = "/dos/windows/fonts/times.ttf";
#rev_text_ttsize = 9.0;
rev_text_color = "#500020";
rev_maxtags = 25;
# merge_color <color>
# The color of the line connecting merges
# merge_front <boolean>
# If true, draw the merge-lines on top of the image
# merge_nocase <boolean>
# Ignore case in regular expressions
# merge_from <string>
# A regex describing a tag that is used as the merge source
# merge_to <string>
# A regex describing a tag that is the target of the merge
# merge_findall <boolean>
# Try to match all merge_to targets possible. This can result in
# multiple lines originating from one tag.
# merge_arrows <boolean>
# Use arrows to point to the merge destination. Default is true.
# merge_cvsnt <boolean>
# Use CVSNT's mergepoint registration for merges
# merge_cvsnt_color <color>
# The color of the line connecting merges from/to registered
# mergepoints.
# arrow_width <number>
# arrow_length <number>
# Specify the size of the arrows. Default is 3 wide and 12 long.
#
# NOTE:
# - The merge_from is an extended regular expression as described in
# regex(7) and POSIX 1003.2 (see also Single Unix Specification at
# http://www.opengroup.com).
# - The merge_to is an extended regular expression with a twist. All
# subexpressions from the merge_from are expanded into merge_to
# using %[1-9] (in contrast to \[1-9] for backreferences). Care is
# taken to escape the constructed expression.
# - A '$' at the end of the merge_to expression can be important to
# prevent 'near match' references. Normally, you want the destination
# to be a good representation of the source. However, this depends
# on how well you defined the tags in the first place.
#
# Example:
# merge_from = "^f_(.*)";
# merge_to = "^t_%1$";
# tags: f_foo, f_bar, f_foobar, t_foo, t_bar
# result:
# f_foo -> "^t_foo$" -> t_foo
# f_bar -> "^t_bar$" -> t_bar
# f_foobar-> "^t_foobar$" -> <no match>
#
merge_color = "#a000a0";
merge_front = false;
merge_nocase = false;
merge_from = "^f_(.*)";
merge_to = "^t_%1$";
merge_findall = false;
#merge_arrows = true;
#arrow_width = 3;
#arrow_length = 12;
merge_cvsnt = true;
merge_cvsnt_color = "#606000";
# branch_font <number>
# The font of the number and tags
# branch_color <color>
# All branch element's color
# branch_[lrtb]space <number>
# Interior spacing (margin)
# branch_margin <number>
# Exterior spacing
# branch_connect <number>
# Length of the vertical connector
# branch_dupbox <boolean>
# Add the branch-tag also at the bottom/top of the trunk
# branch_fold <boolean>
# Fold empty branches in one box to save space
# branch_foldall <boolean>
# Put all empty branches in one box, even if they
# were interspersed with branches that have revisions.
# branch_resort <boolean>
# Resort the branches by the number of revisions to save space
# branch_subtree <string>
# Only show the branch denoted or all branches that sprout
# from the denoted revision. The argument may be a symbolic
# tag. You would normally want to set this option from the
# command line with the -O option.
branch_font = medium;
#branch_ttfont = "/dos/windows/fonts/arialbd.ttf";
#branch_ttsize = 18.0;
branch_tag_color= "#000080";
branch_tag_font = medium;
#branch_tag_ttfont = "/dos/windows/fonts/arialbi.ttf";
#branch_tag_ttsize = 14.0;
branch_color = "#0000c0";
branch_bgcolor = "#ffffc0";
branch_lspace = 5;
branch_rspace = 5;
branch_tspace = 3;
branch_bspace = 3;
branch_margin = 15;
branch_connect = 8;
branch_dupbox = false;
branch_fold = true;
branch_foldall = false;
branch_resort = false;
#branch_subtree = "1.2.4";
# title <string>
# The title string is expanded (see above for details)
# title_[xy] <number>
# Position of title
# title_font <number>
# The font
# title_align <number>
# 0 = left
# 1 = center
# 2 = right
# title_color <color>
title = "%c%p%f\nRevisions: %r, Branches: %b";
title_x = 10;
title_y = 5;
title_font = small;
#title_ttfont = "/dos/windows/fonts/times.ttf";
#title_ttsize = 10.0;
title_align = left;
title_color = "#800000";
# Margins of the image
# Note: the title is outside the margin
margin_top = 35;
margin_bottom = 10;
margin_left = 10;
margin_right = 10;
# Image format(s)
# image_type <number|{gif,jpeg,png}>
# gif (0) = Create gif image
# png (1) = Create png image
# jpeg (2) = Create jpeg image
# Image types are available if they can be found in
# the gd library. Newer versions of gd do not have
# gif anymore. CvsGraph will automatically generate
# png images instead.
# image_quality <number>
# The quality of a jpeg image (1..100)
# image_compress <number>
# Set the compression of a PNG image (gd version >= 2.0.12).
# Values range from -1 to 9 where:
# - -1 default compression (usually 3)
# - 0 no compression
# - 1 lowest level compression
# - ... ...
# - 9 highest level of compression
# image_interlace <boolean>
# Write interlaced PNG/JPEG images for progressive loading.
image_type = png;
image_quality = 75;
image_compress = 3;
image_interlace = true;
# HTML image map generation
# map_name <string>
# The name= attribute in <map name="mapname">...</map>
# map_branch_href <string>
# map_branch_alt <string>
# map_rev_href <string>
# map_rev_alt <string>
# map_diff_href <string>
# map_diff_alt <string>
# map_merge_href <string>
# map_merge_alt <string>
# These are the href= and alt= attributes in the <area>
# tags of HTML. The strings are expanded (see above).
map_name = "MyMapName\" name=\"MyMapName";
map_branch_href = "href=\"%6pathrev=%(%t%)\"";
map_branch_alt = "alt=\"%0 %(%t%) (%B)\"";
# You might want to experiment with the following setting:
# 1. The default setting will take you to a ViewVC generated page displaying
# that revision of the file, if you click into a revision box:
map_rev_href = "href=\"%4rev=%R\"";
# 2. This alternative setting will take you to the anchor representing this
# revision on a ViewVC generated Log page for that file:
# map_rev_href = "href=\"%3#rev%R\"";
#
map_rev_alt = "alt=\"%1 %(%t%) (%R)\"";
map_diff_href = "href=\"%5r1=%P&amp;r2=%R\"";
map_diff_alt = "alt=\"%2 %P &lt;-&gt; %R\"";
map_merge_href = "href=\"%5r1=%P&amp;r2=%R\"";
map_merge_alt = "alt=\"%2 %P &lt;-&gt; %R\"";

View File

@@ -1,32 +0,0 @@
#---------------------------------------------------------------------------
#
# MIME type mapping file for ViewVC
#
# Information on ViewVC is located at the following web site:
# http://viewvc.org/
#
#---------------------------------------------------------------------------
# THE FORMAT OF THIS FILE
#
# This file contains records -- one per line -- of the following format:
#
# MIME_TYPE [EXTENSION [EXTENSION ...]]
#
# where whitespace separates the MIME_TYPE from the EXTENSION(s),
# and the EXTENSIONs from each other.
#
# For example:
#
# text/x-csh csh
# text/x-csrc c
# text/x-diff diff patch
# image/png png
# image/jpeg jpeg jpg jpe
#
# By default, this file is left empty, allowing ViewVC to continue
# consulting it first without overriding the MIME type mappings
# found in more standard mapping files (such as those provided as
# part of the operating system or web server software).
#
#

File diff suppressed because it is too large Load Diff

View File

@@ -1,213 +0,0 @@
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
<html>
<head>
<title>ViewVC: 1.1.0 Release Notes</title>
<style>
.h2, .h3 {
padding: 0.25em 0em;
background: white;
}
.warning {
font-style: italic;
}
</style>
</head>
<body>
<h1>ViewVC 1.1.0 Release Notes</h1>
<div class="h2">
<h2 id="introduction">Introduction</h2>
<p>ViewVC 1.1.0 is the superset of all previous ViewVC releases.</p>
</div> <!-- h2 -->
<div class="h2">
<h2 id="compatibility">Compatibility</h2>
<p>Each ViewVC release strives to maintain URL stability with previous
releases, and 1.1.0 is no exception. All URLs considered valid for
previous ViewVC releases should continue to work correctly in this
release, though possibly only via the use of HTTP redirects
(generated by ViewVC itself).</p>
<p>The commits database functionality has changed in ViewVC 1.1.0 in a
way that breaks compatibility with prior ViewVC releases, but only
for new database instantiations. ViewVC 1.1.0 will continue to
understand (for both read and write operations) the previous
schema, so you are not required to rebuild your commits database
for ViewVC 1.1.0 compatibility. By default, new commits databases
created using the 1.1.0 version of the <code>make-database</code>
script will use a new database schema that is unreadable by
previous ViewVC versions. However, if you need a database which
can co-exist with a previous ViewVC version, you can use
the <code>--version=1.0</code> option
to <code>make-database</code>.</p>
<p>The ViewVC configuration files and template language have changed
dramatically. See the file <code>docs/upgrading-howto.html</code>
in the release for information on porting existing versions of
those items for use with ViewVC 1.1.0.</p>
</div> <!-- h2 -->
<div class="h2">
<h2 id="compatibility">Features and Fixes</h2>
<div class="h3">
<h3 id="">Extensible path-based authorization w/ Subversion authz support</h3>
<p>In a nutshell, ViewVC is now able to do path-based authorization.
ViewVC 1.0 has a configuration option for naming 'forbidden'
modules, but it is really limited &mdash; it basically just makes a
universal decision about which top-level directories in every
hosted repository should be hidden by ViewVC. People want
more, and specifically requested that ViewVC learn how to honor
Subversion's authz files and semantics. So, along with some other
types of authorization approaches, that's what ViewVC 1.1 can now
do. If you are using mod_authz_svn with Apache today, or
svnserve's built-in authorization support, then you can now point
ViewVC to the same authz configuration file and have it honor the
access rules you've defined for your repositories.</p>
<p>Note that ViewVC does <strong>not</strong> handle authentication,
though. You'll need to configure your web server to demand login
credentials from users, which the web server itself can then hand
off to ViewVC for application against the authorization rules
you've defined.</p>
<p class="warning">WARNING: The root listing view does not consult the
authorization subsystem when deciding what roots to display to a
given user. If you need to protect your root names, consider
disabling it by removing <code>roots</code> from the set of views
listed in the <code>allowed_views</code> configuration option.
<strong>UPDATE: This was fixed in ViewVC 1.1.3.</strong></p>
<p class="warning">WARNING: Support for path-based authorization is
incomplete in the experimental version control backend modules,
including the one that permits display of remote Subversion
repositories. <strong>UPDATE: This was fixed in ViewVC
1.1.15.</strong></p>
</div> <!-- h3 -->
<div class="h3">
<h3 id="">Subversion versioned properties display</h3>
<p>ViewVC 1.1 displays the properties that Subversion lets you store
on files and directories
(<code>svn:mime-type</code>, <code>svn:mergeinfo</code>,
<code>svn:ignore</code>, etc.). Directory properties are shown by
default at the bottom of that directory's entry listing. File
properties are displayed at the bottom of that file's
markup/annotate view.</p>
</div> <!-- h3 -->
<div class="h3">
<h3 id="">Unified markup and annotation views</h3>
<p>The "markup" and "annotate" views in ViewVC now have a unified look
and feel (driven by a single EZT template). Both views support
syntax highlighting and Subversion file property display.</p>
</div> <!-- h3 -->
<div class="h3">
<h3 id="">Unified, hassle-free Pygments-based syntax highlighting</h3>
<p>ViewVC 1.0 does syntax highlighting by working with GNU enscript, or
highlight, or php, or py2html &mdash; all these external tools just
to accomplish a single task. But they all do things in slightly
different ways. And if you configure them wrongly, you get strange
errors. <a href="http://www.pygments.org/">Pygments</a> (which is
also used by <a href="http://trac.edgewall.org/">Trac</a> for
syntax highlighting) is a Python package that requires no
configuration, is easier to use inside ViewVC, and so on. So
ViewVC 1.1 drops support for all those various old integrations,
and just uses Pygments for everything now. This change was about
developer and administrator sanity. There will be complaints, to
be sure, about how various color schemes differ and what file types
now are and aren't understood by the syntax highlighting engine,
but this change should vastly simplify the discussions of such
things.</p>
</div> <!-- h3 -->
<div class="h3">
<h3 id="">Better MIME detection and handling</h3>
<p>ViewVC typically consults a MIME types file to determine what kind
of file a given document is, based on its filename extension
(<code>.jpg</code> = <code>image/jpeg</code>, &hellip;). But
Subversion lets you dictate a file's MIME type using
the <code>svn:mime-type</code> property. ViewVC now recognizes and
honors that property as the preferred source of a file's MIME type.
This can be disabled in the configuration, though, which might be
desirable if many of your Subversion-versioned files carry the
generic <code>application/octet-stream</code> MIME type that
Subversion uses by default for non-human-readable files.</p>
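<p>For example, an administrator could mark a versioned file explicitly
   (the filename here is purely illustrative) with
   <code>svn propset svn:mime-type image/png logo.png</code>, and ViewVC
   will then prefer that property when serving the file.</p>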
<p>Also, ViewVC now allows you to specify multiple MIME type mapping
files that you'd like it to consult when determining the MIME type of
files based on their extensions. This allows administrators to
easily define their own custom mappings for ViewVC's benefit
without potentially affecting the mappings used by other site
services.</p>
</div> <!-- h3 -->
<div class="h3">
<h3 id="">Support for full content diffs</h3>
<p>ViewVC 1.1 expands the previously existing options of "colored
diff" and "long colored diff" with a new "full colored diff", which
shows the full contents of the changed file (instead of only the 3
or 15 lines of context shown via the older diff display types).</p>
</div> <!-- h3 -->
<div class="h3">
<h3 id="">Support for per-root configuration overrides</h3>
<p>In ViewVC 1.1, you can set up configuration option overrides on a
per-root (per-repository) basis (if you need/care to do so). See
the comments in the <code>viewvc.conf.dist</code> file for more on
how to do this.</p>
</div> <!-- h3 -->
<div class="h3">
<h3 id="">Optional email address obfuscation/mangling</h3>
<p>ViewVC can, when displaying revision metadata, munge strings that
look like email addresses to protect them from screen-scraping
spammers. For example, a log message that says, "Patch by:
cmpilato@red-bean.com" can optionally be displayed by ViewVC using
HTML entity encoding for the characters (a trick that causes no
visible change to the output, but that might confuse
unsophisticated spam bot crawlers) or as "Patch by: cmpilato@..."
(which isn't a complete email address at all, but might be enough
information for the human reading the log message to know who to
blame for the patch).</p>
</div> <!-- h3 -->
<div class="h3">
<h3 id="">Pagination improvements</h3>
<p>The way that ViewVC splits directory and log views across pages has
been reworked. The old way was "Fetch all the information you can
find, then display only one page's worth." The new way is "Fetch
only what you need to display the page requested, plus a little bit
of border information." This provides a large performance
enhancement for the default sort orderings.</p>
</div> <!-- h3 -->
</div> <!-- h2 -->
</body>
</html>

View File

@@ -1,49 +0,0 @@
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
<html>
<head>
<title>ViewVC: 1.2.0 Release Notes</title>
<style>
.h2, .h3 {
padding: 0.25em 0em;
background: white;
}
.warning {
font-style: italic;
}
</style>
</head>
<body>
<h1>ViewVC 1.2.0 Release Notes</h1>
<div class="h2">
<h2 id="introduction">Introduction</h2>
<p>ViewVC 1.2.0 is the superset of all previous ViewVC releases.</p>
</div> <!-- h2 -->
<div class="h2">
<h2 id="compatibility">Compatibility</h2>
<p>Each ViewVC release strives to maintain URL stability with previous
releases, and 1.2.0 is no exception. All URLs considered valid for
previous ViewVC releases should continue to work correctly in this
release, though possibly only via the use of HTTP redirects
(generated by ViewVC itself).</p>
</div> <!-- h2 -->
<div class="h2">
<h2 id="compatibility">Features and Fixes</h2>
<div class="h3">
<h3 id=""></h3>
</div> <!-- h3 -->
</div> <!-- h2 -->
</body>
</html>

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

153
lib/PyFontify.py Normal file
View File

@@ -0,0 +1,153 @@
"""Module to analyze Python source code; for syntax coloring tools.
Interface:
tags = fontify(pytext, searchfrom, searchto)
The 'pytext' argument is a string containing Python source code.
The (optional) arguments 'searchfrom' and 'searchto' may contain a slice in pytext.
The returned value is a list of tuples, formatted like this:
[('keyword', 0, 6, None), ('keyword', 11, 17, None), ('comment', 23, 53, None), etc. ]
The tuple contents are always like this:
(tag, startindex, endindex, sublist)
tag is one of 'keyword', 'string', 'comment' or 'identifier'
sublist is not used, hence always None.
"""
# Based on FontText.py by Mitchell S. Chapman,
# which was modified by Zachary Roadhouse,
# then un-Tk'd by Just van Rossum.
# Many thanks for regular expression debugging & authoring are due to:
# Tim (the-incredib-ly y'rs) Peters and Cristian Tismer
# So, who owns the copyright? ;-) How about this:
# Copyright 1996-1997:
# Mitchell S. Chapman,
# Zachary Roadhouse,
# Tim Peters,
# Just van Rossum
__version__ = "0.3.1"
import string, regex
# First a little helper, since I don't like to repeat things. (Tismer speaking)
import string
def replace(where, what, with):
return string.join(string.split(where, what), with)
# This list of keywords is taken from ref/node13.html of the
# Python 1.3 HTML documentation. ("access" is intentionally omitted.)
keywordsList = [
"del", "from", "lambda", "return",
"and", "elif", "global", "not", "try",
"break", "else", "if", "or", "while",
"class", "except", "import", "pass",
"continue", "finally", "in", "print",
"def", "for", "is", "raise"]
# Build up a regular expression which will match anything
# interesting, including multi-line triple-quoted strings.
commentPat = "#.*"
pat = "q[^\q\n]*\(\\\\[\000-\377][^\q\n]*\)*q"
quotePat = replace(pat, "q", "'") + "\|" + replace(pat, 'q', '"')
# Way to go, Tim!
pat = """
qqq
[^\\q]*
\(
\( \\\\[\000-\377]
\| q
\( \\\\[\000-\377]
\| [^\\q]
\| q
\( \\\\[\000-\377]
\| [^\\q]
\)
\)
\)
[^\\q]*
\)*
qqq
"""
pat = string.join(string.split(pat), '') # get rid of whitespace
tripleQuotePat = replace(pat, "q", "'") + "\|" + replace(pat, 'q', '"')
# Build up a regular expression which matches all and only
# Python keywords. This will let us skip the uninteresting
# identifier references.
# nonKeyPat identifies characters which may legally precede
# a keyword pattern.
nonKeyPat = "\(^\|[^a-zA-Z0-9_.\"']\)"
keyPat = nonKeyPat + "\("
for keyword in keywordsList:
keyPat = keyPat + keyword + "\|"
keyPat = keyPat[:-2] + "\)" + nonKeyPat
matchPat = keyPat + "\|" + commentPat + "\|" + tripleQuotePat + "\|" + quotePat
matchRE = regex.compile(matchPat)
idKeyPat = "[ \t]*[A-Za-z_][A-Za-z_0-9.]*" # Ident w. leading whitespace.
idRE = regex.compile(idKeyPat)
def fontify(pytext, searchfrom = 0, searchto = None):
if searchto is None:
searchto = len(pytext)
# Cache a few attributes for quicker reference.
search = matchRE.search
group = matchRE.group
idSearch = idRE.search
idGroup = idRE.group
tags = []
tags_append = tags.append
commentTag = 'comment'
stringTag = 'string'
keywordTag = 'keyword'
identifierTag = 'identifier'
start = 0
end = searchfrom
while 1:
start = search(pytext, end)
if start < 0 or start >= searchto:
break # EXIT LOOP
match = group(0)
end = start + len(match)
c = match[0]
if c not in "#'\"":
# Must have matched a keyword.
if start <> searchfrom:
# there's still a redundant char before and after it, strip!
match = match[1:-1]
start = start + 1
else:
# this is the first keyword in the text.
# Only a space at the end.
match = match[:-1]
end = end - 1
tags_append((keywordTag, start, end, None))
# If this was a defining keyword, look ahead to the
# following identifier.
if match in ["def", "class"]:
start = idSearch(pytext, end)
if start == end:
match = idGroup(0)
end = start + len(match)
tags_append((identifierTag, start, end, None))
elif c == "#":
tags_append((commentTag, start, end, None))
else:
tags_append((stringTag, start, end, None))
return tags
def test(path):
f = open(path)
text = f.read()
f.close()
tags = fontify(text)
for tag, start, end, sublist in tags:
print tag, `text[start:end]`

View File

@@ -1,20 +1,24 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 1999-2001 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# For more information, visit http://viewvc.org/
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
# accept.py: parse/handle the various Accept headers from the client
#
# -----------------------------------------------------------------------
#
import re
import string
def language(hdr):
@@ -38,8 +42,8 @@ def _parse(hdr, result):
while pos < len(hdr):
name = _re_token.match(hdr, pos)
if not name:
raise AcceptLanguageParseError()
a = result.item_class(name.group(1).lower())
raise AcceptParseError()
a = result.item_class(string.lower(name.group(1)))
pos = name.end()
while 1:
# are we looking at a parameter?
@@ -55,7 +59,7 @@ def _parse(hdr, result):
# the "=" was probably missing
continue
pname = match.group(1).lower()
pname = string.lower(match.group(1))
if pname == 'q' or pname == 'qs':
try:
a.quality = float(match.group(2))
@@ -69,7 +73,7 @@ def _parse(hdr, result):
# bad float literal
pass
elif pname == 'charset':
a.charset = match.group(2).lower()
a.charset = string.lower(match.group(2))
result.append(a)
if hdr[pos:pos+1] == ',':
@@ -107,23 +111,6 @@ class _LanguageRange(_AcceptItem):
return None
class _LanguageSelector:
"""Instances select an available language based on the user's request.
Languages found in the user's request are added to this object with the
append() method (they should be instances of _LanguageRange). After the
languages have been added, then the caller can use select_from() to
determine which user-request language(s) best matches the set of
available languages.
Strictly speaking, this class is pretty close to being usable for more than just
language matching. It has been implemented to enable q-value based
matching between requests and availability. Some minor tweaks may be
necessary, but simply using a new 'item_class' should be sufficient
to allow the _parse() function to construct a selector which holds
the appropriate item implementations (e.g. _LanguageRange is the
concrete _AcceptItem class that handles matching of language tags).
"""
item_class = _LanguageRange
def __init__(self):
@@ -133,8 +120,7 @@ class _LanguageSelector:
"""Select one of the available choices based on the request.
Note: if there isn't a match, then the first available choice is
considered the default. Also, if a number of matches are equally
relevant, then the first-requested will be used.
considered the default.
avail is a list of language-tag strings of available languages
"""
@@ -154,8 +140,7 @@ class _LanguageSelector:
qvalue = want.matches(tag)
#print 'have %s. want %s. qvalue=%s' % (tag, want.name, qvalue)
if qvalue is not None and len(want.name) > longest:
# we have a match and it is longer than any we may have had.
# the final qvalue should be from this tag.
# we have a match and it is longer than any we may have had
final = qvalue
longest = len(want.name)
@@ -163,53 +148,27 @@ class _LanguageSelector:
if final:
matches.append((final, tag))
# if there are no matches, then return the default language tag
if not matches:
return avail[0]
# if we have any matches, then look at the highest qvalue
if matches:
matches.sort()
qvalue, tag = matches[-1]
# get the highest qvalue and its corresponding tag
matches.sort()
qvalue, tag = matches[-1]
if len(matches) >= 2 and matches[-2][0] == qvalue:
#print "non-deterministic choice", avail
pass
# if the qvalue is zero, then we have no valid matches. return the
# default language tag.
if not qvalue:
return avail[0]
# if the qvalue is non-zero, then we have a valid match
if qvalue:
return tag
# the qvalue is zero (non-match). drop thru to return the default
# if there are two or more matches, and the second-highest has a
# qvalue equal to the best, then we have multiple "best" options.
# select the one that occurs first in self.requested
if len(matches) >= 2 and matches[-2][0] == qvalue:
# remove non-best matches
while matches[0][0] != qvalue:
del matches[0]
#print "non-deterministic choice", matches
# sequence through self.requested, in order
for want in self.requested:
# try to find this one in our best matches
for qvalue, tag in matches:
if want.matches(tag):
# this requested item is one of the "best" options
### note: this request item could match *other* "best" options,
### so returning *this* one is rather non-deterministic.
### theoretically, we could go further here, and do another
### search based on the ordering in 'avail'. however, note
### that this generally means that we are picking from multiple
### *SUB* languages, so I'm all right with the non-determinism
### at this point. stupid client should send a qvalue if they
### want to refine.
return tag
# NOTREACHED
# return the best match
return tag
# return the default language tag
return avail[0]
def append(self, item):
self.requested.append(item)
class AcceptLanguageParseError(Exception):
class AcceptParseError(Exception):
pass
def _test():
@@ -218,10 +177,6 @@ def _test():
assert s.select_from(['en', 'de']) == 'en'
assert s.select_from(['de', 'en']) == 'en'
# Netscape 4.x and early version of Mozilla may not send a q value
s = language('en, ja')
assert s.select_from(['en', 'ja']) == 'en'
s = language('fr, de;q=0.9, en-gb;q=0.7, en;q=0.6, en-gb-foo;q=0.8')
assert s.select_from(['en']) == 'en'
assert s.select_from(['en-gb-foo']) == 'en-gb-foo'

199
lib/apache_icons.py Normal file
View File

@@ -0,0 +1,199 @@
#! /usr/bin/env python
# This file was automatically generated! DO NOT EDIT!
# Howto regenerate: see ../tools/bin2inline.py
# $Id$
# You have been warned. But if you want to edit, go ahead using your
# favorite editor :-)
## vim:ts=4:et:nowrap
# [Emacs: -*- python -*-]
_ap_icons = {
"/icons/apache_pb.gif" :
'GIF89a\003\001 \000\367\000\000\377\377\377'
'\316\316\316\245\245\245\204\204\204ssskkkZ'
'ZZ!\030\030\377B\030\3771\000\275R\020\336\255'
'\204\357\234B\377\204\000\377\316\030\377\316\000\316\316\306'
'\275\275\3061\000\377c\000\377\234\000\377\357\000\377\347'
'J\357\336{\336\326\245\316\377\000\234\357J\214\377\000'
'c\347\204\234\357Rc\377\000\030\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
'\000\000\000\000\000\000\000\000\000\000\000\000\000!\371\004'
'\001\000\000\001\000,\000\000\000\000\003\001 \000G\010'
'\377\000\003\010\034H\260\240\301\203\010\023*\\\310\260'
'\241\303\207\020#J\234H\261\242E\206\021\016\034\030'
'0\260\300\200\001\002\002l0\260!\303\006\017(Q'
'&X\331\300A\204\201\007\376\3753\020\363_\004\231'
'\007h\376;\020 \246\001\0032\011\370\024\2603\302'
'\200\231=e\006%03\002P\002\001p\352\374W'
'\020\350L\253\004\254\336\264y\261\253\327\257`\303\026'
'|Iv\203\331\012\033*P\2400a\302?\011J'
'\343\276\225@w\002\333\265\0252P\3100!\203I'
'\277\005P\376;\231\022\001\002\001._\212]\314\270'
'\261\343\307b#Dh0\000A\203\225\011R\012N'
'\371\317\203\001\017f\375\206\366+\332d^\275\245\323'
"f\030\360\240u\353\006\017\032\314\374'\200\243\300\013"
"u'Hh\333\226\356\356\335\272%\\0xa\355"
'\332\341\022/\220^\316\274y\006\015\0065{X\330'
'A3\206\202\0300\257d\270@{\202\203\014\032\210'
'\377o\000\371\253\200\363\347\313/\214\220\236 \373\220'
'\352\343\313\237\357\360\345y\271o\347\266]\233!o'
'\337\275\031L\225\222\001\010(PSQ\002\035\305S'
'T\264\011$\300\201P\305\264\240@L)\025\222S'
'rU\025\327F\016\312\244\030} \2068_\004\010'
'\030\260Rg\240\225d\332r\024T`\327ZnY'
'\205\237\\p\375\366\033[\025\2505\300^\033\000\230'
'\300\000-\271\324P\004\272\265\025\200\005\277M\360\341'
'A\305\031G\301BN"\207\335r\303Y\260\334u'
'\011I\267\220\006\326a\347\035w_\032\024\336x"'
'\226i\346\231h\246\251\346\232l\266\351\246A\017.'
'9\320y\005(\346\200\000\015\340\331PM\002X\305'
'\221Q\006\014`\337O/\031\000\225S\006\220E\200'
'\240\024\032 \200S\360\011@\000|\002E@@V'
'2i\370\317\000\216\006\320g\242\236\032\372\346\250e'
'\032U\023P \355\367_\217*J\247RM\037n'
'\377\005U\205\222)\365\023N\001\034E\033zIM'
'\010\324\001\350=*S\242Vi\352\351\260\001\310('
"'\251\314\252\367\350O\004\024\020\000_\301\3218\227"
'\001\022\030p\227\213\250M0\022a\011\240\310\222x'
'\206!\260\254|[Q\332\354\272"\032\225\222Y\241'
'\251\325\237\213l\3015c~\276\365\266\037\275x\241'
'6\322_&\201\326\300\271\023\345;\201\005\0155\351'
'\344\302\014\033\207%B\0324\367pB\030\270j\261'
'\253\034\030\224\035f\035\200\251\035x\343\205,\362\310'
'\013<\206\247\211\232\015\206\222\212\244\261\314\234qj'
'\3355\301On\331;#P\015\030\020\333k\2559'
" \244\306ENP\020\222ENL\220\302\015'\275"
'\226\321\312\221&\345m\313e|\220\226\025m\354\335'
'\325Xc\006\362\310"\227<\252d\001HfT\003'
'\033!p\000\002)\036\340\301\000\033\014\000\332\333\360'
'\232\205\322\001q\033\345\300\000w\252{d\276F\037'
'\377\204Ap\022 <\020\322O7\304\\\337~\037'
'>\020\007\0277\216R\307^~\274Pw\222\0274'
'&\327\230\213\007\001\273\234w\356\371\347\240\207.\372'
'\350\244\227\3169\301a\203M\321\201\310\212X,A'
'\257\233.;D\246\352-)\243hk\226\200e\003'
'#T\323\261\377L*\227\260\233\022e\323\257\311*'
'\005lRV\361t\340\001/U\030\227\246\312oe'
'\000\363\263go\320\000\032\011\012@\001\005@\205Z'
'\005\004\2304RJ,\355~\030\353\300\032?\253L'
'\222\342$!\374\025j\224)\237\317\356d?\2602'
'q\024\273@\257\363\311V|\242\275\002\036DF\267'
'\252\213\223\374\223\232\317x@)\0360\314\300jB'
'A\000\302OW\327\263\236\244\234\267\2239\015\353S'
'I\311\020\354\360\023\022\343]\317\200(\014[\001\200'
'b\000\217\330L)\276\221\000\214\214\343\257\275\020\000'
'\\\036X\211\002,\203\030\325%\204=\014yTW'
'\377X\230B\024\212\215=/\351Vo^\010\303|'
'\311\320.}\351V\3344\343\300\0346\240\\z\203'
'\314\005rR\304.V*l\271\353\221_\3622\303'
'\266\334k.7z"\031\333\242\032\322\334\020\\\231'
')\014\352(R\227\302%Dix\274\000\342\012r'
'\001\013\370Qp\015\201\000\007\030\347\270\224p\000q'
'\030\350\200";\3405\205Pn%\215$\010\003&'
'I\311JZ\262\222\221\264\310d\334\366.\227\235\246'
'E0b"~jd\243\375\340H-1\243@\001'
'\344V\030\004\260\315\\\022\311\315o\000\231\020\244Q'
'\340\002\270\314\245.\011\227\020\347\370e\217\003\221\316'
' \207IL\016h\240:)\331\034A\254\346\001\310'
'92L\226\013\331\002\246I\315jZ\2231w\273'
'L\034W\026\232\264\250\34649B\245]\012\000\027'
'\031\235Q&\300\341\015o\332&\236\330\300\346g\010'
'!\022p\006B\264\335\240\316\226\273\314g\2244F'
'\363\245\301])K\232Y\010\006\016\211\310\254\031\364'
"j[\023\317%'\231I\365\214\315D+\331\014J"
'H\302\315n.\3474\375\361\313^\310\330\242\276\304'
'\314\0003\273U\\~b\000\361\300S \177K\222'
':y\223$\304\331\022\217\306\321#\037}IS\251'
'\025\204j\011\341\200"\017\2511\357\024\223\003\326|'
'\244\326\304$\315\240.@\231i\222Lm\\\351\235'
'\034\272\252G\025e\245\001\376B\030\222\250\250\000\355'
'\204\215k|\366!\334\000G\217\030\010k\000\302J'
'\326\013,\261p\274\214H\323\374\002V\262\2725\254'
'\034X\016t\010"\314\237\016\363\230\232A*J\241'
'\231\020\241~\207\250\343\241\244Q\257\351E\332\351r'
'\216\3569\354@0\240K`\372M\227\020\031\3500'
"'&Y\273ZV\257\003\201@5\035+\020\315V"
'\363 \203\015-5\013K\332\322\232\366\264\250=\010'
'@\002\004\004\000;',
"/icons/small/back.gif" :
'GIF89a\020\000\020\000\242\377\000!!!'
'111ZZZ\204\204\204\214\214\214\300\300\300\000'
'\000\000\000\000\000!\371\004\001\000\000\005\000,\000\000'
'\000\000\020\000\020\000\000\003FX\272\334\015\300I`'
'L\224\213V\213\213\320\025 t\304f\020]Qn'
'(i\266\303\240\310\312Z\215D,\010\255\022\230\236'
'\20000Z\010L\203\241\320a"*\035 \321\260'
'\267\260\235\204E\307q3\310J\010`p#\001\000'
';',
"/icons/small/dir.gif" :
'GIF89a\020\000\020\000\242\377\000\377\336\255'
'\377\334\256\300\300\300\270\240}WK;+%\035\000'
'\000\000\000\000\000!\371\004\001\000\000\002\000,\000\000'
'\000\000\020\000\020\000\000\003I(b\246\376\316\2241'
'\230\262\320\2002\253\031\001v\001$GQA\025\221'
'\254y6#\333v\360"\337_\255\335dE\350\274'
'\236\341\267\012\372t\204\001\357\370`$e\314\314P'
'\0118\026 \227\251\365\212]\014\277\234n\204\021\026'
'$\000\000;',
"/icons/small/text.gif" :
'GIF89a\020\000\020\000\242\377\000\377\377\377'
'999kkk\214\214\214\300\300\300\316\316\316\347'
'\347\347\000\000\000!\371\004\001\000\000\004\000,\000\000'
'\000\000\020\000\020\000\000\003EH\272\334\276\343\025@'
"k1\261\315J'\326\\W\024D\246LA\232\002"
'j\300xFlP\212\360\262me\330\013,\307\024'
'\336\011\327Z\011\011\276\337\354\210*\032oJ\031\200'
'I,No!\316\261$\350z\275\256E\002\000;',
}
def serve_icon(pathname, fp):
if _ap_icons.has_key(pathname):
fp.write(_ap_icons[pathname])
else:
raise OSError # icon not found
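# Illustrative use (not part of the generated table above): stream one of
# the embedded icons to an already-open file object, e.g.
#
#   import sys
#   serve_icon("/icons/small/dir.gif", sys.stdout)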

View File

@@ -1,14 +1,16 @@
#!/usr/bin/env python
#!/usr/local/bin/python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2000-2001 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2000 Curt Hagenlocher <curt@hagenlocher.org>
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# For more information, visit http://viewvc.org/
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
@@ -17,6 +19,10 @@
#
# -----------------------------------------------------------------------
#
# This software is being maintained as part of the ViewCVS project.
# Information is available at:
# http://viewcvs.sourceforge.net/
#
# This file is based on the cvsblame.pl portion of the Bonsai CVS tool,
# developed by Steve Lamm for Netscape Communications Corporation. More
# information about Bonsai can be found at
@@ -25,119 +31,542 @@
# cvsblame.pl, in turn, was based on Scott Furman's cvsblame script
#
# -----------------------------------------------------------------------
#
import string
import sys
import os
import re
import time
import math
import cgi
import rcsparse
class CVSParser(rcsparse.Sink):
# Precompiled regular expressions
trunk_rev = re.compile('^[0-9]+\\.[0-9]+$')
last_branch = re.compile('(.*)\\.[0-9]+')
is_branch = re.compile('(.*)\\.0\\.([0-9]+)')
d_command = re.compile('^d(\d+)\\s(\\d+)')
a_command = re.compile('^a(\d+)\\s(\\d+)')
SECONDS_PER_DAY = 86400
def __init__(self):
self.Reset()
def Reset(self):
self.last_revision = {}
self.prev_revision = {}
self.revision_date = {}
self.revision_author = {}
self.revision_branches = {}
self.next_delta = {}
self.prev_delta = {}
self.tag_revision = {}
self.revision_symbolic_name = {}
self.timestamp = {}
self.revision_ctime = {}
self.revision_age = {}
self.revision_log = {}
self.revision_deltatext = {}
self.revision_map = []
self.lines_added = {}
self.lines_removed = {}
# Map a tag to a numerical revision number. The tag can be a symbolic
# branch tag, a symbolic revision tag, or an ordinary numerical
# revision number.
def map_tag_to_revision(self, tag_or_revision):
try:
revision = self.tag_revision[tag_or_revision]
match = self.is_branch.match(revision)
if match:
branch = match.group(1) + '.' + match.group(2)
if self.last_revision.get(branch):
return self.last_revision[branch]
else:
return match.group(1)
else:
return revision
except:
return ''
# Construct an ordered list of ancestor revisions to the given
# revision, starting with the immediate ancestor and going back
# to the primordial revision (1.1).
#
# Note: The generated path does not traverse the tree the same way
# that the individual revision deltas do. In particular,
# the path traverses the tree "backwards" on branches.
def ancestor_revisions(self, revision):
ancestors = []
revision = self.prev_revision.get(revision)
while revision:
ancestors.append(revision)
revision = self.prev_revision.get(revision)
return ancestors
# Split deltatext specified by rev to each line.
def deltatext_split(self, rev):
lines = string.split(self.revision_deltatext[rev], '\n')
if lines[-1] == '':
del lines[-1]
return lines
# Extract the given revision from the digested RCS file.
# (Essentially the equivalent of cvs up -rXXX)
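# For reference (summarizing the RCS delta format as described in
# rcsfile(5)): a deltatext is a sequence of edit commands, where
# "dN M" deletes M lines starting at line N of the previous text and
# "aN M" appends the following M lines of the delta after line N.
# The loop below applies those commands one revision at a time.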
def extract_revision(self, revision):
path = []
add_lines_remaining = 0
start_line = 0
count = 0
while revision:
path.append(revision)
revision = self.prev_delta.get(revision)
path.reverse()
path = path[1:] # Get rid of head revision
text = self.deltatext_split(self.head_revision)
# Iterate, applying deltas to previous revision
for revision in path:
adjust = 0
diffs = self.deltatext_split(revision)
self.lines_added[revision] = 0
self.lines_removed[revision] = 0
lines_added_now = 0
lines_removed_now = 0
for command in diffs:
dmatch = self.d_command.match(command)
amatch = self.a_command.match(command)
if add_lines_remaining > 0:
# Insertion lines from a prior "a" command
text.insert(start_line + adjust, command)
add_lines_remaining = add_lines_remaining - 1
adjust = adjust + 1
elif dmatch:
# "d" - Delete command
start_line = string.atoi(dmatch.group(1))
count = string.atoi(dmatch.group(2))
begin = start_line + adjust - 1
del text[begin:begin + count]
adjust = adjust - count
lines_removed_now = lines_removed_now + count
elif amatch:
# "a" - Add command
start_line = string.atoi(amatch.group(1))
count = string.atoi(amatch.group(2))
add_lines_remaining = count
lines_added_now = lines_added_now + count
else:
raise RuntimeError, 'Error parsing diff commands'
self.lines_added[revision] = self.lines_added[revision] + lines_added_now
self.lines_removed[revision] = self.lines_removed[revision] + lines_removed_now
return text
def set_head_revision(self, revision):
self.head_revision = revision
def set_principal_branch(self, branch_name):
self.principal_branch = branch_name
def define_tag(self, name, revision):
# Create an associative array that maps from tag name to
# revision number and vice-versa.
self.tag_revision[name] = revision
### actually, this is a bit bogus... a rev can have multiple names
self.revision_symbolic_name[revision] = name
def set_comment(self, comment):
self.file_description = comment
def set_description(self, description):
self.rcs_file_description = description
# Construct dicts that represent the topology of the RCS tree
# and other arrays that contain info about individual revisions.
#
# The following dicts are created, keyed by revision number:
# self.revision_date -- e.g. "96.02.23.00.21.52"
# self.timestamp -- seconds since 12:00 AM, Jan 1, 1970 GMT
# self.revision_author -- e.g. "tom"
# self.revision_branches -- descendant branch revisions, separated by spaces,
# e.g. "1.21.4.1 1.21.2.6.1"
# self.prev_revision -- revision number of previous *ancestor* in RCS tree.
# Traversal of this array occurs in the direction
# of the primordial (1.1) revision.
# self.prev_delta -- revision number of previous revision which forms
# the basis for the edit commands in this revision.
# This causes the tree to be traversed towards the
# trunk when on a branch, and towards the latest trunk
# revision when on the trunk.
# self.next_delta -- revision number of next "delta". Inverts prev_delta.
#
# Also creates self.last_revision, keyed by a branch revision number, which
# indicates the latest revision on a given branch,
# e.g. self.last_revision{"1.2.8"} == 1.2.8.5
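# Illustrative sketch, continuing the hypothetical tree used above (trunk
# 1.1 -> 1.2 -> 1.3, branch 1.2.2 holding 1.2.2.1 -> 1.2.2.2): after all
# define_revision() calls the two "previous" maps differ like so:
#   self.prev_revision == {'1.3': '1.2', '1.2': '1.1',
#                          '1.2.2.1': '1.2', '1.2.2.2': '1.2.2.1'}
#   self.prev_delta    == {'1.2': '1.3', '1.1': '1.2',
#                          '1.2.2.1': '1.2', '1.2.2.2': '1.2.2.1'}
# i.e. ancestry always walks back toward 1.1, while delta application
# walks toward the latest trunk revision on the trunk and toward the
# branchpoint on a branch.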
def define_revision(self, revision, timestamp, author, state,
branches, next):
self.tag_revision[revision] = revision
branch = self.last_branch.match(revision).group(1)
self.last_revision[branch] = revision
#self.revision_date[revision] = date
self.timestamp[revision] = timestamp
# Pretty print the date string
ltime = time.localtime(self.timestamp[revision])
formatted_date = time.strftime("%d %b %Y %H:%M", ltime)
self.revision_ctime[revision] = formatted_date
# Save age
self.revision_age[revision] = ((time.time() - self.timestamp[revision])
/ self.SECONDS_PER_DAY)
# save author
self.revision_author[revision] = author
# ignore the state
# process the branch information
branch_text = ''
for branch in branches:
self.prev_revision[branch] = revision
self.next_delta[revision] = branch
self.prev_delta[branch] = revision
branch_text = branch_text + branch + ' '
self.revision_branches[revision] = branch_text
# process the "next revision" information
if next:
self.next_delta[revision] = next
self.prev_delta[next] = revision
is_trunk_revision = self.trunk_rev.match(revision) is not None
if is_trunk_revision:
self.prev_revision[revision] = next
else:
self.prev_revision[next] = revision
# Construct associative arrays containing info about individual revisions.
#
# The following associative arrays are created, keyed by revision number:
# revision_log -- log message
# revision_deltatext -- Either the complete text of the revision,
# in the case of the head revision, or the
# encoded delta between this revision and another.
# The delta is either with respect to the successor
# revision if this revision is on the trunk or
# relative to its immediate predecessor if this
# revision is on a branch.
def set_revision_info(self, revision, log, text):
self.revision_log[revision] = log
self.revision_deltatext[revision] = text
def parse_cvs_file(self, rcs_pathname, opt_rev = None, opt_m_timestamp = None):
# Args in: opt_rev - requested revision
# opt_m - time since modified
# Args out: revision_map
# timestamp
# revision_deltatext
# CheckHidden(rcs_pathname);
try:
rcsfile = open(rcs_pathname, 'r')
except:
raise RuntimeError, ('error: %s appeared to be under CVS control, ' +
'but the RCS file is inaccessible.') % rcs_pathname
rcsparse.Parser().parse(rcsfile, self)
rcsfile.close()
if opt_rev in [None, '', 'HEAD']:
# Explicitly specified topmost revision in tree
revision = self.head_revision
else:
# Symbolic tag or specific revision number specified.
revision = self.map_tag_to_revision(opt_rev)
if revision == '':
raise RuntimeError, 'error: -r: No such revision: ' + opt_rev
# The primordial revision is not always 1.1! Go find it.
primordial = revision
while self.prev_revision.get(primordial):
primordial = self.prev_revision[primordial]
# Don't display file at all, if -m option is specified and no
# changes have been made in the specified file.
if opt_m_timestamp and self.timestamp[revision] < opt_m_timestamp:
return ''
# Figure out how many lines were in the primordial, i.e. version 1.1,
# check-in by moving backward in time from the head revision to the
# first revision.
line_count = 0
if self.revision_deltatext.get(self.head_revision):
tmp_array = self.deltatext_split(self.head_revision)
line_count = len(tmp_array)
skip = 0
rev = self.prev_revision.get(self.head_revision)
while rev:
diffs = self.deltatext_split(rev)
for command in diffs:
dmatch = self.d_command.match(command)
amatch = self.a_command.match(command)
if skip > 0:
# Skip insertion lines from a prior "a" command
skip = skip - 1
elif dmatch:
# "d" - Delete command
start_line = string.atoi(dmatch.group(1))
count = string.atoi(dmatch.group(2))
line_count = line_count - count
elif amatch:
# "a" - Add command
start_line = string.atoi(amatch.group(1))
count = string.atoi(amatch.group(2))
skip = count
line_count = line_count + count
else:
raise RuntimeError, 'error: illegal RCS file'
rev = self.prev_revision.get(rev)
# Now, play the delta edit commands *backwards* from the primordial
# revision forward, but rather than applying the deltas to the text of
# each revision, apply the changes to an array of revision numbers.
# This creates a "revision map" -- an array where each element
# represents a line of text in the given revision but contains only
# the revision number in which the line was introduced rather than
# the line text itself.
#
# Note: These are backward deltas for revisions on the trunk and
# forward deltas for branch revisions.
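# Minimal sketch (hypothetical file): for a three-line file whose second
# line was rewritten in revision 1.2, the finished map for revision 1.2
# would be
#   self.revision_map == ['1.1', '1.2', '1.1']
# i.e. element i names the revision that last changed line i+1 of the
# requested revision.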
# Create initial revision map for primordial version.
self.revision_map = [primordial] * line_count
ancestors = [revision, ] + self.ancestor_revisions(revision)
ancestors = ancestors[:-1] # Remove "1.1"
last_revision = primordial
ancestors.reverse()
for revision in ancestors:
is_trunk_revision = self.trunk_rev.match(revision) is not None
if is_trunk_revision:
diffs = self.deltatext_split(last_revision)
# Revisions on the trunk specify deltas that transform a
# revision into an earlier revision, so invert the translation
# of the 'diff' commands.
for command in diffs:
if skip > 0:
skip = skip - 1
else:
dmatch = self.d_command.match(command)
amatch = self.a_command.match(command)
if dmatch:
start_line = string.atoi(dmatch.group(1))
count = string.atoi(dmatch.group(2))
temp = []
while count > 0:
temp.append(revision)
count = count - 1
self.revision_map = (self.revision_map[:start_line - 1] +
temp + self.revision_map[start_line - 1:])
elif amatch:
start_line = string.atoi(amatch.group(1))
count = string.atoi(amatch.group(2))
del self.revision_map[start_line:start_line + count]
skip = count
else:
raise RuntimeError, 'Error parsing diff commands'
else:
# Revisions on a branch are arranged backwards from those on
# the trunk. They specify deltas that transform a revision
# into a later revision.
adjust = 0
diffs = self.deltatext_split(revision)
for command in diffs:
if skip > 0:
skip = skip - 1
else:
dmatch = self.d_command.match(command)
amatch = self.a_command.match(command)
if dmatch:
start_line = string.atoi(dmatch.group(1))
count = string.atoi(dmatch.group(2))
del self.revision_map[start_line + adjust - 1:start_line + adjust - 1 + count]
adjust = adjust - count
elif amatch:
start_line = string.atoi(amatch.group(1))
count = string.atoi(amatch.group(2))
skip = count
temp = []
while count > 0:
temp.append(revision)
count = count - 1
self.revision_map = (self.revision_map[:start_line + adjust] +
temp + self.revision_map[start_line + adjust:])
adjust = adjust + skip
else:
raise RuntimeError, 'Error parsing diff commands'
last_revision = revision
return revision
from common import _item
import vclib
import sapi
re_includes = re.compile('\\#(\\s*)include(\\s*)"(.*?)"')
re_filename = re.compile('(.*[\\\\/])?(.+)')
def link_includes(text, repos, path_parts, include_url):
def link_includes(text, root, rcs_path, sticky = None):
match = re_includes.match(text)
if match:
incfile = match.group(3)
include_path_parts = path_parts[:-1]
for part in filter(None, incfile.split('/')):
if part == "..":
if not include_path_parts:
# nothing left to pop; don't bother marking up this include.
return text
include_path_parts.pop()
elif part and part != ".":
include_path_parts.append(part)
include_path = None
try:
if repos.itemtype(include_path_parts, None) == vclib.FILE:
include_path = '/'.join(include_path_parts)
except vclib.ItemNotFound:
pass
if include_path:
return '#%sinclude%s<a href="%s">"%s"</a>' % \
(match.group(1), match.group(2),
include_url.replace('/WHERE/', include_path), incfile)
for rel_path in ('', 'Attic', '..'):
trial_root = os.path.join(rcs_path, rel_path)
file = os.path.join(root, trial_root)
file = os.path.normpath(os.path.join(file, incfile + ',v'))
if os.access(file, os.F_OK):
url = os.path.join(rel_path, incfile)
if sticky:
url = url + '?' + sticky
return '#%sinclude%s"<a href="%s">%s</a>"' % \
(match.group(1), match.group(2), url, incfile)
return text
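# For example (hypothetical input), a source line reading
#   #include "util/log.h"
# comes back with util/log.h wrapped in an <a href="..."> link to that
# file within the repository browser, while lines that are not local
# #include directives are returned unchanged.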
def make_html(root, rcs_path, opt_rev = None, sticky = None):
filename = os.path.join(root, rcs_path)
parser = CVSParser()
revision = parser.parse_cvs_file(filename, opt_rev)
count = len(parser.revision_map)
text = parser.extract_revision(revision)
if len(text) != count:
raise RuntimeError, 'Internal consistency error'
class HTMLBlameSource:
"""Wrapper around a the object by the vclib.annotate() which does
HTML escaping, diff URL generation, and #include linking."""
def __init__(self, repos, path_parts, diff_url, include_url, opt_rev=None):
self.repos = repos
self.path_parts = path_parts
self.diff_url = diff_url
self.include_url = include_url
self.annotation, self.revision = self.repos.annotate(path_parts, opt_rev,
True)
match = re_filename.match(rcs_path)
if not match:
raise RuntimeError, 'Unable to parse filename'
file_head = match.group(1)
file_tail = match.group(2)
def __getitem__(self, idx):
item = self.annotation.__getitem__(idx)
diff_url = None
if item.prev_rev:
diff_url = '%sr1=%s&amp;r2=%s' % (self.diff_url, item.prev_rev, item.rev)
thisline = link_includes(sapi.escape(item.text), self.repos,
self.path_parts, self.include_url)
return _item(text=thisline, line_number=item.line_number,
rev=item.rev, prev_rev=item.prev_rev,
diff_url=diff_url, date=item.date, author=item.author)
open_table_tag = '<table border=0 cellpadding=0 cellspacing=0 width="100%">'
startOfRow = '<tr><td colspan=3%s><pre>'
endOfRow = '</td></tr>'
print open_table_tag + (startOfRow % '')
def blame(repos, path_parts, diff_url, include_url, opt_rev=None):
source = HTMLBlameSource(repos, path_parts, diff_url, include_url, opt_rev)
return source, source.revision
def make_html(root, rcs_path):
import vclib.ccvs.blame
bs = vclib.ccvs.blame.BlameSource(os.path.join(root, rcs_path))
if count == 0:
count = 1
line_num_width = int(math.log10(count)) + 1
revision_width = 3
author_width = 5
line = 0
usedlog = {}
usedlog[revision] = 1
old_revision = 0
row_color = 'ffffff'
row_color = ''
lines_in_table = 0
inMark = 0
rev_count = 0
align = ' style="text-align: %s;"'
for revision in parser.revision_map:
thisline = text[line]
line = line + 1
usedlog[revision] = 1
lines_in_table = lines_in_table + 1
sys.stdout.write('<table cellpadding="2" cellspacing="2" style="font-family: monospace; white-space: pre;">\n')
for line_data in bs:
revision = line_data.rev
thisline = line_data.text
line = line_data.line_number
author = line_data.author
prev_rev = line_data.prev_rev
# Escape HTML meta-characters
thisline = cgi.escape(thisline)
# Add a link to traverse to included files
if 1: # opt_includes
thisline = link_includes(thisline, root, file_head, sticky)
output = ''
# Highlight lines
#mark_cmd;
#if (defined($mark_cmd = $mark_line{$line}) and mark_cmd != 'end':
# output = output + endOfRow + '<tr><td bgcolor=LIGHTGREEN width="100%"><pre>'
# inMark = 1
if old_revision != revision and line != 1:
if row_color == 'ffffff':
row_color = 'e7e7e7'
if row_color == '':
row_color = ' bgcolor="#e7e7e7"'
else:
row_color = 'ffffff'
row_color = ''
sys.stdout.write('<tr id="l%d" style="background-color: #%s; vertical-align: center;">' % (line, row_color))
sys.stdout.write('<td%s>%d</td>' % (align % 'right', line))
if not inMark:
if lines_in_table > 100:
output = output + endOfRow + '</table>' + open_table_tag + (startOfRow % row_color)
lines_in_table = 0
else:
output = output + endOfRow + (startOfRow % row_color)
elif lines_in_table > 200 and not inMark:
output = output + endOfRow + '</table>' + open_table_tag + (startOfRow % row_color)
lines_in_table = 0
output = output + "<a name=%d></a>" % (line, )
if 1: # opt_line_nums
output = output + ('%%%dd' % (line_num_width, )) % (line, )
if old_revision != revision or rev_count > 20:
sys.stdout.write('<td%s>%s</td>' % (align % 'right', author or '&nbsp;'))
sys.stdout.write('<td%s>%s</td>' % (align % 'left', revision))
revision_width = max(revision_width, len(revision))
if parser.prev_revision.get(revision):
fname = file_tail[:-2] # strip the ",v"
url = '%s.diff?r1=%s&amp;r2=%s' % \
(fname, parser.prev_revision[revision], revision)
if sticky:
url = url + '&' + sticky
output = output + ' <a href="%s"' % (url, )
if 0: # use_layers
output = output + " onmouseover='return log(event,\"%s\",\"%s\");'" % (
parser.prev_revision[revision], revision)
output = output + ">"
else:
output = output + " "
parser.prev_revision[revision] = ''
author = parser.revision_author[revision]
# $author =~ s/%.*$//;
author_width = max(author_width, len(author))
output = output + ('%%-%ds ' % (author_width, )) % (author, )
output = output + revision
if parser.prev_revision.get(revision):
output = output + '</a>'
output = output + (' ' * (revision_width - len(revision) + 1))
old_revision = revision
rev_count = 0
else:
sys.stdout.write('<td>&nbsp;</td><td>&nbsp;</td>')
output = output + ' ' + (' ' * (author_width + revision_width))
rev_count = rev_count + 1
sys.stdout.write('<td%s>%s</td></tr>\n' % (align % 'left', thisline.rstrip() or '&nbsp;'))
sys.stdout.write('</table>\n')
output = output + thisline
# Close the highlighted section
#if (defined $mark_cmd and mark_cmd != 'begin'):
# chop($output)
# output = output + endOfRow + (startOfRow % row_color)
# inMark = 0
print output
print endOfRow + '</table>'
def main():
import sys
if len(sys.argv) != 3:
print 'USAGE: %s cvsroot rcs-file' % sys.argv[0]
sys.exit(1)


@@ -1,60 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# common: common definitions for the viewvc library
#
# -----------------------------------------------------------------------
# Special type indicators for diff header processing and idiff return codes
_RCSDIFF_IS_BINARY = 'binary-diff'
_RCSDIFF_ERROR = 'error'
_RCSDIFF_NO_CHANGES = "no-changes"
class _item:
def __init__(self, **kw):
vars(self).update(kw)
class TemplateData:
"""A custom dictionary-like object that allows one-time definition
of keys, and only value fetches and changes, and key deletions,
thereafter.
EZT doesn't require the use of this special class -- a normal
dict-type data dictionary works fine. But use of this class will
assist those who want the data sent to their templates to have a
consistent set of keys."""
def __init__(self, initial_data={}):
self._items = initial_data
def __getitem__(self, key):
return self._items.__getitem__(key)
def __setitem__(self, key, item):
assert self._items.has_key(key)
return self._items.__setitem__(key, item)
def __delitem__(self, key):
return self._items.__delitem__(key)
def keys(self):
return self._items.keys()
def merge(self, template_data):
"""Merge the data in TemplataData instance TEMPLATA_DATA into this
instance. Avoid the temptation to use this conditionally in your
code -- it rather defeats the purpose of this class."""
assert isinstance(template_data, TemplateData)
self._items.update(template_data._items)

lib/compat.py Normal file

@@ -0,0 +1,68 @@
# -*- Mode: python -*-
#
# Copyright (C) 2000-2001 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
# compat.py: compatibility functions for operation across Python 1.5.x
#
# -----------------------------------------------------------------------
#
import urllib
import string
import time
import re
import os
#
# urllib.urlencode() is new to Python 1.5.2
#
try:
urlencode = urllib.urlencode
except AttributeError:
def urlencode(dict):
"Encode a dictionary as application/x-url-form-encoded."
if not dict:
return ''
quote = urllib.quote_plus
keyvalue = [ ]
for key, value in dict.items():
keyvalue.append(quote(key) + '=' + quote(str(value)))
return string.join(keyvalue, '&')
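# For example (hypothetical query values), urlencode({'root': 'cvs',
# 'view': 'log'}) produces something like 'root=cvs&view=log'; key order
# follows the dictionary and is not guaranteed.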
#
# time.strptime() is new to Python 1.5.2
#
if hasattr(time, 'strptime'):
def cvs_strptime(timestr):
'Parse a CVS-style date/time value.'
return time.strptime(timestr, '%Y/%m/%d %H:%M:%S')
else:
_re_rev_date = re.compile('([0-9]{4})/([0-9][0-9])/([0-9][0-9]) '
'([0-9][0-9]):([0-9][0-9]):([0-9][0-9])')
def cvs_strptime(timestr):
'Parse a CVS-style date/time value.'
matches = _re_rev_date.match(timestr).groups()
return tuple(map(int, matches)) + (0, 1, 0)
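# Either variant accepts the CVS log format, e.g. (hypothetical date)
# cvs_strptime('2003/05/01 12:34:56') yields a time tuple that begins
# (2003, 5, 1, 12, 34, 56, ...).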
#
# os.makedirs() is new to Python 1.5.2
#
try:
makedirs = os.makedirs
except AttributeError:
def makedirs(path, mode=0777):
head, tail = os.path.split(path)
if head and tail and not os.path.exists(head):
makedirs(head, mode)
os.mkdir(path, mode)


@@ -1,194 +1,82 @@
# -*-python-*-
# -*- Mode: python -*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2000-2001 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# For more information, visit http://viewvc.org/
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
# config.py: configuration utilities
#
# -----------------------------------------------------------------------
#
import sys
import os
import string
import ConfigParser
import fnmatch
import vclib
import vclib.ccvs
import vclib.svn
import cvsdb
import viewvc
import copy
from viewvcmagic import ContentMagic
#########################################################################
#
# CONFIGURATION
# -------------
#
# There are three forms of configuration:
#
# 1. edit the viewvc.conf created by the viewvc-install(er)
# 2. as (1), but delete all unchanged entries from viewvc.conf
# 3. do not use viewvc.conf and just edit the defaults in this file
# 1) edit the viewcvs.conf created by the viewcvs-install(er)
# 2) as (1), but delete all unchanged entries from viewcvs.conf
# 3) do not use viewcvs.conf and just edit the defaults in this file
#
# Most users will want to use (1), but there are slight speed advantages
# to the other two options. Note that viewvc.conf values are a bit easier
# to the other two options. Note that viewcvs.conf values are a bit easier
# to work with since it is raw text, rather than python literal values.
#
#
# A WORD ABOUT OPTION LAYERING/OVERRIDES
# --------------------------------------
#
# ViewVC has three "layers" of configuration options:
#
# 1. base configuration options - very basic configuration bits
# found in sections like 'general', 'options', etc.
# 2. vhost overrides - these options overlay/override the base
# configuration on a per-vhost basis.
# 3. root overrides - these options overlay/override the base
# configuration and vhost overrides on a per-root basis.
#
# Here's a diagram of the valid overlays/overrides:
#
# PER-ROOT PER-VHOST BASE
#
# ,-----------. ,-----------.
# | vhost-*/ | | |
# | general | --> | general |
# | | | |
# `-----------' `-----------'
# ,-----------. ,-----------. ,-----------.
# | root-*/ | | vhost-*/ | | |
# | options | --> | options | --> | options |
# | | | | | |
# `-----------' `-----------' `-----------'
# ,-----------. ,-----------. ,-----------.
# | root-*/ | | vhost-*/ | | |
# | templates | --> | templates | --> | templates |
# | | | | | |
# `-----------' `-----------' `-----------'
# ,-----------. ,-----------. ,-----------.
# | root-*/ | | vhost-*/ | | |
# | utilities | --> | utilities | --> | utilities |
# | | | | | |
# `-----------' `-----------' `-----------'
# ,-----------. ,-----------.
# | vhost-*/ | | |
# | cvsdb | --> | cvsdb |
# | | | |
# `-----------' `-----------'
# ,-----------. ,-----------. ,-----------.
# | root-*/ | | vhost-*/ | | |
# | authz-* | --> | authz-* | --> | authz-* |
# | | | | | |
# `-----------' `-----------' `-----------'
# ,-----------.
# | |
# | vhosts |
# | |
# `-----------'
# ,-----------.
# | |
# | query |
# | |
# `-----------'
#
# ### TODO: Figure out what this all means for the 'kv' stuff.
#
#########################################################################
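# A hypothetical viewvc.conf fragment showing the layering (host and root
# names are made up; only the section-name pattern matters):
#
#   [options]
#   allow_tar = 0
#
#   [vhosts]
#   intranet = intranet.example.com, *.intra.example.com
#
#   [vhost-intranet/options]
#   allow_tar = 1
#
#   [root-ProjectX/options]
#   use_cvsgraph = 1
#
# The per-vhost section overrides the base [options] for requests matching
# the "intranet" host patterns, and the per-root section overrides both
# whenever the "ProjectX" root is browsed.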
class Config:
_base_sections = (
# Base configuration sections.
'authz-*',
'cvsdb',
'general',
'options',
'query',
'templates',
'utilities',
)
_force_multi_value = (
# Configuration values with multiple, comma-separated values.
'allowed_views',
'binary_mime_types',
'custom_log_formatting',
'cvs_roots',
'kv_files',
'languages',
'mime_types_files',
'root_parents',
'svn_roots',
)
_allowed_overrides = {
# Mapping of override types to allowed overridable sections.
'vhost' : ('authz-*',
'cvsdb',
'general',
'options',
'templates',
'utilities',
),
'root' : ('authz-*',
'options',
'templates',
'utilities',
)
}
_sections = ('general', 'options', 'cvsdb', 'templates')
_force_multi_value = ('cvs_roots', 'forbidden', 'disable_enscript_lang',
'languages', 'kv_files')
def __init__(self):
self.__guesser = None
self.__root_configs = {}
self.__parent = None
self.root_options_overlayed = 0
for section in self._base_sections:
if section[-1] == '*':
continue
for section in self._sections:
setattr(self, section, _sub_config())
def load_config(self, pathname, vhost=None):
"""Load the configuration file at PATHNAME, applying configuration
settings there as overrides to the built-in default values. If
VHOST is provided, also process the configuration overrides
specific to that virtual host."""
self.conf_path = os.path.isfile(pathname) and pathname or None
def load_config(self, fname, vhost=None):
this_dir = os.path.dirname(sys.argv[0])
pathname = os.path.join(this_dir, fname)
self.base = os.path.dirname(pathname)
self.parser = ConfigParser.ConfigParser()
self.parser.optionxform = lambda x: x # don't case-normalize option names.
self.parser.read(self.conf_path or [])
for section in self.parser.sections():
if self._is_allowed_section(section, self._base_sections):
self._process_section(self.parser, section, section)
if vhost and self.parser.has_section('vhosts'):
self._process_vhost(self.parser, vhost)
parser = ConfigParser.ConfigParser()
parser.read(pathname)
for section in self._sections:
if parser.has_section(section):
self._process_section(parser, section, section)
if vhost and parser.has_section('vhosts'):
self._process_vhost(parser, vhost)
def load_kv_files(self, language):
"""Process the key/value (kv) files specified in the
configuration, merging their values into the configuration as
dotted hierarchical items."""
kv = _sub_config()
for fname in self.general.kv_files:
if fname[0] == '[':
idx = fname.index(']')
parts = fname[1:idx].split('.')
fname = fname[idx+1:].strip()
idx = string.index(fname, ']')
parts = string.split(fname[1:idx], '.')
fname = string.strip(fname[idx+1:])
else:
parts = [ ]
fname = fname.replace('%lang%', language)
fname = string.replace(fname, '%lang%', language)
parser = ConfigParser.ConfigParser()
parser.optionxform = lambda x: x # don't case-normalize option names.
parser.read(os.path.join(self.base, fname))
for section in parser.sections():
for option in parser.options(section):
@@ -205,319 +93,134 @@ class Config:
return kv
def path(self, path):
"""Return PATH relative to the config file directory."""
return os.path.join(self.base, path)
def _process_section(self, parser, section, subcfg_name):
if not hasattr(self, subcfg_name):
setattr(self, subcfg_name, _sub_config())
sc = getattr(self, subcfg_name)
for opt in parser.options(section):
value = parser.get(section, opt)
if opt in self._force_multi_value:
value = map(lambda x: x.strip(), filter(None, value.split(',')))
value = map(string.strip, filter(None, string.split(value, ',')))
else:
try:
value = int(value)
except ValueError:
pass
### FIXME: This feels like unnecessary depth of knowledge for a
### semi-generic configuration object.
if opt == 'cvs_roots' or opt == 'svn_roots':
value = _parse_roots(opt, value)
if opt == 'cvs_roots':
roots = { }
for root in value:
name, path = map(string.strip, string.split(root, ':'))
roots[name] = path
value = roots
setattr(sc, opt, value)
def _is_allowed_section(self, section, allowed_sections):
"""Return 1 iff SECTION is an allowed section, defined as being
explicitly present in the ALLOWED_SECTIONS list or present in the
form 'someprefix-*' in that list."""
for allowed_section in allowed_sections:
if allowed_section[-1] == '*':
if _startswith(section, allowed_section[:-1]):
return 1
elif allowed_section == section:
return 1
return 0
def _is_allowed_override(self, sectype, secspec, section):
"""Test if SECTION is an allowed override section for sections of
type SECTYPE ('vhosts' or 'root', currently) and type-specifier
SECSPEC (a rootname or vhostname, currently). If it is, return
the overridden base section name. If it's not an override section
at all, return None. And if it's an override section but not an
allowed one, raise IllegalOverrideSection."""
cv = '%s-%s/' % (sectype, secspec)
lcv = len(cv)
if section[:lcv] != cv:
return None
base_section = section[lcv:]
if self._is_allowed_section(base_section,
self._allowed_overrides[sectype]):
return base_section
raise IllegalOverrideSection(sectype, section)
def _process_vhost(self, parser, vhost):
# Find a vhost name for this VHOST, if any (else, we've nothing to do).
canon_vhost = self._find_canon_vhost(parser, vhost)
if not canon_vhost:
# none of the vhost sections matched
return
# Overlay any option sections associated with this vhost name.
cv = canon_vhost + '-'
lcv = len(cv)
for section in parser.sections():
base_section = self._is_allowed_override('vhost', canon_vhost, section)
if base_section:
self._process_section(parser, section, base_section)
if section[:lcv] == cv:
self._process_section(parser, section, section[lcv:])
def _find_canon_vhost(self, parser, vhost):
vhost = vhost.lower().split(':')[0] # lower-case, no port
vhost = string.lower(vhost)
for canon_vhost in parser.options('vhosts'):
value = parser.get('vhosts', canon_vhost)
patterns = map(lambda x: x.lower().strip(),
filter(None, value.split(',')))
patterns = map(string.lower, map(string.strip,
filter(None, string.split(value, ','))))
for pat in patterns:
if fnmatch.fnmatchcase(vhost, pat):
return canon_vhost
return None
def get_root_config(self, rootname):
"""Get configuration object with per-root overrides for 'rootname'"""
if self.__parent:
return self.__parent.get_root_config(rootname)
elif rootname in self.__root_configs:
return self.__root_configs[rootname]
__guesser = self.__guesser
__root_configs = self.__root_configs
parser = self.parser
self.parser = None
self.__guesser = None
self.__root_configs = None
sub = copy.deepcopy(self)
sub.parser = parser
self.parser = parser
self.__guesser = __guesser
self.__root_configs = __root_configs
self.__root_configs[rootname] = sub
sub.__parent = self
sub.overlay_root_options(rootname)
return sub
def get_parent_config(self):
"""Get the parent non-overridden config."""
if self.__parent:
return self.__parent
return self
def overlay_root_options(self, rootname):
"""Overlay per-root options for ROOTNAME atop the existing option
set. This is a destructive change to the configuration."""
did_overlay = 0
if not self.conf_path:
return
for section in self.parser.sections():
base_section = self._is_allowed_override('root', rootname, section)
if base_section:
# We can currently only deal with root overlays happening
# once, so check that we've not yet done any overlaying of
# per-root options.
assert(self.root_options_overlayed == 0)
self._process_section(self.parser, section, base_section)
did_overlay = 1
# If we actually did any overlaying, remember this fact so we
# don't do it again later.
if did_overlay:
self.root_options_overlayed = 1
def _get_parser_items(self, parser, section):
"""Basically implement ConfigParser.items() for pre-Python-2.3 versions."""
try:
return self.parser.items(section)
except AttributeError:
d = {}
for option in parser.options(section):
d[option] = parser.get(section, option)
return d.items()
def get_authorizer_params(self, authorizer=None):
"""Return a dictionary of parameter names and values which belong
to the configured authorizer (or AUTHORIZER, if provided)."""
params = {}
if authorizer is None:
authorizer = self.options.authorizer
if authorizer:
authz_section = 'authz-%s' % (self.options.authorizer)
if hasattr(self, authz_section):
sub_config = getattr(self, authz_section)
for attr in dir(sub_config):
params[attr] = getattr(sub_config, attr)
params['__config'] = self
return params
def guesser(self):
if not self.__guesser:
self.__guesser = ContentMagic(self.options.encodings)
return self.__guesser
def set_defaults(self):
"Set some default values in the configuration."
self.general.cvs_roots = { }
self.general.svn_roots = { }
self.general.root_parents = []
self.general.default_root = ''
self.general.mime_types_files = ["mimetypes.conf"]
self.general.address = ''
self.general.cvs_roots = {
# user-visible-name : path
"Development" : "/home/cvsroot",
}
self.general.default_root = "Development"
self.general.rcs_path = ''
self.general.mime_types_file = ''
self.general.address = '<a href="mailto:user@insert.your.domain.here">No CVS admin address has been configured</a>'
self.general.main_title = 'CVS Repository'
self.general.forbidden = ()
self.general.kv_files = [ ]
self.general.languages = ['en-us']
self.utilities.rcs_dir = ''
if sys.platform == "win32":
self.utilities.cvsnt = 'cvs'
else:
self.utilities.cvsnt = None
self.utilities.rcsfile_socket = ''
self.utilities.svn = ''
self.utilities.diff = ''
self.utilities.cvsgraph = ''
self.utilities.tika_server = ''
self.utilities.tika_mime_types = ''
self.templates.directory = 'templates/directory.ezt'
self.templates.log = 'templates/log.ezt'
self.templates.query = 'templates/query.ezt'
self.templates.footer = 'templates/footer.ezt'
self.templates.diff = 'templates/diff.ezt'
self.templates.graph = 'templates/graph.ezt'
self.templates.annotate = 'templates/annotate.ezt'
self.templates.markup = 'templates/markup.ezt'
self.cvsdb.enabled = 0
self.cvsdb.host = ''
self.cvsdb.database_name = ''
self.cvsdb.user = ''
self.cvsdb.passwd = ''
self.cvsdb.readonly_user = ''
self.cvsdb.readonly_passwd = ''
self.cvsdb.row_limit = 1000
self.options.root_as_url_component = 1
self.options.checkout_magic = 0
self.options.allowed_views = ['annotate', 'diff', 'markup', 'roots']
self.options.authorizer = None
self.options.mangle_email_addresses = 0
self.options.custom_log_formatting = []
self.options.default_file_view = "log"
self.options.binary_mime_types = []
self.options.http_expiration_time = 600
self.options.generate_etags = 1
self.options.svn_ignore_mimetype = 0
self.options.svn_config_dir = None
self.options.max_filesize_kbytes = 512
self.options.use_rcsparse = 0
self.options.sort_by = 'file'
self.options.sort_group_dirs = 1
self.options.hide_attic = 1
self.options.hide_errorful_entries = 0
self.options.log_sort = 'date'
self.options.diff_format = 'h'
self.options.hide_cvsroot = 1
self.options.hr_breakable = 1
self.options.hr_funout = 1
self.options.hr_ignore_white = 0
self.options.hr_ignore_white = 1
self.options.hr_ignore_keyword_subst = 1
self.options.hr_intraline = 0
self.options.allow_compress = 0
self.options.template_dir = "templates/default"
self.options.docroot = None
self.options.allow_annotate = 1
self.options.allow_markup = 1
self.options.allow_compress = 1
self.options.checkout_magic = 1
self.options.show_subdir_lastmod = 0
self.options.show_roots_lastmod = 0
self.options.show_logs = 1
self.options.show_log_in_markup = 1
self.options.cross_copies = 1
self.options.use_localtime = 0
self.options.iso8601_timestamps = 0
self.options.py2html_path = '.'
self.options.short_log_len = 80
self.options.enable_syntax_coloration = 1
self.options.tabsize = 8
self.options.detect_encoding = 0
self.options.diff_font_face = 'Helvetica,Arial'
self.options.diff_font_size = -1
self.options.use_enscript = 0
self.options.enscript_path = ''
self.options.disable_enscript_lang = ()
self.options.allow_tar = 0
self.options.use_cvsgraph = 0
self.options.cvsgraph_conf = "cvsgraph.conf"
self.options.allowed_cvsgraph_useropts = []
self.options.cvsgraph_path = ''
self.options.cvsgraph_conf = "<VIEWCVS_INSTALL_DIRECTORY>/cvsgraph.conf"
self.options.use_re_search = 0
self.options.dir_pagesize = 0
self.options.log_pagesize = 0
self.options.log_pagesextra = 3
self.options.limit_changes = 100
self.options.cvs_ondisk_charset = 'cp1251'
self.options.encodings = 'cp1251:iso-8859-1'
self.templates.diff = None
self.templates.directory = None
self.templates.error = None
self.templates.file = None
self.templates.graph = None
self.templates.log = None
self.templates.query = None
self.templates.query_form = None
self.templates.query_results = None
self.templates.roots = None
self.cvsdb.enabled = 0
self.cvsdb.index_content = 0
self.cvsdb.enable_snippets = 1
self.cvsdb.content_max_size = 0
self.cvsdb.host = ''
self.cvsdb.port = 3306
self.cvsdb.socket = ''
self.cvsdb.database_name = ''
self.cvsdb.user = ''
self.cvsdb.passwd = ''
self.cvsdb.readonly_user = ''
self.cvsdb.readonly_passwd = ''
self.cvsdb.row_limit = 1000
self.cvsdb.rss_row_limit = 100
self.cvsdb.check_database_for_root = 0
self.cvsdb.fulltext_min_relevance = 0.2
self.cvsdb.sphinx_host = ''
self.cvsdb.sphinx_port = 3307
self.cvsdb.sphinx_socket = ''
self.cvsdb.sphinx_index = ''
self.cvsdb.sphinx_preformatted_mime = 'text/(?!html|xml).*'
self.cvsdb.sphinx_snippet_options = \
'around: 15\n'\
'limit: 200\n'\
'before_match: <span style="color:red">\n'\
'after_match: </span>\n'\
'chunk_separator: ... \n\n'
self.query.viewvc_base_url = None
def _startswith(somestr, substr):
return somestr[:len(substr)] == substr
def _parse_roots(config_name, config_value):
roots = { }
for root in config_value:
try:
name, path = root.split(':', 1)
except:
raise MalformedRoot(config_name, root)
roots[name.strip()] = path.strip()
return roots
class ViewVCConfigurationError(Exception):
pass
class IllegalOverrideSection(ViewVCConfigurationError):
def __init__(self, override_type, section_name):
self.section_name = section_name
self.override_type = override_type
def __str__(self):
return "malformed configuration: illegal %s override section: %s" \
% (self.override_type, self.section_name)
class MalformedRoot(ViewVCConfigurationError):
def __init__(self, config_name, value_given):
Exception.__init__(self, config_name, value_given)
self.config_name = config_name
self.value_given = value_given
def __str__(self):
return "malformed configuration: '%s' uses invalid syntax: %s" \
% (self.config_name, self.value_given)
def is_forbidden(self, module):
if not module:
return 0
default = 0
for pat in self.general.forbidden:
if pat[0] == '!':
default = 1
if fnmatch.fnmatchcase(module, pat[1:]):
return 0
elif fnmatch.fnmatchcase(module, pat):
return 1
return default
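# Rough examples (hypothetical module names): with
#   forbidden = ('CVSROOT',)
# only the CVSROOT module is hidden, while with
#   forbidden = ('!project*',)
# the '!' pattern flips the default, so only modules matching 'project*'
# remain visible.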
class _sub_config:
pass
if not hasattr(sys, 'hexversion'):
# Python 1.5 or 1.5.1. fix the syntax for ConfigParser options.
import regex
ConfigParser.option_cre = regex.compile('^\([-A-Za-z0-9._]+\)\(:\|['
+ string.whitespace
+ ']*=\)\(.*\)$')

File diff suppressed because it is too large

@@ -1,71 +1,71 @@
# -*-python-*-
# -*- Mode: python -*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2000-2001 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# For more information, visit http://viewvc.org/
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
import sys
import time
import types
import re
import calendar
import MySQLdb
# set to 1 to store commit times in UTC, or 0 to use the ViewVC machine's
# local timezone. Using UTC is recommended because it ensures that the
# database will remain valid even if it is moved to another machine or the host
# computer's time zone is changed. UTC also avoids the ambiguity associated
# with daylight saving time (for example if a computer in New York recorded the
# local time 2002/10/27 1:30 am, there would be no way to tell whether the
# actual time was recorded before or after clocks were rolled back). Use local
# times for compatibility with databases used by ViewCVS 0.92 and earlier
# versions.
utc_time = 1
def DateTimeFromTicks(ticks):
"""Return a MySQL DATETIME value from a unix timestamp"""
dbi_error = "dbi error"
if utc_time:
t = time.gmtime(ticks)
else:
t = time.localtime(ticks)
return "%04d-%02d-%02d %02d:%02d:%02d" % t[:6]
_re_datetime = re.compile('([0-9]{4})-([0-9][0-9])-([0-9][0-9]) '
'([0-9][0-9]):([0-9][0-9]):([0-9][0-9])')
## make some checks in MySQLdb
_no_datetime = """\
ERROR: Your version of MySQLdb requires the mxDateTime module
for the Timestamp() and TimestampFromTicks() methods.
You will need to install mxDateTime to use the ViewCVS
database.
"""
def TicksFromDateTime(datetime):
"""Return a unix timestamp from a MySQL DATETIME value"""
if not hasattr(MySQLdb, "Timestamp") or \
not hasattr(MySQLdb, "TimestampFromTicks"):
sys.stderr.write(_no_datetime)
sys.exit(1)
if type(datetime) == types.StringType:
# datetime is a MySQL DATETIME string
matches = _re_datetime.match(datetime).groups()
t = tuple(map(int, matches)) + (0, 0, 0)
elif hasattr(datetime, "timetuple"):
# datetime is a Python >=2.3 datetime.DateTime object
t = datetime.timetuple()
else:
# datetime is an eGenix mx.DateTime object
t = datetime.tuple()
if utc_time:
return calendar.timegm(t)
else:
return time.mktime(t[:8] + (-1,))
class Cursor:
def __init__(self, mysql_cursor):
self.__cursor = mysql_cursor
def execute(self, *args):
apply(self.__cursor.execute, args)
def fetchone(self):
try:
row = self.__cursor.fetchone()
except IndexError:
row = None
return row
def connect(host, port, socket, user, passwd, db, charset = 'utf8'):
return MySQLdb.connect(
host = host,
port = port,
unix_socket = socket,
user = user,
passwd = passwd,
db = db,
charset = charset,
use_unicode = charset == 'utf8')
class Connection:
def __init__(self, host, user, passwd, db):
self.__mysql = MySQLdb.connect(
host=host, user=user, passwd=passwd, db=db)
def cursor(self):
return Cursor(self.__mysql.cursor())
def Timestamp(year, month, date, hour, minute, second):
return MySQLdb.Timestamp(year, month, date, hour, minute, second)
def TimestampFromTicks(ticks):
return MySQLdb.TimestampFromTicks(ticks)
def connect(host, user, passwd, db):
return Connection(host, user, passwd, db)


@@ -1,37 +1,22 @@
# -*-python-*-
# -*- Mode: python -*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2000-2001 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# For more information, visit http://viewvc.org/
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
# Note: a t_start/t_end pair consumes about 0.00005 seconds on a P3/700.
# the lambda form (when debugging is disabled) should be even faster.
#
import sys
# Set to non-zero to track and print processing times
SHOW_TIMES = 0
# Set to non-zero to display child process info
SHOW_CHILD_PROCESSES = 0
# Set to a server-side path to force the tarball view to generate the
# tarball as a file on the server, instead of transmitting the data
# back to the browser. This enables easy display of error
# considitions in the browser, as well as tarball inspection on the
# server. NOTE: The file will be a TAR archive, *not* gzip-compressed.
TARFILE_PATH = ''
if SHOW_TIMES:
if 0:
import time
@@ -48,156 +33,10 @@ if SHOW_TIMES:
else:
_times[which] = t
def t_dump(out):
out.write('<div>')
names = _times.keys()
names.sort()
for name in names:
out.write('%s: %.6fs<br/>\n' % (name, _times[name]))
out.write('</div>')
def dump():
for name, value in _times.items():
print '%s: %.6f<br>' % (name, value)
else:
t_start = t_end = t_dump = lambda *args: None
class ViewVCException:
def __init__(self, msg, status=None):
self.msg = msg
self.status = status
def __str__(self):
if self.status:
return '%s: %s' % (self.status, self.msg)
return "ViewVC Unrecoverable Error: %s" % self.msg
def PrintException(server, exc_data):
status = exc_data['status']
msg = exc_data['msg']
tb = exc_data['stacktrace']
server.header(status=status)
server.write("<h3>An Exception Has Occurred</h3>\n")
s = ''
if msg:
s = '<p><pre>%s</pre></p>' % server.escape(msg)
if status:
s = s + ('<h4>HTTP Response Status</h4>\n<p><pre>\n%s</pre></p><hr />\n'
% status)
server.write(s)
server.write("<h4>Python Traceback</h4>\n<p><pre>")
server.write(server.escape(tb))
server.write("</pre></p>\n")
def GetExceptionData():
# capture the exception before doing anything else
exc_type, exc, exc_tb = sys.exc_info()
exc_dict = {
'status' : None,
'msg' : None,
'stacktrace' : None,
}
try:
import traceback, string
if isinstance(exc, ViewVCException):
exc_dict['msg'] = exc.msg
exc_dict['status'] = exc.status
tb = string.join(traceback.format_exception(exc_type, exc, exc_tb), '')
exc_dict['stacktrace'] = tb
finally:
# prevent circular reference. sys.exc_info documentation warns
# "Assigning the traceback return value to a local variable in a function
# that is handling an exception will cause a circular reference..."
# This is all based on 'exc_tb', and we're now done with it. Toss it.
del exc_tb
return exc_dict
if SHOW_CHILD_PROCESSES:
class Process:
def __init__(self, command, inStream, outStream, errStream):
self.command = command
self.debugIn = inStream
self.debugOut = outStream
self.debugErr = errStream
import sapi
if not sapi.server is None:
if not sapi.server.pageGlobals.has_key('processes'):
sapi.server.pageGlobals['processes'] = [self]
else:
sapi.server.pageGlobals['processes'].append(self)
def DumpChildren(server):
import os
if not server.pageGlobals.has_key('processes'):
return
server.header()
lastOut = None
i = 0
for k in server.pageGlobals['processes']:
i = i + 1
server.write("<table>\n")
server.write("<tr><td colspan=\"2\">Child Process%i</td></tr>" % i)
server.write("<tr>\n <td style=\"vertical-align:top\">Command Line</td> <td><pre>")
server.write(server.escape(k.command))
server.write("</pre></td>\n</tr>\n")
server.write("<tr>\n <td style=\"vertical-align:top\">Standard In:</td> <td>")
if k.debugIn is lastOut and not lastOut is None:
server.write("<em>Output from process %i</em>" % (i - 1))
elif k.debugIn:
server.write("<pre>")
server.write(server.escape(k.debugIn.getvalue()))
server.write("</pre>")
server.write("</td>\n</tr>\n")
if k.debugOut is k.debugErr:
server.write("<tr>\n <td style=\"vertical-align:top\">Standard Out & Error:</td> <td><pre>")
if k.debugOut:
server.write(server.escape(k.debugOut.getvalue()))
server.write("</pre></td>\n</tr>\n")
else:
server.write("<tr>\n <td style=\"vertical-align:top\">Standard Out:</td> <td><pre>")
if k.debugOut:
server.write(server.escape(k.debugOut.getvalue()))
server.write("</pre></td>\n</tr>\n")
server.write("<tr>\n <td style=\"vertical-align:top\">Standard Error:</td> <td><pre>")
if k.debugErr:
server.write(server.escape(k.debugErr.getvalue()))
server.write("</pre></td>\n</tr>\n")
server.write("</table>\n")
server.flush()
lastOut = k.debugOut
server.write("<table>\n")
server.write("<tr><td colspan=\"2\">Environment Variables</td></tr>")
for k, v in os.environ.items():
server.write("<tr>\n <td style=\"vertical-align:top\"><pre>")
server.write(server.escape(k))
server.write("</pre></td>\n <td style=\"vertical-align:top\"><pre>")
server.write(server.escape(v))
server.write("</pre></td>\n</tr>")
server.write("</table>")
else:
def DumpChildren(server):
pass
t_start = t_end = dump = lambda *args: None


@@ -1,19 +1,18 @@
#!/usr/bin/env python
"""ezt.py -- easy templating
ezt templates are simply text files in whatever format you so desire
(such as XML, HTML, etc.) which contain directives sprinkled
throughout. With these directives it is possible to generate the
dynamic content from the ezt templates.
ezt templates are very similar to standard HTML files. But additionally
they contain directives sprinkled in between. With these directives
it is possible to generate the dynamic content from the ezt templates.
These directives are enclosed in square brackets. If you are a
C-programmer, you might be familiar with the #ifdef directives of the C
preprocessor 'cpp'. ezt provides a similar concept. Additionally EZT
has a 'for' directive, which allows it to iterate (repeat) certain
These directives are enclosed in square brackets. If you are a
C-programmer, you might be familiar with the #ifdef directives of the
C preprocessor 'cpp'. ezt provides a similar concept for HTML. Additionally
EZT has a 'for' directive, which allows it to iterate (repeat) certain
subsections of the template according to a sequence of data items
provided by the application.
The final rendering is performed by the method generate() of the Template
The HTML rendering is performed by the method generate() of the Template
class. Building template instances can either be done using external
EZT files (convention: use the suffix .ezt for such files):
@@ -27,7 +26,7 @@ a EZT template string:
... <title>[title_string]</title></head>
... <body><h1>[title_string]</h1>
... [for a_sequence] <p>[a_sequence]</p>
... [end] <hr />
... [end] <hr>
... The [person] is [if-any state]in[else]out[end].
... </body>
... </html>
@@ -48,32 +47,11 @@ with the output fileobject to the templates generate method:
<p>list item 1</p>
<p>list item 2</p>
<p>another element</p>
<hr />
<hr>
The doctor is out.
</body>
</html>
Template syntax error reporting should be improved. Currently it is
very sparse (template line numbers would be nice):
>>> Template().parse("[if-any where] foo [else] bar [end unexpected args]")
Traceback (innermost last):
File "<stdin>", line 1, in ?
File "ezt.py", line 220, in parse
self.program = self._parse(text)
File "ezt.py", line 275, in _parse
raise ArgCountSyntaxError(str(args[1:]))
ArgCountSyntaxError: ['unexpected', 'args']
>>> Template().parse("[if unmatched_end]foo[end]")
Traceback (innermost last):
File "<stdin>", line 1, in ?
File "ezt.py", line 206, in parse
self.program = self._parse(text)
File "ezt.py", line 266, in _parse
raise UnmatchedEndError()
UnmatchedEndError
Directives
==========
@@ -81,58 +59,19 @@ Directives
or attributes of objects contained in the data dictionary given to the
.generate() method.
Qualified names
---------------
Qualified names have two basic forms: a variable reference, or a string
constant. References are a name from the data dictionary with optional
dotted attributes (where each intermediary is an object with attributes,
of course).
Examples:
[varname]
[ob.attr]
["string"]
Simple directives
-----------------
[QUAL_NAME]
This directive is simply replaced by the value of the qualified name.
If the value is a number it's converted to a string before being
outputted. If it is None, nothing is outputted. If it is a python file
object (i.e. any object with a "read" method), it's contents are
outputted. If it is a callback function (any callable python object
is assumed to be a callback function), it is invoked and passed an EZT
Context object as an argument.
[QUAL_NAME QUAL_NAME ...]
If the first value is a callback function, it is invoked with an EZT
Context object as a first argument, and the rest of the values as
additional arguments.
Otherwise, the first value defines a substitution format, specifying
constant text and indices of the additional arguments. The arguments
are substituted and the result is inserted into the output stream.
Example:
["abc %0 def %1 ghi %0" foo bar.baz]
Note that the first value can be any type of qualified name -- a string
constant or a variable reference. Use %% to substitute a percent sign.
Argument indices are 0-based.
This directive is simply replaced by the value of the identifier from the data
dictionary. QUAL_NAME might be a dotted qualified name referring to some
instance attribute of objects contained in the data dictionary.
Numbers are converted to strings, though.
[include "filename"] or [include QUAL_NAME]
This directive is replaced by content of the named include file. Note
that a string constant is more efficient -- the target file is compiled
inline. In the variable form, the target file is compiled and executed
at runtime.
This directive is replaced by content of the named include file.
Block directives
----------------
@@ -140,68 +79,40 @@ Directives
[for QUAL_NAME] ... [end]
The text within the [for ...] directive and the corresponding [end]
is repeated for each element in the sequence referred to by the
qualified name in the for directive. Within the for block this
identifier now refers to the actual item indexed by this loop
iteration.
is repeated for each element in the sequence referred to by the qualified
name in the for directive. Within the for block this identifier now
refers to the actual item indexed by this loop iteration.
[if-any QUAL_NAME [QUAL_NAME2 ...]] ... [else] ... [end]
[if-any QUAL_NAME] ... [else] ... [end]
Test if any QUAL_NAME value is not None or an empty string or list.
The [else] clause is optional. CAUTION: Numeric values are
converted to string, so if QUAL_NAME refers to a numeric value 0,
the then-clause is substituted!
Test if the value QUAL_NAME is not None or an empty string or list.
The [else] clause is optional. CAUTION: Numeric values are converted to string,
so if QUAL_NAME refers to a numeric value 0, the then-clause is
substituted!
[if-index INDEX_FROM_FOR odd] ... [else] ... [end]
[if-index INDEX_FROM_FOR even] ... [else] ... [end]
[if-index INDEX_FROM_FOR first] ... [else] ... [end]
[if-index INDEX_FROM_FOR last] ... [else] ... [end]
[if-index INDEX_FROM_FOR NUMBER] ... [else] ... [end]
[if-index odd] ... [else] ... [end]
[if-index even] ... [else] ... [end]
[if-index first] ... [else] ... [end]
[if-index last] ... [else] ... [end]
[if-index NUMBER] ... [else] ... [end]
These five directives work similarly to [if-any], but are only useful
within a [for ...]-block (see above). The odd/even directives are
for example useful to choose different background colors for
adjacent rows in a table. Similarly, the first/last directives might
be used to remove certain parts (for example "Diff to previous"
doesn't make sense, if there is no previous).
These five directives work similarly to [if-any], but are only useful
within a [for ...]-block (see above). The odd/even directives are
for example useful to choose different background colors for adjacent rows
in a table. Similarly, the first/last directives might be used to
remove certain parts (for example "Diff to previous" doesn't make sense,
if there is no previous).
[is QUAL_NAME STRING] ... [else] ... [end]
[is QUAL_NAME QUAL_NAME] ... [else] ... [end]
The [is ...] directive is similar to the other conditional
directives above. But it allows comparing two value references or
a value reference with some constant string.
[define VARIABLE] ... [end]
The [define ...] directive allows you to create and modify template
variables from within the template itself. Essentially, any data
between the [define ...] directive and its matching [end] will be
expanded using the other template parsing and output generation
rules, and then stored as a string value assigned to the variable
VARIABLE. The new (or changed) variable is then available for use
with other mechanisms such as [is ...] or [if-any ...], as long as
they appear later in the template.
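For example (hypothetical variable names), a template can build a
title string once and reuse it:

[define page_title]Revision [rev] of [filename][end]
<title>[page_title]</title>
<h1>[page_title]</h1>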
[format STRING] ... [end]
The format directive controls how the values substituted into
templates are escaped before they are put into the output stream. It
has no effect on the literal text of the templates, only the output
from [QUAL_NAME ...] directives. STRING can be one of "raw" "html"
"xml" or "uri". The "raw" mode leaves the output unaltered; the "html"
and "xml" modes escape special characters using entity escapes (like
&quot; and &gt;); the "uri" mode escapes characters using hexadecimal
escape sequences (like %20 and %7e).
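For example, wrapping a substitution in the html format escapes markup
characters in the value (variable name hypothetical):

[format "html"][log_message][end]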
[format CALLBACK]
The [is ...] directive is similar to the other conditional directives
above. But it allows comparing two value references or a value reference
with some constant string.
Python applications using EZT can provide custom formatters as callback
variables. "[format CALLBACK][QUAL_NAME][end]" is in most cases
equivalent to "[CALLBACK QUAL_NAME]"
"""
#
# Copyright (C) 2001-2007 Greg Stein. All Rights Reserved.
# Copyright (C) 2001 Greg Stein. All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
@@ -228,28 +139,15 @@ Directives
#
#
# This software is maintained by Greg and is available at:
# http://svn.webdav.org/repos/projects/ezt/trunk/
# http://viewcvs.sourceforge.net/
# it is also used by the following projects:
# http://edna.sourceforge.net/
#
import string
import re
from types import StringType, IntType, FloatType, LongType, TupleType
from types import StringType, IntType, FloatType
import os
import cgi
import urllib
try:
import cStringIO
except ImportError:
import StringIO
cStringIO = StringIO
#
# Formatting types
#
FORMAT_RAW = 'raw'
FORMAT_HTML = 'html'
FORMAT_XML = 'xml'
FORMAT_URI = 'uri'
#
# This regular expression matches three alternatives:
@@ -272,7 +170,7 @@ _re_parse = re.compile(r'\[(%s(?: +%s)*)\]|(\[\[\])|\[#[^\]]*\]' % (_item, _item
_re_args = re.compile(r'"(?:[^\\"]|\\.)*"|[-\w.]+')
# block commands and their argument counts
_block_cmd_specs = { 'if-index':2, 'for':1, 'is':2, 'define':1, 'format':1 }
_block_cmd_specs = { 'if-any':1, 'if-index':2, 'for':1, 'is':2 }
_block_cmds = _block_cmd_specs.keys()
# two regular expresssions for compressing whitespace. the first is used to
@@ -289,47 +187,36 @@ _re_subst = re.compile('%(%|[0-9]+)')
class Template:
def __init__(self, fname=None, compress_whitespace=1,
base_format=FORMAT_RAW):
self.compress_whitespace = compress_whitespace
def __init__(self, fname=None):
if fname:
self.parse_file(fname, base_format)
self.parse_file(fname)
def parse_file(self, fname, base_format=FORMAT_RAW):
"fname -> a string object with pathname of file containg an EZT template."
self.parse(_FileReader(fname), base_format)
def parse(self, text_or_reader, base_format=FORMAT_RAW):
"""Parse the template specified by text_or_reader.
The argument should be a string containing the template, or it should
specify a subclass of ezt.Reader which can read templates. The base
format for printing values is given by base_format.
def parse_file(self, fname):
"""fname -> a string object with pathname of file containg an EZT template.
"""
if not isinstance(text_or_reader, Reader):
# assume the argument is a plain text string
text_or_reader = _TextReader(text_or_reader)
self.program = self._parse_file(fname)
self.program = self._parse(text_or_reader, base_format=base_format)
def parse(self, text):
"""text -> a string object containing the HTML template.
parse the template program into: (TEXT DIRECTIVE BRACKET)* TEXT
DIRECTIVE will be '[directive]' or None
BRACKET will be '[[]' or None
"""
self.program = self._parse(text)
def generate(self, fp, data):
if hasattr(data, '__getitem__') or callable(getattr(data, 'keys', None)):
# a dictionary-like object was passed. convert it to an
# attribute-based object.
class _data_ob:
def __init__(self, d):
vars(self).update(d)
data = _data_ob(data)
ctx = Context(fp)
ctx = _context()
ctx.data = data
ctx.for_iterators = { }
ctx.defines = { }
self._execute(self.program, ctx)
ctx.for_index = { }
self._execute(self.program, fp, ctx)
def _parse(self, reader, for_names=None, file_args=(), base_format=None):
"""text -> string object containing the template.
def _parse_file(self, fname, for_names=None, file_args=()):
return self._parse(open(fname, "rt").read(), for_names, file_args,
os.path.dirname(fname))
def _parse(self, text, for_names=None, file_args=(), base=None):
"""text -> string object containing the HTML template.
This is a private helper function doing the real work for method parse.
It returns the parsed template as a 'program'. This program is a sequence
@@ -338,16 +225,12 @@ class Template:
Note: comment directives [# ...] are automatically dropped by _re_parse.
"""
# parse the template program into: (TEXT DIRECTIVE BRACKET)* TEXT
parts = _re_parse.split(reader.text)
parts = _re_parse.split(text)
program = [ ]
stack = [ ]
if not for_names:
for_names = [ ]
if base_format:
program.append((self._cmd_format, _formatters[base_format]))
for_names = [ ]
for i in range(len(parts)):
piece = parts[i]
@@ -355,8 +238,7 @@ class Template:
if which == 0:
# TEXT. append if non-empty.
if piece:
if self.compress_whitespace:
piece = _re_whitespace.sub(' ', _re_newline.sub('\n', piece))
piece = _re_whitespace.sub(' ', _re_newline.sub('\n', piece))
program.append(piece)
elif which == 2:
# BRACKET directive. append '[' if present.
@@ -368,7 +250,7 @@ class Template:
cmd = args[0]
if cmd == 'else':
if len(args) > 1:
raise ArgCountSyntaxError(str(args[1:]))
raise ArgCountSyntaxError()
### check: don't allow for 'for' cmd
idx = stack[-1][1]
true_section = program[idx:]
@@ -376,141 +258,133 @@ class Template:
stack[-1][3] = true_section
elif cmd == 'end':
if len(args) > 1:
raise ArgCountSyntaxError(str(args[1:]))
raise ArgCountSyntaxError()
# note: true-section may be None
try:
cmd, idx, args, true_section = stack.pop()
except IndexError:
raise UnmatchedEndError()
cmd, idx, args, true_section = stack.pop()
else_section = program[idx:]
if cmd == 'format':
program.append((self._cmd_end_format, None))
else:
func = getattr(self, '_cmd_' + re.sub('-', '_', cmd))
program[idx:] = [ (func, (args, true_section, else_section)) ]
if cmd == 'for':
for_names.pop()
func = getattr(self, '_cmd_' + re.sub('-', '_', cmd))
program[idx:] = [ (func, (args, true_section, else_section)) ]
if cmd == 'for':
for_names.pop()
elif cmd in _block_cmds:
if len(args) > _block_cmd_specs[cmd] + 1:
raise ArgCountSyntaxError(str(args[1:]))
### this assumes arg1 is always a ref unless cmd is 'define'
if cmd != 'define':
args[1] = _prepare_ref(args[1], for_names, file_args)
raise ArgCountSyntaxError()
### this assumes arg1 is always a ref
args[1] = _prepare_ref(args[1], for_names, file_args)
# handle arg2 for the 'is' command
if cmd == 'is':
args[2] = _prepare_ref(args[2], for_names, file_args)
elif cmd == 'for':
for_names.append(args[1][0]) # append the refname
elif cmd == 'format':
if args[1][0]:
# argument is a variable reference
formatter = args[1]
else:
# argument is a string constant referring to built-in formatter
formatter = _formatters.get(args[1][1])
if not formatter:
raise UnknownFormatConstantError(str(args[1:]))
program.append((self._cmd_format, formatter))
for_names.append(args[1][0])
# remember the cmd, current pos, args, and a section placeholder
stack.append([cmd, len(program), args[1:], None])
elif cmd == 'include':
if args[1][0] == '"':
include_filename = args[1][1:-1]
if base:
include_filename = os.path.join(base, include_filename)
f_args = [ ]
for arg in args[2:]:
f_args.append(_prepare_ref(arg, for_names, file_args))
program.extend(self._parse(reader.read_other(include_filename),
for_names, f_args))
program.extend(self._parse_file(include_filename, for_names,
f_args))
else:
if len(args) != 2:
raise ArgCountSyntaxError(str(args))
raise ArgCountSyntaxError()
program.append((self._cmd_include,
(_prepare_ref(args[1], for_names, file_args),
reader)))
elif cmd == 'if-any':
f_args = [ ]
for arg in args[1:]:
f_args.append(_prepare_ref(arg, for_names, file_args))
stack.append(['if-any', len(program), f_args, None])
base)))
else:
# implied PRINT command
f_args = [ ]
for arg in args:
f_args.append(_prepare_ref(arg, for_names, file_args))
program.append((self._cmd_print, f_args))
if len(args) > 1:
f_args = [ ]
for arg in args:
f_args.append(_prepare_ref(arg, for_names, file_args))
program.append((self._cmd_format, (f_args[0], f_args[1:])))
else:
program.append((self._cmd_print,
_prepare_ref(args[0], for_names, file_args)))
if stack:
### would be nice to say which blocks...
raise UnclosedBlocksError()
return program
def _execute(self, program, ctx):
def _execute(self, program, fp, ctx):
"""This private helper function takes a 'program' sequence as created
by the method '_parse' and executes it step by step. strings are written
to the file object 'fp' and functions are called.
"""
for step in program:
if isinstance(step, StringType):
ctx.fp.write(step)
fp.write(step)
else:
step[0](step[1], ctx)
step[0](step[1], fp, ctx)
def _cmd_print(self, valrefs, ctx):
value = _get_value(valrefs[0], ctx)
args = map(lambda valref, ctx=ctx: _get_value(valref, ctx), valrefs[1:])
try:
_write_value(value, args, ctx)
except TypeError:
raise Exception("Unprintable value type for '%s'" % (str(valrefs[0][0])))
def _cmd_print(self, valref, fp, ctx):
value = _get_value(valref, ctx)
def _cmd_format(self, formatter, ctx):
if type(formatter) is TupleType:
formatter = _get_value(formatter, ctx)
ctx.formatters.append(formatter)
# if the value has a 'read' attribute, then it is a stream: copy it
if hasattr(value, 'read'):
while 1:
chunk = value.read(16384)
if not chunk:
break
fp.write(chunk)
else:
fp.write(value)
def _cmd_end_format(self, valref, ctx):
ctx.formatters.pop()
def _cmd_format(self, (valref, args), fp, ctx):
fmt = _get_value(valref, ctx)
parts = _re_subst.split(fmt)
for i in range(len(parts)):
piece = parts[i]
if i%2 == 1 and piece != '%':
idx = int(piece)
if idx < len(args):
piece = _get_value(args[idx], ctx)
else:
piece = '<undef>'
fp.write(piece)
def _cmd_include(self, (valref, reader), ctx):
def _cmd_include(self, (valref, base), fp, ctx):
fname = _get_value(valref, ctx)
if base:
fname = os.path.join(base, fname)
### note: we don't have the set of for_names to pass into this parse.
### I don't think there is anything to do but document it.
self._execute(self._parse(reader.read_other(fname)), ctx)
self._execute(self._parse_file(fname), fp, ctx)
def _cmd_if_any(self, args, ctx):
"If any value is a non-empty string or non-empty list, then T else F."
(valrefs, t_section, f_section) = args
value = 0
for valref in valrefs:
if _get_value(valref, ctx):
value = 1
break
self._do_if(value, t_section, f_section, ctx)
def _cmd_if_any(self, args, fp, ctx):
"If the value is a non-empty string or non-empty list, then T else F."
((valref,), t_section, f_section) = args
value = _get_value(valref, ctx)
self._do_if(value, t_section, f_section, fp, ctx)
def _cmd_if_index(self, args, ctx):
def _cmd_if_index(self, args, fp, ctx):
((valref, value), t_section, f_section) = args
iterator = ctx.for_iterators[valref[0]]
list, idx = ctx.for_index[valref[0]]
if value == 'even':
value = iterator.index % 2 == 0
value = idx % 2 == 0
elif value == 'odd':
value = iterator.index % 2 == 1
value = idx % 2 == 1
elif value == 'first':
value = iterator.index == 0
value = idx == 0
elif value == 'last':
value = iterator.is_last()
value = idx == len(list)-1
else:
value = iterator.index == int(value)
self._do_if(value, t_section, f_section, ctx)
value = idx == int(value)
self._do_if(value, t_section, f_section, fp, ctx)
def _cmd_is(self, args, ctx):
def _cmd_is(self, args, fp, ctx):
((left_ref, right_ref), t_section, f_section) = args
value = _get_value(right_ref, ctx)
value = string.lower(_get_value(left_ref, ctx)) == string.lower(value)
self._do_if(value, t_section, f_section, ctx)
self._do_if(value, t_section, f_section, fp, ctx)
def _do_if(self, value, t_section, f_section, ctx):
def _do_if(self, value, t_section, f_section, fp, ctx):
if t_section is None:
t_section = f_section
f_section = None
@@ -519,28 +393,19 @@ class Template:
else:
section = f_section
if section is not None:
self._execute(section, ctx)
self._execute(section, fp, ctx)
def _cmd_for(self, args, ctx):
def _cmd_for(self, args, fp, ctx):
((valref,), unused, section) = args
list = _get_value(valref, ctx)
if isinstance(list, StringType):
raise NeedSequenceError("The value of '%s' is not a sequence"
% (valref[0]))
raise NeedSequenceError()
refname = valref[0]
ctx.for_iterators[refname] = iterator = _iter(list)
for unused in iterator:
self._execute(section, ctx)
del ctx.for_iterators[refname]
def _cmd_define(self, args, ctx):
((name,), unused, section) = args
origfp = ctx.fp
ctx.fp = cStringIO.StringIO()
if section is not None:
self._execute(section, ctx)
ctx.defines[name] = ctx.fp.getvalue()
ctx.fp = origfp
ctx.for_index[refname] = idx = [ list, 0 ]
for item in list:
self._execute(section, fp, ctx)
idx[1] = idx[1] + 1
del ctx.for_index[refname]
def boolean(value):
"Return a value suitable for [if-any bool_var] usage in a template."
@@ -553,48 +418,34 @@ def _prepare_ref(refname, for_names, file_args):
"""refname -> a string containing a dotted identifier. example:"foo.bar.bang"
for_names -> a list of active for sequences.
Returns a `value reference', a 3-tuple made out of (refname, start, rest),
Returns a `value reference', a 3-Tupel made out of (refname, start, rest),
for fast access later.
"""
# is the reference a string constant?
if refname[0] == '"':
return None, refname[1:-1], None
parts = string.split(refname, '.')
start = parts[0]
rest = parts[1:]
# if this is an include-argument, then just return the prepared ref
if start[:3] == 'arg':
if refname[:3] == 'arg':
try:
idx = int(start[3:])
idx = int(refname[3:])
except ValueError:
pass
else:
if idx < len(file_args):
orig_refname, start, more_rest = file_args[idx]
if more_rest is None:
# the include-argument was a string constant
return None, start, None
# prepend the argument's "rest" for our further processing
rest[:0] = more_rest
# rewrite the refname to ensure that any potential 'for' processing
# has the correct name
### this can make it hard for debugging include files since we lose
### the 'argNNN' names
if not rest:
return start, start, [ ]
refname = start + '.' + string.join(rest, '.')
if for_names:
# From last to first part, check if this reference is part of a for loop
for i in range(len(parts), 0, -1):
name = string.join(parts[:i], '.')
if name in for_names:
return refname, name, parts[i:]
return file_args[idx]
parts = string.split(refname, '.')
start = parts[0]
rest = parts[1:]
while rest and (start in for_names):
# check if the next part is also a "for name"
name = start + '.' + rest[0]
if name in for_names:
start = name
del rest[0]
else:
break
return refname, start, rest
def _get_value((refname, start, rest), ctx):
@@ -608,14 +459,11 @@ def _get_value((refname, start, rest), ctx):
if rest is None:
# it was a string constant
return start
# get the starting object
if ctx.for_iterators.has_key(start):
ob = ctx.for_iterators[start].last_item
elif ctx.defines.has_key(start):
ob = ctx.defines[start]
elif hasattr(ctx.data, start):
ob = getattr(ctx.data, start)
if ctx.for_index.has_key(start):
list, idx = ctx.for_index[start]
ob = list[idx]
elif ctx.data.has_key(start):
ob = ctx.data[start]
else:
raise UnknownReference(refname)
@@ -627,9 +475,7 @@ def _get_value((refname, start, rest), ctx):
raise UnknownReference(refname)
# make sure we return a string instead of some various Python types
if isinstance(ob, IntType) \
or isinstance(ob, LongType) \
or isinstance(ob, FloatType):
if isinstance(ob, IntType) or isinstance(ob, FloatType):
return str(ob)
if ob is None:
return ''
@@ -637,202 +483,20 @@ def _get_value((refname, start, rest), ctx):
# string or a sequence
return ob
def _print_formatted(formatters, ctx, chunk):
# print chunk to ctx.fp after running it sequentially through formatters
for formatter in formatters:
chunk = formatter(chunk)
ctx.fp.write(chunk)
def _write_value(value, args, ctx):
# value is a callback function, generates its own output
if callable(value):
apply(value, [ctx] + list(args))
return
# squirrel away formatters in case one of them recursively calls
# _write_value() -- we'll use them (in reverse order) to format our
# output.
formatters = ctx.formatters[:]
formatters.reverse()
try:
# if the value has a 'read' attribute, then it is a stream: copy it
if hasattr(value, 'read'):
while 1:
chunk = value.read(16384)
if not chunk:
break
_print_formatted(formatters, ctx, chunk)
# value is a substitution pattern
elif args:
parts = _re_subst.split(value)
for i in range(len(parts)):
piece = parts[i]
if i%2 == 1 and piece != '%':
idx = int(piece)
if idx < len(args):
piece = args[idx]
else:
piece = '<undef>'
_print_formatted(formatters, ctx, piece)
# plain old value, write to output
else:
_print_formatted(formatters, ctx, value)
finally:
# restore our formatters
formatters.reverse()
ctx.formatters = formatters
class Context:
class _context:
"""A container for the execution context"""
def __init__(self, fp):
self.fp = fp
self.formatters = []
def write(self, value, args=()):
_write_value(value, args, self)
class Reader:
"Abstract class which allows EZT to detect Reader objects."
class ArgCountSyntaxError(Exception):
pass
class _FileReader(Reader):
"""Reads templates from the filesystem."""
def __init__(self, fname):
self.text = open(fname, 'rb').read()
self._dir = os.path.dirname(fname)
def read_other(self, relative):
return _FileReader(os.path.join(self._dir, relative))
class UnknownReference(Exception):
pass
class _TextReader(Reader):
"""'Reads' a template from provided text."""
def __init__(self, text):
self.text = text
def read_other(self, relative):
raise BaseUnavailableError()
class NeedSequenceError(Exception):
pass
class _Iterator:
"""Specialized iterator for EZT that counts items and can look ahead
Implements standard iterator interface and provides an is_last() method
and two public members:
index - integer index of the current item
last_item - last item returned by next()"""
def __init__(self, sequence):
self._iter = iter(sequence)
def next(self):
if hasattr(self, '_next_item'):
self.last_item = self._next_item
del self._next_item
else:
self.last_item = self._iter.next() # may raise StopIteration
if hasattr(self, 'index'):
self.index = self.index + 1
else:
self.index = 0
return self.last_item
def is_last(self):
"""Return true if the current item is the last in the sequence"""
# the only way we can tell if the current item is last is to call next()
# and store the return value so it doesn't get lost
if not hasattr(self, '_next_item'):
try:
self._next_item = self._iter.next()
except StopIteration:
return 1
return 0
def __iter__(self):
return self
class _OldIterator:
"""Alternate implemention of _Iterator for old Pythons without iterators
This class implements the sequence protocol, instead of the iterator
interface, so it's really not an iterator at all. But it can be used in
python "for" loops as a drop-in replacement for _Iterator. It also provides
the is_last() method and "last_item" and "index" members described in the
_Iterator docstring."""
def __init__(self, sequence):
self._seq = sequence
def __getitem__(self, index):
self.last_item = self._seq[index] # may raise IndexError
self.index = index
return self.last_item
def is_last(self):
return self.index + 1 >= len(self._seq)
try:
iter
except NameError:
_iter = _OldIterator
else:
_iter = _Iterator
class EZTException(Exception):
"""Parent class of all EZT exceptions."""
class ArgCountSyntaxError(EZTException):
"""A bracket directive got the wrong number of arguments."""
class UnknownReference(EZTException):
"""The template references an object not contained in the data dictionary."""
class NeedSequenceError(EZTException):
"""The object dereferenced by the template is no sequence (tuple or list)."""
class UnclosedBlocksError(EZTException):
"""This error may be simply a missing [end]."""
class UnmatchedEndError(EZTException):
"""This error may be caused by a misspelled if directive."""
class BaseUnavailableError(EZTException):
"""Base location is unavailable, which disables includes."""
class UnknownFormatConstantError(EZTException):
"""The format specifier is an unknown value."""
def _raw_formatter(s):
try: s = s.encode('utf-8')
except: pass
return s
def _html_formatter(s):
try: s = s.encode('utf-8')
except: pass
return cgi.escape(s)
def _xml_formatter(s):
try: s = s.encode('utf-8')
except: pass
s = s.replace('&', '&#x26;')
s = s.replace('<', '&#x3C;')
s = s.replace('>', '&#x3E;')
return s
def _uri_formatter(s):
try: s = s.encode('utf-8')
except: pass
return urllib.quote(s)
_formatters = {
FORMAT_RAW : _raw_formatter,
FORMAT_HTML : _html_formatter,
FORMAT_XML : _xml_formatter,
FORMAT_URI : _uri_formatter,
}
class UnclosedBlocksError(Exception):
pass
# --- standard test environment ---
def test_parse():
@@ -849,7 +513,7 @@ def test_parse():
['', '["a \\"b[foo]" c.d f]', None, '']
def _test(argv):
import doctest, ezt
import doctest, ezt
verbose = "-v" in argv
return doctest.testmod(ezt, verbose=verbose)


@@ -1,283 +0,0 @@
# -*-python-*-
# -----------------------------------------------------------------------
# Simple Global Authentication client for ViewVC
# License: GPLv2+
# Author: Vitaliy Filippov
# -----------------------------------------------------------------------
#
# USAGE:
#
# import globalauth
# c = globalauth.GlobalAuthClient()
# c.auth(server)
# user_name = c.user_name
# user_url = c.user_url
#
# auth() will call sys.exit() when it needs to stop request processing
#
# -----------------------------------------------------------------------
import os
import re
import sys
import struct
import cgi
import binascii
import time
import datetime
import urllib
import urllib2
import anyjson
import random
import Cookie
import ga_config
class FileCache:
def __init__(self, dir):
self.dir = dir
if not os.path.isdir(dir):
os.mkdir(dir)
def fn(self, key):
key = re.sub('([^a-zA-Z0-9_\-]+)', lambda x: binascii.hexlify(x.group(1)), key)
return self.dir+'/'+key
def clean(self):
t = time.time()
for fn in os.listdir(self.dir):
if t > os.stat(self.dir+'/'+fn).st_mtime:
os.unlink(self.dir+'/'+fn)
def set(self, key, value, expire = 86400):
fn = self.fn(key)
try:
f = open(fn,'w')
if not expire:
expire = 86400
expire = time.time()+expire
f.write(value)
f.close()
os.chmod(fn, 0600)
os.utime(fn, (expire, expire))
except:
raise
return 1
def get(self, key):
fn = self.fn(key)
try:
f = open(fn,'r')
value = f.read()
f.close()
if time.time() > os.stat(fn).st_mtime:
os.unlink(fn)
return ''
return value
except:
pass
return ''
def delete(self, key):
fn = self.fn(key)
try:
os.unlink(fn)
except:
pass
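# Illustrative usage sketch (the directory and key names are examples only):
#   cache = FileCache('/tmp/ga_cache')
#   cache.set('session_abc', 'cached value', expire = 3600)
#   value = cache.get('session_abc')   # returns '' once the entry expires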
class GlobalAuthClient:
wd = { 0 : 'Mon', 1 : 'Tue', 2 : 'Wed', 3 : 'Thu', 4 : 'Fri', 5 : 'Sat', 6 : 'Sun' }
ms = { 1 : 'Jan', 2 : 'Feb', 3 : 'Mar', 4 : 'Apr', 5 : 'May', 6 : 'Jun', 7 : 'Jul', 8 : 'Aug', 9 : 'Sep', 10 : 'Oct', 11 : 'Nov', 12 : 'Dec' }
def __init__(self, server):
self.server = server
self.v = {}
for name, values in server.params().items():
self.v[name] = values[0]
fs = server.FieldStorage()
for name in fs:
self.v[name] = fs[name].value
self.cookies = Cookie.SimpleCookie()
# '' default value is needed here - else we die here under WSGI without any exception O_o
self.cookies.load(self.server.getenv('HTTP_COOKIE', ''))
self.user_name = ''
self.user_url = ''
if not ga_config.gac.get('globalauth_server', '') and not ga_config.gac.get('fof_sudo_server', ''):
raise Exception('ga_config.gac must contain at least globalauth_server="URL" or fof_sudo_server="URL"')
self.gac = {
'cookie_name' : 'simple_global_auth',
'cookie_expire' : 86400*7,
'cookie_path' : '/',
'cookie_domain' : '',
'globalauth_server' : '',
'cache_dir' : os.path.abspath(os.path.dirname(__file__))+'/cache',
'cut_email_at' : 0,
'ga_always_require' : 0,
'fof_sudo_server' : '',
'fof_sudo_cookie' : 'fof_sudo_id',
'gc_probability' : 20,
}
for i in self.gac:
if ga_config.gac.get(i, None) is not None:
self.gac[i] = ga_config.gac[i]
self.cache = FileCache(self.gac['cache_dir'])
def auth(self):
if self.gac['fof_sudo_server']:
self.auth_fof_sudo()
if not self.user_name and self.gac['globalauth_server']:
self.auth_ga()
def auth_ga(self):
i = random.randint(1, self.gac['gc_probability'])
if i == 1:
self.cache.clean()
r_id = self.cookies.get(self.gac['cookie_name'], '')
if r_id:
r_id = r_id.value
ga_id = self.v.get('ga_id', '')
if self.v.get('ga_client', ''):
self.ga_client(r_id, ga_id)
return
r_data = ''
if r_id == 'nologin':
r_data = 'nologin'
elif r_id != '':
r_data = self.cache.get('D'+r_id)
if r_data != 'nologin':
try: r_data = anyjson.deserialize(r_data)
except: r_data = ''
is_browser = re.match('opera|firefox|chrome|safari', self.server.getenv('HTTP_USER_AGENT'), re.I)
if not r_data and (is_browser or self.gac['ga_always_require']) or self.v.get('ga_require', None):
self.ga_begin()
elif r_data and r_data != 'nologin':
self.set_user(r_data)
def ga_client(self, r_id, ga_id):
ga_key = self.v.get('ga_key', '')
if ga_key and ga_key == self.cache.get('K'+ga_id):
# Server-to-server request
self.cache.delete('K'+ga_id)
data = ''
if self.v.get('ga_nologin','') != '':
data = 'nologin'
else:
try: data = anyjson.deserialize(self.v.get('ga_data',''))
except: raise
if data != '':
if data != 'nologin':
data = anyjson.serialize(data)
self.cache.set('D'+ga_id, data)
self.server.header('text/plain')
self.server.write('1')
sys.exit()
elif ga_key == '' and r_id != ga_id:
# User redirect with different key
d = self.cache.get('D'+ga_id)
if d != 'nologin' and d != '':
try: d = anyjson.deserialize(d)
except: d = ''
if d != '':
self.setcookie(ga_id)
self.redirect(self.clean_uri())
self.server.header('text/plain', status=404)
self.server.write('GlobalAuth key doesn\'t match')
sys.exit()
def ga_begin(self):
ga_id = binascii.hexlify(os.urandom(16))
ga_key = binascii.hexlify(os.urandom(16))
url = self.add_param(self.gac['globalauth_server'], '')
try:
resp = urllib2.urlopen(url+'ga_id='+urllib2.quote(ga_id)+'&ga_key='+urllib2.quote(ga_key))
resp.read()
if resp.code != 200:
raise Exception(resp)
except:
self.setcookie('nologin')
self.redirect(self.clean_uri())
return_uri = 'http://'+self.server.getenv('HTTP_HOST')+self.server.getenv('REQUEST_URI')
return_uri = self.add_param(return_uri, 'ga_client=1')
self.cache.set('K'+ga_id, ga_key)
url = url+'ga_id='+urllib2.quote(ga_id)+'&ga_url='+urllib2.quote(return_uri)
if self.v.get('ga_require', '') == '' and not self.gac['ga_always_require']:
url = url+'&ga_check=1'
self.redirect(url)
def add_param(self, url, param):
if url.find('?') != -1:
url = url+'&'
else:
url = url+'?'
return url+param
def auth_fof_sudo(self):
sudo_id = self.cookies.get(self.gac['fof_sudo_cookie'], '')
if sudo_id:
sudo_id = sudo_id.value
if sudo_id != '':
url = self.gac['fof_sudo_server']
if url.find('?') != -1:
url = url+'&'
else:
url = url+'?'
try:
resp = urllib2.urlopen(url+'id='+urllib2.quote(sudo_id))
d = resp.read()
if resp.code != 200:
raise Exception(resp)
d = anyjson.deserialize(d)
self.set_user(d)
except:
pass
def log(self, s):
sys.stderr.write(s+"\n")
sys.stderr.flush()
def redirect(self, url):
self.server.addheader('Location', url)
self.server.header(status='302 Moved Temporarily')
self.server.write('This document is located <a href="%s">here</a>.' % url)
sys.exit()
def setcookie(self, value):
dom = self.gac['cookie_domain']
if not dom:
dom = self.server.getenv('HTTP_HOST')
exp = ''
if self.gac['cookie_expire'] > 0:
tm = int(time.time()+self.gac['cookie_expire'])
tm = datetime.datetime.utcfromtimestamp(tm)
tm = "%s, %02d-%s-%04d %02d:%02d:%02d GMT" % (self.wd[tm.weekday()], tm.day, self.ms[tm.month], tm.year, tm.hour, tm.minute, tm.second)
exp = '; expires='+tm
self.server.addheader('Set-Cookie', "%s=%s; path=%s; domain=%s%s" % (self.gac['cookie_name'], value, self.gac['cookie_path'], dom, exp))
def clean_uri(self):
uriargs = self.v.copy()
for i in [ 'ga_id', 'ga_res', 'ga_key', 'ga_client', 'ga_nologin', 'ga_require' ]:
uriargs.pop(i, None)
uri = self.server.getenv('REQUEST_URI')
p = uri.find('?')
if p != -1:
uri = uri[0:p]
uri = 'http://'+self.server.getenv('HTTP_HOST')+uri+'?'+urllib.urlencode(uriargs)
return uri
def set_user(self, r_data):
r_email = r_data.get('user_email', '').encode('utf-8')
r_url = r_data.get('user_url', '').encode('utf-8')
if self.gac['cut_email_at']:
p = r_email.find('@')
if p != -1:
r_email = r_email[0:p]
self.user_name = r_email
self.user_url = r_url


@@ -1,199 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# idiff: display differences between files highlighting intraline changes
#
# -----------------------------------------------------------------------
from __future__ import generators
import difflib
import sys
import re
from common import _item, _RCSDIFF_NO_CHANGES
import ezt
import sapi
def sidebyside(fromlines, tolines, context):
"""Generate side by side diff"""
### for some reason mdiff chokes on \n's in input lines
line_strip = lambda line: line.rstrip("\n")
fromlines = map(line_strip, fromlines)
tolines = map(line_strip, tolines)
had_changes = 0
gap = False
for fromdata, todata, flag in difflib._mdiff(fromlines, tolines, context):
if fromdata is None and todata is None and flag is None:
gap = True
else:
from_item = _mdiff_split(flag, fromdata)
to_item = _mdiff_split(flag, todata)
had_changes = 1
yield _item(gap=ezt.boolean(gap), columns=(from_item, to_item), type="intraline")
gap = False
if not had_changes:
yield _item(type=_RCSDIFF_NO_CHANGES)
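# Illustrative usage sketch (the file names are hypothetical):
#   left = open('old_version.py').readlines()
#   right = open('new_version.py').readlines()
#   for row in sidebyside(left, right, 3):
#       pass  # each row is an ezt _item with .gap and either .columns or .type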
_re_mdiff = re.compile("\0([+-^])(.*?)\1")
def _mdiff_split(flag, (line_number, text)):
"""Break up row from mdiff output into segments"""
segments = []
pos = 0
while True:
m = _re_mdiff.search(text, pos)
if not m:
segments.append(_item(text=sapi.escape(text[pos:]), type=None))
break
if m.start() > pos:
segments.append(_item(text=sapi.escape(text[pos:m.start()]), type=None))
if m.group(1) == "+":
segments.append(_item(text=sapi.escape(m.group(2)), type="add"))
elif m.group(1) == "-":
segments.append(_item(text=sapi.escape(m.group(2)), type="remove"))
elif m.group(1) == "^":
segments.append(_item(text=sapi.escape(m.group(2)), type="change"))
pos = m.end()
return _item(segments=segments, line_number=line_number)
def unified(fromlines, tolines, context):
"""Generate unified diff"""
diff = difflib.Differ().compare(fromlines, tolines)
lastrow = None
had_changes = 0
for row in _trim_context(diff, context):
if row[0].startswith("? "):
had_changes = 1
yield _differ_split(lastrow, row[0])
lastrow = None
else:
if lastrow:
had_changes = 1
yield _differ_split(lastrow, None)
lastrow = row
if lastrow:
had_changes = 1
yield _differ_split(lastrow, None)
if not had_changes:
yield _item(type=_RCSDIFF_NO_CHANGES)
def _trim_context(lines, context_size):
"""Trim context lines that don't surround changes from Differ results
yields (line, leftnum, rightnum, gap) tuples"""
# circular buffer to hold context lines
context_buffer = [None] * (context_size or 0)
context_start = context_len = 0
# number of context lines left to print after encountering a change
context_owed = 0
# current line numbers
leftnum = rightnum = 0
# whether context lines have been dropped
gap = False
for line in lines:
row = save = None
if line.startswith("- "):
leftnum = leftnum + 1
row = line, leftnum, None
context_owed = context_size
elif line.startswith("+ "):
rightnum = rightnum + 1
row = line, None, rightnum
context_owed = context_size
else:
if line.startswith(" "):
leftnum = leftnum + 1
rightnum = rightnum + 1
if context_owed > 0:
context_owed = context_owed - 1
elif context_size is not None:
save = True
row = line, leftnum, rightnum
if save:
# don't yield row right away, store it in buffer
context_buffer[(context_start + context_len) % context_size] = row
if context_len == context_size:
context_start = (context_start + 1) % context_size
gap = True
else:
context_len = context_len + 1
else:
# yield row, but first drain stuff in buffer
context_len == context_size
while context_len:
yield context_buffer[context_start] + (gap,)
gap = False
context_start = (context_start + 1) % context_size
context_len = context_len - 1
yield row + (gap,)
gap = False
_re_differ = re.compile(r"[+-^]+")
def _differ_split(row, guide):
"""Break row into segments using guide line"""
line, left_number, right_number, gap = row
if left_number and right_number:
type = ""
elif left_number:
type = "remove"
elif right_number:
type = "add"
segments = []
pos = 2
if guide:
assert guide.startswith("? ")
for m in _re_differ.finditer(guide, pos):
if m.start() > pos:
segments.append(_item(text=sapi.escape(line[pos:m.start()]), type=None))
segments.append(_item(text=sapi.escape(line[m.start():m.end()]),
type="change"))
pos = m.end()
segments.append(_item(text=sapi.escape(line[pos:]), type=None))
return _item(gap=ezt.boolean(gap), type=type, segments=segments,
left_number=left_number, right_number=right_number)
try:
### Using difflib._mdiff function here was the easiest way of obtaining
### intraline diffs for use in ViewVC, but it doesn't exist prior to
### Python 2.4 and is not part of the public difflib API, so for now
### fall back if it doesn't exist.
difflib._mdiff
except AttributeError:
sidebyside = None


@@ -99,8 +99,9 @@ __version__ = 1, 6, 1
# is sent to stdout. Or you can call main(args), passing what would
# have been in sys.argv[1:] had the cmd-line form been used.
from compat_difflib import SequenceMatcher
from difflib import SequenceMatcher
import string
TRACE = 0
# define what "junk" means


@@ -1,12 +1,13 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2000-2001 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# For more information, visit http://viewvc.org/
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
@@ -18,62 +19,12 @@
# the arguments.
#
# -----------------------------------------------------------------------
#
import os
import sys
import sapi
import threading
if sys.platform == "win32":
import win32popen
import win32event
import win32process
import debug
import StringIO
def popen(cmd, args, mode, capture_err=1):
if sys.platform == "win32":
command = win32popen.CommandLine(cmd, args)
if mode.find('r') >= 0:
hStdIn = None
if debug.SHOW_CHILD_PROCESSES:
dbgIn, dbgOut = None, StringIO.StringIO()
handle, hStdOut = win32popen.MakeSpyPipe(0, 1, (dbgOut,))
if capture_err:
hStdErr = hStdOut
dbgErr = dbgOut
else:
dbgErr = StringIO.StringIO()
x, hStdErr = win32popen.MakeSpyPipe(None, 1, (dbgErr,))
else:
handle, hStdOut = win32popen.CreatePipe(0, 1)
if capture_err:
hStdErr = hStdOut
else:
hStdErr = win32popen.NullFile(1)
else:
if debug.SHOW_CHILD_PROCESSES:
dbgIn, dbgOut, dbgErr = StringIO.StringIO(), StringIO.StringIO(), StringIO.StringIO()
hStdIn, handle = win32popen.MakeSpyPipe(1, 0, (dbgIn,))
x, hStdOut = win32popen.MakeSpyPipe(None, 1, (dbgOut,))
x, hStdErr = win32popen.MakeSpyPipe(None, 1, (dbgErr,))
else:
hStdIn, handle = win32popen.CreatePipe(0, 1)
hStdOut = None
hStdErr = None
phandle, pid, thandle, tid = win32popen.CreateProcess(command, hStdIn, hStdOut, hStdErr)
if debug.SHOW_CHILD_PROCESSES:
debug.Process(command, dbgIn, dbgOut, dbgErr)
return _pipe(win32popen.File2FileObject(handle, mode), phandle)
# flush the stdio buffers since we are about to change the FD under them
sys.stdout.flush()
sys.stderr.flush()
@@ -84,18 +35,18 @@ def popen(cmd, args, mode, capture_err=1):
# in the parent
# close the descriptor that we don't need and return the other one.
if mode.find('r') >= 0:
if mode == 'r':
os.close(w)
return _pipe(os.fdopen(r, mode), pid)
return _pipe(os.fdopen(r, 'r'), pid)
os.close(r)
return _pipe(os.fdopen(w, mode), pid)
return _pipe(os.fdopen(w, 'w'), pid)
# in the child
# we'll need /dev/null for the discarded I/O
null = os.open('/dev/null', os.O_RDWR)
if mode.find('r') >= 0:
if mode == 'r':
# hook stdout/stderr to the "write" channel
os.dup2(w, 1)
# "close" stdin; the child shouldn't use it
@@ -124,40 +75,99 @@ def popen(cmd, args, mode, capture_err=1):
os.execvp(cmd, (cmd,) + tuple(args))
except:
# aid debugging, if the os.execvp above fails for some reason:
print "<h2>exec failed:</h2><pre>", cmd, ' '.join(args), "</pre>"
import string
print "<h2>exec failed:</h2><pre>", cmd, string.join(args), "</pre>"
raise
# crap. shouldn't be here.
sys.exit(127)
def pipe_cmds(cmds):
# flush the stdio buffers since we are about to change the FD under them
sys.stdout.flush()
sys.stderr.flush()
prev_r, parent_w = os.pipe()
null = os.open('/dev/null', os.O_RDWR)
for cmd in cmds[:-1]:
r, w = os.pipe()
pid = os.fork()
if not pid:
# in the child
# hook up stdin to the "read" channel
os.dup2(prev_r, 0)
# hook up stdout to the output channel
os.dup2(w, 1)
# toss errors
os.dup2(null, 2)
# close these extra descriptors
os.close(prev_r)
os.close(parent_w)
os.close(null)
os.close(r)
os.close(w)
# time to run the command
try:
os.execvp(cmd[0], cmd)
except:
pass
sys.exit(127)
# in the parent
# we don't need these any more
os.close(prev_r)
os.close(w)
# the read channel of this pipe will feed into to the next command
prev_r = r
# no longer needed
os.close(null)
# done with most of the commands. set up the last command to write to stdout
pid = os.fork()
if not pid:
# in the child (the last command)
# hook up stdin to the "read" channel
os.dup2(prev_r, 0)
# close these extra descriptors
os.close(prev_r)
os.close(parent_w)
# run the last command
try:
os.execvp(cmds[-1][0], cmds[-1])
except:
pass
sys.exit(127)
# not needed any more
os.close(prev_r)
# write into the first pipe, wait on the final process
return _pipe(os.fdopen(parent_w, 'w'), pid)
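# Illustrative usage sketch (the commands are examples only): pipe data
# through "sort | uniq", writing into the first command's stdin while the
# last command inherits our stdout:
#   p = pipe_cmds([['sort'], ['uniq']])
#   p.write('b\na\nb\n')
#   status = p.close()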
class _pipe:
"Wrapper for a file which can wait() on a child process at close time."
def __init__(self, file, child_pid, done_event = None, thread = None):
def __init__(self, file, child_pid):
self.file = file
self.child_pid = child_pid
if sys.platform == "win32":
if done_event:
self.wait_for = (child_pid, done_event)
else:
self.wait_for = (child_pid,)
else:
self.thread = thread
def eof(self):
### should be calling file.eof() here instead of file.close(), there
### may be data in the pipe or buffer after the process exits
if sys.platform == "win32":
r = win32event.WaitForMultipleObjects(self.wait_for, 1, 0)
if r == win32event.WAIT_OBJECT_0:
self.file.close()
self.file = None
return win32process.GetExitCodeProcess(self.child_pid)
return None
if self.thread and self.thread.isAlive():
return None
pid, status = os.waitpid(self.child_pid, os.WNOHANG)
if pid:
self.file.close()
@@ -169,22 +179,12 @@ class _pipe:
if self.file:
self.file.close()
self.file = None
if sys.platform == "win32":
win32event.WaitForMultipleObjects(self.wait_for, 1, win32event.INFINITE)
return win32process.GetExitCodeProcess(self.child_pid)
else:
if self.thread:
self.thread.join()
if type(self.child_pid) == type([]):
for pid in self.child_pid:
exit = os.waitpid(pid, 0)[1]
return exit
else:
return os.waitpid(self.child_pid, 0)[1]
return os.waitpid(self.child_pid, 0)[1]
return None
def __getattr__(self, name):
return getattr(self.file, name)
def __del__(self):
self.close()
if self.file:
self.close()

lib/py2html.py (new file)

@@ -0,0 +1,489 @@
#!/usr/local/bin/python -u
""" Python Highlighter for HTML Version: 0.5
py2html.py [options] files...
options:
-h print help
- read from stdin, write to stdout
-stdout read from files, write to stdout
-files read from files, write to filename+'.html' (default)
-format:
html output HTML page (default)
rawhtml output pure HTML (without headers, titles, etc.)
-mode:
color output in color (default)
mono output b/w (for printing)
-title:Title use 'Title' as title of the generated page
-bgcolor:color use color as background-color for page
-header:file use contents of file as header
-footer:file use contents of file as footer
-URL replace all occurrences of 'URL: link' with
'<A HREF="link">link</A>'; this is always enabled
in CGI mode
-v verbose
Takes the input, assuming it is Python code and formats it into
colored HTML. When called without parameters the script tries to
work in CGI mode. It looks for a field 'script=URL' and tries to
use that URL as input file. If it can't find this field, the path
info (the part of the URL following the CGI script name) is
tried. In case no host is given, the host where the CGI script
lives and HTTP are used.
* Uses Just van Rossum's PyFontify version 0.3 to tag Python scripts.
You can get it via his homepage on starship:
URL: http://starship.skyport.net/crew/just
"""
__comments__ = """
The following snippet is a small shell script I use for viewing
Python scripts per less on Unix:
#!/bin/sh
# Browse pretty printed Python code using ANSI codes for highlighting
py2html -stdout -format:ansi -mode:mono $* | less -r
History:
0.5: Added a few suggestions by Kevin Ng to make the CGI version
a little more robust.
"""
__copyright__ = """
-----------------------------------------------------------------------------
(c) Copyright by Marc-Andre Lemburg, 1998 (mailto:mal@lemburg.com)
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee or royalty is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation or portions thereof, including modifications,
that you make.
THE AUTHOR MARC-ANDRE LEMBURG DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
WITH THE USE OR PERFORMANCE OF THIS SOFTWARE !
"""
__version__ = '0.5'
__cgifooter__ = ('\n<PRE># code highlighted using <A HREF='
'"http://starship.skyport.net/~lemburg/">py2html.py</A> '
'version %s</PRE>\n' % __version__)
import sys,string,re
# Adjust path so that PyFontify is found...
sys.path.append('.')
### Constants
# URL of the input form the user is redirected to in case no script=xxx
# form field is given. The URL *must* be absolute. Leave blank to
# have the script issue an error instead.
INPUT_FORM = 'http://starship.skyport.net/~lemburg/SoftwareDescriptions.html#py2html.py'
### Helpers
def fileio(file, mode='rb', data=None, close=0):
if type(file) == type(''):
f = open(file,mode)
close = 1
else:
f = file
if data:
f.write(data)
else:
data = f.read()
if close: f.close()
return data
### Converter class
class PrettyPrint:
""" generic Pretty Printer class
* supports tagging Python scripts in the following ways:
# format/mode | color mono
# --------------------------
# rawhtml | x x (HTML without headers, etc.)
# html | x x (a HTML page with HEAD&BODY:)
# ansi | x (with Ansi-escape sequences)
* interfaces:
file_filter -- takes two files: input & output (may be stdin/stdout)
filter -- takes a string and returns the highlighted version
* to create an instance use:
c = PrettyPrint(tagfct,format,mode)
where format and mode must be strings according to the
above table if you plan to use PyFontify.fontify as
tagfct
* the tagfct has to take one argument, text, and return a taglist
(format: [(id,left,right,sublist),...], where id is the
"name" given to the slice left:right in text and sublist is a
taglist for tags inside the slice or None)
"""
# misc settings
title = ''
bgcolor = '#FFFFFF'
header = ''
footer = ''
replace_URLs = 0
# formats to be used
formats = {}
def __init__(self,tagfct=None,format='html',mode='color'):
self.tag = tagfct
self.set_mode = getattr(self,'set_mode_'+format+'_'+mode)
self.filter = getattr(self,'filter_'+format)
self.set_mode()
def file_filter(self,infile,outfile):
text = fileio(infile,'r')
if type(infile) == type('') and self.title == '':
self.title = infile
fileio(outfile,'w',self.filter(text))
### set pre- and postfixes for formats & modes
#
# These methods must set self.formats to a dictionary having
# an entry for every tag returned by the tagging function.
#
# The format used is simple:
# tag:(prefix,postfix)
# where prefix and postfix are either strings or callable objects,
# that return a string (they are called with the matching tag text
# as only parameter). prefix is inserted in front of the tag, postfix
# is inserted right after the tag.
def set_mode_html_color(self):
self.formats = {
'all':('<PRE>','</PRE>'),
'comment':('<FONT COLOR=#1111CC>','</FONT>'),
'keyword':('<FONT COLOR=#3333CC><B>','</B></FONT>'),
'parameter':('<FONT COLOR=#000066>','</FONT>'),
'identifier':( lambda x,strip=string.strip:
'<A NAME="%s"><FONT COLOR=#CC0000><B>' % (strip(x)),
'</B></FONT></A>'),
'string':('<FONT COLOR=#115511>','</FONT>')
}
set_mode_rawhtml_color = set_mode_html_color
def set_mode_html_mono(self):
self.formats = {
'all':('<PRE>','</PRE>'),
'comment':('',''),
'keyword':( '<U>','</U>'),
'parameter':('',''),
'identifier':( lambda x,strip=string.strip:
'<A NAME="%s"><B>' % (strip(x)),
'</B>'),
'string':('','')
}
set_mode_rawhtml_mono = set_mode_html_mono
def set_mode_ansi_mono(self):
self.formats = {
'all':('',''),
'comment':('\033[2m','\033[m'),
'keyword':('\033[4m','\033[m'),
'parameter':('',''),
'identifier':('\033[1m','\033[m'),
'string':('','')
}
### filter for Python scripts given as string
def escape_html(self,text):
t = (('<','&lt;'),('>','&gt;'))
for x,y in t:
text = string.join(string.split(text,x),y)
return text
def filter_html(self,text):
output = self.fontify(self.escape_html(text))
if self.replace_URLs:
output = re.sub('URL:([ \t]+)([^ \n\r<]+)',
'URL:\\1<A HREF="\\2">\\2</A>',output)
html = """<HTML><HEAD><TITLE>%s</TITLE></HEAD>
<BODY BGCOLOR=%s>
<!--header-->%s
<!--script-->%s
<!--footer-->%s
</BODY>\n"""%(self.title,self.bgcolor,self.header,output,self.footer)
return html
def filter_rawhtml(self,text):
output = self.fontify(self.escape_html(text))
if self.replace_URLs:
output = re.sub('URL:([ \t]+)([^ \n\r<]+)',
'URL:\\1<A HREF="\\2">\\2</A>',output)
return self.header+output+self.footer
def filter_ansi(self,text):
output = self.fontify(text)
return self.header+output+self.footer
### fontify engine
def fontify(self,pytext):
# parse
taglist = self.tag(pytext)
# prepend special 'all' tag:
taglist[:0] = [('all',0,len(pytext),None)]
# prepare splitting
splits = []
addsplits(splits,pytext,self.formats,taglist)
# do splitting & inserting
splits.sort()
l = []
li = 0
for ri,dummy,insert in splits:
if ri > li: l.append(pytext[li:ri])
l.append(insert)
li = ri
if li < len(pytext): l.append(pytext[li:])
return string.join(l,'')
def addsplits(splits,text,formats,taglist):
# helper for fontify()
for id,left,right,sublist in taglist:
try:
pre,post = formats[id]
except KeyError:
# sys.stderr.write('Warning: no format for %s specified\n'%repr(id))
pre,post = '',''
if type(pre) != type(''):
pre = pre(text[left:right])
if type(post) != type(''):
post = post(text[left:right])
# len(splits) is a dummy used to make sorting stable
splits.append((left,len(splits),pre))
if sublist:
addsplits(splits,text,formats,sublist)
splits.append((right,len(splits),post))
def write_html_error(titel,text):
print """\
<HTML><HEADER><TITLE>%s</TITLE></HEADER>
<BODY>
<H2>%s</H2>
%s
</BODY></HTML>
""" % (titel,titel,text)
def redirect_to(url):
sys.stdout.write('Content-Type: text/html\r\n')
sys.stdout.write('Status: 302\r\n')
sys.stdout.write('Location: %s\r\n\r\n' % url)
print """
<HTML><HEAD>
<TITLE>302 Moved Temporarily</TITLE>
</HEAD><BODY>
<H1>302 Moved Temporarily</H1>
The document has moved to <A HREF="%s">%s</A>.<P>
</BODY></HTML>
""" % (url,url)
def main(cmdline):
""" main(cmdline) -- process cmdline as if it were sys.argv
"""
# parse options/files
options = []
optvalues = {}
for o in cmdline[1:]:
if o[0] == '-':
if ':' in o:
k,v = tuple(string.split(o,':'))
optvalues[k] = v
options.append(k)
else:
options.append(o)
else:
break
files = cmdline[len(options)+1:]
# create converting object
# load fontifier
if '-marcs' in options:
# use mxTextTool's tagging engine
from mxTextTools import tag
from mxTextTools.Examples.Python import python_script
tagfct = lambda text,tag=tag,pytable=python_script: \
tag(text,pytable)[1]
print "Py2HTML: using Marc's tagging engine"
else:
# load Just's
try:
import PyFontify
if PyFontify.__version__ < '0.3': raise ValueError
tagfct = PyFontify.fontify
except:
print """
Sorry, but this script needs the PyFontify.py module version 0.3;
You can download it from Just's homepage at
URL: http://starship.skyport.net/crew/just
"""
sys.exit()
if '-format' in options:
format = optvalues['-format']
else:
# use default
format = 'html'
if '-mode' in options:
mode = optvalues['-mode']
else:
# use default
mode = 'color'
c = PrettyPrint(tagfct,format,mode)
convert = c.file_filter
# start working
if '-title' in options:
c.title = optvalues['-title']
if '-bgcolor' in options:
c.bgcolor = optvalues['-bgcolor']
if '-header' in options:
try:
f = open(optvalues['-header'])
c.header = f.read()
f.close()
except IOError:
if verbose: print 'IOError: header file not found'
if '-footer' in options:
try:
f = open(optvalues['-footer'])
c.footer = f.read()
f.close()
except IOError:
if verbose: print 'IOError: footer file not found'
if '-URL' in options:
c.replace_URLs = 1
if '-' in options:
convert(sys.stdin,sys.stdout)
sys.exit()
if '-h' in options:
print __doc__
sys.exit()
if len(files) == 0:
# Turn URL processing on
c.replace_URLs = 1
# Try CGI processing...
import cgi,urllib,urlparse,os
form = cgi.FieldStorage()
if not form.has_key('script'):
# Ok, then try pathinfo
if not os.environ.has_key('PATH_INFO'):
if INPUT_FORM:
redirect_to(INPUT_FORM)
else:
sys.stdout.write('Content-Type: text/html\r\n\r\n')
write_html_error('Missing Parameter',
'Missing script=URL field in request')
sys.exit(1)
url = os.environ['PATH_INFO'][1:] # skip the leading slash
else:
url = form['script'].value
sys.stdout.write('Content-Type: text/html\r\n\r\n')
scheme, host, path, params, query, frag = urlparse.urlparse(url)
if not host:
scheme = 'http'
if os.environ.has_key('HTTP_HOST'):
host = os.environ['HTTP_HOST']
else:
host = 'localhost'
url = urlparse.urlunparse((scheme, host, path, params, query, frag))
#print url; sys.exit()
network = urllib.URLopener()
try:
tempfile,headers = network.retrieve(url)
except IOError,reason:
write_html_error('Error opening "%s"' % url,
'The given URL could not be opened. Reason: %s' %\
str(reason))
sys.exit(1)
f = open(tempfile,'rb')
c.title = url
c.footer = __cgifooter__
convert(f,sys.stdout)
f.close()
network.close()
sys.exit()
if '-stdout' in options:
filebreak = '-'*72
for f in files:
try:
if len(files) > 1:
print filebreak
print 'File:',f
print filebreak
convert(f,sys.stdout)
except IOError:
pass
else:
verbose = ('-v' in options)
if verbose:
print 'Py2HTML: working on',
for f in files:
try:
if verbose: print f,
convert(f,f+'.html')
except IOError:
if verbose: print '(IOError!)',
if verbose:
print
print 'Done.'
if __name__=='__main__':
main(sys.argv)


@@ -1,154 +0,0 @@
# -*- coding: utf-8 -*-
"""
pygments.lexers.sp4
~~~~~~~~~~~~~~~~~~~~~
Lexer for CustIS m4-preprocessed PL/SQL.
:copyright: 2009+ by Vitaliy Filippov <vitalif@mail.ru>.
:license: BSD, see LICENSE for more details.
,
'Sp4Lexer': ('pygments.lexers.sp4', 'SP4', ('sp4',), ('*.sp4'), ('text/x-sp4',))
"""
import re
from pygments.lexer import Lexer, RegexLexer, include, bygroups, using, this, \
do_insertions
from pygments.token import Error, Punctuation, \
Text, Comment, Operator, Keyword, Name, String, Number, Generic
from pygments.util import shebang_matches
__all__ = ['Sp4Lexer']
line_re = re.compile('.*?\n')
class Sp4Lexer(RegexLexer):
name = 'SP4'
aliases = ['sp4']
filenames = ['*.sp4']
mimetypes = ['text/x-sp4']
flags = re.IGNORECASE
tokens = {
'root': [
(r'\s+', Text),
(r'--.*?\n', Comment.Single),
(r'/\*', Comment.Multiline, 'multiline-comments'),
(r'(ABORT|ABS|ABSOLUTE|ACCESS|ADA|ADD|ADMIN|AFTER|AGGREGATE|'
r'ALIAS|ALL|ALLOCATE|ALTER|ANALYSE|ANALYZE|AND|ANY|ARE|AS|'
r'ASC|ASENSITIVE|ASSERTION|ASSIGNMENT|ASYMMETRIC|AT|ATOMIC|'
r'AUTHORIZATION|AVG|BACKWARD|BEFORE|BEGIN|BETWEEN|BITVAR|'
r'BIT_LENGTH|BOTH|BREADTH|BY|C|CACHE|CALL|CALLED|CARDINALITY|'
r'CASCADE|CASCADED|CASE|CAST|CATALOG|CATALOG_NAME|CHAIN|'
r'CHARACTERISTICS|CHARACTER_LENGTH|CHARACTER_SET_CATALOG|'
r'CHARACTER_SET_NAME|CHARACTER_SET_SCHEMA|CHAR_LENGTH|CHECK|'
r'CHECKED|CHECKPOINT|CLASS|CLASS_ORIGIN|CLOB|CLOSE|CLUSTER|'
r'COALSECE|COBOL|COLLATE|COLLATION|COLLATION_CATALOG|'
r'COLLATION_NAME|COLLATION_SCHEMA|COLUMN|COLUMN_NAME|'
r'COMMAND_FUNCTION|COMMAND_FUNCTION_CODE|COMMENT|COMMIT|'
r'COMMITTED|COMPLETION|CONDITION_NUMBER|CONNECT|CONNECTION|'
r'CONNECTION_NAME|CONSTRAINT|CONSTRAINTS|CONSTRAINT_CATALOG|'
r'CONSTRAINT_NAME|CONSTRAINT_SCHEMA|CONSTRUCTOR|CONTAINS|'
r'CONTINUE|CONVERSION|CONVERT|COPY|CORRESPONTING|COUNT|'
r'CREATE|CREATEDB|CREATEUSER|CROSS|CUBE|CURRENT|CURRENT_DATE|'
r'CURRENT_PATH|CURRENT_ROLE|CURRENT_TIME|CURRENT_TIMESTAMP|'
r'CURRENT_USER|CURSOR|CURSOR_NAME|CYCLE|DATA|DATABASE|'
r'DATETIME_INTERVAL_CODE|DATETIME_INTERVAL_PRECISION|DAY|'
r'DEALLOCATE|DECLARE|DEFAULT|DEFAULTS|DEFERRABLE|DEFERRED|'
r'DEFINED|DEFINER|DELETE|DELIMITER|DELIMITERS|DEREF|DESC|'
r'DESCRIBE|DESCRIPTOR|DESTROY|DESTRUCTOR|DETERMINISTIC|'
r'DIAGNOSTICS|DICTIONARY|DISCONNECT|DISPATCH|DISTINCT|DO|'
r'DOMAIN|DROP|DYNAMIC|DYNAMIC_FUNCTION|DYNAMIC_FUNCTION_CODE|'
r'EACH|ELSE|ENCODING|ENCRYPTED|END|END-EXEC|EQUALS|ESCAPE|EVERY|'
r'EXCEPT|ESCEPTION|EXCLUDING|EXCLUSIVE|EXEC|EXECUTE|EXISTING|'
r'EXISTS|EXPLAIN|EXTERNAL|EXTRACT|FALSE|FETCH|FINAL|FIRST|FOR|'
r'FORCE|FOREIGN|FORTRAN|FORWARD|FOUND|FREE|FREEZE|FROM|FULL|'
r'FUNCTION|G|GENERAL|GENERATED|GET|GLOBAL|GO|GOTO|GRANT|GRANTED|'
r'GROUP|GROUPING|HANDLER|HAVING|HIERARCHY|HOLD|HOST|IDENTITY|'
r'IGNORE|ILIKE|IMMEDIATE|IMMUTABLE|IMPLEMENTATION|IMPLICIT|IN|'
r'INCLUDING|INCREMENT|INDEX|INDITCATOR|INFIX|INHERITS|INITIALIZE|'
r'INITIALLY|INNER|INOUT|INPUT|INSENSITIVE|INSERT|INSTANTIABLE|'
r'INSTEAD|INTERSECT|INTO|INVOKER|IS|ISNULL|ISOLATION|ITERATE|JOIN|'
r'K|KEY|KEY_MEMBER|KEY_TYPE|LANCOMPILER|LANGUAGE|LARGE|LAST|'
r'LATERAL|LEADING|LEFT|LENGTH|LESS|LEVEL|LIKE|LILMIT|LISTEN|LOAD|'
r'LOCAL|LOCALTIME|LOCALTIMESTAMP|LOCATION|LOCATOR|LOCK|LOWER|M|'
r'MAP|MATCH|MAX|MAXVALUE|MESSAGE_LENGTH|MESSAGE_OCTET_LENGTH|'
r'MESSAGE_TEXT|METHOD|MIN|MINUTE|MINVALUE|MOD|MODE|MODIFIES|'
r'MODIFY|MONTH|MORE|MOVE|MUMPS|NAMES|NATIONAL|NATURAL|NCHAR|'
r'NCLOB|NEW|NEXT|NO|NOCREATEDB|NOCREATEUSER|NONE|NOT|NOTHING|'
r'NOTIFY|NOTNULL|NULL|NULLABLE|NULLIF|OBJECT|OCTET_LENGTH|OF|OFF|'
r'OFFSET|OIDS|OLD|ON|ONLY|OPEN|OPERATION|OPERATOR|OPTION|OPTIONS|'
r'OR|ORDER|ORDINALITY|OUT|OUTER|OUTPUT|OVERLAPS|OVERLAY|OVERRIDING|'
r'OWNER|PAD|PARAMETER|PARAMETERS|PARAMETER_MODE|PARAMATER_NAME|'
r'PARAMATER_ORDINAL_POSITION|PARAMETER_SPECIFIC_CATALOG|'
r'PARAMETER_SPECIFIC_NAME|PARAMATER_SPECIFIC_SCHEMA|PARTIAL|'
r'PASCAL|PENDANT|PLACING|PLI|POSITION|POSTFIX|PRECISION|PREFIX|'
r'PREORDER|PREPARE|PRESERVE|PRIMARY|PRIOR|PRIVILEGES|PROCEDURAL|'
r'PROCEDURE|PUBLIC|READ|READS|RECHECK|RECURSIVE|REF|REFERENCES|'
r'REFERENCING|REINDEX|RELATIVE|RENAME|REPEATABLE|REPLACE|RESET|'
r'RESTART|RESTRICT|RESULT|RETURN|RETURNED_LENGTH|'
r'RETURNED_OCTET_LENGTH|RETURNED_SQLSTATE|RETURNS|REVOKE|RIGHT|'
r'ROLE|ROLLBACK|ROLLUP|ROUTINE|ROUTINE_CATALOG|ROUTINE_NAME|'
r'ROUTINE_SCHEMA|ROW|ROWS|ROW_COUNT|RULE|SAVE_POINT|SCALE|SCHEMA|'
r'SCHEMA_NAME|SCOPE|SCROLL|SEARCH|SECOND|SECURITY|SELECT|SELF|'
r'SENSITIVE|SERIALIZABLE|SERVER_NAME|SESSION|SESSION_USER|SET|'
r'SETOF|SETS|SHARE|SHOW|SIMILAR|SIMPLE|SIZE|SOME|SOURCE|SPACE|'
r'SPECIFIC|SPECIFICTYPE|SPECIFIC_NAME|SQL|SQLCODE|SQLERROR|'
r'SQLEXCEPTION|SQLSTATE|SQLWARNINIG|STABLE|START|STATE|STATEMENT|'
r'STATIC|STATISTICS|STDIN|STDOUT|STORAGE|STRICT|STRUCTURE|STYPE|'
r'SUBCLASS_ORIGIN|SUBLIST|SUBSTRING|SUM|SYMMETRIC|SYSID|SYSTEM|'
r'SYSTEM_USER|TABLE|TABLE_NAME| TEMP|TEMPLATE|TEMPORARY|TERMINATE|'
r'THAN|THEN|TIMESTAMP|TIMEZONE_HOUR|TIMEZONE_MINUTE|TO|TOAST|'
r'TRAILING|TRANSATION|TRANSACTIONS_COMMITTED|'
r'TRANSACTIONS_ROLLED_BACK|TRANSATION_ACTIVE|TRANSFORM|'
r'TRANSFORMS|TRANSLATE|TRANSLATION|TREAT|TRIGGER|TRIGGER_CATALOG|'
r'TRIGGER_NAME|TRIGGER_SCHEMA|TRIM|TRUE|TRUNCATE|TRUSTED|TYPE|'
r'UNCOMMITTED|UNDER|UNENCRYPTED|UNION|UNIQUE|UNKNOWN|UNLISTEN|'
r'UNNAMED|UNNEST|UNTIL|UPDATE|UPPER|USAGE|USER|'
r'USER_DEFINED_TYPE_CATALOG|USER_DEFINED_TYPE_NAME|'
r'USER_DEFINED_TYPE_SCHEMA|USING|VACUUM|VALID|VALIDATOR|VALUES|'
r'VARIABLE|VERBOSE|VERSION|VIEW|VOLATILE|WHEN|WHENEVER|WHERE|'
r'WITH|WITHOUT|WORK|WRITE|YEAR|ZONE|BINARY_INTEGER|BODY|BREAK|'
r'CONSTANT|DEFINITION|DELETING|DLOB|DUMMY|ELSIF|ERRLVL|ERROR|'
r'EXCEPTION|EXIT|IF|INSERTING|ISOPEN|LOOP|NO_DATA_FOUND|NOTFOUND|'
r'OTHERS|PACKAGE|PRAGMA|RAISE|RECORD|RESTRICT_REFERENCES|RETURNING|'
r'RNDS|RNPS|ROWCOUNT|ROWNUM|ROWTYPE|SAVEPOINT|SQLERRM|TOO_MANY_ROWS|'
r'UPDATING|WHILE|WNDS|WNPS|BULK|CHR|COLLECT|GREATEST|INSTR|LEAST|'
r'LENGTHB|NOCOPY|NOWAIT|NVL|RPAD|SUBST|SUBSTB|SUBSTR|SYSDATE|'
r'TABLESPACE|TO_CHAR|TO_DATE|TO_NUMBER|TRUNC)\b', Keyword),
(r'(ARRAY|BIGINT|BINARY|BIT|BLOB|BOOLEAN|CHAR|CHARACTER|DATE|'
r'DEC|DECIMAL|FLOAT|INT|INTEGER|INTERVAL|NUMBER|NUMERIC|REAL|'
r'SERIAL|SMALLINT|VARCHAR|VARCHAR2|VARYING|INT8|SERIAL8|TEXT)\b',
Name.Builtin),
(r'(_5date_args|_actual_area|_ASSERT|_doc_code|_doc_var|_doc_var_list|'
r'_enforce|_ent|_enum|_enums_define|_enums_member_define|_field|'
r'_fields|_first|_FMT|_in_list|_index|_index_unique|_indexes'
r'_MAX_STRLEN|_MID_STRLEN|_param|_package|_pk|_plsql_param|_plsql_return|'
r'_private_function|_private_procedure|_private_type|_private_var|'
r'_protected_function|_protected_procedure|_protected_type|_protected_var|'
r'_public_function|_public_procedure|_public_type|_public_var|_quote|_RAISE|'
r'_RAISEW|_reference|_references|_request_arg|_request_define|'
r'_request_res|_STR_CONST|_substr|_table|_tail|builtin|cdr|changecom|'
r'changequote|changeword|debugfile|decr|define|divert|dummy_list|dumpdef|'
r'errprint|eval|foreach|forloop|GetNth|ifelse|include|incr|index|indir|'
r'len|lowcase|maketemp|Nth|patsubst|popdef|pushdef|regexp|'
r'remove_prefix|shift|sinclude|Size|syscmd|TRACE|traceoff|traceon|'
r'translit|undefine|undivert|upcase)\b',
Name.Function),
(r'[+*/<>=~!@#%^&|`?^-]', Operator),
(r'[0-9]+', Number.Integer),
(r"'(''|[^'\\]|\\\'|\\\\)*'", String.Single),
(r'"(""|[^"\\]|\\\"|\\\\)*"', String.Symbol),
(r'[a-zA-Z_][a-zA-Z0-9_]*', Name),
(r'[;:()\[\],\.]', Punctuation)
],
'multiline-comments': [
(r'/\*', Comment.Multiline, 'multiline-comments'),
(r'\*/', Comment.Multiline, '#pop'),
(r'[^/\*]+', Comment.Multiline),
(r'[/*]', Comment.Multiline)
]
}
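# Hedged note (not part of the original lexer): pushing 'multiline-comments'
# back onto the state stack on every '/*' is what lets this lexer cope with
# nested comments; each '*/' pops exactly one level, so an input such as
#   /* outer /* inner */ still a comment */
# is highlighted as Comment.Multiline from start to finish.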

View File

@@ -1,35 +1,47 @@
#!/usr/bin/env python
# -*-python-*-
#!/usr/bin/python
# -*- Mode: python -*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2000 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://www.lyra.org/viewcvs/license-1.html.
#
# For more information, visit http://viewvc.org/
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://www.lyra.org/viewcvs/
#
# -----------------------------------------------------------------------
#
# CGI script to process and display queries to CVSdb
#
# This script is part of the ViewVC package. More information can be
# found at http://viewvc.org
# This script is part of the ViewCVS package. More information can be
# found at http://www.lyra.org/viewcvs/.
#
# -----------------------------------------------------------------------
#
#########################################################################
#
# INSTALL-TIME CONFIGURATION
#
# These values will be set during the installation process. During
# development, they will remain None.
#
CONF_PATHNAME = None
#########################################################################
import os
import sys
import string
import cgi
import time
from common import _item, TemplateData
import cvsdb
import viewvc
import viewcvs
import ezt
import debug
import urllib
import fnmatch
class FormData:
def __init__(self, form):
@@ -41,7 +53,6 @@ class FormData:
self.file = ""
self.who = ""
self.sortby = ""
self.textquery = ""
self.date = ""
self.hours = 0
@@ -49,7 +60,7 @@ class FormData:
def decode_thyself(self, form):
try:
self.textquery = string.strip(form["textquery"].value)
self.repository = string.strip(form["repository"].value)
except KeyError:
pass
except TypeError:
@@ -58,16 +69,7 @@ class FormData:
self.valid = 1
try:
self.repository = form["repository"].value.strip()
except KeyError:
pass
except TypeError:
pass
else:
self.valid = 1
try:
self.branch = form["branch"].value.strip()
self.branch = string.strip(form["branch"].value)
except KeyError:
pass
except TypeError:
@@ -76,7 +78,7 @@ class FormData:
self.valid = 1
try:
self.directory = form["directory"].value.strip()
self.directory = string.strip(form["directory"].value)
except KeyError:
pass
except TypeError:
@@ -85,7 +87,7 @@ class FormData:
self.valid = 1
try:
self.file = form["file"].value.strip()
self.file = string.strip(form["file"].value)
except KeyError:
pass
except TypeError:
@@ -94,7 +96,7 @@ class FormData:
self.valid = 1
try:
self.who = form["who"].value.strip()
self.who = string.strip(form["who"].value)
except KeyError:
pass
except TypeError:
@@ -103,14 +105,14 @@ class FormData:
self.valid = 1
try:
self.sortby = form["sortby"].value.strip()
self.sortby = string.strip(form["sortby"].value)
except KeyError:
pass
except TypeError:
pass
try:
self.date = form["date"].value.strip()
self.date = string.strip(form["date"].value)
except KeyError:
pass
except TypeError:
@@ -169,7 +171,7 @@ def listparse_string(str):
## command; add the command and start over
elif c == ",":
## strip ending whitespace on un-quoted data
temp = temp.rstrip()
temp = string.rstrip(temp)
return_list.append( ("", temp) )
temp = ""
state = "eat leading whitespace"
@@ -228,9 +230,8 @@ def decode_command(cmd):
else:
return "exact"
def form_to_cvsdb_query(cfg, form_data):
def form_to_cvsdb_query(form_data):
query = cvsdb.CreateCheckinQuery()
query.SetLimit(cfg.cvsdb.row_limit)
if form_data.repository:
for cmd, str in listparse_string(form_data.repository):
@@ -257,15 +258,10 @@ def form_to_cvsdb_query(cfg, form_data):
cmd = decode_command(cmd)
query.SetAuthor(str, cmd)
if form_data.textquery:
query.SetTextQuery(form_data.textquery)
if form_data.sortby == "author":
query.SetSortMethod("author")
elif form_data.sortby == "file":
query.SetSortMethod("file")
elif form_data.sortby == "relevance" and form_data.textquery:
query.SetSortMethod("relevance")
else:
query.SetSortMethod("date")
@@ -281,107 +277,43 @@ def form_to_cvsdb_query(cfg, form_data):
return query
def prev_rev(rev):
'''Returns a string representing the previous revision of the argument.'''
r = rev.split('.')
# decrement final revision component
r[-1] = str(int(r[-1]) - 1)
# prune if we pass the beginning of the branch
if len(r) > 2 and r[-1] == '0':
r = r[:-2]
return '.'.join(r)
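## Hedged worked examples (not part of the original script) of prev_rev():
##   prev_rev('1.5')     -> '1.4'    # simple decrement of the last component
##   prev_rev('1.3.2.1') -> '1.3'    # stepping past the branchpoint prunes '.2.1'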
def cvsroot_name_from_path(cvsroot):
## we need to resolve the cvsroot path from the database
## to the name given to it in the viewcvs.conf file
for key, value in cfg.general.cvs_roots.items():
if value == cvsroot:
return key
def is_forbidden(cfg, cvsroot_name, module):
'''Return 1 if MODULE in CVSROOT_NAME is forbidden; return 0 otherwise.'''
return None
# CVSROOT_NAME might be None here if the data comes from an
# unconfigured root. This interface doesn't care that the root
# isn't configured, but if that's the case, it will consult only
# the base and per-vhost configuration for authorizer and
# authorizer parameters.
if cvsroot_name:
cfg = cfg.get_root_config(cvsroot_name)
authorizer = cfg.options.authorizer
params = cfg.get_authorizer_params()
# If CVSROOT_NAME isn't configured to use an authorizer, nothing
# is forbidden. If it's configured to use something other than
# the 'forbidden' authorizer, complain. Otherwise, check for
# forbiddenness per the PARAMS as expected.
if not authorizer:
return 0
if authorizer != 'forbidden':
raise Exception("The 'forbidden' authorizer is the only one supported "
"by this interface. The '%s' root is configured to "
"use a different one." % (cvsroot_name))
forbidden = params.get('forbidden', '')
forbidden = map(lambda x: x.strip(), filter(None, forbidden.split(',')))
default = 0
for pat in forbidden:
if pat[0] == '!':
default = 1
if fnmatch.fnmatchcase(module, pat[1:]):
return 0
elif fnmatch.fnmatchcase(module, pat):
return 1
return default
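## Hedged illustration (not part of the original script): with the 'forbidden'
## authorizer configured as forbidden = "!public*", is_forbidden() returns 0
## for module 'public_www' (the negated pattern matches, so it stays visible)
## and 1 for module 'secret' (nothing matches, but the '!' entry flipped the
## default to forbidden).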
def build_commit(server, cfg, desc, files, cvsroots, viewvc_link):
def build_commit(desc, files):
ob = _item(num_files=len(files), files=[])
ob.log = desc and server.escape(desc).replace('\n', '<br />') or '&nbsp;'
if desc:
ob.desc = string.replace(cgi.escape(desc), '\n', '<br>')
else:
ob.desc = '&nbsp;'
for commit in files:
repository = commit.GetRepository()
directory = commit.GetDirectory()
cvsroot_name = cvsroots.get(repository)
## find the module name (if any)
try:
module = filter(None, directory.split('/'))[0]
except IndexError:
module = None
## skip commits we aren't supposed to show
if module and ((module == 'CVSROOT' and cfg.options.hide_cvsroot) \
or is_forbidden(cfg, cvsroot_name, module)):
continue
ctime = commit.GetTime()
if not ctime:
ctime = "&nbsp;"
ctime = "&nbsp";
else:
if (cfg.options.use_localtime):
ctime = time.strftime("%y/%m/%d %H:%M %Z", time.localtime(ctime))
else:
ctime = time.strftime("%y/%m/%d %H:%M", time.gmtime(ctime)) \
+ ' UTC'
ctime = time.strftime("%y/%m/%d %H:%M", time.localtime(ctime))
## make the file link
try:
file = (directory and directory + "/") + commit.GetFile()
except:
raise Exception, str([directory, commit.GetFile()])
file = os.path.join(commit.GetDirectory(), commit.GetFile())
file_full_path = os.path.join(commit.GetRepository(), file)
## If we couldn't find the cvsroot path configured in the
## viewvc.conf file, or we don't have a VIEWVC_LINK, then
## don't make the link.
if cvsroot_name and viewvc_link:
flink = '[%s] <a href="%s/%s?root=%s">%s</a>' % (
cvsroot_name, viewvc_link, urllib.quote(file),
cvsroot_name, file)
if commit.GetType() == commit.CHANGE:
dlink = '%s/%s?root=%s&amp;view=diff&amp;r1=%s&amp;r2=%s' % (
viewvc_link, urllib.quote(file), cvsroot_name,
prev_rev(commit.GetRevision()), commit.GetRevision())
else:
dlink = None
## if we couldn't find the cvsroot path configured in the
## viewcvs.conf file, then don't make the link
cvsroot_name = cvsroot_name_from_path(commit.GetRepository())
if cvsroot_name:
flink = '<a href="viewcvs.cgi/%s?cvsroot=%s">%s</a>' \
% (file, cvsroot_name, file_full_path)
else:
flink = '[%s] %s' % (repository, file)
dlink = None
flink = file_full_path
ob.relevance = commit.GetRelevance()
ob.plus += int(commit.GetPlusCount())
ob.minus += int(commit.GetMinusCount())
ob.files.append(_item(date=ctime,
author=commit.GetAuthor(),
link=flink,
@@ -389,106 +321,111 @@ def build_commit(server, cfg, desc, files, cvsroots, viewvc_link):
branch=commit.GetBranch(),
plus=int(commit.GetPlusCount()),
minus=int(commit.GetMinusCount()),
type=commit.GetTypeString(),
difflink=dlink,
))
return ob
def run_query(server, cfg, db, form_data, viewvc_link):
query = form_to_cvsdb_query(cfg, form_data)
def run_query(form_data):
query = form_to_cvsdb_query(form_data)
db = cvsdb.ConnectDatabaseReadOnly()
db.RunQuery(query)
commit_list = query.GetCommitList()
if not commit_list:
return [ ], 0
row_limit_reached = query.GetLimitReached()
if not query.commit_list:
return [ ]
commits = [ ]
files = [ ]
cvsroots = {}
viewvc.expand_root_parents(cfg)
rootitems = cfg.general.svn_roots.items() + cfg.general.cvs_roots.items()
for key, value in rootitems:
cvsroots[cvsdb.CleanRepository(value)] = key
current_desc = commit_list[0].GetDescription()
for commit in commit_list:
current_desc = query.commit_list[0].GetDescription()
for commit in query.commit_list:
desc = commit.GetDescription()
if current_desc == desc:
files.append(commit)
continue
commits.append(build_commit(server, cfg, current_desc, files,
cvsroots, viewvc_link))
commits.append(build_commit(current_desc, files))
files = [ commit ]
current_desc = desc
## add the last file group to the commit list
commits.append(build_commit(server, cfg, current_desc, files,
cvsroots, viewvc_link))
commits.append(build_commit(current_desc, files))
# Strip out commits that don't have any files attached to them. The
# files probably aren't present because they've been blocked via
# forbiddenness.
def _only_with_files(commit):
return len(commit.files) > 0
commits = filter(_only_with_files, commits)
return commits, row_limit_reached
return commits
def main(server, cfg, viewvc_link):
try:
def handle_config():
viewcvs.handle_config()
global cfg
cfg = viewcvs.cfg
form = server.FieldStorage()
def main():
handle_config()
form = cgi.FieldStorage()
form_data = FormData(form)
db = cvsdb.ConnectDatabaseReadOnly(cfg, None)
if form_data.valid:
commits, row_limit_reached = run_query(server, cfg, db, form_data, viewvc_link)
commits = run_query(form_data)
query = None
else:
commits = [ ]
row_limit_reached = 0
query = 'skipped'
docroot = cfg.options.docroot
if docroot is None and viewvc_link:
docroot = viewvc_link + '/' + viewvc.docroot_magic_path
data = TemplateData({
data = {
'cfg' : cfg,
'address' : cfg.general.address,
'vsn' : viewvc.__version__,
'textquery' : server.escape(form_data.textquery),
'repository' : server.escape(form_data.repository),
'branch' : server.escape(form_data.branch),
'directory' : server.escape(form_data.directory),
'file' : server.escape(form_data.file),
'who' : server.escape(form_data.who),
'docroot' : docroot,
'vsn' : viewcvs.__version__,
'repository' : cgi.escape(form_data.repository, 1),
'branch' : cgi.escape(form_data.branch, 1),
'directory' : cgi.escape(form_data.directory, 1),
'file' : cgi.escape(form_data.file, 1),
'who' : cgi.escape(form_data.who, 1),
'sortby' : form_data.sortby,
'date' : form_data.date,
'query' : query,
'row_limit_reached' : ezt.boolean(row_limit_reached),
'commits' : commits,
'num_commits' : len(commits),
'rss_href' : None,
'repositories' : db.GetRepositoryList(),
'hours' : form_data.hours and form_data.hours or 2,
})
}
if form_data.hours:
data['hours'] = form_data.hours
else:
data['hours'] = 2
template = ezt.Template()
template.parse_file(os.path.join(viewcvs.g_install_dir,
cfg.templates.query))
viewcvs.http_header()
# generate the page
server.header()
template = viewvc.get_view_template(cfg, "query")
template.generate(server.file(), data)
template.generate(sys.stdout, data)
def run_cgi():
### be nice to share all this logic with viewcvs.run_cgi
try:
main()
except SystemExit, e:
pass
# don't catch SystemExit (caused by sys.exit()). propagate the exit code
sys.exit(e[0])
except:
exc_info = debug.GetExceptionData()
server.header(status=exc_info['status'])
debug.PrintException(server, exc_info)
info = sys.exc_info()
viewcvs.http_header()
print '<html><head><title>Python Exception Occurred</title></head>'
print '<body bgcolor=white><h1>Python Exception Occurred</h1>'
import traceback
lines = apply(traceback.format_exception, info)
print '<pre>'
print cgi.escape(string.join(lines, ''))
print '</pre>'
viewcvs.html_footer()
class _item:
def __init__(self, **kw):
vars(self).update(kw)

441
lib/rcsparse.py Normal file
View File

@@ -0,0 +1,441 @@
#
# Copyright (C) 2000-2001 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
# This software is being maintained as part of the ViewCVS project.
# Information is available at:
# http://viewcvs.sourceforge.net/
#
# This file was originally based on portions of the blame.py script by
# Curt Hagenlocher.
#
# -----------------------------------------------------------------------
import string
import time
class _TokenStream:
token_term = string.whitespace + ';'
# the algorithm is about the same speed for any CHUNK_SIZE chosen.
# grab a good-sized chunk, but not too large to overwhelm memory.
CHUNK_SIZE = 100000
# CHUNK_SIZE = 5 # for debugging, make the function grind...
def __init__(self, file):
self.rcsfile = file
self.idx = 0
self.buf = self.rcsfile.read(self.CHUNK_SIZE)
if self.buf == '':
raise RuntimeError, 'EOF'
def get(self):
"Get the next token from the RCS file."
# Note: we can afford to loop within Python, examining individual
# characters. For the whitespace and tokens, the number of iterations
# is typically quite small. Thus, a simple iterative loop will beat
# out more complex solutions.
buf = self.buf
idx = self.idx
while 1:
if idx == len(buf):
buf = self.rcsfile.read(self.CHUNK_SIZE)
if buf == '':
# signal EOF by returning None as the token
del self.buf # so we fail if get() is called again
return None
idx = 0
if buf[idx] not in string.whitespace:
break
idx = idx + 1
if buf[idx] == ';':
self.buf = buf
self.idx = idx + 1
return ';'
if buf[idx] != '@':
end = idx + 1
token = ''
while 1:
# find token characters in the current buffer
while end < len(buf) and buf[end] not in self.token_term:
end = end + 1
token = token + buf[idx:end]
if end < len(buf):
# we stopped before the end, so we have a full token
idx = end
break
# we stopped at the end of the buffer, so we may have a partial token
buf = self.rcsfile.read(self.CHUNK_SIZE)
idx = end = 0
self.buf = buf
self.idx = idx
return token
# a "string" which starts with the "@" character. we'll skip it when we
# search for content.
idx = idx + 1
chunks = [ ]
while 1:
if idx == len(buf):
idx = 0
buf = self.rcsfile.read(self.CHUNK_SIZE)
if buf == '':
raise RuntimeError, 'EOF'
i = string.find(buf, '@', idx)
if i == -1:
chunks.append(buf[idx:])
idx = len(buf)
continue
if i == len(buf) - 1:
chunks.append(buf[idx:i])
idx = 0
buf = '@' + self.rcsfile.read(self.CHUNK_SIZE)
if buf == '@':
raise RuntimeError, 'EOF'
continue
if buf[i + 1] == '@':
chunks.append(buf[idx:i+1])
idx = i + 2
continue
chunks.append(buf[idx:i])
self.buf = buf
self.idx = i + 1
return string.join(chunks, '')
# _get = get
# def get(self):
#   token = self._get()
#   print 'T:', `token`
#   return token
def match(self, match):
"Try to match the next token from the input buffer."
token = self.get()
if token != match:
raise RuntimeError, ('Unexpected parsing error in RCS file.\n' +
'Expected token: %s, but saw: %s' % (match, token))
def unget(self, token):
"Put this token back, for the next get() to return."
# Override the class' .get method with a function which clears the
# overridden method then returns the pushed token. Since this function
# will not be looked up via the class mechanism, it should be a "normal"
# function, meaning it won't have "self" automatically inserted.
# Therefore, we need to pass both self and the token thru via defaults.
# note: we don't put this into the input buffer because it may have been
# @-unescaped already.
def give_it_back(self=self, token=token):
del self.get
return token
self.get = give_it_back
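# Hedged usage sketch (not part of the original module); StringIO stands in
# for an RCS file here:
#   from StringIO import StringIO
#   ts = _TokenStream(StringIO('head 1.2;\n'))
#   ts.match('head')      # raises RuntimeError if the next token differs
#   rev = ts.get()        # -> '1.2'
#   ts.unget(rev)         # the next get() returns '1.2' again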
class Parser:
def parse_rcs_admin(self):
while 1:
# Read initial token at beginning of line
token = self.ts.get()
# We're done once we reach the description of the RCS tree
if token[0] in string.digits:
self.ts.unget(token)
return
if token == "head":
self.sink.set_head_revision(self.ts.get())
self.ts.match(';')
elif token == "branch":
self.sink.set_principal_branch(self.ts.get())
self.ts.match(';')
elif token == "symbols":
while 1:
tag = self.ts.get()
if tag == ';':
break
(tag_name, tag_rev) = string.split(tag, ':')
self.sink.define_tag(tag_name, tag_rev)
elif token == "comment":
self.sink.set_comment(self.ts.get())
self.ts.match(';')
# Ignore all these other fields - We don't care about them. Also chews
# up "newphrase".
elif token in ("locks", "strict", "expand", "access"):
while 1:
tag = self.ts.get()
if tag == ';':
break
else:
pass
# warn("Unexpected RCS token: $token\n")
raise RuntimeError, "Unexpected EOF";
def parse_rcs_tree(self):
while 1:
revision = self.ts.get()
# End of RCS tree description ?
if revision == 'desc':
self.ts.unget(revision)
return
# Parse date
self.ts.match('date')
date = self.ts.get()
self.ts.match(';')
# Convert date into timestamp
date_fields = string.split(date, '.') + ['0', '0', '0']
date_fields = map(string.atoi, date_fields)
if date_fields[0] < 100:
date_fields[0] = date_fields[0] + 1900
timestamp = time.mktime(tuple(date_fields))
# Parse author
self.ts.match('author')
author = self.ts.get()
self.ts.match(';')
# Parse state
self.ts.match('state')
state = ''
while 1:
token = self.ts.get()
if token == ';':
break
state = state + token + ' '
state = state[:-1] # toss the trailing space
# Parse branches
self.ts.match('branches')
branches = [ ]
while 1:
token = self.ts.get()
if token == ';':
break
branches.append(token)
# Parse revision of next delta in chain
self.ts.match('next')
next = self.ts.get()
if next == ';':
next = None
else:
self.ts.match(';')
# there are some files with extra tags in them. for example:
# owner 640;
# group 15;
# permissions 644;
# hardlinks @configure.in@;
# this is "newphrase" in RCSFILE(5). we just want to skip over these.
while 1:
token = self.ts.get()
if token == 'desc' or token[0] in string.digits:
self.ts.unget(token)
break
# consume everything up to the semicolon
while self.ts.get() != ';':
pass
self.sink.define_revision(revision, timestamp, author, state, branches,
next)
def parse_rcs_description(self):
self.ts.match('desc')
self.sink.set_description(self.ts.get())
def parse_rcs_deltatext(self):
while 1:
revision = self.ts.get()
if revision is None:
# EOF
break
self.ts.match('log')
log = self.ts.get()
### need to add code to chew up "newphrase"
self.ts.match('text')
text = self.ts.get()
self.sink.set_revision_info(revision, log, text)
def parse(self, file, sink):
self.ts = _TokenStream(file)
self.sink = sink
self.parse_rcs_admin()
self.parse_rcs_tree()
# many sinks want to know when the tree has been completed so they can
# do some work to prep for the arrival of the deltatext
self.sink.tree_completed()
self.parse_rcs_description()
self.parse_rcs_deltatext()
# easiest for us to tell the sink it is done, rather than worry about
# higher level software doing it.
self.sink.parse_completed()
self.ts = self.sink = None
class Sink:
def set_head_revision(self, revision):
pass
def set_principal_branch(self, branch_name):
pass
def define_tag(self, name, revision):
pass
def set_comment(self, comment):
pass
def set_description(self, description):
pass
def define_revision(self, revision, timestamp, author, state,
branches, next):
pass
def set_revision_info(self, revision, log, text):
pass
def tree_completed(self):
pass
def parse_completed(self):
pass
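# Hedged sketch (not part of the original module): a minimal Sink subclass
# that only records symbolic tags; 'somefile,v' is a placeholder path.
#   class TagSink(Sink):
#     def __init__(self):
#       self.tags = {}
#     def define_tag(self, name, revision):
#       self.tags[name] = revision
#   sink = TagSink()
#   Parser().parse(open('somefile,v'), sink)   # sink.tags maps tag -> revision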
# --------------------------------------------------------------------------
#
# TESTING AND DEBUGGING TOOLS
#
class DebugSink:
def set_head_revision(self, revision):
print 'head:', revision
def set_principal_branch(self, branch_name):
print 'branch:', branch_name
def define_tag(self, name, revision):
print 'tag:', name, '=', revision
def set_comment(self, comment):
print 'comment:', comment
def set_description(self, description):
print 'description:', description
def define_revision(self, revision, timestamp, author, state,
branches, next):
print 'revision:', revision
print ' timestamp:', timestamp
print ' author:', author
print ' state:', state
print ' branches:', branches
print ' next:', next
def set_revision_info(self, revision, log, text):
print 'revision:', revision
print ' log:', log
print ' text:', text[:100], '...'
class DumpSink:
"""Dump all the parse information directly to stdout.
The output is relatively unformatted and untagged. It is intended as a
raw dump of the data in the RCS file. A copy can be saved, then changes
made to the parsing engine, then a comparison of the new output against
the old output.
"""
def __init__(self):
global sha
import sha
def set_head_revision(self, revision):
print revision
def set_principal_branch(self, branch_name):
print branch_name
def define_tag(self, name, revision):
print name, revision
def set_comment(self, comment):
print comment
def set_description(self, description):
print description
def define_revision(self, revision, timestamp, author, state,
branches, next):
print revision, timestamp, author, state, branches, next
def set_revision_info(self, revision, log, text):
print revision, sha.new(log).hexdigest(), sha.new(text).hexdigest()
def tree_completed(self):
print 'tree_completed'
def parse_completed(self):
print 'parse_completed'
def dump_file(fname):
Parser().parse(open(fname), DumpSink())
def time_file(fname):
import time
p = Parser().parse
f = open(fname)
s = Sink()
t = time.time()
p(f, s)
t = time.time() - t
print t
def _usage():
print 'This is normally a module for importing, but it has a couple'
print 'features for testing as an executable script.'
print 'USAGE: %s COMMAND filename,v' % sys.argv[0]
print ' where COMMAND is one of:'
print ' dump: filename is "dumped" to stdout'
print ' time: filename is parsed with the time written to stdout'
sys.exit(1)
if __name__ == '__main__':
import sys
if len(sys.argv) != 3:
_usage()
if sys.argv[1] == 'dump':
dump_file(sys.argv[2])
elif sys.argv[1] == 'time':
time_file(sys.argv[2])
else:
_usage()

339
lib/rlog.py Normal file
View File

@@ -0,0 +1,339 @@
# -*- Mode: python -*-
#
# Copyright (C) 2000-2001 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewCVS
# distribution or at http://viewcvs.sourceforge.net/license-1.html.
#
# Contact information:
# Greg Stein, PO Box 760, Palo Alto, CA, 94302
# gstein@lyra.org, http://viewcvs.sourceforge.net/
#
# -----------------------------------------------------------------------
#
import os
import string
import re
import time
## RLogOutputParser uses the output of rlog to build a list of Commit
## objects describing all the checkins from a given RCS file; this
## parser is fairly optimized, and therefore can be delicate if the
## rlog output varies between versions of rlog; I don't know if it does;
## to make it really fast, I should wrap the C rcslib
## there's definitely not much error checking here; I'll assume things
## will go smoothly, and trap errors via exception handlers above this
## function
## exception for this class
error = 'rlog error'
class RLogData:
"Container object for all data parsed from a 'rlog' output."
def __init__(self, filename):
self.filename = filename
self.symbolic_name_hash = {}
self.rlog_entry_list = []
def LookupBranch(self, rlog_entry):
index = string.rfind(rlog_entry.revision, '.')
branch_revision = rlog_entry.revision[:index]
try:
branch = self.symbolic_name_hash[branch_revision]
except KeyError:
branch = ''
return branch
class RLogEntry:
## static constants for type of log entry; this will be changed
## to strings I guess -JMP
CHANGE = 0
ADD = 1
REMOVE = 2
## Here's the init function, which isn't needed since this class
## is fully initialized by RLogParser when creating a new log entry.
## Let's keep this initializer as a description of what is held in
## the class, but keep it commented out since it only makes things
## slow.
##
## def __init__(self):
## self.revision = ''
## self.author = ''
## self.branch = ''
## self.pluscount = ''
## self.minuscount = ''
## self.description = ''
## self.time = None
## self.type = RLogEntry.CHANGE
class RLog:
"Provides a alternative file-like interface for running 'rlog'."
def __init__(self, cfg, filename, revision, date):
self.filename = self.fix_filename(filename)
self.checkout_filename = self.create_checkout_filename(self.filename)
self.revision = revision
self.date = date
arg_list = []
if self.revision:
arg_list.append('-r%s' % (self.revision))
if self.date:
arg_list.append('-d%s' % (self.date))
temp = os.path.join(cfg.general.rcs_path, "rlog")
self.cmd = '%s %s "%s"' % (temp, string.join(arg_list), self.filename)
self.rlog = os.popen(self.cmd, 'r')
def fix_filename(self, filename):
## all RCS files have the ",v" ending
if filename[-2:] != ",v":
filename = "%s,v" % (filename)
if os.path.isfile(filename):
return filename
## check the Attic for the RCS file
path, basename = os.path.split(filename)
filename = os.path.join(path, "Attic", basename)
if os.path.isfile(filename):
return filename
raise error, "rlog file not found: %s" % (filename)
def create_checkout_filename(self, filename):
## cut off the ",v"
checkout_filename = filename[:-2]
## check if the file is in the Attic
path, basename = os.path.split(checkout_filename)
if path[-6:] != '/Attic':
return checkout_filename
## remove the "Attic" part of the path
checkout_filename = os.path.join(path[:-6], basename)
return checkout_filename
def readline(self):
try:
line = self.rlog.readline()
except AttributeError:
self.error()
if line:
return line
status = self.close()
if status:
self.error()
return None
def close(self):
status = self.rlog.close()
self.rlog = None
return status
def error(self):
raise error, "unexpected rlog exit: %s" % (self.cmd)
## constants used in the output parser
_rlog_commit_sep = '----------------------------\n'
_rlog_end = '=============================================================================\n'
## regular expression used in the output parser
_re_symbolic_name = re.compile("\s+([^:]+):\s+(.+)$")
_re_revision = re.compile("^revision\s+([0-9.]+).*")
_re_data_line = re.compile(
"^date:\s+(\d+)/(\d+)/(\d+)\s+(\d+):(\d+):(\d+);\s+"\
"author:\s+([^;]+);\s+"\
"state:\s+([^;]+);\s+"\
"lines:\s+\+(\d+)\s+\-(\d+)$")
_re_data_line_add = re.compile(
"^date:\s+(\d+)/(\d+)/(\d+)\s+(\d+):(\d+):(\d+);\s+"\
"author:\s+([^;]+);\s+"\
"state:\s+([^;]+);$")
class RLogOutputParser:
def __init__(self, rlog):
self.rlog = rlog
self.rlog_data = RLogData(rlog.checkout_filename)
## run the parser
self.parse_to_symbolic_names()
self.parse_symbolic_names()
self.parse_to_description()
self.parse_rlog_entries()
def parse_to_symbolic_names(self):
while 1:
line = self.rlog.readline()
if line[:15] == 'symbolic names:':
break
def parse_symbolic_names(self):
## parse all the tags in the branch_hash; it's used later to get
## the text names of non-head branches
while 1:
line = self.rlog.readline()
match = _re_symbolic_name.match(line)
if not match:
break
(tag, revision) = match.groups()
## check if the tag represents a branch, in RCS this means
## the second-to-last number is a zero
index = string.rfind(revision, '.')
if revision[index-2:index] == '.0':
revision = revision[:index-2] + revision[index:]
self.rlog_data.symbolic_name_hash[revision] = tag
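## Hedged worked example (not part of the original module): RCS records a
## branch tag with a "magic" zero, e.g. 'BRANCH_A: 1.2.0.4'; the code above
## rewrites '1.2.0.4' to '1.2.4' so LookupBranch() can later match it against
## the branch prefix of a revision such as '1.2.4.7'.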
def parse_to_description(self):
while 1:
line = self.rlog.readline()
if line[:12] == 'description:':
break
## eat all lines until we reach the '-----' separator
while 1:
line = self.rlog.readline()
if line == _rlog_commit_sep:
break
def parse_rlog_entries(self):
while 1:
rlog_entry = self.parse_one_rlog_entry()
if not rlog_entry:
break
self.rlog_data.rlog_entry_list.append(rlog_entry)
def parse_one_rlog_entry(self):
## revision line/first line
line = self.rlog.readline()
# Since FreeBSD's rlog outputs extra "---...---\n" before
# "===...===\n", _rlog_end may be occured here.
if not line or line == _rlog_end:
return None
## revision
match = _re_revision.match(line)
(revision,) = match.groups()
## data line
line = self.rlog.readline()
match = _re_data_line.match(line)
if not match:
match = _re_data_line_add.match(line)
if not match:
raise error, "bad rlog parser, no cookie!"
## retrieve the matched groups as a tuple in hopes
## this will be faster (per the profiler)
groups = match.groups()
year = string.atoi(groups[0])
month = string.atoi(groups[1])
day = string.atoi(groups[2])
hour = string.atoi(groups[3])
minute = string.atoi(groups[4])
second = string.atoi(groups[5])
author = groups[6]
state = groups[7]
## very strange; here's the deal: if this is a newly added file,
## then there is no plus/minus count of lines; if there
## is, then this could be a "CHANGE" or "REMOVE", you can tell
## if the file has been removed by looking if state == 'dead'
try:
pluscount = groups[8]
minuscount = groups[9]
except IndexError:
pluscount = ''
minuscount = ''
cmit_type = RLogEntry.ADD
else:
if state == 'dead':
cmit_type = RLogEntry.REMOVE
else:
cmit_type = RLogEntry.CHANGE
## branch line: pretty much ignored if it's there
desc_line_list = []
line = self.rlog.readline()
if not line[:10] == 'branches: ':
desc_line_list.append(string.rstrip(line))
## suck up description
while 1:
line = self.rlog.readline()
## the last line printed out by rlog is '===='...
## or '------'... between entries
if line == _rlog_commit_sep or line == _rlog_end:
break
## append line to the description list
desc_line_list.append(string.rstrip(line))
## compute time using time routines in seconds from epoch GMT
## NOTE: mktime's arguments are in local time, and we have
## them in GMT from RCS; therefore, we have to manually
## subtract out the timezone correction
##
## XXX: Linux glibc 2.0.7 bug: it looks like mktime doesn't honor
## the '0' flag to force no timezone correction, so we look
## at the correction ourself and do the right thing after
## mktime mangles the date
gmt_time = \
time.mktime((year, month, day, hour, minute, second, 0, 0, -1))
if time.localtime(gmt_time)[8] == 1:
# dst time active?
# XXX: This is still wrong on the two nights each year when the
# switch between DST and normal time occurs.
gmt_time = gmt_time - time.altzone
else:
gmt_time = gmt_time - time.timezone
## now create and return the RLogEntry
rlog_entry = RLogEntry()
rlog_entry.type = cmit_type
rlog_entry.revision = revision
rlog_entry.author = author
rlog_entry.description = string.join(desc_line_list, '\n')
rlog_entry.time = gmt_time
rlog_entry.pluscount = pluscount
rlog_entry.minuscount = minuscount
return rlog_entry
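## Hedged numeric sketch (not part of the original module) of the GMT fix-up
## above: mktime() treats its tuple as local time, so in a zone five hours
## behind UTC (time.timezone == 18000, DST inactive) the epoch value it
## returns for an RCS date is 18000 seconds too large, and subtracting
## time.timezone restores the true GMT-based timestamp.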
## entrypoints
def GetRLogData(cfg, path, revision = '', date = ''):
rlog = RLog(cfg, path, revision, date)
rlog_parser = RLogOutputParser(rlog)
return rlog_parser.rlog_data
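## Hedged usage sketch (not part of the original module); 'cfg' is assumed to
## be a loaded ViewCVS configuration and the path is a placeholder:
##   data = GetRLogData(cfg, '/cvsroot/module/foo.c')
##   for entry in data.rlog_entry_list:
##     print entry.revision, entry.author, data.LookupBranch(entry)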

View File

@@ -1,445 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# generic server api - currently supports normal cgi, mod_python, and
# active server pages
#
# -----------------------------------------------------------------------
import types
import os
import sys
import re
import cgi
# global server object. It will be either a CgiServer, a WsgiServer,
# or a proxy to an AspServer or ModPythonServer object.
server = None
# Simple HTML string escaping. Note that we always escape the
# double-quote character -- ViewVC shouldn't ever need to preserve
# that character as-is, and sometimes needs to embed escaped values
# into HTML attributes.
def escape(s):
try: s = s.encode('utf8')
except: pass
s = str(s)
s = s.replace('&', '&amp;')
s = s.replace('>', '&gt;')
s = s.replace('<', '&lt;')
s = s.replace('"', "&quot;")
return s
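# Hedged example (not part of the original module): the replacements run in
# the order &, >, <, ", so
#   escape('<a href="x">R&D</a>')
# returns '&lt;a href=&quot;x&quot;&gt;R&amp;D&lt;/a&gt;'.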
class Server:
def __init__(self):
self.pageGlobals = {}
def self(self):
return self
def escape(self, s):
return escape(s)
def close(self):
pass
class ThreadedServer(Server):
def __init__(self):
Server.__init__(self)
self.inheritableOut = 0
global server
if not isinstance(server, ThreadedServerProxy):
server = ThreadedServerProxy()
if not isinstance(sys.stdout, File):
sys.stdout = File(server)
server.registerThread(self)
def file(self):
return File(self)
def close(self):
server.unregisterThread()
class ThreadedServerProxy:
"""In a multithreaded server environment, ThreadedServerProxy stores the
different server objects being used to display pages and transparently
forwards access to them based on the current thread id."""
def __init__(self):
self.__dict__['servers'] = { }
global thread
import thread
def registerThread(self, server):
self.__dict__['servers'][thread.get_ident()] = server
def unregisterThread(self):
del self.__dict__['servers'][thread.get_ident()]
def self(self):
"""This function bypasses the getattr and setattr trickery and returns
the actual server object."""
return self.__dict__['servers'][thread.get_ident()]
def __getattr__(self, key):
return getattr(self.self(), key)
def __setattr__(self, key, value):
setattr(self.self(), key, value)
def __delattr__(self, key):
delattr(self.self(), key)
class File:
def __init__(self, server):
self.closed = 0
self.mode = 'w'
self.name = "<AspFile file>"
self.softspace = 0
self.server = server
def write(self, s):
self.server.write(s)
def writelines(self, list):
for s in list:
self.server.write(s)
def flush(self):
self.server.flush()
def truncate(self, size):
pass
def close(self):
pass
class CgiServer(Server):
def __init__(self, inheritableOut = 1):
Server.__init__(self)
self.headerSent = 0
self.headers = []
self.inheritableOut = inheritableOut
self.iis = os.environ.get('SERVER_SOFTWARE', '')[:13] == 'Microsoft-IIS'
if sys.platform == "win32" and inheritableOut:
import msvcrt
msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
global server
server = self
def addheader(self, name, value):
self.headers.append((name, value))
def header(self, content_type='text/html; charset=UTF-8', status=None):
if not self.headerSent:
self.headerSent = 1
extraheaders = ''
for (name, value) in self.headers:
extraheaders = extraheaders + '%s: %s\r\n' % (name, value)
# The only way ViewVC pages and error messages are visible under
# IIS is if a 200 error code is returned. Otherwise IIS instead
# sends the static error page corresponding to the code number.
if status is None or (status[:3] != '304' and self.iis):
status = ''
else:
status = 'Status: %s\r\n' % status
sys.stdout.write('%sContent-Type: %s\r\n%s\r\n'
% (status, content_type, extraheaders))
def redirect(self, url):
if self.iis: url = fix_iis_url(self, url)
self.addheader('Location', url)
self.header(status='301 Moved')
sys.stdout.write('This document is located <a href="%s">here</a>.\n' % url)
def getenv(self, name, value=None):
ret = os.environ.get(name, value)
if self.iis and name == 'PATH_INFO' and ret:
ret = fix_iis_path_info(self, ret)
return ret
def params(self):
return cgi.parse()
def FieldStorage(fp=None, headers=None, outerboundary="",
environ=os.environ, keep_blank_values=0, strict_parsing=0):
return cgi.FieldStorage(fp, headers, outerboundary, environ,
keep_blank_values, strict_parsing)
def write(self, s):
sys.stdout.write(s)
def flush(self):
sys.stdout.flush()
def file(self):
return sys.stdout
class WsgiServer(Server):
def __init__(self, environ, start_response):
Server.__init__(self)
self._environ = environ
self._start_response = start_response;
self._headers = []
self._wsgi_write = None
self.headerSent = False
global server
server = self
def addheader(self, name, value):
self._headers.append((name, value))
def header(self, content_type='text/html; charset=UTF-8', status=None):
if not status:
status = "200 OK"
if not self.headerSent:
self.headerSent = True
self._headers.insert(0, ("Content-Type", content_type),)
self._wsgi_write = self._start_response("%s" % status, self._headers)
def redirect(self, url):
"""Redirect client to url. This discards any data that has been queued
to be sent to the user. But there should never be any anyway.
"""
self.addheader('Location', url)
self.header(status='301 Moved')
self._wsgi_write('This document is located <a href="%s">here</a>.' % url)
def getenv(self, name, value=None):
return self._environ.get(name, value)
def params(self):
return cgi.parse(environ=self._environ, fp=self._environ["wsgi.input"])
def FieldStorage(self, fp=None, headers=None, outerboundary="",
environ=os.environ, keep_blank_values=0, strict_parsing=0):
return cgi.FieldStorage(self._environ["wsgi.input"], headers,
outerboundary, self._environ, keep_blank_values,
strict_parsing)
def write(self, s):
self._wsgi_write(s)
def flush(self):
pass
def file(self):
return File(self)
class AspServer(ThreadedServer):
def __init__(self, Server, Request, Response, Application):
ThreadedServer.__init__(self)
self.headerSent = 0
self.server = Server
self.request = Request
self.response = Response
self.application = Application
def addheader(self, name, value):
self.response.AddHeader(name, value)
def header(self, content_type=None, status=None):
# Normally, setting self.response.ContentType after headers have already
# been sent simply results in an AttributeError exception, but sometimes
# it leads to a fatal ASP error. For this reason I'm keeping the
# self.headerSent member and only checking for the exception as a
# secondary measure
if not self.headerSent:
try:
self.headerSent = 1
if content_type is None:
self.response.ContentType = 'text/html; charset=UTF-8'
else:
self.response.ContentType = content_type
if status is not None: self.response.Status = status
except AttributeError:
pass
def redirect(self, url):
self.response.Redirect(url)
def getenv(self, name, value = None):
ret = self.request.ServerVariables(name)()
if not type(ret) is types.UnicodeType:
return value
ret = str(ret)
if name == 'PATH_INFO':
ret = fix_iis_path_info(self, ret)
return ret
def params(self):
p = {}
for i in self.request.Form:
p[str(i)] = map(str, self.request.Form[i])
for i in self.request.QueryString:
p[str(i)] = map(str, self.request.QueryString[i])
return p
def FieldStorage(self, fp=None, headers=None, outerboundary="",
environ=os.environ, keep_blank_values=0, strict_parsing=0):
# Code based on a very helpful usenet post by "Max M" (maxm@mxm.dk)
# Subject "Re: Help! IIS and Python"
# http://groups.google.com/groups?selm=3C7C0AB6.2090307%40mxm.dk
from StringIO import StringIO
from cgi import FieldStorage
environ = {}
for i in self.request.ServerVariables:
environ[str(i)] = str(self.request.ServerVariables(i)())
# this would be bad for uploaded files, could use a lot of memory
binaryContent, size = self.request.BinaryRead(int(environ['CONTENT_LENGTH']))
fp = StringIO(str(binaryContent))
fs = FieldStorage(fp, None, "", environ, keep_blank_values, strict_parsing)
fp.close()
return fs
def write(self, s):
t = type(s)
if t is types.StringType:
s = buffer(s)
elif not t is types.BufferType:
s = buffer(str(s))
self.response.BinaryWrite(s)
def flush(self):
self.response.Flush()
_re_status = re.compile("\\d+")
class ModPythonServer(ThreadedServer):
def __init__(self, request):
ThreadedServer.__init__(self)
self.request = request
self.headerSent = 0
def addheader(self, name, value):
self.request.headers_out.add(name, value)
def header(self, content_type=None, status=None):
if content_type is None:
self.request.content_type = 'text/html; charset=UTF-8'
else:
self.request.content_type = content_type
self.headerSent = 1
if status is not None:
m = _re_status.match(status)
if not m is None:
self.request.status = int(m.group())
def redirect(self, url):
import mod_python.apache
self.request.headers_out['Location'] = url
self.request.status = mod_python.apache.HTTP_MOVED_TEMPORARILY
self.request.write("You are being redirected to <a href=\"%s\">%s</a>"
% (url, url))
def getenv(self, name, value = None):
try:
return self.request.subprocess_env[name]
except KeyError:
return value
def params(self):
import mod_python.util
if self.request.args is None:
return {}
else:
return mod_python.util.parse_qs(self.request.args)
def FieldStorage(self, fp=None, headers=None, outerboundary="",
environ=os.environ, keep_blank_values=0, strict_parsing=0):
import mod_python.util
return mod_python.util.FieldStorage(self.request, keep_blank_values, strict_parsing)
def write(self, s):
self.request.write(s)
def flush(self):
pass
def fix_iis_url(server, url):
"""When a CGI application under IIS outputs a "Location" header with a url
beginning with a forward slash, IIS tries to optimise the redirect by not
returning any output from the original CGI script at all and instead just
returning the new page in its place. Because of this, the browser does
not know it is getting a different page than it requested. As a result,
The address bar that appears in the browser window shows the wrong location
and if the new page is in a different folder than the old one, any relative
links on it will be broken.
This function can be used to circumvent the IIS "optimization" of local
redirects. If it is passed a location that begins with a forward slash it
will return a URL constructed with the information in CGI environment.
If it is passed a URL or any location that doesn't begin with a forward slash
it will return just the argument unaltered.
"""
if url[0] == '/':
if server.getenv('HTTPS') == 'on':
dport = "443"
prefix = "https://"
else:
dport = "80"
prefix = "http://"
prefix = prefix + server.getenv('HTTP_HOST')
if server.getenv('SERVER_PORT') != dport:
prefix = prefix + ":" + server.getenv('SERVER_PORT')
return prefix + url
return url
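# Hedged example (not part of the original module), assuming HTTP_HOST is
# 'cvs.example.com' and SERVER_PORT is '8080' (a non-default port):
#   fix_iis_url(server, '/viewvc/')   -> 'http://cvs.example.com:8080/viewvc/'
#   fix_iis_url(server, 'http://elsewhere/') is returned unchanged.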
def fix_iis_path_info(server, path_info):
"""Fix the PATH_INFO value in IIS"""
# If the viewvc cgi's are in the /viewvc/ folder on the web server and a
# request looks like
#
# /viewvc/viewvc.cgi/myproject/?someoption
#
# The CGI environment variables on IIS will look like this:
#
# SCRIPT_NAME = /viewvc/viewvc.cgi
# PATH_INFO = /viewvc/viewvc.cgi/myproject/
#
# Whereas on Apache they look like:
#
# SCRIPT_NAME = /viewvc/viewvc.cgi
# PATH_INFO = /myproject/
#
# This function converts the IIS PATH_INFO into the nonredundant form
# expected by ViewVC
return path_info[len(server.getenv('SCRIPT_NAME', '')):]

View File

@@ -1,56 +0,0 @@
# -*-python-*-
#
# Copyright (C) 2006-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
"""Generic API for implementing authorization checks employed by ViewVC."""
class GenericViewVCAuthorizer:
"""Abstract class encapsulating version control authorization routines."""
def __init__(self, username=None, params={}):
"""Create a GenericViewVCAuthorizer object which will be used to
validate that USERNAME has the permissions needed to view version
control repositories (in whole or in part). PARAMS is a
dictionary of custom parameters for the authorizer."""
pass
def check_root_access(self, rootname):
"""Return 1 iff the associated username is permitted to read ROOTNAME."""
pass
def check_universal_access(self, rootname):
"""Return 1 if the associated username is permitted to read every
path in the repository at every revision, 0 if the associated
username is prohibited from reading any path in the repository, or
None if no such determination can be made (perhaps because the
cost of making it is too great)."""
pass
def check_path_access(self, rootname, path_parts, pathtype, rev=None):
"""Return 1 iff the associated username is permitted to read
revision REV of the path PATH_PARTS (of type PATHTYPE) in
repository ROOTNAME."""
pass
##############################################################################
class ViewVCAuthorizer(GenericViewVCAuthorizer):
"""The uber-permissive authorizer."""
def check_root_access(self, rootname):
return 1
def check_universal_access(self, rootname):
return 1
def check_path_access(self, rootname, path_parts, pathtype, rev=None):
return 1

View File

@@ -1,92 +0,0 @@
# -*-python-*-
#
# Copyright (C) 2009 Vitaliy Filippov.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
import vcauth
import vclib
import string
from xml.dom.minidom import parse
class ViewVCAuthorizer(vcauth.GenericViewVCAuthorizer):
"""An authorizer which uses CVSnt access control lists (which are in form
of XML files in CVS/ subdirectories in the repository)."""
def __init__(self, username, params={}):
self.username = username
self.params = params
self.cfg = params['__config']
self.default = params.get('default', 0)
self.cached = {}
self.xmlcache = {}
def dom_rights(self, doc, rightname, filename, username):
result = None
if filename:
for node in doc.getElementsByTagName('file'):
if node.getAttribute('name') == filename:
for acl in node.getElementsByTagName('acl'):
if not acl.getAttribute('branch'):
u = acl.getAttribute('user')
if result is None and (not u or u == 'anonymous') or u == username:
for r in acl.getElementsByTagName(rightname):
result = not r.getAttribute('deny')
break
if result is None:
for node in doc.getElementsByTagName('directory'):
for acl in node.getElementsByTagName('acl'):
if not acl.getAttribute('branch'):
u = acl.getAttribute('user')
if result is None and (not u or u == 'anonymous') or u == username:
for r in acl.getElementsByTagName(rightname):
result = not r.getAttribute('deny')
return result
def check(self, rootname, path_parts, filename):
d = self.cfg.general.cvs_roots.get(rootname,None)
if not d:
return self.default
i = len(path_parts)
r = None
while i >= 0:
try:
xml = d
if len(path_parts):
xml = xml + '/' + string.join(path_parts, '/')
xml = xml + '/CVS/fileattr.xml'
if self.cached.get(xml, None) is not None:
return self.cached.get(xml, None)
doc = self.xmlcache.get(xml, None)
if doc is None:
doc = parse(xml)
self.xmlcache[xml] = doc
r = self.dom_rights(doc, 'read', filename, self.username)
if r is not None:
self.cached[xml] = r
return r
raise Exception(None)
except:
if len(path_parts) > 0:
path_parts = path_parts[:-1]
filename = ''
i = i-1
return self.default
def check_root_access(self, rootname):
return self.check(rootname, [], '')
def check_path_access(self, rootname, path_parts, pathtype, rev=None):
if not path_parts:
return self.check(rootname, [], '')
if pathtype == vclib.DIR:
return self.check(rootname, path_parts, '')
f = path_parts[-1]
path_parts = path_parts[:-1]
return self.check(rootname, path_parts, f)

View File

@@ -1,52 +0,0 @@
# -*-python-*-
#
# Copyright (C) 2006-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
import vcauth
import vclib
import fnmatch
class ViewVCAuthorizer(vcauth.GenericViewVCAuthorizer):
"""A simple top-level module authorizer."""
def __init__(self, username, params={}):
forbidden = params.get('forbidden', '')
self.forbidden = map(lambda x: x.strip(),
filter(None, forbidden.split(',')))
def check_root_access(self, rootname):
return 1
def check_universal_access(self, rootname):
# If there aren't any forbidden paths, we can grant universal read
# access. Otherwise, we make no claim.
if not self.forbidden:
return 1
return None
def check_path_access(self, rootname, path_parts, pathtype, rev=None):
# No path? No problem.
if not path_parts:
return 1
# Not a directory? We aren't interested.
if pathtype != vclib.DIR:
return 1
# At this point we're looking at a directory path.
module = path_parts[0]
default = 1
for pat in self.forbidden:
if pat[0] == '!':
default = 0
if fnmatch.fnmatchcase(module, pat[1:]):
return 1
elif fnmatch.fnmatchcase(module, pat):
return 0
return default
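# Hedged illustration (not part of the original module): with
# forbidden = "tests,!public*", check_path_access() denies the top-level
# module 'tests', allows anything matching public*, and denies every other
# module because the '!' entry flips the default to 0.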

View File

@@ -1,64 +0,0 @@
# -*-python-*-
#
# Copyright (C) 2008-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
import vcauth
import vclib
import fnmatch
import re
def _split_regexp(restr):
"""Return a 2-tuple consisting of a compiled regular expression
object and a boolean flag indicating if that object should be
interpreted inversely."""
if restr[0] == '!':
return re.compile(restr[1:]), 1
return re.compile(restr), 0
class ViewVCAuthorizer(vcauth.GenericViewVCAuthorizer):
"""A simple regular-expression-based authorizer."""
def __init__(self, username, params={}):
forbidden = params.get('forbiddenre', '')
self.forbidden = map(lambda x: _split_regexp(x.strip()),
filter(None, forbidden.split(',')))
def _check_root_path_access(self, root_path):
default = 1
for forbidden, negated in self.forbidden:
if negated:
default = 0
if forbidden.search(root_path):
return 1
elif forbidden.search(root_path):
return 0
return default
def check_root_access(self, rootname):
return self._check_root_path_access(rootname)
def check_universal_access(self, rootname):
# If there aren't any forbidden regexps, we can grant universal
# read access. Otherwise, we make no claim.
if not self.forbidden:
return 1
return None
def check_path_access(self, rootname, path_parts, pathtype, rev=None):
root_path = rootname
if path_parts:
root_path = root_path + '/' + '/'.join(path_parts)
if pathtype == vclib.DIR:
root_path = root_path + '/'
else:
root_path = root_path + '/'
return self._check_root_path_access(root_path)
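# Hedged illustration (not part of the original module): the string handed to
# each forbidden regexp is "<root>/<path...>", with a trailing '/' for
# directories -- e.g. 'myroot/trunk/src/' for a directory and
# 'myroot/trunk/README' for a file -- so a parameter such as
# forbiddenre = "^myroot/trunk/private/" hides that subtree.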

View File

@@ -1,62 +0,0 @@
# -*-python-*-
#
# Copyright (C) 2009 Vitaliy Filippov.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
import vcauth
import vclib
import string
import grp
import re
class ViewVCAuthorizer(vcauth.GenericViewVCAuthorizer):
"""A simple authorizer checking system group membership for every
repository and every user."""
def __init__(self, username, params={}):
self.username = username
self.params = params
self.cfg = params['__config']
self.cached = {}
self.grp = {}
self.byroot = {}
self.fmt = map(lambda l: l.strip(), params.get('group_name_format', 'svn.%s|svn.%s.ro').split('|'))
byr = params.get('by_root', '')
for i in byr.split(','):
if i.find(':') < 0:
continue
(root, auth) = i.split(':', 2)
self.byroot[root.strip()] = map(lambda l: l.strip(), auth.split('|'))
def check_root_access(self, rootname):
r = self.cached.get(rootname, None)
if r is not None:
return r
grent = self.grp.get(rootname, None)
if grent is None:
grent = map(lambda grn: self.getgrnam(grn, rootname), self.byroot.get(rootname, self.fmt))
self.grp[rootname] = grent
r = 0
for i in grent:
if i and i.gr_mem and len(i.gr_mem) and self.username in i.gr_mem:
r = 1
break
self.cached[rootname] = r
return r
def getgrnam(self, grn, rootname):
try:
r = grp.getgrnam(grn.replace('%s', re.sub('[^\w\.\-]+', '', rootname)))
except KeyError:
r = None
return r
def check_path_access(self, rootname, path_parts, pathtype, rev=None):
return self.check_root_access(rootname)
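# Hedged illustration (not part of the original module): with
# group_name_format = 'svn.%s|svn.%s.ro', access to root 'proj1' is granted
# when the user belongs to the UNIX group svn.proj1 or svn.proj1.ro;
# an entry such as by_root = 'proj2:staff' checks the literal group 'staff'
# for that one root instead.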

View File

@@ -1,269 +0,0 @@
# -*-python-*-
#
# Copyright (C) 2006-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
# (c) 2006 Sergey Lapin <slapin@dataart.com>
import vcauth
import os.path
import debug
from ConfigParser import ConfigParser
class ViewVCAuthorizer(vcauth.GenericViewVCAuthorizer):
"""Subversion authz authorizer module"""
def __init__(self, username, params={}):
self.rootpaths = { } # {root -> { paths -> access boolean for USERNAME }}
# Get the authz file location from a passed-in parameter.
self.authz_file = params.get('authzfile')
if not self.authz_file:
raise debug.ViewVCException("No authzfile configured")
if not os.path.exists(self.authz_file):
raise debug.ViewVCException("Configured authzfile file not found")
# See if the admin wants us to do case normalization of usernames.
self.force_username_case = params.get('force_username_case')
if self.force_username_case == "upper":
self.username = username and username.upper() or username
elif self.force_username_case == "lower":
self.username = username and username.lower() or username
elif not self.force_username_case:
self.username = username
else:
raise debug.ViewVCException("Invalid value for force_username_case "
"option")
def _get_paths_for_root(self, rootname):
if self.rootpaths.has_key(rootname):
return self.rootpaths[rootname]
paths_for_root = { }
# Parse the authz file, replacing ConfigParser's optionxform()
# method with something that won't futz with the case of the
# option names.
cp = ConfigParser()
cp.optionxform = lambda x: x
try:
cp.read(self.authz_file)
except:
raise debug.ViewVCException("Unable to parse configured authzfile file")
# Figure out if there are any aliases for the current username
aliases = []
if cp.has_section('aliases'):
for alias in cp.options('aliases'):
entry = cp.get('aliases', alias)
if entry == self.username:
aliases.append(alias)
# Figure out which groups USERNAME has a part of.
groups = []
if cp.has_section('groups'):
all_groups = []
def _process_group(groupname):
"""Inline function to handle groups within groups.
For a group to be within another group in SVN, the group
definitions must be in the correct order in the config file.
i.e. if group A is a member of group B then group A must be
defined before group B in the [groups] section.
Unfortunately, the ConfigParser class provides no way of
finding the order in which groups were defined so, for reasons
of practicality, this function lets you get away with them
being defined in the wrong order. Recursion is guarded
against though."""
# If we already know the user is part of this already-
# processed group, return that fact.
if groupname in groups:
return 1
# Otherwise, ensure we don't process a group twice.
if groupname in all_groups:
return 0
# Store the group name in a global list so it won't be processed again
all_groups.append(groupname)
group_member = 0
groupname = groupname.strip()
entries = cp.get('groups', groupname).split(',')
for entry in entries:
entry = entry.strip()
if entry == self.username:
group_member = 1
break
elif entry[0:1] == "@" and _process_group(entry[1:]):
group_member = 1
break
elif entry[0:1] == "&" and entry[1:] in aliases:
group_member = 1
break
if group_member:
groups.append(groupname)
return group_member
# Process the groups
for group in cp.options('groups'):
_process_group(group)
def _userspec_matches_user(userspec):
# If there is an inversion character, recurse and return the
# opposite result.
if userspec[0:1] == '~':
return not _userspec_matches_user(userspec[1:])
# See if the userspec applies to our current user.
return userspec == '*' \
or userspec == self.username \
or (self.username is not None and userspec == "$authenticated") \
or (self.username is None and userspec == "$anonymous") \
or (userspec[0:1] == "@" and userspec[1:] in groups) \
or (userspec[0:1] == "&" and userspec[1:] in aliases)
def _process_access_section(section):
"""Inline function for determining user access in a single
config section. Return a two-tuple (ALLOW, DENY) containing
the access determination for USERNAME in a given authz file
SECTION (if any)."""
# Figure out whether this path is explicitly allowed or denied to USERNAME.
allow = deny = 0
for user in cp.options(section):
user = user.strip()
if _userspec_matches_user(user):
# See if the 'r' permission is among the ones granted to
# USER. If so, we can stop looking. (Entry order is not
# relevant -- we'll use the most permissive entry, meaning
# one 'allow' is all we need.)
allow = cp.get(section, user).find('r') != -1
deny = not allow
if allow:
break
return allow, deny
# Read the other (non-"groups") sections, and figure out in which
# repositories USERNAME or their groups have read rights. We'll
# first check sections that have no specific repository designation,
# then superimpose those whose repository designation matches the
# root we're asking about.
root_sections = []
for section in cp.sections():
# Skip the "groups" section -- we handled that already.
if section == 'groups':
continue
if section == 'aliases':
continue
# Process root-agnostic access sections; skip (but remember)
# root-specific ones that match our root; ignore altogether
# root-specific ones that don't match our root. While we're at
# it, go ahead and figure out the repository path we're talking
# about.
if section.find(':') == -1:
path = section
else:
name, path = section.split(':', 1)
if name == rootname:
root_sections.append(section)
continue
# Check for a specific access determination.
allow, deny = _process_access_section(section)
# If we got an explicit access determination for this path and this
# USERNAME, record it.
if allow or deny:
if path != '/':
path = '/' + '/'.join(filter(None, path.split('/')))
paths_for_root[path] = allow
# Okay. Superimpose those root-specific values now.
for section in root_sections:
# Get the path again.
name, path = section.split(':', 1)
# Check for a specific access determination.
allow, deny = _process_access_section(section)
# If we got an explicit access determination for this path and this
# USERNAME, record it.
if allow or deny:
if path != '/':
path = '/' + '/'.join(filter(None, path.split('/')))
paths_for_root[path] = allow
# If the root isn't readable, there's no point in caring about all
# the specific paths the user can't see. Just point the rootname
# to a None paths dictionary.
root_is_readable = 0
for path in paths_for_root.keys():
if paths_for_root[path]:
root_is_readable = 1
break
if not root_is_readable:
paths_for_root = None
self.rootpaths[rootname] = paths_for_root
return paths_for_root
def check_root_access(self, rootname):
paths = self._get_paths_for_root(rootname)
return (paths is not None) and 1 or 0
def check_universal_access(self, rootname):
paths = self._get_paths_for_root(rootname)
if not paths: # None or empty.
return 0
# Search the access determinations. If there's a mix, we can't
# claim a universal access determination.
found_allow = 0
found_deny = 0
for access in paths.values():
if access:
found_allow = 1
else:
found_deny = 1
if found_allow and found_deny:
return None
# We didn't find both allowances and denials, so we must have
# found one or the other. Denials only is a universal denial.
if found_deny:
return 0
# ... but allowances only is only a universal allowance if read
# access is granted to the root directory.
if found_allow and paths.has_key('/'):
return 1
# Anything else is indeterminable.
return None
def check_path_access(self, rootname, path_parts, pathtype, rev=None):
# Crawl upward from the path represented by PATH_PARTS toward the
# root of the repository, looking for an explicit grant or denial
# of access.
paths = self._get_paths_for_root(rootname)
if paths is None:
return 0
parts = path_parts[:]
while parts:
path = '/' + '/'.join(parts)
if paths.has_key(path):
return paths[path]
del parts[-1]
return paths.get('/', 0)
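# --- Editor's illustrative sketch (not part of the original module). The
# username, authz file path and root name below are hypothetical; ViewVC
# normally instantiates this class itself from the vcauth configuration.
def _example_authz_usage():
  import vclib
  authorizer = ViewVCAuthorizer('jrandom',
                                {'authzfile': '/etc/viewvc/svn-authz.conf'})
  if not authorizer.check_root_access('myrepo'):
    return 0
  # check_path_access() walks upward from the deepest path component until
  # it finds an explicit allow/deny entry, falling back to the '/' entry.
  return authorizer.check_path_access('myrepo', ['trunk', 'README'],
                                      vclib.FILE)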


@@ -1,457 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
"""Version Control lib is an abstract API to access versioning systems
such as CVS.
"""
import types
# item types returned by Repository.itemtype().
FILE = 'FILE'
DIR = 'DIR'
# diff types recognized by Repository.rawdiff().
UNIFIED = 1
CONTEXT = 2
SIDE_BY_SIDE = 3
# root types returned by Repository.roottype().
CVS = 'cvs'
SVN = 'svn'
# action kinds found in ChangedPath.action
ADDED = 'added'
DELETED = 'deleted'
REPLACED = 'replaced'
MODIFIED = 'modified'
# log sort keys
SORTBY_DEFAULT = 0 # default/no sorting
SORTBY_DATE = 1 # sorted by date, youngest first
SORTBY_REV = 2 # sorted by revision, youngest first
# ======================================================================
#
class Repository:
"""Abstract class representing a repository."""
def rootname(self):
"""Return the name of this repository."""
def roottype(self):
"""Return the type of this repository (vclib.CVS, vclib.SVN, ...)."""
def rootpath(self):
"""Return the location of this repository."""
def authorizer(self):
"""Return the vcauth.Authorizer object associated with this
repository, or None if no such association has been made."""
def open(self):
"""Open a connection to the repository."""
def itemtype(self, path_parts, rev):
"""Return the type of the item (file or dir) at the given path and revision
The result will be vclib.DIR or vclib.FILE
The path is specified as a list of components, relative to the root
of the repository. e.g. ["subdir1", "subdir2", "filename"]
rev is the revision of the item to check
"""
pass
def openfile(self, path_parts, rev, options):
"""Open a file object to read file contents at a given path and revision.
The return value is a 2-tuple containing the file object and the
revision number in canonical form.
The path is specified as a list of components, relative to the root
of the repository. e.g. ["subdir1", "subdir2", "filename"]
rev is the revision of the file to check out
options is a dictionary of implementation specific options
"""
def listdir(self, path_parts, rev, options):
"""Return list of files in a directory
The result is a list of DirEntry objects
The path is specified as a list of components, relative to the root
of the repository. e.g. ["subdir1", "subdir2", "filename"]
rev is the revision of the directory to list
options is a dictionary of implementation specific options
"""
def dirlogs(self, path_parts, rev, entries, options):
"""Augment directory entries with log information
New properties will be set on all of the DirEntry objects in the entries
list. At the very least, a "rev" property will be set to a revision
number or None if the entry doesn't have a number. Other properties that
may be set include "date", "author", "log", "size", and "lockinfo".
The path is specified as a list of components, relative to the root
of the repository. e.g. ["subdir1", "subdir2", "filename"]
rev is the revision of the directory listing and will affect which log
messages are returned
entries is a list of DirEntry objects returned from a previous call to
the listdir() method
options is a dictionary of implementation specific options
"""
def itemlog(self, path_parts, rev, sortby, first, limit, options):
"""Retrieve an item's log information
The result is a list of Revision objects
The path is specified as a list of components, relative to the root
of the repository. e.g. ["subdir1", "subdir2", "filename"]
rev is the revision of the item to return information about
sortby indicates the way in which the returned list should be
sorted (SORTBY_DEFAULT, SORTBY_DATE, SORTBY_REV)
first is the 0-based index of the first Revision returned (after
sorting, if any, has occurred)
limit is the maximum number of returned Revisions, or 0 to return
all available data
options is a dictionary of implementation specific options
"""
def itemprops(self, path_parts, rev):
"""Return a dictionary mapping property names to property values
for properties stored on an item.
The path is specified as a list of components, relative to the root
of the repository. e.g. ["subdir1", "subdir2", "filename"]
rev is the revision of the item to return information about.
"""
def rawdiff(self, path_parts1, rev1, path_parts2, rev2, type, options={}):
"""Return a diff (in GNU diff format) of two file revisions
type is the requested diff type (UNIFIED, CONTEXT, etc)
options is a dictionary that can contain the following options plus
implementation-specific options
context - integer, number of context lines to include
funout - boolean, include C function names
ignore_white - boolean, ignore whitespace
Return value is a python file object
"""
def annotate(self, path_parts, rev, include_text=False):
"""Return a list of Annotation object, sorted by their
"line_number" components, which describe the lines of given
version of a file.
The file path is specified as a list of components, relative to
the root of the repository. e.g. ["subdir1", "subdir2", "filename"]
rev is the revision of the item to return information about.
If include_text is true, populate the Annotation objects' "text"
members with the corresponding line of file content; otherwise,
leave that member set to None."""
def revinfo(self, rev):
"""Return information about a global revision
rev is the revision of the item to return information about
Return value is a 5-tuple containing: the date, author, log
message, a list of ChangedPath items representing paths changed,
and a dictionary mapping property names to property values for
properties stored on an item.
Raise vclib.UnsupportedFeature if the version control system
doesn't support a global revision concept.
"""
def isexecutable(self, path_parts, rev):
"""Return true iff a given revision of a versioned file is to be
considered an executable program or script.
The path is specified as a list of components, relative to the root
of the repository. e.g. ["subdir1", "subdir2", "filename"]
rev is the revision of the item to return information about
"""
def filesize(self, path_parts, rev):
"""Return the size of a versioned file's contents if it can be
obtained without a brute force measurement; -1 otherwise.
NOTE: Callers that require a filesize answer when this function
returns -1 may obtain it by measuring the data returned via
openfile().
The path is specified as a list of components, relative to the root
of the repository. e.g. ["subdir1", "subdir2", "filename"]
rev is the revision of the item to return information about
"""
# ======================================================================
class DirEntry:
"""Instances represent items in a directory listing"""
def __init__(self, name, kind, errors=[]):
"""Create a new DirEntry() item:
NAME: The name of the directory entry
KIND: The path kind of the entry (vclib.DIR, vclib.FILE)
ERRORS: A list of error strings representing problems encountered
while determining the other info about this entry
"""
self.name = name
self.kind = kind
self.errors = errors
class Revision:
"""Instances holds information about revisions of versioned resources"""
def __init__(self, number, string, date, author, changed, log, size, lockinfo):
"""Create a new Revision() item:
NUMBER: Revision in an integer-based, sortable format
STRING: Revision as a string
DATE: Seconds since Epoch (GMT) that this revision was created
AUTHOR: Author of the revision
CHANGED: Lines-changed (contextual diff) information
LOG: Log message associated with the creation of this revision
SIZE: Size (in bytes) of this revision's fulltext (files only)
LOCKINFO: Information about locks held on this revision
"""
self.number = number
self.string = string
self.date = date
self.author = author
self.changed = changed
self.log = log
self.size = size
self.lockinfo = lockinfo
def __cmp__(self, other):
return cmp(self.number, other.number)
class Annotation:
"""Instances represent per-line file annotation information"""
def __init__(self, text, line_number, rev, prev_rev, author, date):
"""Create a new Annotation() item:
TEXT: Raw text of a line of file contents
LINE_NUMBER: Line number on which the line is found
REV: Revision in which the line was last modified
PREV_REV: Revision prior to 'rev'
AUTHOR: Author who last modified the line
DATE: Date on which the line was last modified, in seconds since
the epoch, GMT
"""
self.text = text
self.line_number = line_number
self.rev = rev
self.prev_rev = prev_rev
self.author = author
self.date = date
class ChangedPath:
"""Instances represent changes to paths"""
def __init__(self, path_parts, rev, pathtype, base_path_parts,
base_rev, action, copied, text_changed, props_changed):
"""Create a new ChangedPath() item:
PATH_PARTS: Path that was changed
REV: Revision represented by this change
PATHTYPE: Type of this path (vclib.DIR, vclib.FILE, ...)
BASE_PATH_PARTS: Previous path for this changed item
BASE_REV: Previous revision for this changed item
ACTION: Kind of change (vclib.ADDED, vclib.DELETED, ...)
COPIED: Boolean -- was this path copied from elsewhere?
TEXT_CHANGED: Boolean -- did the file's text change?
PROPS_CHANGED: Boolean -- did the item's metadata change?
"""
self.path_parts = path_parts
self.rev = rev
self.pathtype = pathtype
self.base_path_parts = base_path_parts
self.base_rev = base_rev
self.action = action
self.copied = copied
self.text_changed = text_changed
self.props_changed = props_changed
# ======================================================================
class Error(Exception):
pass
class ReposNotFound(Error):
pass
class UnsupportedFeature(Error):
pass
class ItemNotFound(Error):
def __init__(self, path):
# use '/' rather than os.sep because this is for user consumption, and
# it was defined using URL separators
if type(path) in (types.TupleType, types.ListType):
path = '/'.join(path)
Error.__init__(self, path)
class InvalidRevision(Error):
def __init__(self, revision=None):
if revision is None:
Error.__init__(self, "Invalid revision")
else:
Error.__init__(self, "Invalid revision " + str(revision))
class NonTextualFileContents(Error):
pass
# ======================================================================
# Implementation code used by multiple vclib modules
import tempfile
import os
import time
def _diff_args(type, options):
"""generate argument list to pass to diff or rcsdiff"""
args = []
if type == CONTEXT:
if options.has_key('context'):
if options['context'] is None:
args.append('--context=%i' % 0x01ffffff)
else:
args.append('--context=%i' % options['context'])
else:
args.append('-c')
elif type == UNIFIED:
if options.has_key('context'):
if options['context'] is None:
args.append('--unified=%i' % 0x01ffffff)
else:
args.append('--unified=%i' % options['context'])
else:
args.append('-u')
elif type == SIDE_BY_SIDE:
args.append('--side-by-side')
args.append('--width=164')
else:
raise NotImplementedError
if options.get('funout', 0):
args.append('-p')
if options.get('ignore_white', 0):
args.append('-w')
return args
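# Editor's illustration (not part of the original module): the argument
# lists produced by _diff_args() for a few of the documented option
# combinations above.
def _example_diff_args():
  assert _diff_args(UNIFIED, {'context': 3}) == ['--unified=3']
  assert _diff_args(CONTEXT, {}) == ['-c']
  assert _diff_args(SIDE_BY_SIDE, {'ignore_white': 1}) == \
         ['--side-by-side', '--width=164', '-w']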
class _diff_fp:
"""File object reading a diff between temporary files, cleaning up
on close"""
def __init__(self, temp1, temp2, info1=None, info2=None, diff_cmd='diff', diff_opts=[]):
self.temp1 = temp1
self.temp2 = temp2
self.temp3 = tempfile.mktemp()
args = diff_opts[:]
if info1 and info2:
args.extend(["-L", self._label(info1), "-L", self._label(info2)])
args.extend([temp1, temp2])
args.insert(0, diff_cmd)
os.system("'"+"' '".join(args)+"' > '"+self.temp3+"' 2>&1")
self.fp = open(self.temp3, 'rb')
self.fp.seek(0)
def read(self, bytes):
return self.fp.read(bytes)
def readline(self):
return self.fp.readline()
def readlines(self):
return self.fp.readlines()
def close(self):
try:
if self.fp:
self.fp.close()
self.fp = None
finally:
try:
if self.temp1 and self.temp1 != '/dev/null':
os.remove(self.temp1)
self.temp1 = None
finally:
if self.temp2 and self.temp2 != '/dev/null':
os.remove(self.temp2)
self.temp2 = None
try:
os.remove(self.temp3)
finally:
self.temp3 = None
def __del__(self):
self.close()
def _label(self, (path, date, rev)):
date = date and time.strftime('%Y/%m/%d %H:%M:%S', time.gmtime(date))
return "%s\t%s\t%s" % (path, date, rev)
def check_root_access(repos):
"""Return 1 iff the associated username is permitted to read REPOS,
as determined by consulting REPOS's Authorizer object (if any)."""
auth = repos.authorizer()
if not auth:
return 1
return auth.check_root_access(repos.rootname())
def check_path_access(repos, path_parts, pathtype=None, rev=None):
"""Return 1 iff the associated username is permitted to read
revision REV of the path PATH_PARTS (of type PATHTYPE) in repository
REPOS, as determined by consulting REPOS's Authorizer object (if any)."""
auth = repos.authorizer()
if not auth:
return 1
if not pathtype:
pathtype = repos.itemtype(path_parts, rev)
if not path_parts:
return auth.check_root_access(repos.rootname())
return auth.check_path_access(repos.rootname(), path_parts, pathtype, rev)
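# Editor's sketch of how callers typically combine the two helpers above
# with the abstract Repository interface; REPOS is any concrete vclib
# backend instance, and the path below is hypothetical.
def _example_read_file(repos):
  if not check_root_access(repos):
    return None
  path_parts = ['subdir1', 'subdir2', 'filename']
  if not check_path_access(repos, path_parts, FILE):
    return None
  # rev=None asks the backend for its default (head) revision.
  fp, rev = repos.openfile(path_parts, None, {})
  try:
    return fp.read()
  finally:
    fp.close()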


@@ -1,67 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
import os
import os.path
import time
def cvs_strptime(timestr):
return time.strptime(timestr, '%Y/%m/%d %H:%M:%S')[:-1] + (0,)
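# Editor's illustration: cvs_strptime() yields a 9-tuple suitable for
# calendar.timegm(), with tm_isdst forced to 0 by the slice above.
def _example_cvs_strptime():
  # 27 Dec 2001 was a Thursday (tm_wday 3) and day 361 of the year.
  assert cvs_strptime('2001/12/27 05:18:35') == \
         (2001, 12, 27, 5, 18, 35, 3, 361, 0)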
def canonicalize_rootpath(rootpath):
assert os.path.isabs(rootpath)
return os.path.normpath(rootpath)
def _is_cvsroot(path):
return os.path.exists(os.path.join(path, "CVSROOT", "config"))
def expand_root_parent(parent_path):
# Each subdirectory of PARENT_PATH that contains a child
# "CVSROOT/config" is added the set of returned roots. Or, if the
# PARENT_PATH itself contains a child "CVSROOT/config", then all its
# subdirectories are returned as roots.
assert os.path.isabs(parent_path)
roots = {}
subpaths = os.listdir(parent_path)
for rootname in subpaths:
rootpath = os.path.join(parent_path, rootname)
if _is_cvsroot(parent_path) or _is_cvsroot(rootpath):
roots[rootname] = canonicalize_rootpath(rootpath)
return roots
def find_root_in_parent(parent_path, rootname):
"""Search PARENT_PATH for a root named ROOTNAME, returning the
canonicalized ROOTPATH of the root if found; return None if no such
root is found."""
# Is PARENT_PATH itself a CVS repository? If so, we allow ROOTNAME
# to be any subdir within it. Otherwise, we expect
# PARENT_PATH/ROOTNAME to be a CVS repository.
assert os.path.isabs(parent_path)
rootpath = os.path.join(parent_path, rootname)
if (_is_cvsroot(parent_path) and os.path.exists(rootpath)) \
or _is_cvsroot(rootpath):
return canonicalize_rootpath(rootpath)
return None
def CVSRepository(name, rootpath, authorizer, utilities, use_rcsparse, charset_guesser = None):
rootpath = canonicalize_rootpath(rootpath)
if use_rcsparse:
import ccvs
return ccvs.CCVSRepository(name, rootpath, authorizer, utilities, charset_guesser)
else:
import bincvs
return bincvs.BinCVSRepository(name, rootpath, authorizer, utilities, charset_guesser)
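# Editor's sketch (not part of the original module): locating a root under
# a parent directory and constructing a repository object. The paths are
# hypothetical, and 'utilities' stands in for the utilities/configuration
# object the caller normally provides.
def _example_open_cvs_root(parent_path, rootname, utilities):
  rootpath = find_root_in_parent(parent_path, rootname)
  if rootpath is None:
    return None
  # use_rcsparse=1 selects the pure-Python rcsparse backend (ccvs) instead
  # of the rcs/cvs command-line backend (bincvs).
  return CVSRepository(rootname, rootpath, None, utilities, 1)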

File diff suppressed because it is too large


@@ -1,467 +0,0 @@
#!/usr/bin/env python
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
# Copyright (C) 2000 Curt Hagenlocher <curt@hagenlocher.org>
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# blame.py: Annotate each line of a CVS file with its author,
# revision #, date, etc.
#
# -----------------------------------------------------------------------
#
# This file is based on the cvsblame.pl portion of the Bonsai CVS tool,
# developed by Steve Lamm for Netscape Communications Corporation. More
# information about Bonsai can be found at
# http://www.mozilla.org/bonsai.html
#
# cvsblame.pl, in turn, was based on Scott Furman's cvsblame script
#
# -----------------------------------------------------------------------
import re
import time
import math
import rcsparse
import vclib
import cvsdb
class CVSParser(rcsparse.Sink):
# Precompiled regular expressions
trunk_rev = re.compile('^[0-9]+\\.[0-9]+$')
last_branch = re.compile('(.*)\\.[0-9]+')
is_branch = re.compile('^(.*)\\.0\\.([0-9]+)$')
d_command = re.compile('^d(\d+)\\s(\\d+)')
a_command = re.compile('^a(\d+)\\s(\\d+)')
SECONDS_PER_DAY = 86400
def __init__(self):
self.Reset()
def Reset(self):
self.last_revision = {}
self.prev_revision = {}
self.revision_date = {}
self.revision_author = {}
self.revision_branches = {}
self.next_delta = {}
self.prev_delta = {}
self.tag_revision = {}
self.timestamp = {}
self.revision_ctime = {}
self.revision_age = {}
self.revision_log = {}
self.revision_deltatext = {}
self.revision_map = [] # map line numbers to revisions
self.lines_added = {}
self.lines_removed = {}
# Map a tag to a numerical revision number. The tag can be a symbolic
# branch tag, a symbolic revision tag, or an ordinary numerical
# revision number.
def map_tag_to_revision(self, tag_or_revision):
try:
revision = self.tag_revision[tag_or_revision]
match = self.is_branch.match(revision)
if match:
branch = match.group(1) + '.' + match.group(2)
if self.last_revision.get(branch):
return self.last_revision[branch]
else:
return match.group(1)
else:
return revision
except:
return ''
# Construct an ordered list of ancestor revisions to the given
# revision, starting with the immediate ancestor and going back
# to the primordial revision (1.1).
#
# Note: The generated path does not traverse the tree the same way
# that the individual revision deltas do. In particular,
# the path traverses the tree "backwards" on branches.
def ancestor_revisions(self, revision):
ancestors = []
revision = self.prev_revision.get(revision)
while revision:
ancestors.append(revision)
revision = self.prev_revision.get(revision)
return ancestors
# Split deltatext specified by rev to each line.
def deltatext_split(self, rev):
lines = self.revision_deltatext[rev].split('\n')
if lines[-1] == '':
del lines[-1]
return lines
# Extract the given revision from the digested RCS file.
# (Essentially the equivalent of cvs up -rXXX)
def extract_revision(self, revision):
path = []
add_lines_remaining = 0
start_line = 0
count = 0
while revision:
path.append(revision)
revision = self.prev_delta.get(revision)
path.reverse()
path = path[1:] # Get rid of head revision
text = self.deltatext_split(self.head_revision)
# Iterate, applying deltas to previous revision
for revision in path:
adjust = 0
diffs = self.deltatext_split(revision)
self.lines_added[revision] = 0
self.lines_removed[revision] = 0
lines_added_now = 0
lines_removed_now = 0
for command in diffs:
dmatch = self.d_command.match(command)
amatch = self.a_command.match(command)
if add_lines_remaining > 0:
# Insertion lines from a prior "a" command
text.insert(start_line + adjust, command)
add_lines_remaining = add_lines_remaining - 1
adjust = adjust + 1
elif dmatch:
# "d" - Delete command
start_line = int(dmatch.group(1))
count = int(dmatch.group(2))
begin = start_line + adjust - 1
del text[begin:begin + count]
adjust = adjust - count
lines_removed_now = lines_removed_now + count
elif amatch:
# "a" - Add command
start_line = int(amatch.group(1))
count = int(amatch.group(2))
add_lines_remaining = count
lines_added_now = lines_added_now + count
else:
raise RuntimeError, 'Error parsing diff commands'
self.lines_added[revision] = self.lines_added[revision] + lines_added_now
self.lines_removed[revision] = self.lines_removed[revision] + lines_removed_now
return text
def set_head_revision(self, revision):
self.head_revision = revision
def set_principal_branch(self, branch_name):
self.principal_branch = branch_name
def define_tag(self, name, revision):
# Create an associative array that maps from tag name to
# revision number and vice-versa.
self.tag_revision[name] = revision
def set_comment(self, comment):
self.file_description = comment
def set_description(self, description):
self.rcs_file_description = description
# Construct dicts that represent the topology of the RCS tree
# and other arrays that contain info about individual revisions.
#
# The following dicts are created, keyed by revision number:
# self.revision_date -- e.g. "96.02.23.00.21.52"
# self.timestamp -- seconds since 12:00 AM, Jan 1, 1970 GMT
# self.revision_author -- e.g. "tom"
# self.revision_branches -- descendant branch revisions, separated by spaces,
# e.g. "1.21.4.1 1.21.2.6.1"
# self.prev_revision -- revision number of previous *ancestor* in RCS tree.
# Traversal of this array occurs in the direction
# of the primordial (1.1) revision.
# self.prev_delta -- revision number of previous revision which forms
# the basis for the edit commands in this revision.
# This causes the tree to be traversed towards the
# trunk when on a branch, and towards the latest trunk
# revision when on the trunk.
# self.next_delta -- revision number of next "delta". Inverts prev_delta.
#
# Also creates self.last_revision, keyed by a branch revision number, which
# indicates the latest revision on a given branch,
# e.g. self.last_revision["1.2.8"] == "1.2.8.5"
def define_revision(self, revision, timestamp, author, state,
branches, next):
self.tag_revision[revision] = revision
branch = self.last_branch.match(revision).group(1)
self.last_revision[branch] = revision
#self.revision_date[revision] = date
self.timestamp[revision] = timestamp
# Pretty print the date string
ltime = time.localtime(self.timestamp[revision])
formatted_date = time.strftime("%d %b %Y %H:%M", ltime)
self.revision_ctime[revision] = formatted_date
# Save age
self.revision_age[revision] = ((time.time() - self.timestamp[revision])
/ self.SECONDS_PER_DAY)
# save author
self.revision_author[revision] = author
# ignore the state
# process the branch information
branch_text = ''
for branch in branches:
self.prev_revision[branch] = revision
self.next_delta[revision] = branch
self.prev_delta[branch] = revision
branch_text = branch_text + branch + ' '
self.revision_branches[revision] = branch_text
# process the "next revision" information
if next:
self.next_delta[revision] = next
self.prev_delta[next] = revision
is_trunk_revision = self.trunk_rev.match(revision) is not None
if is_trunk_revision:
self.prev_revision[revision] = next
else:
self.prev_revision[next] = revision
# Construct associative arrays containing info about individual revisions.
#
# The following associative arrays are created, keyed by revision number:
# revision_log -- log message
# revision_deltatext -- Either the complete text of the revision,
# in the case of the head revision, or the
# encoded delta between this revision and another.
# The delta is either with respect to the successor
# revision if this revision is on the trunk or
# relative to its immediate predecessor if this
# revision is on a branch.
def set_revision_info(self, revision, log, text):
self.revision_log[revision] = log
self.revision_deltatext[revision] = text
def parse_cvs_file(self, rcs_pathname, opt_rev = None, opt_m_timestamp = None):
# Args in: opt_rev - requested revision
# opt_m_timestamp - skip the file if it was not modified since this time
# Args out: revision_map
# timestamp
# revision_deltatext
# CheckHidden(rcs_pathname)
try:
rcsfile = open(rcs_pathname, 'rb')
except:
raise RuntimeError, ('error: %s appeared to be under CVS control, ' +
'but the RCS file is inaccessible.') % rcs_pathname
rcsparse.parse(rcsfile, self)
rcsfile.close()
if opt_rev in [None, '', 'HEAD']:
# No specific revision requested (or HEAD) -- use the topmost revision in the tree
revision = self.head_revision
else:
# Symbolic tag or specific revision number specified.
revision = self.map_tag_to_revision(opt_rev)
if revision == '':
raise RuntimeError, 'error: -r: No such revision: ' + opt_rev
# The primordial revision is not always 1.1! Go find it.
primordial = revision
while self.prev_revision.get(primordial):
primordial = self.prev_revision[primordial]
# Don't display the file at all if the -m option is specified and no
# changes have been made to the file since the given timestamp.
if opt_m_timestamp and self.timestamp[revision] < opt_m_timestamp:
return ''
# Figure out how many lines were in the primordial, i.e. version 1.1,
# check-in by moving backward in time from the head revision to the
# first revision.
line_count = 0
if self.revision_deltatext.get(self.head_revision):
tmp_array = self.deltatext_split(self.head_revision)
line_count = len(tmp_array)
skip = 0
rev = self.prev_revision.get(self.head_revision)
while rev:
diffs = self.deltatext_split(rev)
for command in diffs:
dmatch = self.d_command.match(command)
amatch = self.a_command.match(command)
if skip > 0:
# Skip insertion lines from a prior "a" command
skip = skip - 1
elif dmatch:
# "d" - Delete command
start_line = int(dmatch.group(1))
count = int(dmatch.group(2))
line_count = line_count - count
elif amatch:
# "a" - Add command
start_line = int(amatch.group(1))
count = int(amatch.group(2))
skip = count
line_count = line_count + count
else:
raise RuntimeError, 'error: illegal RCS file'
rev = self.prev_revision.get(rev)
# Now, play the delta edit commands *backwards* from the primordial
# revision forward, but rather than applying the deltas to the text of
# each revision, apply the changes to an array of revision numbers.
# This creates a "revision map" -- an array where each element
# represents a line of text in the given revision but contains only
# the revision number in which the line was introduced rather than
# the line text itself.
#
# Note: These are backward deltas for revisions on the trunk and
# forward deltas for branch revisions.
# Create initial revision map for primordial version.
self.revision_map = [primordial] * line_count
ancestors = [revision, ] + self.ancestor_revisions(revision)
ancestors = ancestors[:-1] # Remove "1.1"
last_revision = primordial
ancestors.reverse()
for revision in ancestors:
is_trunk_revision = self.trunk_rev.match(revision) is not None
if is_trunk_revision:
diffs = self.deltatext_split(last_revision)
# Revisions on the trunk specify deltas that transform a
# revision into an earlier revision, so invert the translation
# of the 'diff' commands.
for command in diffs:
if skip > 0:
skip = skip - 1
else:
dmatch = self.d_command.match(command)
amatch = self.a_command.match(command)
if dmatch:
start_line = int(dmatch.group(1))
count = int(dmatch.group(2))
temp = []
while count > 0:
temp.append(revision)
count = count - 1
self.revision_map = (self.revision_map[:start_line - 1] +
temp + self.revision_map[start_line - 1:])
elif amatch:
start_line = int(amatch.group(1))
count = int(amatch.group(2))
del self.revision_map[start_line:start_line + count]
skip = count
else:
raise RuntimeError, 'Error parsing diff commands'
else:
# Revisions on a branch are arranged backwards from those on
# the trunk. They specify deltas that transform a revision
# into a later revision.
adjust = 0
diffs = self.deltatext_split(revision)
for command in diffs:
if skip > 0:
skip = skip - 1
else:
dmatch = self.d_command.match(command)
amatch = self.a_command.match(command)
if dmatch:
start_line = int(dmatch.group(1))
count = int(dmatch.group(2))
adj_begin = start_line + adjust - 1
adj_end = start_line + adjust - 1 + count
del self.revision_map[adj_begin:adj_end]
adjust = adjust - count
elif amatch:
start_line = int(amatch.group(1))
count = int(amatch.group(2))
skip = count
temp = []
while count > 0:
temp.append(revision)
count = count - 1
self.revision_map = (self.revision_map[:start_line + adjust] +
temp + self.revision_map[start_line + adjust:])
adjust = adjust + skip
else:
raise RuntimeError, 'Error parsing diff commands'
last_revision = revision
return revision
class BlameSource:
def __init__(self, rcs_file, opt_rev=None, charset_guesser=None, include_text=False):
# Parse the CVS file
parser = CVSParser()
revision = parser.parse_cvs_file(rcs_file, opt_rev)
count = len(parser.revision_map)
lines = parser.extract_revision(revision)
if len(lines) != count:
raise RuntimeError, 'Internal consistency error'
# set up some state variables
self.revision = revision
self.lines = lines
self.num_lines = count
self.parser = parser
self.guesser = charset_guesser
self.include_text = include_text
# keep track of where we are during an iteration
self.idx = -1
self.last = None
def __getitem__(self, idx):
if idx == self.idx:
return self.last
if idx >= self.num_lines:
raise IndexError("No more annotations")
if idx != self.idx + 1:
raise BlameSequencingError()
# Get the line and metadata for it.
rev = self.parser.revision_map[idx]
prev_rev = self.parser.prev_revision.get(rev)
line_number = idx + 1
author = self.parser.revision_author[rev]
if not self.include_text:
thisline = None
elif self.guesser:
thisline = self.guesser.utf8(self.lines[idx])
else:
thisline = self.lines[idx]
### TODO: Put a real date in here.
item = vclib.Annotation(thisline, line_number, rev, prev_rev, author, None)
self.last = item
self.idx = idx
return item
class BlameSequencingError(Exception):
pass
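# Editor's usage sketch (the RCS path below is hypothetical). BlameSource
# supports sequential indexing from 0 and raises IndexError at the end, so
# a plain for-loop over it yields vclib.Annotation objects in line order.
def _example_blame(rcs_path='/var/cvs/myrepo/module/file.c,v'):
  source = BlameSource(rcs_path, opt_rev=None, include_text=True)
  lines = []
  for ann in source:
    lines.append((ann.line_number, ann.rev, ann.author))
  return source.revision, lines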


@@ -1,417 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
import os
import re
import cStringIO
import tempfile
import vclib
import rcsparse
import blame
import cvsdb
### The functionality shared with bincvs should probably be moved to a
### separate module
from bincvs import BaseCVSRepository, Revision, Tag, _file_log, _log_path, _logsort_date_cmp, _logsort_rev_cmp, _path_join
class CCVSRepository(BaseCVSRepository):
def dirlogs(self, path_parts, rev, entries, options):
"""see vclib.Repository.dirlogs docstring
rev can be a tag name or None. If set, only information from revisions
matching the tag will be retrieved
Option values recognized by this implementation:
cvs_subdirs
boolean. true to fetch logs of the most recently modified file in each
subdirectory
Option values returned by this implementation:
cvs_tags, cvs_branches
lists of tag and branch names encountered in the directory
"""
if self.itemtype(path_parts, rev) != vclib.DIR: # does auth-check
raise vclib.Error("Path '%s' is not a directory."
% (_path_join(path_parts)))
entries_to_fetch = []
for entry in entries:
if vclib.check_path_access(self, path_parts + [entry.name], None, rev):
entries_to_fetch.append(entry)
subdirs = options.get('cvs_subdirs', 0)
dirpath = self._getpath(path_parts)
alltags = { # all the tags seen in the files of this dir
'MAIN' : '',
'HEAD' : '1.1'
}
for entry in entries_to_fetch:
entry.rev = entry.date = entry.author = None
entry.dead = entry.absent = entry.log = entry.lockinfo = None
path = _log_path(entry, dirpath, subdirs)
if path:
entry.path = path
try:
rcsparse.parse(open(path, 'rb'), InfoSink(entry, rev, alltags))
if self.guesser:
entry.log = self.guesser.utf8(entry.log)
except IOError, e:
entry.errors.append("rcsparse error: %s" % e)
except RuntimeError, e:
entry.errors.append("rcsparse error: %s" % e)
except rcsparse.RCSStopParser:
pass
branches = options['cvs_branches'] = []
tags = options['cvs_tags'] = []
for name, rev in alltags.items():
if Tag(None, rev).is_branch:
branches.append(name)
else:
tags.append(name)
def itemlog(self, path_parts, rev, sortby, first, limit, options):
"""see vclib.Repository.itemlog docstring
rev parameter can be a revision number, a branch number, a tag name,
or None. If None, will return information about all revisions, otherwise,
will only return information about the specified revision or branch.
Option values returned by this implementation:
cvs_tags
dictionary of Tag objects for all tags encountered
"""
if self.itemtype(path_parts, rev) != vclib.FILE: # does auth-check
raise vclib.Error("Path '%s' is not a file." % (_path_join(path_parts)))
path = self.rcsfile(path_parts, 1)
sink = TreeSink()
rcsparse.parse(open(path, 'rb'), sink)
filtered_revs = _file_log(sink.revs.values(), sink.tags, sink.lockinfo,
sink.default_branch, rev)
for rev in filtered_revs:
if rev.prev and len(rev.number) == 2:
rev.changed = rev.prev.next_changed
options['cvs_tags'] = sink.tags
if sortby == vclib.SORTBY_DATE:
filtered_revs.sort(_logsort_date_cmp)
elif sortby == vclib.SORTBY_REV:
filtered_revs.sort(_logsort_rev_cmp)
if len(filtered_revs) < first:
return []
if limit:
return filtered_revs[first:first+limit]
return filtered_revs
def rawdiff(self, path_parts1, rev1, path_parts2, rev2, type, options={}):
if path_parts1 and self.itemtype(path_parts1, rev1) != vclib.FILE: # does auth-check
raise vclib.Error("Path '%s' is not a file." % (_path_join(path_parts1)))
if path_parts2 and self.itemtype(path_parts2, rev2) != vclib.FILE: # does auth-check
raise vclib.Error("Path '%s' is not a file." % (_path_join(path_parts2)))
if not path_parts1 and not path_parts2:
raise vclib.Error("Nothing to diff.")
if path_parts1:
temp1 = tempfile.mktemp()
open(temp1, 'wb').write(self.openfile(path_parts1, rev1, {})[0].getvalue())
r1 = self.itemlog(path_parts1, rev1, vclib.SORTBY_DEFAULT, 0, 0, {})[-1]
info1 = (self.rcsfile(path_parts1, root=1, v=0), r1.date, r1.string)
else:
temp1 = '/dev/null'
info1 = ('/dev/null', '', '')
if path_parts2:
temp2 = tempfile.mktemp()
open(temp2, 'wb').write(self.openfile(path_parts2, rev2, {})[0].getvalue())
r2 = self.itemlog(path_parts2, rev2, vclib.SORTBY_DEFAULT, 0, 0, {})[-1]
info2 = (self.rcsfile(path_parts2, root=1, v=0), r2.date, r2.string)
else:
temp2 = '/dev/null'
info2 = ('/dev/null', '', '')
diff_args = vclib._diff_args(type, options)
return vclib._diff_fp(temp1, temp2, info1, info2,
self.utilities.diff or 'diff', diff_args)
def annotate(self, path_parts, rev=None, include_text=False):
if self.itemtype(path_parts, rev) != vclib.FILE: # does auth-check
raise vclib.Error("Path '%s' is not a file." % (_path_join(path_parts)))
source = blame.BlameSource(self.rcsfile(path_parts, 1), rev, self.guesser, include_text)
return source, source.revision
def revinfo(self, rev):
raise vclib.UnsupportedFeature
def openfile(self, path_parts, rev, options):
if self.itemtype(path_parts, rev) != vclib.FILE: # does auth-check
raise vclib.Error("Path '%s' is not a file." % (_path_join(path_parts)))
path = self.rcsfile(path_parts, 1)
sink = COSink(rev)
rcsparse.parse(open(path, 'rb'), sink)
revision = sink.last and sink.last.string
return cStringIO.StringIO('\n'.join(sink.sstext.text)), revision
class MatchingSink(rcsparse.Sink):
"""Superclass for sinks that search for revisions based on tag or number"""
def __init__(self, find):
"""Initialize with tag name or revision number string to match against"""
if not find or find == 'MAIN' or find == 'HEAD':
self.find = None
else:
self.find = find
self.find_tag = None
def set_principal_branch(self, branch_number):
if self.find is None:
self.find_tag = Tag(None, branch_number)
def define_tag(self, name, revision):
if name == self.find:
self.find_tag = Tag(None, revision)
def admin_completed(self):
if self.find_tag is None:
if self.find is None:
self.find_tag = Tag(None, '')
else:
try:
self.find_tag = Tag(None, self.find)
except ValueError:
pass
class InfoSink(MatchingSink):
def __init__(self, entry, tag, alltags):
MatchingSink.__init__(self, tag)
self.entry = entry
self.alltags = alltags
self.matching_rev = None
self.perfect_match = 0
self.lockinfo = { }
self.saw_revision = False
def define_tag(self, name, revision):
MatchingSink.define_tag(self, name, revision)
self.alltags[name] = revision
def admin_completed(self):
MatchingSink.admin_completed(self)
if self.find_tag is None:
# tag we're looking for doesn't exist
if self.entry.kind == vclib.FILE:
self.entry.absent = 1
raise rcsparse.RCSStopParser
def parse_completed(self):
if not self.saw_revision:
#self.entry.errors.append("No revisions exist on %s" % (view_tag or "MAIN"))
self.entry.absent = 1
def set_locker(self, rev, locker):
self.lockinfo[rev] = locker
def define_revision(self, revision, date, author, state, branches, next):
self.saw_revision = True
if self.perfect_match:
return
tag = self.find_tag
rev = Revision(revision, date, author, state == "dead")
rev.lockinfo = self.lockinfo.get(revision)
# perfect match if revision number matches tag number or if
# revision is on trunk and tag points to trunk. imperfect match
# if tag refers to a branch and either a) this revision is the
# highest revision so far found on that branch, or b) this
# revision is the branchpoint.
perfect = ((rev.number == tag.number) or
(not tag.number and len(rev.number) == 2))
if perfect or (tag.is_branch and \
((tag.number == rev.number[:-1] and
(not self.matching_rev or
rev.number > self.matching_rev.number)) or
(rev.number == tag.number[:-1]))):
self.matching_rev = rev
self.perfect_match = perfect
def set_revision_info(self, revision, log, text):
if self.matching_rev:
if revision == self.matching_rev.string:
self.entry.rev = self.matching_rev.string
self.entry.date = self.matching_rev.date
self.entry.author = self.matching_rev.author
self.entry.dead = self.matching_rev.dead
self.entry.lockinfo = self.matching_rev.lockinfo
self.entry.absent = 0
self.entry.log = log
raise rcsparse.RCSStopParser
else:
raise rcsparse.RCSStopParser
class TreeSink(rcsparse.Sink):
d_command = re.compile('^d(\d+)\\s(\\d+)')
a_command = re.compile('^a(\d+)\\s(\\d+)')
def __init__(self):
self.revs = { }
self.tags = { }
self.head = None
self.default_branch = None
self.lockinfo = { }
def set_head_revision(self, revision):
self.head = revision
def set_principal_branch(self, branch_number):
self.default_branch = branch_number
def set_locker(self, rev, locker):
self.lockinfo[rev] = locker
def define_tag(self, name, revision):
# check !tags.has_key(tag_name)
self.tags[name] = revision
def define_revision(self, revision, date, author, state, branches, next):
# check !revs.has_key(revision)
self.revs[revision] = Revision(revision, date, author, state == "dead")
def set_revision_info(self, revision, log, text):
# check revs.has_key(revision)
rev = self.revs[revision]
rev.log = log
changed = None
added = 0
deled = 0
if self.head != revision:
changed = 1
lines = text.split('\n')
idx = 0
while idx < len(lines):
command = lines[idx]
dmatch = self.d_command.match(command)
idx = idx + 1
if dmatch:
deled = deled + int(dmatch.group(2))
else:
amatch = self.a_command.match(command)
if amatch:
count = int(amatch.group(2))
added = added + count
idx = idx + count
elif command:
raise "error while parsing deltatext: %s" % command
if len(rev.number) == 2:
rev.next_changed = changed and "+%i -%i" % (deled, added)
else:
rev.changed = changed and "+%i -%i" % (added, deled)
class StreamText:
d_command = re.compile('^d(\d+)\\s(\\d+)')
a_command = re.compile('^a(\d+)\\s(\\d+)')
def __init__(self, text):
self.text = text.split('\n')
def command(self, cmd):
adjust = 0
add_lines_remaining = 0
diffs = cmd.split('\n')
if diffs[-1] == "":
del diffs[-1]
if len(diffs) == 0:
return
if diffs[0] == "":
del diffs[0]
for command in diffs:
if add_lines_remaining > 0:
# Insertion lines from a prior "a" command
self.text.insert(start_line + adjust, command)
add_lines_remaining = add_lines_remaining - 1
adjust = adjust + 1
continue
dmatch = self.d_command.match(command)
amatch = self.a_command.match(command)
if dmatch:
# "d" - Delete command
start_line = int(dmatch.group(1))
count = int(dmatch.group(2))
begin = start_line + adjust - 1
del self.text[begin:begin + count]
adjust = adjust - count
elif amatch:
# "a" - Add command
start_line = int(amatch.group(1))
count = int(amatch.group(2))
add_lines_remaining = count
else:
raise RuntimeError, 'Error parsing diff commands'
def secondnextdot(s, start):
# find the position of the second dot after the start index.
return s.find('.', s.find('.', start) + 1)
class COSink(MatchingSink):
def __init__(self, rev):
MatchingSink.__init__(self, rev)
def set_head_revision(self, revision):
self.head = Revision(revision)
self.last = None
self.sstext = None
def admin_completed(self):
MatchingSink.admin_completed(self)
if self.find_tag is None:
raise vclib.InvalidRevision(self.find)
def set_revision_info(self, revision, log, text):
tag = self.find_tag
rev = Revision(revision)
if rev.number == tag.number:
self.log = log
depth = len(rev.number)
if rev.number == self.head.number:
assert self.sstext is None
self.sstext = StreamText(text)
elif (depth == 2 and tag.number and rev.number >= tag.number[:depth]):
assert len(self.last.number) == 2
assert rev.number < self.last.number
self.sstext.command(text)
elif (depth > 2 and rev.number[:depth-1] == tag.number[:depth-1] and
(rev.number <= tag.number or len(tag.number) == depth-1)):
assert len(rev.number) - len(self.last.number) in (0, 2)
assert rev.number > self.last.number
self.sstext.command(text)
else:
rev = None
if rev:
#print "tag =", tag.number, "rev =", rev.number, "<br>"
self.last = rev


@@ -1,44 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
"""This package provides parsing tools for RCS files.
To use this package, first create a subclass of Sink. This should
declare all the callback methods you care about. Create an instance
of your class, and open() the RCS file you want to read. Then call
parse() to parse the file.
"""
# Make the "Sink" class and the various exception classes visible in this
# scope. That way, applications never need to import any of the
# sub-packages.
from common import *
try:
from tparse import parse
except ImportError:
try:
from texttools import Parser
except ImportError:
from default import Parser
def parse(file, sink):
"""Parse an RCS file.
Parameters: FILE is the file object to parse. (I.e. an object of the
built-in Python type "file", usually created using Python's built-in
"open()" function). It should be opened in binary mode.
SINK is an instance of (some subclass of) Sink. Its methods will be
called as the file is parsed; see the definition of Sink for the
details.
"""
return Parser().parse(file, sink)
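# Editor's sketch of the usage pattern described in the docstring above;
# the RCS filename is hypothetical, and only the callbacks of interest
# need to be overridden on the Sink subclass.
class _ExampleSink(Sink):
  def __init__(self):
    self.head = None
    self.tags = { }
  def set_head_revision(self, revision):
    self.head = revision
  def define_tag(self, name, revision):
    self.tags[name] = revision

def _example_parse(rcs_path='/var/cvs/myrepo/module/file.c,v'):
  sink = _ExampleSink()
  fp = open(rcs_path, 'rb')
  try:
    parse(fp, sink)
    return sink.head, sink.tags
  finally:
    fp.close()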


@@ -1,485 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
"""common.py: common classes and functions for the RCS parsing tools."""
import calendar
import string
class Sink:
"""Interface to be implemented by clients. The RCS parser calls this as
it parses the RCS file.
All these methods have stub implementations that do nothing, so you only
have to override the callbacks that you care about.
"""
def set_head_revision(self, revision):
"""Reports the head revision for this RCS file.
This is the value of the 'head' header in the admin section of the RCS
file. This function can only be called before admin_completed().
Parameter: REVISION is a string containing a revision number. This is
an actual revision number, not a branch number.
"""
pass
def set_principal_branch(self, branch_name):
"""Reports the principal branch for this RCS file. This is only called
if the principal branch is not trunk.
This is the value of the 'branch' header in the admin section of the RCS
file. This function can only be called before admin_completed().
Parameter: BRANCH_NAME is a string containing a branch number. If this
function is called, the parameter is typically "1.1.1", indicating the
vendor branch.
"""
pass
def set_access(self, accessors):
"""Reports the access control list for this RCS file. This function is
only called if the ACL is set. If this function is not called then
there is no ACL and all users are allowed access.
This is the value of the 'access' header in the admin section of the RCS
file. This function can only be called before admin_completed().
Parameter: ACCESSORS is a list of strings. Each string is a username.
The user is allowed access if and only if their username is in the list,
OR the user owns the RCS file on disk, OR the user is root.
Note that CVS typically doesn't use this field.
"""
pass
def define_tag(self, name, revision):
"""Reports a tag or branch definition. This function will be called
once for each tag or branch.
This is taken from the 'symbols' header in the admin section of the RCS
file. This function can only be called before admin_completed().
Parameters: NAME is a string containing the tag or branch name.
REVISION is a string containing a revision number. This may be
an actual revision number (for a tag) or a branch number.
The revision number consists of a number of decimal components separated
by dots. There are three common forms. If there are an odd number of
components, it's a branch. Otherwise, if the next-to-last component is
zero, it's a branch (and the next-to-last component is an artifact of
CVS and should not be shown to the user). Otherwise, it's a tag.
This function is called in the order that the tags appear in the RCS
file header. For CVS, this appears to be in reverse chronological
order of tag/branch creation.
"""
pass
def set_locker(self, revision, locker):
"""Reports a lock on this RCS file. This function will be called once
for each lock.
This is taken from the 'locks' header in the admin section of the RCS
file. This function can only be called before admin_completed().
Parameters: REVISION is a string containing a revision number. This is
an actual revision number, not a branch number.
LOCKER is a string containing a username.
"""
pass
def set_locking(self, mode):
"""Signals strict locking mode. This function will be called if and
only if the RCS file is in strict locking mode.
This is taken from the 'strict' header in the admin section of the RCS
file. This function can only be called before admin_completed().
Parameters: MODE is always the string 'strict'.
"""
pass
def set_comment(self, comment):
"""Reports the comment for this RCS file.
This is the value of the 'comment' header in the admin section of the
RCS file. This function can only be called before admin_completed().
Parameter: COMMENT is a string containing the comment. This may be
multi-line.
This field does not seem to be used by CVS.
"""
pass
def set_expansion(self, mode):
"""Reports the keyword expansion mode for this RCS file.
This is the value of the 'expand' header in the admin section of the
RCS file. This function can only be called before admin_completed().
Parameter: MODE is a string containing the keyword expansion mode.
Possible values include 'o' and 'b', amongst others.
"""
pass
def admin_completed(self):
"""Reports that the initial RCS header has been parsed. This function is
called exactly once.
"""
pass
def define_revision(self, revision, timestamp, author, state,
branches, next):
"""Reports metadata about a single revision.
This function is called for each revision. It is called later than
admin_completed() and earlier than tree_completed().
Parameters: REVISION is a revision number, as a string. This is an
actual revision number, not a branch number.
TIMESTAMP is the date and time that the revision was created, as an
integer number of seconds since the epoch. (I.e. "UNIX time" format).
AUTHOR is the author name, as a string.
STATE is the state of the revision, as a string. Common values are
"Exp" and "dead".
BRANCHES is a list of strings, with each string being an actual
revision number (not a branch number). For each branch which is based
on this revision and has commits, the revision number of the first
branch commit is listed here.
NEXT is either None or a string representing an actual revision number
(not a branch number).
When on trunk, NEXT points to what humans might consider to be the
'previous' revision number. For example, 1.3's NEXT is 1.2.
However, on a branch, NEXT really does point to what humans would
consider to be the 'next' revision number. For example, 1.1.2.1's
NEXT would be 1.1.2.2.
In other words, NEXT always means "where to find the next deltatext
that you need this revision to retrieve".
"""
pass
def tree_completed(self):
"""Reports that the RCS revision tree has been parsed. This function is
called exactly once. This function will be called later than
admin_completed().
"""
pass
def set_description(self, description):
"""Reports the description from the RCS file. This is set using the
"-m" flag to "cvs add". However, many CVS users don't use that option,
so this is often empty.
This function is called once, after tree_completed().
Parameter: DESCRIPTION is a string containing the description. This may
be multi-line.
"""
pass
def set_revision_info(self, revision, log, text):
"""Reports the log message and contents of a CVS revision.
This function is called for each revision. It is called later than
set_description().
Parameters: REVISION is a string containing the actual revision number.
LOG is a string containing the log message. This may be multi-line.
TEXT is the contents of the file in this revision, either as full-text or
as a diff. This is usually multi-line, and often quite large and/or
binary.
"""
pass
def parse_completed(self):
"""Reports that parsing an RCS file is complete.
This function is called once. After it is called, no more calls will be
made via this interface.
"""
pass
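# Editor's illustrative helper (not part of the original module): classify
# the REVISION string handed to define_tag() using the rules spelled out in
# its docstring above.
def _classify_symbol(revision):
  parts = revision.split('.')
  if len(parts) % 2 == 1:
    return 'branch'
  if len(parts) >= 2 and parts[-2] == '0':
    # CVS "magic branch" form, e.g. "1.1.0.2" is shown to users as 1.1.2.
    return 'branch'
  return 'tag'
# _classify_symbol('1.2.4')   == 'branch'
# _classify_symbol('1.1.0.2') == 'branch'
# _classify_symbol('1.5')     == 'tag'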
# --------------------------------------------------------------------------
#
# EXCEPTIONS USED BY RCSPARSE
#
class RCSParseError(Exception):
pass
class RCSIllegalCharacter(RCSParseError):
pass
class RCSExpected(RCSParseError):
def __init__(self, got, wanted):
RCSParseError.__init__(
self,
'Unexpected parsing error in RCS file.\n'
'Expected token: %s, but saw: %s'
% (wanted, got)
)
class RCSStopParser(Exception):
pass
# --------------------------------------------------------------------------
#
# STANDARD TOKEN STREAM-BASED PARSER
#
class _Parser:
stream_class = None # subclasses need to define this
def _read_until_semicolon(self):
"""Read all tokens up to and including the next semicolon token.
Return the tokens (not including the semicolon) as a list."""
tokens = []
while 1:
token = self.ts.get()
if token == ';':
break
tokens.append(token)
return tokens
def _parse_admin_head(self, token):
rev = self.ts.get()
if rev == ';':
# The head revision is not specified. Just drop the semicolon
# on the floor.
pass
else:
self.sink.set_head_revision(rev)
self.ts.match(';')
def _parse_admin_branch(self, token):
branch = self.ts.get()
if branch != ';':
self.sink.set_principal_branch(branch)
self.ts.match(';')
def _parse_admin_access(self, token):
accessors = self._read_until_semicolon()
if accessors:
self.sink.set_access(accessors)
def _parse_admin_symbols(self, token):
while 1:
tag_name = self.ts.get()
if tag_name == ';':
break
self.ts.match(':')
tag_rev = self.ts.get()
self.sink.define_tag(tag_name, tag_rev)
def _parse_admin_locks(self, token):
while 1:
locker = self.ts.get()
if locker == ';':
break
self.ts.match(':')
rev = self.ts.get()
self.sink.set_locker(rev, locker)
def _parse_admin_strict(self, token):
self.sink.set_locking("strict")
self.ts.match(';')
def _parse_admin_comment(self, token):
self.sink.set_comment(self.ts.get())
self.ts.match(';')
def _parse_admin_expand(self, token):
expand_mode = self.ts.get()
self.sink.set_expansion(expand_mode)
self.ts.match(';')
admin_token_map = {
'head' : _parse_admin_head,
'branch' : _parse_admin_branch,
'access' : _parse_admin_access,
'symbols' : _parse_admin_symbols,
'locks' : _parse_admin_locks,
'strict' : _parse_admin_strict,
'comment' : _parse_admin_comment,
'expand' : _parse_admin_expand,
'desc' : None,
}
def parse_rcs_admin(self):
while 1:
# Read initial token at beginning of line
token = self.ts.get()
try:
f = self.admin_token_map[token]
except KeyError:
# We're done once we reach the description of the RCS tree
if token[0] in string.digits:
self.ts.unget(token)
return
else:
# Chew up "newphrase"
# warn("Unexpected RCS token: $token\n")
while self.ts.get() != ';':
pass
else:
if f is None:
self.ts.unget(token)
return
else:
f(self, token)
def _parse_rcs_tree_entry(self, revision):
# Parse date
self.ts.match('date')
date = self.ts.get()
self.ts.match(';')
# Convert date into standard UNIX time format (seconds since epoch)
date_fields = string.split(date, '.')
# According to rcsfile(5): the year "contains just the last two
# digits of the year for years from 1900 through 1999, and all the
# digits of years thereafter".
if len(date_fields[0]) == 2:
date_fields[0] = '19' + date_fields[0]
date_fields = map(string.atoi, date_fields)
EPOCH = 1970
if date_fields[0] < EPOCH:
raise ValueError, 'invalid year for revision %s' % (revision,)
try:
timestamp = calendar.timegm(tuple(date_fields) + (0, 0, 0,))
except ValueError, e:
raise ValueError, 'invalid date for revision %s: %s' % (revision, e,)
# Parse author
### NOTE: authors containing whitespace are violations of the
### RCS specification. We are making an allowance here because
### CVSNT is known to produce these sorts of authors.
self.ts.match('author')
author = ' '.join(self._read_until_semicolon())
# Parse state
self.ts.match('state')
state = ''
while 1:
token = self.ts.get()
if token == ';':
break
state = state + token + ' '
state = state[:-1] # toss the trailing space
# Parse branches
self.ts.match('branches')
branches = self._read_until_semicolon()
# Parse revision of next delta in chain
self.ts.match('next')
next = self.ts.get()
if next == ';':
next = None
else:
self.ts.match(';')
# there are some files with extra tags in them. for example:
# owner 640;
# group 15;
# permissions 644;
# hardlinks @configure.in@;
# commitid mLiHw3bulRjnTDGr;
# this is "newphrase" in RCSFILE(5). we just want to skip over these.
while 1:
token = self.ts.get()
if token == 'desc' or token[0] in string.digits:
self.ts.unget(token)
break
# consume everything up to the semicolon
self._read_until_semicolon()
self.sink.define_revision(revision, timestamp, author, state, branches,
next)
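# Worked illustration of the date conversion above (added for clarity, not
# original code): the RCS test data later in this change contains
# "date 98.05.22.23.20.19", which splits into ['98','05','22','23','20','19'];
# the two-digit year gets the '19' prefix, giving (1998, 5, 22, 23, 20, 19),
# and calendar.timegm() turns that into 895879219 -- the timestamp shown for
# revision 1.1 in the expected-output file below.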
def parse_rcs_tree(self):
while 1:
revision = self.ts.get()
# End of RCS tree description ?
if revision == 'desc':
self.ts.unget(revision)
return
self._parse_rcs_tree_entry(revision)
def parse_rcs_description(self):
self.ts.match('desc')
self.sink.set_description(self.ts.get())
def parse_rcs_deltatext(self):
while 1:
revision = self.ts.get()
if revision is None:
# EOF
break
text, sym2, log, sym1 = self.ts.mget(4)
if sym1 != 'log':
print `text[:100], sym2[:100], log[:100], sym1[:100]`
raise RCSExpected(sym1, 'log')
if sym2 != 'text':
raise RCSExpected(sym2, 'text')
### need to add code to chew up "newphrase"
self.sink.set_revision_info(revision, log, text)
def parse(self, file, sink):
"""Parse an RCS file.
Parameters: FILE is the file object to parse. (I.e. an object of the
built-in Python type "file", usually created using Python's built-in
"open()" function).
SINK is an instance of (some subclass of) Sink. Its methods will be
called as the file is parsed; see the definition of Sink for the
details.
"""
self.ts = self.stream_class(file)
self.sink = sink
self.parse_rcs_admin()
# let sink know when the admin section has been completed
self.sink.admin_completed()
self.parse_rcs_tree()
# many sinks want to know when the tree has been completed so they can
# do some work to prep for the arrival of the deltatext
self.sink.tree_completed()
self.parse_rcs_description()
self.parse_rcs_deltatext()
# easiest for us to tell the sink it is done, rather than worry about
# higher level software doing it.
self.sink.parse_completed()
self.ts = self.sink = None
# --------------------------------------------------------------------------
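# Illustrative usage sketch -- an addition, not part of the original file.
# It assumes the package-level parse() helper that debug.py and
# parse_rcs_file.py import from __init__.py; the file name is a placeholder.
import common
from __init__ import parse

class TagSink(common.Sink):
  "Collect just the symbolic tags reported by define_tag()."
  def __init__(self):
    self.tags = {}
  def define_tag(self, name, revision):
    self.tags[name] = revision

sink = TagSink()
parse(open('example,v', 'rb'), sink)   # 'example,v' is a hypothetical path
print sink.tags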

View File

@@ -1,122 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
"""debug.py: various debugging tools for the rcsparse package."""
import time
from __init__ import parse
import common
class DebugSink(common.Sink):
def set_head_revision(self, revision):
print 'head:', revision
def set_principal_branch(self, branch_name):
print 'branch:', branch_name
def define_tag(self, name, revision):
print 'tag:', name, '=', revision
def set_comment(self, comment):
print 'comment:', comment
def set_description(self, description):
print 'description:', description
def define_revision(self, revision, timestamp, author, state,
branches, next):
print 'revision:', revision
print ' timestamp:', timestamp
print ' author:', author
print ' state:', state
print ' branches:', branches
print ' next:', next
def set_revision_info(self, revision, log, text):
print 'revision:', revision
print ' log:', log
print ' text:', text[:100], '...'
class DumpSink(common.Sink):
"""Dump all the parse information directly to stdout.
The output is relatively unformatted and untagged. It is intended as a
raw dump of the data in the RCS file. A copy of the output can be saved,
changes then made to the parsing engine, and the new output compared
against the saved copy.
"""
def __init__(self):
global sha
import sha
def set_head_revision(self, revision):
print revision
def set_principal_branch(self, branch_name):
print branch_name
def define_tag(self, name, revision):
print name, revision
def set_comment(self, comment):
print comment
def set_description(self, description):
print description
def define_revision(self, revision, timestamp, author, state,
branches, next):
print revision, timestamp, author, state, branches, next
def set_revision_info(self, revision, log, text):
print revision, sha.new(log).hexdigest(), sha.new(text).hexdigest()
def tree_completed(self):
print 'tree_completed'
def parse_completed(self):
print 'parse_completed'
def dump_file(fname):
parse(open(fname, 'rb'), DumpSink())
def time_file(fname):
f = open(fname, 'rb')
s = common.Sink()
t = time.time()
parse(f, s)
t = time.time() - t
print t
def _usage():
print 'This is normally a module for importing, but it has a couple'
print 'features for testing as an executable script.'
print 'USAGE: %s COMMAND filename,v' % sys.argv[0]
print ' where COMMAND is one of:'
print ' dump: filename is "dumped" to stdout'
print ' time: filename is parsed with the time written to stdout'
sys.exit(1)
if __name__ == '__main__':
import sys
if len(sys.argv) != 3:
_usage()
if sys.argv[1] == 'dump':
dump_file(sys.argv[2])
elif sys.argv[1] == 'time':
time_file(sys.argv[2])
else:
_usage()

View File

@@ -1,176 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# This file was originally based on portions of the blame.py script by
# Curt Hagenlocher.
#
# -----------------------------------------------------------------------
import string
import common
class _TokenStream:
token_term = string.whitespace + ";:"
try:
token_term = frozenset(token_term)
except NameError:
pass
# the algorithm is about the same speed for any CHUNK_SIZE chosen.
# grab a good-sized chunk, but not too large to overwhelm memory.
# note: we use a multiple of a standard block size
CHUNK_SIZE = 192 * 512 # about 100k
# CHUNK_SIZE = 5 # for debugging, make the function grind...
def __init__(self, file):
self.rcsfile = file
self.idx = 0
self.buf = self.rcsfile.read(self.CHUNK_SIZE)
if self.buf == '':
raise RuntimeError, 'EOF'
def get(self):
"Get the next token from the RCS file."
# Note: we can afford to loop within Python, examining individual
# characters. For the whitespace and tokens, the number of iterations
# is typically quite small. Thus, a simple iterative loop will beat
# out more complex solutions.
buf = self.buf
lbuf = len(buf)
idx = self.idx
while 1:
if idx == lbuf:
buf = self.rcsfile.read(self.CHUNK_SIZE)
if buf == '':
# signal EOF by returning None as the token
del self.buf # so we fail if get() is called again
return None
lbuf = len(buf)
idx = 0
if buf[idx] not in string.whitespace:
break
idx = idx + 1
if buf[idx] in ';:':
self.buf = buf
self.idx = idx + 1
return buf[idx]
if buf[idx] != '@':
end = idx + 1
token = ''
while 1:
# find token characters in the current buffer
while end < lbuf and buf[end] not in self.token_term:
end = end + 1
token = token + buf[idx:end]
if end < lbuf:
# we stopped before the end, so we have a full token
idx = end
break
# we stopped at the end of the buffer, so we may have a partial token
buf = self.rcsfile.read(self.CHUNK_SIZE)
lbuf = len(buf)
idx = end = 0
self.buf = buf
self.idx = idx
return token
# a "string" which starts with the "@" character. we'll skip it when we
# search for content.
idx = idx + 1
chunks = [ ]
while 1:
if idx == lbuf:
idx = 0
buf = self.rcsfile.read(self.CHUNK_SIZE)
if buf == '':
raise RuntimeError, 'EOF'
lbuf = len(buf)
i = string.find(buf, '@', idx)
if i == -1:
chunks.append(buf[idx:])
idx = lbuf
continue
if i == lbuf - 1:
chunks.append(buf[idx:i])
idx = 0
buf = '@' + self.rcsfile.read(self.CHUNK_SIZE)
if buf == '@':
raise RuntimeError, 'EOF'
lbuf = len(buf)
continue
if buf[i + 1] == '@':
chunks.append(buf[idx:i+1])
idx = i + 2
continue
chunks.append(buf[idx:i])
self.buf = buf
self.idx = i + 1
return string.join(chunks, '')
# _get = get
# def get(self):
#    token = self._get()
#    print 'T:', `token`
#    return token
def match(self, match):
"Try to match the next token from the input buffer."
token = self.get()
if token != match:
raise common.RCSExpected(token, match)
def unget(self, token):
"Put this token back, for the next get() to return."
# Override the class' .get method with a function which clears the
# overridden method then returns the pushed token. Since this function
# will not be looked up via the class mechanism, it should be a "normal"
# function, meaning it won't have "self" automatically inserted.
# Therefore, we need to pass both self and the token thru via defaults.
# note: we don't put this into the input buffer because it may have been
# @-unescaped already.
def give_it_back(self=self, token=token):
del self.get
return token
self.get = give_it_back
def mget(self, count):
"Return multiple tokens. 'next' is at the end."
result = [ ]
for i in range(count):
result.append(self.get())
result.reverse()
return result
class Parser(common._Parser):
stream_class = _TokenStream
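# Illustrative sketch -- an addition, not part of the original file -- showing
# how _TokenStream hands back plain tokens, ';'/':' separators, and @-quoted
# strings with a doubled '@@' collapsed to a single '@'.
from cStringIO import StringIO

ts = _TokenStream(StringIO('comment @# user@@host@;\n'))
print ts.get()   # -> 'comment'
print ts.get()   # -> '# user@host'
print ts.get()   # -> ';'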

View File

@@ -1,73 +0,0 @@
#! /usr/bin/python
# (Be in -*- python -*- mode.)
#
# ====================================================================
# Copyright (c) 2006-2007 CollabNet. All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at http://subversion.tigris.org/license-1.html.
# If newer versions of this license are posted there, you may use a
# newer version instead, at your option.
#
# This software consists of voluntary contributions made by many
# individuals. For exact contribution history, see the revision
# history and logs, available at http://cvs2svn.tigris.org/.
# ====================================================================
"""Parse an RCS file, showing the rcsparse callbacks that are called.
This program is useful to see whether an RCS file has a problem (in
the sense of not being parseable by rcsparse) and also to illuminate
the correspondence between RCS file contents and rcsparse callbacks.
The output of this program can also be considered to be a kind of
'canonical' format for RCS files, at least in so far as rcsparse
returns all relevant information in the file and provided that the
order of callbacks is always the same."""
import sys
import os
class Logger:
def __init__(self, f, name):
self.f = f
self.name = name
def __call__(self, *args):
self.f.write(
'%s(%s)\n' % (self.name, ', '.join(['%r' % arg for arg in args]),)
)
class LoggingSink:
def __init__(self, f):
self.f = f
def __getattr__(self, name):
return Logger(self.f, name)
if __name__ == '__main__':
# Since there is nontrivial logic in __init__.py, we have to import
# parse() via that file. First make sure that the directory
# containing this script is in the path:
sys.path.insert(0, os.path.dirname(sys.argv[0]))
from __init__ import parse
if sys.argv[1:]:
for path in sys.argv[1:]:
if os.path.isfile(path) and path.endswith(',v'):
parse(
open(path, 'rb'), LoggingSink(sys.stdout)
)
else:
sys.stderr.write('%r is being ignored.\n' % path)
else:
parse(sys.stdin, LoggingSink(sys.stdout))

View File

@@ -1,73 +0,0 @@
#! /usr/bin/python
# (Be in -*- python -*- mode.)
#
# ====================================================================
# Copyright (c) 2007 CollabNet. All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at http://subversion.tigris.org/license-1.html.
# If newer versions of this license are posted there, you may use a
# newer version instead, at your option.
#
# This software consists of voluntary contributions made by many
# individuals. For exact contribution history, see the revision
# history and logs, available at http://viewvc.tigris.org/.
# ====================================================================
"""Run tests of rcsparse code."""
import sys
import os
import glob
from cStringIO import StringIO
from difflib import Differ
# Since there is nontrivial logic in __init__.py, we have to import
# parse() via that file. First make sure that the directory
# containing this script is in the path:
script_dir = os.path.dirname(sys.argv[0])
sys.path.insert(0, script_dir)
from __init__ import parse
from parse_rcs_file import LoggingSink
test_dir = os.path.join(script_dir, 'test-data')
filelist = glob.glob(os.path.join(test_dir, '*,v'))
filelist.sort()
all_tests_ok = 1
for filename in filelist:
sys.stderr.write('%s: ' % (filename,))
f = StringIO()
try:
parse(open(filename, 'rb'), LoggingSink(f))
except Exception, e:
sys.stderr.write('Error parsing file: %s!\n' % (e,))
all_tests_ok = 0
else:
output = f.getvalue()
expected_output_filename = filename[:-2] + '.out'
expected_output = open(expected_output_filename, 'rb').read()
if output == expected_output:
sys.stderr.write('OK\n')
else:
sys.stderr.write('Output does not match expected output!\n')
differ = Differ()
for diffline in differ.compare(
expected_output.splitlines(1), output.splitlines(1)
):
sys.stderr.write(diffline)
all_tests_ok = 0
if all_tests_ok:
sys.exit(0)
else:
sys.exit(1)

View File

@@ -1,102 +0,0 @@
head 1.2;
access;
symbols
B_SPLIT:1.2.0.4
B_MIXED:1.2.0.2
T_MIXED:1.2
B_FROM_INITIALS_BUT_ONE:1.1.1.1.0.4
B_FROM_INITIALS:1.1.1.1.0.2
T_ALL_INITIAL_FILES_BUT_ONE:1.1.1.1
T_ALL_INITIAL_FILES:1.1.1.1
vendortag:1.1.1.1
vendorbranch:1.1.1;
locks; strict;
comment @# @;
1.2
date 2003.05.23.00.17.53; author jrandom; state Exp;
branches
1.2.2.1
1.2.4.1;
next 1.1;
1.1
date 98.05.22.23.20.19; author jrandom; state Exp;
branches
1.1.1.1;
next ;
1.1.1.1
date 98.05.22.23.20.19; author jrandom; state Exp;
branches;
next ;
1.2.2.1
date 2003.05.23.00.31.36; author jrandom; state Exp;
branches;
next ;
1.2.4.1
date 2003.06.03.03.20.31; author jrandom; state Exp;
branches;
next ;
desc
@@
1.2
log
@Second commit to proj, affecting all 7 files.
@
text
@This is the file `default' in the top level of the project.
Every directory in the `proj' project has a file named `default'.
This line was added in the second commit (affecting all 7 files).
@
1.2.4.1
log
@First change on branch B_SPLIT.
This change excludes sub3/default, because it was not part of this
commit, and sub1/subsubB/default, which is not even on the branch yet.
@
text
@a5 2
First change on branch B_SPLIT.
@
1.2.2.1
log
@Modify three files, on branch B_MIXED.
@
text
@a5 2
This line was added on branch B_MIXED only (affecting 3 files).
@
1.1
log
@Initial revision
@
text
@d4 2
@
1.1.1.1
log
@Initial import.
@
text
@@

View File

@@ -1,26 +0,0 @@
set_head_revision('1.2')
define_tag('B_SPLIT', '1.2.0.4')
define_tag('B_MIXED', '1.2.0.2')
define_tag('T_MIXED', '1.2')
define_tag('B_FROM_INITIALS_BUT_ONE', '1.1.1.1.0.4')
define_tag('B_FROM_INITIALS', '1.1.1.1.0.2')
define_tag('T_ALL_INITIAL_FILES_BUT_ONE', '1.1.1.1')
define_tag('T_ALL_INITIAL_FILES', '1.1.1.1')
define_tag('vendortag', '1.1.1.1')
define_tag('vendorbranch', '1.1.1')
set_locking('strict')
set_comment('# ')
admin_completed()
define_revision('1.2', 1053649073, 'jrandom', 'Exp', ['1.2.2.1', '1.2.4.1'], '1.1')
define_revision('1.1', 895879219, 'jrandom', 'Exp', ['1.1.1.1'], None)
define_revision('1.1.1.1', 895879219, 'jrandom', 'Exp', [], None)
define_revision('1.2.2.1', 1053649896, 'jrandom', 'Exp', [], None)
define_revision('1.2.4.1', 1054610431, 'jrandom', 'Exp', [], None)
tree_completed()
set_description('')
set_revision_info('1.2', 'Second commit to proj, affecting all 7 files.\n', "This is the file `default' in the top level of the project.\n\nEvery directory in the `proj' project has a file named `default'.\n\nThis line was added in the second commit (affecting all 7 files).\n")
set_revision_info('1.2.4.1', 'First change on branch B_SPLIT.\n\nThis change excludes sub3/default, because it was not part of this\ncommit, and sub1/subsubB/default, which is not even on the branch yet.\n', 'a5 2\n\nFirst change on branch B_SPLIT.\n')
set_revision_info('1.2.2.1', 'Modify three files, on branch B_MIXED.\n', 'a5 2\n\nThis line was added on branch B_MIXED only (affecting 3 files).\n')
set_revision_info('1.1', 'Initial revision\n', 'd4 2\n')
set_revision_info('1.1.1.1', 'Initial import.\n', '')
parse_completed()

View File

@@ -1,10 +0,0 @@
head ;
access;
symbols;
locks; strict;
comment @# @;
desc
@@

View File

@@ -1,6 +0,0 @@
set_locking('strict')
set_comment('# ')
admin_completed()
tree_completed()
set_description('')
parse_completed()

View File

@@ -1,348 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
import string
# note: this will raise an ImportError if it isn't available. the rcsparse
# package will recognize this and switch over to the default parser.
from mx import TextTools
import common
# for convenience
_tt = TextTools
_idchar_list = map(chr, range(33, 127)) + map(chr, range(160, 256))
_idchar_list.remove('$')
_idchar_list.remove(',')
#_idchar_list.remove('.') # leave as part of 'num' symbol
_idchar_list.remove(':')
_idchar_list.remove(';')
_idchar_list.remove('@')
_idchar = string.join(_idchar_list, '')
_idchar_set = _tt.set(_idchar)
_onechar_token_set = _tt.set(':;')
_not_at_set = _tt.invset('@')
_T_TOKEN = 30
_T_STRING_START = 40
_T_STRING_SPAN = 60
_T_STRING_END = 70
_E_COMPLETE = 100 # ended on a complete token
_E_TOKEN = 110 # ended mid-token
_E_STRING_SPAN = 130 # ended within a string
_E_STRING_END = 140 # ended with string-end ('@') (could be mid-@@)
_SUCCESS = +100
_EOF = 'EOF'
_CONTINUE = 'CONTINUE'
_UNUSED = 'UNUSED'
# continuation of a token over a chunk boundary
_c_token_table = (
(_T_TOKEN, _tt.AllInSet, _idchar_set),
)
class _mxTokenStream:
# the algorithm is about the same speed for any CHUNK_SIZE chosen.
# grab a good-sized chunk, but not too large to overwhelm memory.
# note: we use a multiple of a standard block size
CHUNK_SIZE = 192 * 512 # about 100k
# CHUNK_SIZE = 5 # for debugging, make the function grind...
def __init__(self, file):
self.rcsfile = file
self.tokens = [ ]
self.partial = None
self.string_end = None
def _parse_chunk(self, buf, start=0):
"Get the next token from the RCS file."
buflen = len(buf)
assert start < buflen
# construct a tag table which refers to the buffer we need to parse.
table = (
#1: ignore whitespace. with or without whitespace, move to the next rule.
(None, _tt.AllInSet, _tt.whitespace_set, +1),
#2
(_E_COMPLETE, _tt.EOF + _tt.AppendTagobj, _tt.Here, +1, _SUCCESS),
#3: accumulate token text and exit, or move to the next rule.
(_UNUSED, _tt.AllInSet + _tt.AppendMatch, _idchar_set, +2),
#4
(_E_TOKEN, _tt.EOF + _tt.AppendTagobj, _tt.Here, -3, _SUCCESS),
#5: single character tokens exit immediately, or move to the next rule
(_UNUSED, _tt.IsInSet + _tt.AppendMatch, _onechar_token_set, +2),
#6
(_E_COMPLETE, _tt.EOF + _tt.AppendTagobj, _tt.Here, -5, _SUCCESS),
#7: if this isn't an '@' symbol, then we have a syntax error (go to a
# negative index to indicate that condition). otherwise, suck it up
# and move to the next rule.
(_T_STRING_START, _tt.Is + _tt.AppendTagobj, '@'),
#8
(None, _tt.Is, '@', +4, +1),
#9
(buf, _tt.Is, '@', +1, -1),
#10
(_T_STRING_END, _tt.Skip + _tt.AppendTagobj, 0, 0, +1),
#11
(_E_STRING_END, _tt.EOF + _tt.AppendTagobj, _tt.Here, -10, _SUCCESS),
#12
(_E_STRING_SPAN, _tt.EOF + _tt.AppendTagobj, _tt.Here, +1, _SUCCESS),
#13: suck up everything that isn't an AT. go to next rule to look for EOF
(buf, _tt.AllInSet, _not_at_set, 0, +1),
#14: go back to look for double AT if we aren't at the end of the string
(_E_STRING_SPAN, _tt.EOF + _tt.AppendTagobj, _tt.Here, -6, _SUCCESS),
)
# Fast, texttools may be, but it's somewhat lacking in clarity.
# Here's an attempt to document the logic encoded in the table above:
#
# Flowchart:
# _____
# / /\
# 1 -> 2 -> 3 -> 5 -> 7 -> 8 -> 9 -> 10 -> 11
# | \/ \/ \/ /\ \/
# \ 4 6 12 14 /
# \_______/_____/ \ / /
# \ 13 /
# \__________________________________________/
#
# #1: Skip over any whitespace.
# #2: If now EOF, exit with code _E_COMPLETE.
# #3: If we have a series of characters in _idchar_set, then:
# #4: Output them as a token, and go back to #1.
# #5: If we have a character in _onechar_token_set, then:
# #6: Output it as a token, and go back to #1.
# #7: If we do not have an '@', then error.
# If we do, then log a _T_STRING_START and continue.
# #8: If we have another '@', continue on to #9. Otherwise:
# #12: If now EOF, exit with code _E_STRING_SPAN.
# #13: Record the slice up to the next '@' (or EOF).
# #14: If now EOF, exit with code _E_STRING_SPAN.
# Otherwise, go back to #8.
# #9: If we have another '@', then we've just seen an escaped
# (by doubling) '@' within an @-string. Record a slice including
# just one '@' character, and jump back to #8.
# Otherwise, we've *either* seen the terminating '@' of an @-string,
# *or* we've seen one half of an escaped @@ sequence that just
# happened to be split over a chunk boundary - in either case,
# we continue on to #10.
# #10: Log a _T_STRING_END.
# #11: If now EOF, exit with _E_STRING_END. Otherwise, go back to #1.
success, taglist, idx = _tt.tag(buf, table, start)
if not success:
### need a better way to report this error
raise common.RCSIllegalCharacter()
assert idx == buflen
# pop off the last item
last_which = taglist.pop()
i = 0
tlen = len(taglist)
while i < tlen:
if taglist[i] == _T_STRING_START:
j = i + 1
while j < tlen:
if taglist[j] == _T_STRING_END:
s = _tt.join(taglist, '', i+1, j)
del taglist[i:j]
tlen = len(taglist)
taglist[i] = s
break
j = j + 1
else:
assert last_which == _E_STRING_SPAN
s = _tt.join(taglist, '', i+1)
del taglist[i:]
self.partial = (_T_STRING_SPAN, [ s ])
break
i = i + 1
# figure out whether we have a partial last-token
if last_which == _E_TOKEN:
self.partial = (_T_TOKEN, [ taglist.pop() ])
elif last_which == _E_COMPLETE:
pass
elif last_which == _E_STRING_SPAN:
assert self.partial
else:
assert last_which == _E_STRING_END
self.partial = (_T_STRING_END, [ taglist.pop() ])
taglist.reverse()
taglist.extend(self.tokens)
self.tokens = taglist
def _set_end(self, taglist, text, l, r, subtags):
self.string_end = l
def _handle_partial(self, buf):
which, chunks = self.partial
if which == _T_TOKEN:
success, taglist, idx = _tt.tag(buf, _c_token_table)
if not success:
# The start of this buffer was not a token. So the end of the
# prior buffer was a complete token.
self.tokens.insert(0, string.join(chunks, ''))
else:
assert len(taglist) == 1 and taglist[0][0] == _T_TOKEN \
and taglist[0][1] == 0 and taglist[0][2] == idx
if idx == len(buf):
#
# The whole buffer was one huge token, so we may have a
# partial token again.
#
# Note: this modifies the list of chunks in self.partial
#
chunks.append(buf)
# consumed the whole buffer
return len(buf)
# got the rest of the token.
chunks.append(buf[:idx])
self.tokens.insert(0, string.join(chunks, ''))
# no more partial token
self.partial = None
return idx
if which == _T_STRING_END:
if buf[0] != '@':
self.tokens.insert(0, string.join(chunks, ''))
return 0
chunks.append('@')
start = 1
else:
start = 0
self.string_end = None
string_table = (
(None, _tt.Is, '@', +3, +1),
(_UNUSED, _tt.Is + _tt.AppendMatch, '@', +1, -1),
(self._set_end, _tt.Skip + _tt.CallTag, 0, 0, _SUCCESS),
(None, _tt.EOF, _tt.Here, +1, _SUCCESS),
# suck up everything that isn't an AT. move to next rule to look
# for EOF
(_UNUSED, _tt.AllInSet + _tt.AppendMatch, _not_at_set, 0, +1),
# go back to look for double AT if we aren't at the end of the string
(None, _tt.EOF, _tt.Here, -5, _SUCCESS),
)
success, unused, idx = _tt.tag(buf, string_table,
start, len(buf), chunks)
# must have matched at least one item
assert success
if self.string_end is None:
assert idx == len(buf)
self.partial = (_T_STRING_SPAN, chunks)
elif self.string_end < len(buf):
self.partial = None
self.tokens.insert(0, string.join(chunks, ''))
else:
self.partial = (_T_STRING_END, chunks)
return idx
def _parse_more(self):
buf = self.rcsfile.read(self.CHUNK_SIZE)
if not buf:
return _EOF
if self.partial:
idx = self._handle_partial(buf)
if idx is None:
return _CONTINUE
if idx < len(buf):
self._parse_chunk(buf, idx)
else:
self._parse_chunk(buf)
return _CONTINUE
def get(self):
try:
return self.tokens.pop()
except IndexError:
pass
while not self.tokens:
action = self._parse_more()
if action == _EOF:
return None
return self.tokens.pop()
# _get = get
# def get(self):
#    token = self._get()
#    print 'T:', `token`
#    return token
def match(self, match):
if self.tokens:
token = self.tokens.pop()
else:
token = self.get()
if token != match:
raise common.RCSExpected(token, match)
def unget(self, token):
self.tokens.append(token)
def mget(self, count):
"Return multiple tokens. 'next' is at the end."
while len(self.tokens) < count:
action = self._parse_more()
if action == _EOF:
### fix this
raise RuntimeError, 'EOF hit while expecting tokens'
result = self.tokens[-count:]
del self.tokens[-count:]
return result
class Parser(common._Parser):
stream_class = _mxTokenStream
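# Added note (not original code): per the comment near the top of this file,
# the rcsparse package falls back to the token-stream parser when mx.TextTools
# is not importable.  The selection in __init__.py presumably looks something
# like the sketch below; the module names are assumptions, not taken from this
# diff.
#
#   try:
#     from texttools import Parser
#   except ImportError:
#     from default import Parser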

View File

@@ -1,103 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
"Version Control lib driver for Subversion repositories"
import os
import os.path
import re
import urllib
_re_url = re.compile('^(http|https|file|svn|svn\+[^:]+)://')
def _canonicalize_path(path):
import svn.core
try:
return svn.core.svn_path_canonicalize(path)
except AttributeError: # svn_path_canonicalize() appeared in 1.4.0 bindings
# There's so much more that we *could* do here, but if we're
# here at all it's because there's a really old Subversion in
# place, and those older Subversion versions cared quite a bit
# less about the specifics of path canonicalization.
if re.search(_re_url, path):
return path.rstrip('/')
else:
return os.path.normpath(path)
def canonicalize_rootpath(rootpath):
# Try to canonicalize the rootpath using Subversion semantics.
rootpath = _canonicalize_path(rootpath)
# ViewVC's support for local repositories is more complete and more
# performant than its support for remote ones, so if we're on a
# Unix-y system and we have a file:/// URL, convert it to a local
# path instead.
if os.name == 'posix':
rootpath_lower = rootpath.lower()
if rootpath_lower in ['file://localhost',
'file://localhost/',
'file://',
'file:///'
]:
return '/'
if rootpath_lower.startswith('file://localhost/'):
rootpath = os.path.normpath(urllib.unquote(rootpath[16:]))
elif rootpath_lower.startswith('file:///'):
rootpath = os.path.normpath(urllib.unquote(rootpath[7:]))
# Ensure that we have an absolute path (or URL), and return.
if not re.search(_re_url, rootpath):
assert os.path.isabs(rootpath)
return rootpath
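# Illustrative examples (added, not original code), assuming a Unix-y host:
#
#   canonicalize_rootpath('file:///var/svn/repo')          => '/var/svn/repo'
#   canonicalize_rootpath('file://localhost/var/svn/repo') => '/var/svn/repo'
#   canonicalize_rootpath('/var/svn/repo/')                => '/var/svn/repo'
#   canonicalize_rootpath('http://svn.example.com/repos')  => returned as a URL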
def expand_root_parent(parent_path):
roots = {}
if re.search(_re_url, parent_path):
pass
else:
# Any subdirectories of PARENT_PATH which themselves have a child
# "format" are returned as roots.
assert os.path.isabs(parent_path)
subpaths = os.listdir(parent_path)
for rootname in subpaths:
rootpath = os.path.join(parent_path, rootname)
if os.path.exists(os.path.join(rootpath, "format")):
roots[rootname] = canonicalize_rootpath(rootpath)
return roots
def find_root_in_parent(parent_path, rootname):
"""Search PARENT_PATH for a root named ROOTNAME, returning the
canonicalized ROOTPATH of the root if found; return None if no such
root is found."""
if not re.search(_re_url, parent_path):
assert os.path.isabs(parent_path)
rootpath = os.path.join(parent_path, rootname)
format_path = os.path.join(rootpath, "format")
if os.path.exists(format_path):
return canonicalize_rootpath(rootpath)
return None
def SubversionRepository(name, rootpath, authorizer, utilities, config_dir):
rootpath = canonicalize_rootpath(rootpath)
if re.search(_re_url, rootpath):
import svn_ra
return svn_ra.RemoteSubversionRepository(name, rootpath, authorizer,
utilities, config_dir)
else:
import svn_repos
return svn_repos.LocalSubversionRepository(name, rootpath, authorizer,
utilities, config_dir)

View File

@@ -1,812 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
"Version Control lib driver for remotely accessible Subversion repositories."
import vclib
import sys
import os
import re
import tempfile
import time
import urllib
from svn_repos import Revision, SVNChangedPath, _datestr_to_date, \
_compare_paths, _path_parts, _cleanup_path, \
_rev2optrev, _fix_subversion_exception, \
_split_revprops, _canonicalize_path
from svn import core, delta, client, wc, ra
### Require Subversion 1.3.1 or better. (for svn_ra_get_locations support)
if (core.SVN_VER_MAJOR, core.SVN_VER_MINOR, core.SVN_VER_PATCH) < (1, 3, 1):
raise Exception, "Version requirement not met (needs 1.3.1 or better)"
### BEGIN COMPATIBILITY CODE ###
try:
SVN_INVALID_REVNUM = core.SVN_INVALID_REVNUM
except AttributeError: # The 1.4.x bindings are missing core.SVN_INVALID_REVNUM
SVN_INVALID_REVNUM = -1
def list_directory(url, peg_rev, rev, flag, ctx):
try:
dirents, locks = client.svn_client_ls3(url, peg_rev, rev, flag, ctx)
except TypeError: # 1.4.x bindings are goofed
dirents = client.svn_client_ls3(None, url, peg_rev, rev, flag, ctx)
locks = {}
return dirents, locks
def get_directory_props(ra_session, path, rev):
try:
dirents, fetched_rev, props = ra.svn_ra_get_dir(ra_session, path, rev)
except ValueError: # older bindings are goofed
props = ra.svn_ra_get_dir(ra_session, path, rev)
return props
def client_log(url, start_rev, end_rev, log_limit, include_changes,
cross_copies, cb_func, ctx):
include_changes = include_changes and 1 or 0
cross_copies = cross_copies and 1 or 0
try:
client.svn_client_log4([url], start_rev, start_rev, end_rev,
log_limit, include_changes, not cross_copies,
0, None, cb_func, ctx)
except AttributeError:
# Wrap old svn_log_message_receiver_t interface with a
# svn_log_entry_t one.
def cb_convert(paths, revision, author, date, message, pool):
class svn_log_entry_t:
pass
log_entry = svn_log_entry_t()
log_entry.changed_paths = paths
log_entry.revision = revision
log_entry.revprops = { core.SVN_PROP_REVISION_LOG : message,
core.SVN_PROP_REVISION_AUTHOR : author,
core.SVN_PROP_REVISION_DATE : date,
}
cb_func(log_entry, pool)
client.svn_client_log2([url], start_rev, end_rev, log_limit,
include_changes, not cross_copies, cb_convert, ctx)
def setup_client_ctx(config_dir):
# Ensure that the configuration directory exists.
core.svn_config_ensure(config_dir)
# Fetch the configuration (and 'config' bit thereof).
cfg = core.svn_config_get_config(config_dir)
config = cfg.get(core.SVN_CONFIG_CATEGORY_CONFIG)
# Here's the compat-sensitive part: try to use
# svn_cmdline_create_auth_baton(), and fall back to making our own
# if that fails.
try:
auth_baton = core.svn_cmdline_create_auth_baton(1, None, None, config_dir,
1, 1, config, None)
except AttributeError:
auth_baton = core.svn_auth_open([
client.svn_client_get_simple_provider(),
client.svn_client_get_username_provider(),
client.svn_client_get_ssl_server_trust_file_provider(),
client.svn_client_get_ssl_client_cert_file_provider(),
client.svn_client_get_ssl_client_cert_pw_file_provider(),
])
if config_dir is not None:
core.svn_auth_set_parameter(auth_baton,
core.SVN_AUTH_PARAM_CONFIG_DIR,
config_dir)
# Create, setup, and return the client context baton.
ctx = client.svn_client_create_context()
ctx.config = cfg
ctx.auth_baton = auth_baton
return ctx
### END COMPATIBILITY CODE ###
class LogCollector:
def __init__(self, path, show_all_logs, lockinfo, access_check_func):
# This class uses leading slashes for paths internally
if not path:
self.path = '/'
else:
self.path = path[0] == '/' and path or '/' + path
self.logs = []
self.show_all_logs = show_all_logs
self.lockinfo = lockinfo
self.access_check_func = access_check_func
self.done = False
def add_log(self, log_entry, pool):
if self.done:
return
paths = log_entry.changed_paths
revision = log_entry.revision
msg, author, date, revprops = _split_revprops(log_entry.revprops)
# Changed paths have leading slashes
changed_paths = paths.keys()
changed_paths.sort(lambda a, b: _compare_paths(a, b))
this_path = None
if self.path in changed_paths:
this_path = self.path
change = paths[self.path]
if change.copyfrom_path:
this_path = change.copyfrom_path
for changed_path in changed_paths:
if changed_path != self.path:
# If a parent of our path was copied, our "next previous"
# (huh?) path will exist elsewhere (under the copy source).
if (self.path.rfind(changed_path) == 0) and \
self.path[len(changed_path)] == '/':
change = paths[changed_path]
if change.copyfrom_path:
this_path = change.copyfrom_path + self.path[len(changed_path):]
if self.show_all_logs or this_path:
if self.access_check_func is None \
or self.access_check_func(self.path[1:], revision):
entry = Revision(revision, date, author, msg, None, self.lockinfo,
self.path[1:], None, None)
self.logs.append(entry)
else:
self.done = True
if this_path:
self.path = this_path
def cat_to_tempfile(svnrepos, path, rev):
"""Check out file revision to temporary file"""
temp = tempfile.mktemp()
stream = core.svn_stream_from_aprfile(temp)
url = svnrepos._geturl(path)
client.svn_client_cat(core.Stream(stream), url, _rev2optrev(rev),
svnrepos.ctx)
core.svn_stream_close(stream)
return temp
class SelfCleanFP:
def __init__(self, path):
self._fp = open(path, 'r')
self._path = path
self._eof = 0
def read(self, len=None):
if len:
chunk = self._fp.read(len)
else:
chunk = self._fp.read()
if chunk == '':
self._eof = 1
return chunk
def readline(self):
chunk = self._fp.readline()
if chunk == '':
self._eof = 1
return chunk
def readlines(self):
lines = self._fp.readlines()
self._eof = 1
return lines
def close(self):
self._fp.close()
os.remove(self._path)
def __del__(self):
self.close()
def eof(self):
return self._eof
class RemoteSubversionRepository(vclib.Repository):
def __init__(self, name, rootpath, authorizer, utilities, config_dir):
self.name = name
self.rootpath = rootpath
self.auth = authorizer
self.diff_cmd = utilities.diff or 'diff'
self.config_dir = config_dir or None
# See if this repository is even viewable, authz-wise.
if not vclib.check_root_access(self):
raise vclib.ReposNotFound(name)
def open(self):
# Setup the client context baton, complete with non-prompting authstuffs.
self.ctx = setup_client_ctx(self.config_dir)
ra_callbacks = ra.svn_ra_callbacks_t()
ra_callbacks.auth_baton = self.ctx.auth_baton
self.ra_session = ra.svn_ra_open(self.rootpath, ra_callbacks, None,
self.ctx.config)
self.youngest = ra.svn_ra_get_latest_revnum(self.ra_session)
self._dirent_cache = { }
self._revinfo_cache = { }
# See if a universal read access determination can be made.
if self.auth and self.auth.check_universal_access(self.name) == 1:
self.auth = None
def rootname(self):
return self.name
def rootpath(self):
return self.rootpath
def roottype(self):
return vclib.SVN
def authorizer(self):
return self.auth
def itemtype(self, path_parts, rev):
pathtype = None
if not len(path_parts):
pathtype = vclib.DIR
else:
path = self._getpath(path_parts)
rev = self._getrev(rev)
try:
kind = ra.svn_ra_check_path(self.ra_session, path, rev)
if kind == core.svn_node_file:
pathtype = vclib.FILE
elif kind == core.svn_node_dir:
pathtype = vclib.DIR
except:
pass
if pathtype is None:
raise vclib.ItemNotFound(path_parts)
if not vclib.check_path_access(self, path_parts, pathtype, rev):
raise vclib.ItemNotFound(path_parts)
return pathtype
def openfile(self, path_parts, rev, options):
path = self._getpath(path_parts)
if self.itemtype(path_parts, rev) != vclib.FILE: # does auth-check
raise vclib.Error("Path '%s' is not a file." % path)
rev = self._getrev(rev)
url = self._geturl(path)
### rev here should be the last history revision of the URL
fp = SelfCleanFP(cat_to_tempfile(self, path, rev))
lh_rev, c_rev = self._get_last_history_rev(path_parts, rev)
return fp, lh_rev
def listdir(self, path_parts, rev, options):
path = self._getpath(path_parts)
if self.itemtype(path_parts, rev) != vclib.DIR: # does auth-check
raise vclib.Error("Path '%s' is not a directory." % path)
rev = self._getrev(rev)
entries = []
dirents, locks = self._get_dirents(path, rev)
for name in dirents.keys():
entry = dirents[name]
if entry.kind == core.svn_node_dir:
kind = vclib.DIR
elif entry.kind == core.svn_node_file:
kind = vclib.FILE
else:
kind = None
entries.append(vclib.DirEntry(name, kind))
return entries
def dirlogs(self, path_parts, rev, entries, options):
path = self._getpath(path_parts)
if self.itemtype(path_parts, rev) != vclib.DIR: # does auth-check
raise vclib.Error("Path '%s' is not a directory." % path)
rev = self._getrev(rev)
dirents, locks = self._get_dirents(path, rev)
for entry in entries:
entry_path_parts = path_parts + [entry.name]
dirent = dirents.get(entry.name, None)
# dirents is authz-sanitized, so ensure the entry is found therein.
if dirent is None:
continue
# Get authz-sanitized revision metadata.
entry.date, entry.author, entry.log, revprops, changes = \
self._revinfo(dirent.created_rev)
entry.rev = str(dirent.created_rev)
entry.size = dirent.size
entry.lockinfo = None
if locks.has_key(entry.name):
entry.lockinfo = locks[entry.name].owner
def itemlog(self, path_parts, rev, sortby, first, limit, options):
assert sortby == vclib.SORTBY_DEFAULT or sortby == vclib.SORTBY_REV
path_type = self.itemtype(path_parts, rev) # does auth-check
path = self._getpath(path_parts)
rev = self._getrev(rev)
url = self._geturl(path)
# If this is a file, fetch the lock status and size (as of REV)
# for this item.
lockinfo = size_in_rev = None
if path_type == vclib.FILE:
basename = path_parts[-1]
list_url = self._geturl(self._getpath(path_parts[:-1]))
dirents, locks = list_directory(list_url, _rev2optrev(rev),
_rev2optrev(rev), 0, self.ctx)
if locks.has_key(basename):
lockinfo = locks[basename].owner
if dirents.has_key(basename):
size_in_rev = dirents[basename].size
# Special handling for the 'svn_latest_log' scenario.
### FIXME: Don't like this hack. We should just introduce
### something more direct in the vclib API.
if options.get('svn_latest_log', 0):
dir_lh_rev, dir_c_rev = self._get_last_history_rev(path_parts, rev)
date, author, log, revprops, changes = self._revinfo(dir_lh_rev)
return [vclib.Revision(dir_lh_rev, str(dir_lh_rev), date, author,
None, log, size_in_rev, lockinfo)]
def _access_checker(check_path, check_rev):
return vclib.check_path_access(self, _path_parts(check_path),
path_type, check_rev)
# It's okay if we're told to not show all logs on a file -- all
# the revisions should match correctly anyway.
lc = LogCollector(path, options.get('svn_show_all_dir_logs', 0),
lockinfo, _access_checker)
cross_copies = options.get('svn_cross_copies', 0)
log_limit = 0
if limit:
log_limit = first + limit
client_log(url, _rev2optrev(rev), _rev2optrev(1), log_limit, 1,
cross_copies, lc.add_log, self.ctx)
revs = lc.logs
revs.sort()
prev = None
for rev in revs:
# Swap out revision info with stuff from the cache (which is
# authz-sanitized).
rev.date, rev.author, rev.log, revprops, changes \
= self._revinfo(rev.number)
rev.prev = prev
prev = rev
revs.reverse()
if len(revs) < first:
return []
if limit:
return revs[first:first+limit]
return revs
def itemprops(self, path_parts, rev):
path = self._getpath(path_parts)
path_type = self.itemtype(path_parts, rev) # does auth-check
rev = self._getrev(rev)
url = self._geturl(path)
pairs = client.svn_client_proplist2(url, _rev2optrev(rev),
_rev2optrev(rev), 0, self.ctx)
return pairs and pairs[0][1] or {}
def annotate(self, path_parts, rev, include_text=False):
path = self._getpath(path_parts)
if self.itemtype(path_parts, rev) != vclib.FILE: # does auth-check
raise vclib.Error("Path '%s' is not a file." % path)
rev = self._getrev(rev)
url = self._geturl(path)
# Examine logs for the file to determine the oldest revision we are
# permitted to see.
log_options = {
'svn_cross_copies' : 1,
'svn_show_all_dir_logs' : 1,
}
revs = self.itemlog(path_parts, rev, vclib.SORTBY_REV, 0, 0, log_options)
oldest_rev = revs[-1].number
# Now calculate the annotation data. Note that we'll not
# inherently trust the provided author and date, because authz
# rules might necessitate that we strip that information out.
blame_data = []
def _blame_cb(line_no, revision, author, date,
line, pool, blame_data=blame_data):
prev_rev = None
if revision > 1:
prev_rev = revision - 1
# If we have an invalid revision, clear the date and author
# values. Otherwise, if we have authz filtering to do, use the
# revinfo cache to do so.
if revision < 0:
date = author = None
elif self.auth:
date, author, msg, revprops, changes = self._revinfo(revision)
# Strip text if the caller doesn't want it.
if not include_text:
line = None
blame_data.append(vclib.Annotation(line, line_no + 1, revision, prev_rev,
author, date))
client.blame2(url, _rev2optrev(rev), _rev2optrev(oldest_rev),
_rev2optrev(rev), _blame_cb, self.ctx)
return blame_data, rev
def revinfo(self, rev):
return self._revinfo(rev, 1)
def rawdiff(self, path_parts1, rev1, path_parts2, rev2, type, options={}):
if path_parts1 is not None:
p1 = self._getpath(path_parts1)
r1 = self._getrev(rev1)
if not vclib.check_path_access(self, path_parts1, vclib.FILE, rev1):
raise vclib.ItemNotFound(path_parts1)
else:
p1 = None
if path_parts2 is not None:
if not p1:
raise vclib.ItemNotFound(path_parts2)
p2 = self._getpath(path_parts2)
r2 = self._getrev(rev2)
if not vclib.check_path_access(self, path_parts2, vclib.FILE, rev2):
raise vclib.ItemNotFound(path_parts2)
else:
p2 = None
args = vclib._diff_args(type, options)
def _date_from_rev(rev):
date, author, msg, revprops, changes = self._revinfo(rev)
return date
try:
info1 = p1, _date_from_rev(r1), r1
info2 = p2, _date_from_rev(r2), r2
if p1:
temp1 = cat_to_tempfile(self, p1, r1)
else:
temp1 = '/dev/null'
if p2:
temp2 = cat_to_tempfile(self, p2, r2)
else:
temp2 = '/dev/null'
return vclib._diff_fp(temp1, temp2, info1, info2, self.diff_cmd, args)
except core.SubversionException, e:
_fix_subversion_exception(e)
if e.apr_err == core.SVN_ERR_FS_NOT_FOUND:
raise vclib.InvalidRevision
raise
def isexecutable(self, path_parts, rev):
props = self.itemprops(path_parts, rev) # does authz-check
return props.has_key(core.SVN_PROP_EXECUTABLE)
def filesize(self, path_parts, rev):
path = self._getpath(path_parts)
if self.itemtype(path_parts, rev) != vclib.FILE: # does auth-check
raise vclib.Error("Path '%s' is not a file." % path)
rev = self._getrev(rev)
dirents, locks = self._get_dirents(self._getpath(path_parts[:-1]), rev)
dirent = dirents.get(path_parts[-1], None)
return dirent.size
def _getpath(self, path_parts):
return '/'.join(path_parts)
def _getrev(self, rev):
if rev is None or rev == 'HEAD':
return self.youngest
try:
if type(rev) == type(''):
while rev[0] == 'r':
rev = rev[1:]
rev = int(rev)
except:
raise vclib.InvalidRevision(rev)
if (rev < 0) or (rev > self.youngest):
raise vclib.InvalidRevision(rev)
return rev
def _geturl(self, path=None):
if not path:
return self.rootpath
path = self.rootpath + '/' + urllib.quote(path)
return _canonicalize_path(path)
def _get_dirents(self, path, rev):
"""Return a 2-type of dirents and locks, possibly reading/writing
from a local cache of that information. This functions performs
authz checks, stripping out unreadable dirents."""
dir_url = self._geturl(path)
path_parts = _path_parts(path)
if path:
key = str(rev) + '/' + path
else:
key = str(rev)
# Ensure that the cache gets filled...
dirents_locks = self._dirent_cache.get(key)
if not dirents_locks:
tmp_dirents, locks = list_directory(dir_url, _rev2optrev(rev),
_rev2optrev(rev), 0, self.ctx)
dirents = {}
for name, dirent in tmp_dirents.items():
dirent_parts = path_parts + [name]
kind = dirent.kind
if (kind == core.svn_node_dir or kind == core.svn_node_file) \
and vclib.check_path_access(self, dirent_parts,
kind == core.svn_node_dir \
and vclib.DIR or vclib.FILE, rev):
lh_rev, c_rev = self._get_last_history_rev(dirent_parts, rev)
dirent.created_rev = lh_rev
dirents[name] = dirent
dirents_locks = [dirents, locks]
self._dirent_cache[key] = dirents_locks
# ...then return the goodies from the cache.
return dirents_locks[0], dirents_locks[1]
def _get_last_history_rev(self, path_parts, rev):
"""Return the a 2-tuple which contains:
- the last interesting revision equal to or older than REV in
the history of PATH_PARTS.
- the created_rev of PATH_PARTS as of REV."""
path = self._getpath(path_parts)
url = self._geturl(self._getpath(path_parts))
optrev = _rev2optrev(rev)
# Get the last-changed-rev.
revisions = []
def _info_cb(path, info, pool, retval=revisions):
revisions.append(info.last_changed_rev)
client.svn_client_info(url, optrev, optrev, _info_cb, 0, self.ctx)
last_changed_rev = revisions[0]
# Now, this object might not have been directly edited since the
# last-changed-rev, but it might have been the child of a copy.
# To determine this, we'll run a potentially no-op log between
# LAST_CHANGED_REV and REV.
lc = LogCollector(path, 1, None, None)
client_log(url, optrev, _rev2optrev(last_changed_rev), 1, 1, 0,
lc.add_log, self.ctx)
revs = lc.logs
if revs:
revs.sort()
return revs[0].number, last_changed_rev
else:
return last_changed_rev, last_changed_rev
def _revinfo_fetch(self, rev, include_changed_paths=0):
need_changes = include_changed_paths or self.auth
revs = []
def _log_cb(log_entry, pool, retval=revs):
# If Subversion happens to call us more than once, we choose not
# to care.
if retval:
return
revision = log_entry.revision
msg, author, date, revprops = _split_revprops(log_entry.revprops)
action_map = { 'D' : vclib.DELETED,
'A' : vclib.ADDED,
'R' : vclib.REPLACED,
'M' : vclib.MODIFIED,
}
# Easy out: if we won't use the changed-path info, just return a
# changes-less tuple.
if not need_changes:
return revs.append([date, author, msg, revprops, None])
# Subversion 1.5 and earlier didn't offer the 'changed_paths2'
# hash, and in Subversion 1.6, it's offered but broken.
try:
changed_paths = log_entry.changed_paths2
paths = (changed_paths or {}).keys()
except:
changed_paths = log_entry.changed_paths
paths = (changed_paths or {}).keys()
paths.sort(lambda a, b: _compare_paths(a, b))
# If we get this far, our caller needs changed-paths, or we need
# them for authz-related sanitization.
changes = []
found_readable = found_unreadable = 0
for path in paths:
change = changed_paths[path]
# svn_log_changed_path_t (which we might get instead of the
# svn_log_changed_path2_t we'd prefer) doesn't have the
# 'node_kind' member.
pathtype = None
if hasattr(change, 'node_kind'):
if change.node_kind == core.svn_node_dir:
pathtype = vclib.DIR
elif change.node_kind == core.svn_node_file:
pathtype = vclib.FILE
# svn_log_changed_path2_t only has the 'text_modified' and
# 'props_modified' bits in Subversion 1.7 and beyond. And
# svn_log_changed_path_t is without.
text_modified = props_modified = 0
if hasattr(change, 'text_modified'):
if change.text_modified == core.svn_tristate_true:
text_modified = 1
if hasattr(change, 'props_modified'):
if change.props_modified == core.svn_tristate_true:
props_modified = 1
# Wrong, diddily wrong wrong wrong. Can you say,
# "Manufacturing data left and right because it hurts to
# figure out the right stuff?"
action = action_map.get(change.action, vclib.MODIFIED)
if change.copyfrom_path and change.copyfrom_rev:
is_copy = 1
base_path = change.copyfrom_path
base_rev = change.copyfrom_rev
elif action == vclib.ADDED or action == vclib.REPLACED:
is_copy = 0
base_path = base_rev = None
else:
is_copy = 0
base_path = path
base_rev = revision - 1
# Check authz rules (sadly, we have to lie about the path type)
parts = _path_parts(path)
if vclib.check_path_access(self, parts, vclib.FILE, revision):
if is_copy and base_path and (base_path != path):
parts = _path_parts(base_path)
if not vclib.check_path_access(self, parts, vclib.FILE, base_rev):
is_copy = 0
base_path = None
base_rev = None
found_unreadable = 1
changes.append(SVNChangedPath(path, revision, pathtype, base_path,
base_rev, action, is_copy,
text_modified, props_modified))
found_readable = 1
else:
found_unreadable = 1
# If our caller doesn't want changed-path stuff, and we have
# the info we need to make an authz determination already,
# quit this loop and get on with it.
if (not include_changed_paths) and found_unreadable and found_readable:
break
# Filter unreadable information.
if found_unreadable:
msg = None
if not found_readable:
author = None
date = None
# Drop unrequested changes.
if not include_changed_paths:
changes = None
# Add this revision information to the "return" array.
retval.append([date, author, msg, revprops, changes])
optrev = _rev2optrev(rev)
client_log(self.rootpath, optrev, optrev, 1, need_changes, 0,
_log_cb, self.ctx)
return tuple(revs[0])
def _revinfo(self, rev, include_changed_paths=0):
"""Internal-use, cache-friendly revision information harvester."""
# Consult the revinfo cache first. If we don't have cached info,
# or our caller wants changed paths and we don't have those for
# this revision, go do the real work.
rev = self._getrev(rev)
cached_info = self._revinfo_cache.get(rev)
if not cached_info \
or (include_changed_paths and cached_info[4] is None):
cached_info = self._revinfo_fetch(rev, include_changed_paths)
self._revinfo_cache[rev] = cached_info
return cached_info
##--- custom --##
def get_youngest_revision(self):
return self.youngest
def get_location(self, path, rev, old_rev):
try:
results = ra.get_locations(self.ra_session, path, rev, [old_rev])
except core.SubversionException, e:
_fix_subversion_exception(e)
if e.apr_err == core.SVN_ERR_FS_NOT_FOUND:
raise vclib.ItemNotFound(path)
raise
try:
old_path = results[old_rev]
except KeyError:
raise vclib.ItemNotFound(path)
old_path = _cleanup_path(old_path)
old_path_parts = _path_parts(old_path)
# Check access (lying about path types)
if not vclib.check_path_access(self, old_path_parts, vclib.FILE, old_rev):
raise vclib.ItemNotFound(path)
return old_path
def created_rev(self, path, rev):
lh_rev, c_rev = self._get_last_history_rev(_path_parts(path), rev)
return lh_rev
def last_rev(self, path, peg_revision, limit_revision=None):
"""Given PATH, known to exist in PEG_REVISION, find the youngest
revision older than, or equal to, LIMIT_REVISION in which path
exists. Return that revision, and the path at which PATH exists in
that revision."""
# Here's the plan, man. In the trivial case (where PEG_REVISION is
# the same as LIMIT_REVISION), this is a no-brainer. If
# LIMIT_REVISION is older than PEG_REVISION, we can use Subversion's
# history tracing code to find the right location. If, however,
# LIMIT_REVISION is younger than PEG_REVISION, we suffer from
# Subversion's lack of forward history searching. Our workaround,
# ugly as it may be, involves a binary search through the revisions
# between PEG_REVISION and LIMIT_REVISION to find our last live
# revision.
peg_revision = self._getrev(peg_revision)
limit_revision = self._getrev(limit_revision)
if peg_revision == limit_revision:
return peg_revision, path
elif peg_revision > limit_revision:
path = self.get_location(path, peg_revision, limit_revision)
return limit_revision, path
else:
direction = 1
while peg_revision != limit_revision:
mid = (peg_revision + 1 + limit_revision) / 2
try:
path = self.get_location(path, peg_revision, mid)
except vclib.ItemNotFound:
limit_revision = mid - 1
else:
peg_revision = mid
return peg_revision, path
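# Worked illustration of the forward (binary-search) branch above -- an added
# example, not original code.  Suppose PEG_REVISION is 4, LIMIT_REVISION is 7,
# and the path was deleted in r6:
#   mid = (4 + 1 + 7) / 2 = 6  -> get_location() raises ItemNotFound -> limit = 5
#   mid = (4 + 1 + 5) / 2 = 5  -> lookup succeeds                    -> peg   = 5
# peg == limit, so the loop ends and (5, path) is returned.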
def get_symlink_target(self, path_parts, rev):
"""Return the target of the symbolic link versioned at PATH_PARTS
in REV, or None if that object is not a symlink."""
path = self._getpath(path_parts)
path_type = self.itemtype(path_parts, rev) # does auth-check
rev = self._getrev(rev)
url = self._geturl(path)
# Symlinks must be files with the svn:special property set on them
# and with file contents which read "link SOME_PATH".
if path_type != vclib.FILE:
return None
pairs = client.svn_client_proplist2(url, _rev2optrev(rev),
_rev2optrev(rev), 0, self.ctx)
props = pairs and pairs[0][1] or {}
if not props.has_key(core.SVN_PROP_SPECIAL):
return None
pathspec = ''
### FIXME: We're being a touch sloppy here, first by grabbing the
### whole file and then by checking only the first line
### of it.
fp = SelfCleanFP(cat_to_tempfile(self, path, rev))
pathspec = fp.readline()
fp.close()
if pathspec[:5] != 'link ':
return None
return pathspec[5:]
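# Illustrative example (added, not original code): for a file carrying the
# svn:special property whose contents read "link ../common/README", the method
# above returns the portion after the 'link ' prefix ('../common/README');
# for ordinary files and for directories it returns None.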

View File

@@ -1,979 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
"Version Control lib driver for locally accessible Subversion repositories"
import vclib
import os
import os.path
import cStringIO
import time
import tempfile
import popen
import re
import urllib
from svn import fs, repos, core, client, delta, diff
### Require Subversion 1.3.1 or better.
if (core.SVN_VER_MAJOR, core.SVN_VER_MINOR, core.SVN_VER_PATCH) < (1, 3, 1):
raise Exception, "Version requirement not met (needs 1.3.1 or better)"
### Pre-1.5 Subversion doesn't have SVN_ERR_CEASE_INVOCATION
try:
_SVN_ERR_CEASE_INVOCATION = core.SVN_ERR_CEASE_INVOCATION
except:
_SVN_ERR_CEASE_INVOCATION = core.SVN_ERR_CANCELLED
### Pre-1.5 SubversionException's might not have the .msg and .apr_err members
def _fix_subversion_exception(e):
if not hasattr(e, 'apr_err'):
e.apr_err = e[1]
if not hasattr(e, 'message'):
e.message = e[0]
### Pre-1.4 Subversion doesn't have svn_path_canonicalize()
def _canonicalize_path(path):
try:
return core.svn_path_canonicalize(path)
except AttributeError:
return path
def _allow_all(root, path, pool):
"""Generic authz_read_func that permits access to all paths"""
return 1
def _path_parts(path):
return filter(None, path.split('/'))
def _cleanup_path(path):
"""Return a cleaned-up Subversion filesystem path"""
return '/'.join(_path_parts(path))
def _fs_path_join(base, relative):
return _cleanup_path(base + '/' + relative)
def _compare_paths(path1, path2):
path1_len = len(path1)
path2_len = len(path2)
min_len = min(path1_len, path2_len)
i = 0
# Are the paths exactly the same?
if path1 == path2:
return 0
# Skip past common prefix
while (i < min_len) and (path1[i] == path2[i]):
i = i + 1
# Children of paths are greater than their parents, but less than
# greater siblings of their parents
char1 = '\0'
char2 = '\0'
if (i < path1_len):
char1 = path1[i]
if (i < path2_len):
char2 = path2[i]
if (char1 == '/') and (i == path2_len):
return 1
if (char2 == '/') and (i == path1_len):
return -1
if (i < path1_len) and (char1 == '/'):
return -1
if (i < path2_len) and (char2 == '/'):
return 1
# Common prefix was skipped above, next character is compared to
# determine order
return cmp(char1, char2)
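# Editor's illustration (not in the original module) of the ordering
# _compare_paths() implements -- a child sorts after its parent but
# before the parent's later siblings:
#   _compare_paths('/trunk', '/trunk')      ->  0
#   _compare_paths('/trunk/sub', '/trunk')  ->  1   (child > parent)
#   _compare_paths('/trunk/sub', '/trunk2') -> -1   (child < later sibling)
#   _compare_paths('/a', '/b')              -> -1   (plain lexical order)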
def _rev2optrev(rev):
assert type(rev) is int
rt = core.svn_opt_revision_t()
rt.kind = core.svn_opt_revision_number
rt.value.number = rev
return rt
def _rootpath2url(rootpath, path):
rootpath = os.path.abspath(rootpath)
drive, rootpath = os.path.splitdrive(rootpath)
if os.sep != '/':
rootpath = rootpath.replace(os.sep, '/')
rootpath = urllib.quote(rootpath)
path = urllib.quote(path)
if drive:
url = 'file:///' + drive + rootpath + '/' + path
else:
url = 'file://' + rootpath + '/' + path
return _canonicalize_path(url)
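# Editor's illustration (not in the original module): on a POSIX host,
# _rootpath2url('/var/svn/repo', 'trunk/README') yields
# 'file:///var/svn/repo/trunk/README'; on Windows, the drive letter is
# folded into the URL, e.g. 'file:///C:/svn/repo/trunk/README'.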
# Given a dictionary REVPROPS of revision properties, pull special
# ones out of them and return a 4-tuple containing the log message,
# the author, the date (converted from the date string property), and
# a dictionary of any/all other revprops.
def _split_revprops(revprops):
if not revprops:
return None, None, None, {}
special_props = []
for prop in core.SVN_PROP_REVISION_LOG, \
core.SVN_PROP_REVISION_AUTHOR, \
core.SVN_PROP_REVISION_DATE:
if revprops.has_key(prop):
special_props.append(revprops[prop])
del(revprops[prop])
else:
special_props.append(None)
msg, author, datestr = tuple(special_props)
date = _datestr_to_date(datestr)
return msg, author, date, revprops
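# Editor's illustration (not in the original module): given revision
# properties such as
#   {'svn:log': 'fix typo', 'svn:author': 'joe',
#    'svn:date': '2013-01-01T00:00:00.000000Z', 'custom:prop': 'x'}
# _split_revprops() returns the log message, the author, the date as a
# Unix timestamp parsed from svn:date, and the leftover properties
# ({'custom:prop': 'x'}).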
def _datestr_to_date(datestr):
try:
return core.svn_time_from_cstring(datestr) / 1000000
except:
return None
class Revision(vclib.Revision):
"Hold state for each revision's log entry."
def __init__(self, rev, date, author, msg, size, lockinfo,
filename, copy_path, copy_rev):
vclib.Revision.__init__(self, rev, str(rev), date, author, None,
msg, size, lockinfo)
self.filename = filename
self.copy_path = copy_path
self.copy_rev = copy_rev
class NodeHistory:
"""An iterable object that returns 2-tuples of (revision, path)
locations along a node's change history, ordered from youngest to
oldest."""
def __init__(self, fs_ptr, show_all_logs, limit=0):
self.histories = []
self.fs_ptr = fs_ptr
self.show_all_logs = show_all_logs
self.oldest_rev = None
self.limit = limit
def add_history(self, path, revision, pool):
# If filtering, only add the path and revision to the histories
# list if they were actually changed in this revision (where
# change means the path itself was changed, or one of its parents
# was copied). This is useful for omitting bubble-up directory
# changes.
if not self.oldest_rev:
self.oldest_rev = revision
else:
assert(revision < self.oldest_rev)
if not self.show_all_logs:
rev_root = fs.revision_root(self.fs_ptr, revision)
changed_paths = fs.paths_changed(rev_root)
paths = changed_paths.keys()
if path not in paths:
# Look for a copied parent
test_path = path
found = 0
while 1:
off = test_path.rfind('/')
if off < 0:
break
test_path = test_path[0:off]
if test_path in paths:
copyfrom_rev, copyfrom_path = fs.copied_from(rev_root, test_path)
if copyfrom_rev >= 0 and copyfrom_path:
found = 1
break
if not found:
return
self.histories.append([revision, _cleanup_path(path)])
if self.limit and len(self.histories) == self.limit:
raise core.SubversionException("", _SVN_ERR_CEASE_INVOCATION)
def __getitem__(self, idx):
return self.histories[idx]
def _get_last_history_rev(fsroot, path):
history = fs.node_history(fsroot, path)
history = fs.history_prev(history, 0)
history_path, history_rev = fs.history_location(history)
return history_rev
def temp_checkout(svnrepos, path, rev):
"""Check out file revision to temporary file"""
temp = tempfile.mktemp()
fp = open(temp, 'wb')
try:
root = svnrepos._getroot(rev)
stream = fs.file_contents(root, path)
try:
while 1:
chunk = core.svn_stream_read(stream, core.SVN_STREAM_CHUNK_SIZE)
if not chunk:
break
fp.write(chunk)
finally:
core.svn_stream_close(stream)
finally:
fp.close()
return temp
class FileContentsPipe:
def __init__(self, root, path):
self._stream = fs.file_contents(root, path)
self._eof = 0
def read(self, len=None):
chunk = None
if not self._eof:
if len is None:
buffer = cStringIO.StringIO()
try:
while 1:
hunk = core.svn_stream_read(self._stream, 8192)
if not hunk:
break
buffer.write(hunk)
chunk = buffer.getvalue()
finally:
buffer.close()
else:
chunk = core.svn_stream_read(self._stream, len)
if not chunk:
self._eof = 1
return chunk
def readline(self):
chunk = None
if not self._eof:
chunk, self._eof = core.svn_stream_readline(self._stream, '\n')
if not self._eof:
chunk = chunk + '\n'
if not chunk:
self._eof = 1
return chunk
def readlines(self):
lines = []
while True:
line = self.readline()
if not line:
break
lines.append(line)
return lines
def close(self):
return core.svn_stream_close(self._stream)
def eof(self):
return self._eof
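# Editor's usage sketch (not in the original module; the path below is
# hypothetical): FileContentsPipe presents an SVN content stream as a
# file-like object, e.g.
#   fp = FileContentsPipe(fsroot, 'trunk/README')
#   first_line = fp.readline()
#   rest = fp.read()
#   fp.close()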
class BlameSource:
def __init__(self, local_url, rev, first_rev, include_text, config_dir):
self.idx = -1
self.first_rev = first_rev
self.blame_data = []
self.include_text = include_text
ctx = client.svn_client_create_context()
core.svn_config_ensure(config_dir)
ctx.config = core.svn_config_get_config(config_dir)
ctx.auth_baton = core.svn_auth_open([])
try:
### TODO: Is this use of FIRST_REV always what we want? Should we
### pass 1 here instead and do filtering later?
client.blame3(local_url, _rev2optrev(rev), _rev2optrev(first_rev),
_rev2optrev(rev), diff.svn_diff_file_options_t(), True, self._blame_cb, ctx)
except core.SubversionException, e:
_fix_subversion_exception(e)
if e.apr_err == core.SVN_ERR_CLIENT_IS_BINARY_FILE:
raise vclib.NonTextualFileContents
raise
def _blame_cb(self, line_no, rev, author, date, text, pool):
prev_rev = None
if rev > self.first_rev:
prev_rev = rev - 1
if not self.include_text:
text = None
self.blame_data.append(vclib.Annotation(text, line_no + 1, rev,
prev_rev, author, None))
def __getitem__(self, idx):
if idx != self.idx + 1:
raise BlameSequencingError()
self.idx = idx
return self.blame_data[idx]
class BlameSequencingError(Exception):
pass
class SVNChangedPath(vclib.ChangedPath):
"""Wrapper around vclib.ChangedPath which handles path splitting."""
def __init__(self, path, rev, pathtype, base_path, base_rev,
action, copied, text_changed, props_changed):
path_parts = _path_parts(path or '')
base_path_parts = _path_parts(base_path or '')
vclib.ChangedPath.__init__(self, path_parts, rev, pathtype,
base_path_parts, base_rev, action,
copied, text_changed, props_changed)
class LocalSubversionRepository(vclib.Repository):
def __init__(self, name, rootpath, authorizer, utilities, config_dir):
if not (os.path.isdir(rootpath) \
and os.path.isfile(os.path.join(rootpath, 'format'))):
raise vclib.ReposNotFound(name)
# Initialize some stuff.
self.rootpath = rootpath
self.name = name
self.auth = authorizer
self.diff_cmd = utilities.diff or 'diff'
self.config_dir = config_dir or None
# See if this repository is even viewable, authz-wise.
if not vclib.check_root_access(self):
raise vclib.ReposNotFound(name)
def open(self):
# Open the repository and init some other variables.
self.repos = repos.svn_repos_open(self.rootpath)
self.fs_ptr = repos.svn_repos_fs(self.repos)
self.youngest = fs.youngest_rev(self.fs_ptr)
self._fsroots = {}
self._revinfo_cache = {}
# See if a universal read access determination can be made.
if self.auth and self.auth.check_universal_access(self.name) == 1:
self.auth = None
def rootname(self):
return self.name
def rootpath(self):
return self.rootpath
def roottype(self):
return vclib.SVN
def authorizer(self):
return self.auth
def itemtype(self, path_parts, rev):
rev = self._getrev(rev)
basepath = self._getpath(path_parts)
pathtype = self._gettype(basepath, rev)
if pathtype is None:
raise vclib.ItemNotFound(path_parts)
if not vclib.check_path_access(self, path_parts, pathtype, rev):
raise vclib.ItemNotFound(path_parts)
return pathtype
def openfile(self, path_parts, rev, options):
path = self._getpath(path_parts)
if self.itemtype(path_parts, rev) != vclib.FILE: # does auth-check
raise vclib.Error("Path '%s' is not a file." % path)
rev = self._getrev(rev)
fsroot = self._getroot(rev)
revision = str(_get_last_history_rev(fsroot, path))
fp = FileContentsPipe(fsroot, path)
return fp, revision
def listdir(self, path_parts, rev, options):
path = self._getpath(path_parts)
if self.itemtype(path_parts, rev) != vclib.DIR: # does auth-check
raise vclib.Error("Path '%s' is not a directory." % path)
rev = self._getrev(rev)
fsroot = self._getroot(rev)
dirents = fs.dir_entries(fsroot, path)
entries = [ ]
for entry in dirents.values():
if entry.kind == core.svn_node_dir:
kind = vclib.DIR
elif entry.kind == core.svn_node_file:
kind = vclib.FILE
if vclib.check_path_access(self, path_parts + [entry.name], kind, rev):
entries.append(vclib.DirEntry(entry.name, kind))
return entries
def dirlogs(self, path_parts, rev, entries, options):
path = self._getpath(path_parts)
if self.itemtype(path_parts, rev) != vclib.DIR: # does auth-check
raise vclib.Error("Path '%s' is not a directory." % path)
fsroot = self._getroot(self._getrev(rev))
rev = self._getrev(rev)
for entry in entries:
entry_path_parts = path_parts + [entry.name]
if not vclib.check_path_access(self, entry_path_parts, entry.kind, rev):
continue
path = self._getpath(entry_path_parts)
entry_rev = _get_last_history_rev(fsroot, path)
date, author, msg, revprops, changes = self._revinfo(entry_rev)
entry.rev = str(entry_rev)
entry.date = date
entry.author = author
entry.log = msg
if entry.kind == vclib.FILE:
entry.size = fs.file_length(fsroot, path)
lock = fs.get_lock(self.fs_ptr, path)
entry.lockinfo = lock and lock.owner or None
def itemlog(self, path_parts, rev, sortby, first, limit, options):
"""see vclib.Repository.itemlog docstring
Option values recognized by this implementation
svn_show_all_dir_logs
boolean, default false. if set for a directory path, will include
revisions where files underneath the directory have changed
svn_cross_copies
boolean, default false. if set for a path created by a copy, will
include revisions from before the copy
svn_latest_log
boolean, default false. if set will return only newest single log
entry
"""
assert sortby == vclib.SORTBY_DEFAULT or sortby == vclib.SORTBY_REV
path = self._getpath(path_parts)
path_type = self.itemtype(path_parts, rev) # does auth-check
rev = self._getrev(rev)
revs = []
lockinfo = None
# See if this path is locked.
try:
lock = fs.get_lock(self.fs_ptr, path)
if lock:
lockinfo = lock.owner
except NameError:
pass
# If our caller only wants the latest log, we'll invoke
# _log_helper for just the one revision. Otherwise, we go off
# into history-fetching mode. ### TODO: we could stand to have a
# 'limit' parameter here as numeric cut-off for the depth of our
# history search.
if options.get('svn_latest_log', 0):
revision = self._log_helper(path, rev, lockinfo)
if revision:
revision.prev = None
revs.append(revision)
else:
history = self._get_history(path, rev, path_type, first + limit, options)
if len(history) < first:
history = []
if limit:
history = history[first:first+limit]
for hist_rev, hist_path in history:
revision = self._log_helper(hist_path, hist_rev, lockinfo)
if revision:
# If we have unreadable copyfrom data, obscure it.
if revision.copy_path is not None:
cp_parts = _path_parts(revision.copy_path)
if not vclib.check_path_access(self, cp_parts, path_type,
revision.copy_rev):
revision.copy_path = revision.copy_rev = None
revision.prev = None
if len(revs):
revs[-1].prev = revision
revs.append(revision)
return revs
def itemprops(self, path_parts, rev):
path = self._getpath(path_parts)
path_type = self.itemtype(path_parts, rev) # does auth-check
rev = self._getrev(rev)
fsroot = self._getroot(rev)
return fs.node_proplist(fsroot, path)
def annotate(self, path_parts, rev, include_text=False):
path = self._getpath(path_parts)
path_type = self.itemtype(path_parts, rev) # does auth-check
if path_type != vclib.FILE:
raise vclib.Error("Path '%s' is not a file." % path)
rev = self._getrev(rev)
fsroot = self._getroot(rev)
history = self._get_history(path, rev, path_type, 0,
{'svn_cross_copies': 1})
youngest_rev, youngest_path = history[0]
oldest_rev, oldest_path = history[-1]
source = BlameSource(_rootpath2url(self.rootpath, path), youngest_rev,
oldest_rev, include_text, self.config_dir)
return source, youngest_rev
def revinfo(self, rev):
return self._revinfo(rev, 1)
def rawdiff(self, path_parts1, rev1, path_parts2, rev2, type, options={}):
if path_parts1:
p1 = self._getpath(path_parts1)
r1 = self._getrev(rev1)
if not vclib.check_path_access(self, path_parts1, vclib.FILE, rev1):
raise vclib.ItemNotFound(path_parts1)
else:
p1 = None
if path_parts2:
p2 = self._getpath(path_parts2)
r2 = self._getrev(rev2)
if not vclib.check_path_access(self, path_parts2, vclib.FILE, rev2):
raise vclib.ItemNotFound(path_parts2)
else:
if not p1:
raise vclib.ItemNotFound(path_parts2)
p2 = None
args = vclib._diff_args(type, options)
def _date_from_rev(rev):
date, author, msg, revprops, changes = self._revinfo(rev)
return date
try:
if p1:
temp1 = temp_checkout(self, p1, r1)
info1 = p1, _date_from_rev(r1), r1
else:
temp1 = '/dev/null'
info1 = '/dev/null', _date_from_rev(rev1), rev1
if p2:
temp2 = temp_checkout(self, p2, r2)
info2 = p2, _date_from_rev(r2), r2
else:
temp2 = '/dev/null'
info2 = '/dev/null', _date_from_rev(rev2), rev2
return vclib._diff_fp(temp1, temp2, info1, info2, self.diff_cmd, args)
except core.SubversionException, e:
_fix_subversion_exception(e)
if e.apr_err == core.SVN_ERR_FS_NOT_FOUND:
raise vclib.InvalidRevision
if e.apr_err != _SVN_ERR_CEASE_INVOCATION:
raise
def isexecutable(self, path_parts, rev):
props = self.itemprops(path_parts, rev) # does authz-check
return props.has_key(core.SVN_PROP_EXECUTABLE)
def filesize(self, path_parts, rev):
path = self._getpath(path_parts)
if self.itemtype(path_parts, rev) != vclib.FILE: # does auth-check
raise vclib.Error("Path '%s' is not a file." % path)
fsroot = self._getroot(self._getrev(rev))
return fs.file_length(fsroot, path)
##--- helpers ---##
def _revinfo(self, rev, include_changed_paths=0):
"""Internal-use, cache-friendly revision information harvester."""
def _get_changed_paths(fsroot):
"""Return a 3-tuple: found_readable, found_unreadable, changed_paths."""
editor = repos.ChangeCollector(self.fs_ptr, fsroot)
e_ptr, e_baton = delta.make_editor(editor)
repos.svn_repos_replay(fsroot, e_ptr, e_baton)
changedpaths = {}
changes = editor.get_changes()
# Copy the Subversion changes into a new hash, checking
# authorization and converting them into ChangedPath objects.
found_readable = found_unreadable = 0
for path in changes.keys():
change = changes[path]
if change.path:
change.path = _cleanup_path(change.path)
if change.base_path:
change.base_path = _cleanup_path(change.base_path)
is_copy = 0
if not hasattr(change, 'action'): # new to subversion 1.4.0
action = vclib.MODIFIED
if not change.path:
action = vclib.DELETED
elif change.added:
action = vclib.ADDED
replace_check_path = path
if change.base_path and change.base_rev:
replace_check_path = change.base_path
if changedpaths.has_key(replace_check_path) \
and changedpaths[replace_check_path].action == vclib.DELETED:
action = vclib.REPLACED
else:
if change.action == repos.CHANGE_ACTION_ADD:
action = vclib.ADDED
elif change.action == repos.CHANGE_ACTION_DELETE:
action = vclib.DELETED
elif change.action == repos.CHANGE_ACTION_REPLACE:
action = vclib.REPLACED
else:
action = vclib.MODIFIED
if (action == vclib.ADDED or action == vclib.REPLACED) \
and change.base_path \
and change.base_rev:
is_copy = 1
if change.item_kind == core.svn_node_dir:
pathtype = vclib.DIR
elif change.item_kind == core.svn_node_file:
pathtype = vclib.FILE
else:
pathtype = None
parts = _path_parts(path)
if vclib.check_path_access(self, parts, pathtype, rev):
if is_copy and change.base_path and (change.base_path != path):
parts = _path_parts(change.base_path)
if not vclib.check_path_access(self, parts, pathtype,
change.base_rev):
is_copy = 0
change.base_path = None
change.base_rev = None
found_unreadable = 1
changedpaths[path] = SVNChangedPath(path, rev, pathtype,
change.base_path,
change.base_rev, action,
is_copy, change.text_changed,
change.prop_changes)
found_readable = 1
else:
found_unreadable = 1
return found_readable, found_unreadable, changedpaths.values()
def _get_change_copyinfo(fsroot, path, change):
# If we know the copyfrom info, return it...
if hasattr(change, 'copyfrom_known') and change.copyfrom_known:
copyfrom_path = change.copyfrom_path
copyfrom_rev = change.copyfrom_rev
# ...otherwise, if this change could be a copy (that is, it
# contains an add action), query the copyfrom info ...
elif (change.change_kind == fs.path_change_add or
change.change_kind == fs.path_change_replace):
copyfrom_rev, copyfrom_path = fs.copied_from(fsroot, path)
# ...else, there's no copyfrom info.
else:
copyfrom_rev = core.SVN_INVALID_REVNUM
copyfrom_path = None
return copyfrom_path, copyfrom_rev
def _simple_auth_check(fsroot):
"""Return a 2-tuple: found_readable, found_unreadable."""
found_unreadable = found_readable = 0
if hasattr(fs, 'paths_changed2'):
changes = fs.paths_changed2(fsroot)
else:
changes = fs.paths_changed(fsroot)
paths = changes.keys()
for path in paths:
change = changes[path]
pathtype = None
if hasattr(change, 'node_kind'):
if change.node_kind == core.svn_node_file:
pathtype = vclib.FILE
elif change.node_kind == core.svn_node_dir:
pathtype = vclib.DIR
parts = _path_parts(path)
if pathtype is None:
# Figure out the pathtype so we can query the authz subsystem.
if change.change_kind == fs.path_change_delete:
# Deletions are annoying, because they might be underneath
# copies (making their previous location non-trivial).
prev_parts = parts
prev_rev = rev - 1
parent_parts = parts[:-1]
while parent_parts:
parent_path = '/' + self._getpath(parent_parts)
parent_change = changes.get(parent_path)
if not (parent_change and \
(parent_change.change_kind == fs.path_change_add or
parent_change.change_kind == fs.path_change_replace)):
del(parent_parts[-1])
continue
copyfrom_path, copyfrom_rev = \
_get_change_copyinfo(fsroot, parent_path, parent_change)
if copyfrom_path:
prev_rev = copyfrom_rev
prev_parts = _path_parts(copyfrom_path) + \
parts[len(parent_parts):]
break
del(parent_parts[-1])
pathtype = self._gettype(self._getpath(prev_parts), prev_rev)
else:
pathtype = self._gettype(self._getpath(parts), rev)
if vclib.check_path_access(self, parts, pathtype, rev):
found_readable = 1
copyfrom_path, copyfrom_rev = \
_get_change_copyinfo(fsroot, path, change)
if copyfrom_path and copyfrom_path != path:
parts = _path_parts(copyfrom_path)
if not vclib.check_path_access(self, parts, pathtype,
copyfrom_rev):
found_unreadable = 1
else:
found_unreadable = 1
if found_readable and found_unreadable:
break
return found_readable, found_unreadable
def _revinfo_helper(rev, include_changed_paths):
# Get the revision property info. (Would use
# editor.get_root_props(), but something is broken there...)
revprops = fs.revision_proplist(self.fs_ptr, rev)
msg, author, date, revprops = _split_revprops(revprops)
# Optimization: If our caller doesn't care about the changed
# paths, and we don't need them to do authz determinations, let's
# get outta here.
if self.auth is None and not include_changed_paths:
return date, author, msg, revprops, None
# If we get here, then we either need the changed paths because we
# were asked for them, or we need them to do authorization checks.
#
# If we only need them for authorization checks, though, we
# won't bother generating fully populated ChangedPath items (the
# cost is too great).
fsroot = self._getroot(rev)
if include_changed_paths:
found_readable, found_unreadable, changedpaths = \
_get_changed_paths(fsroot)
else:
changedpaths = None
found_readable, found_unreadable = _simple_auth_check(fsroot)
# Filter our metadata where necessary, and return the requested data.
if found_unreadable:
msg = None
if not found_readable:
author = None
date = None
return date, author, msg, revprops, changedpaths
# Consult the revinfo cache first. If we don't have cached info,
# or our caller wants changed paths and we don't have those for
# this revision, go do the real work.
rev = self._getrev(rev)
cached_info = self._revinfo_cache.get(rev)
if not cached_info \
or (include_changed_paths and cached_info[4] is None):
cached_info = _revinfo_helper(rev, include_changed_paths)
self._revinfo_cache[rev] = cached_info
return tuple(cached_info)
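# Editor's note (not in the original source): _revinfo() returns the
# 5-tuple (date, author, msg, revprops, changed_paths); changed_paths
# is None unless changed-path information was requested, the log
# message is dropped when any changed path is unreadable, and
# author/date are dropped when no changed path is readable.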
def _log_helper(self, path, rev, lockinfo):
rev_root = fs.revision_root(self.fs_ptr, rev)
copyfrom_rev, copyfrom_path = fs.copied_from(rev_root, path)
date, author, msg, revprops, changes = self._revinfo(rev)
if fs.is_file(rev_root, path):
size = fs.file_length(rev_root, path)
else:
size = None
return Revision(rev, date, author, msg, size, lockinfo, path,
copyfrom_path and _cleanup_path(copyfrom_path),
copyfrom_rev)
def _get_history(self, path, rev, path_type, limit=0, options={}):
if self.youngest == 0:
return []
rev_paths = []
fsroot = self._getroot(rev)
show_all_logs = options.get('svn_show_all_dir_logs', 0)
if not show_all_logs:
# See if the path is a file or directory.
kind = fs.check_path(fsroot, path)
if kind is core.svn_node_file:
show_all_logs = 1
# Instantiate a NodeHistory collector object, and use it to collect
# history items for PATH@REV.
history = NodeHistory(self.fs_ptr, show_all_logs, limit)
try:
repos.svn_repos_history(self.fs_ptr, path, history.add_history,
1, rev, options.get('svn_cross_copies', 0))
except core.SubversionException, e:
_fix_subversion_exception(e)
if e.apr_err != _SVN_ERR_CEASE_INVOCATION:
raise
# Now, iterate over those history items, checking for changes of
# location, pruning as necessitated by authz rules.
for hist_rev, hist_path in history:
path_parts = _path_parts(hist_path)
if not vclib.check_path_access(self, path_parts, path_type, hist_rev):
break
rev_paths.append([hist_rev, hist_path])
return rev_paths
def _getpath(self, path_parts):
return '/'.join(path_parts)
def _getrev(self, rev):
if rev is None or rev == 'HEAD':
return self.youngest
try:
if type(rev) == type(''):
while rev[0] == 'r':
rev = rev[1:]
rev = int(rev)
except:
raise vclib.InvalidRevision(rev)
if (rev < 0) or (rev > self.youngest):
raise vclib.InvalidRevision(rev)
return rev
def _getroot(self, rev):
try:
return self._fsroots[rev]
except KeyError:
r = self._fsroots[rev] = fs.revision_root(self.fs_ptr, rev)
return r
def _gettype(self, path, rev):
# Similar to itemtype(), but without the authz check. Returns
# None for missing paths.
try:
kind = fs.check_path(self._getroot(rev), path)
except:
return None
if kind == core.svn_node_dir:
return vclib.DIR
if kind == core.svn_node_file:
return vclib.FILE
return None
##--- custom ---##
def get_youngest_revision(self):
return self.youngest
def get_location(self, path, rev, old_rev):
try:
results = repos.svn_repos_trace_node_locations(self.fs_ptr, path,
rev, [old_rev], _allow_all)
except core.SubversionException, e:
_fix_subversion_exception(e)
if e.apr_err == core.SVN_ERR_FS_NOT_FOUND:
raise vclib.ItemNotFound(path)
raise
try:
old_path = results[old_rev]
except KeyError:
raise vclib.ItemNotFound(path)
return _cleanup_path(old_path)
def created_rev(self, full_name, rev):
return fs.node_created_rev(self._getroot(rev), full_name)
def last_rev(self, path, peg_revision, limit_revision=None):
"""Given PATH, known to exist in PEG_REVISION, find the youngest
revision older than, or equal to, LIMIT_REVISION in which path
exists. Return that revision, and the path at which PATH exists in
that revision."""
# Here's the plan, man. In the trivial case (where PEG_REVISION is
# the same as LIMIT_REVISION), this is a no-brainer. If
# LIMIT_REVISION is older than PEG_REVISION, we can use Subversion's
# history tracing code to find the right location. If, however,
# LIMIT_REVISION is younger than PEG_REVISION, we suffer from
# Subversion's lack of forward history searching. Our workaround,
# ugly as it may be, involves a binary search through the revisions
# between PEG_REVISION and LIMIT_REVISION to find our last live
# revision.
peg_revision = self._getrev(peg_revision)
limit_revision = self._getrev(limit_revision)
try:
if peg_revision == limit_revision:
return peg_revision, path
elif peg_revision > limit_revision:
fsroot = self._getroot(peg_revision)
history = fs.node_history(fsroot, path)
while history:
path, peg_revision = fs.history_location(history)
if peg_revision <= limit_revision:
return peg_revision, _cleanup_path(path)
history = fs.history_prev(history, 1)
return peg_revision, _cleanup_path(path)
else:
orig_id = fs.node_id(self._getroot(peg_revision), path)
while peg_revision != limit_revision:
mid = (peg_revision + 1 + limit_revision) / 2
try:
mid_id = fs.node_id(self._getroot(mid), path)
except core.SubversionException, e:
_fix_subversion_exception(e)
if e.apr_err == core.SVN_ERR_FS_NOT_FOUND:
cmp = -1
else:
raise
else:
### Not quite right. Need a comparison function that only returns
### true when the two nodes are the same copy, not just related.
cmp = fs.compare_ids(orig_id, mid_id)
if cmp in (0, 1):
peg_revision = mid
else:
limit_revision = mid - 1
return peg_revision, path
finally:
pass
def get_symlink_target(self, path_parts, rev):
"""Return the target of the symbolic link versioned at PATH_PARTS
in REV, or None if that object is not a symlink."""
path = self._getpath(path_parts)
rev = self._getrev(rev)
path_type = self.itemtype(path_parts, rev) # does auth-check
fsroot = self._getroot(rev)
# Symlinks must be files with the svn:special property set on them
# and with file contents which read "link SOME_PATH".
if path_type != vclib.FILE:
return None
props = fs.node_proplist(fsroot, path)
if not props.has_key(core.SVN_PROP_SPECIAL):
return None
pathspec = ''
### FIXME: We're being a touch sloppy here, only checking the first line
### of the file.
stream = fs.file_contents(fsroot, path)
try:
pathspec, eof = core.svn_stream_readline(stream, '\n')
finally:
core.svn_stream_close(stream)
if pathspec[:5] != 'link ':
return None
return pathspec[5:]

lib/viewcvs.py (new file, 2641 lines)

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,75 +0,0 @@
#!/usr/bin/python
import mimetypes
import magic
have_chardet = 0
try:
import chardet
have_chardet = 1
except: pass
class ContentMagic:
def __init__(self, encodings):
self.encodings = encodings.split(':')
self.mime_magic = None
self.errors = []
# Try to load magic
self.mime_magic = magic.open(magic.MAGIC_MIME_TYPE)
self.mime_magic.load()
# returns MIME type
def guess_mime(self, mime, filename, tempfile):
if mime == 'application/octet-stream':
mime = ''
if not mime and filename:
mime = mimetypes.guess_type(filename)[0]
if not mime and tempfile and self.mime_magic:
if type(tempfile) == type(''):
mime = self.mime_magic.file(tempfile)
else:
c = tempfile.read(4096)
mime = self.mime_magic.buffer(c)
return mime
# returns (utf8_content, charset)
def guess_charset(self, content):
# Try UTF-8
charset = 'utf-8'
try: content = content.decode('utf-8')
except: charset = None
if charset is None and have_chardet and len(content) > 64:
# Try to guess with chardet
try:
# Only detect on first 256KB if content is longer
if len(content) > 256*1024:
charset = chardet.detect(content[0:256*1024])
else:
charset = chardet.detect(content)
if charset and charset['encoding']:
charset = charset['encoding']
if charset == 'MacCyrillic':
# Silly MacCyr, try cp1251
try:
content = content.decode('windows-1251')
charset = 'windows-1251'
except: content = content.decode(charset)
else:
content = content.decode(charset)
except: charset = None
# Then try to guess primitively
if charset is None:
for charset in self.encodings:
try:
content = content.decode(charset)
break
except: charset = None
return (content, charset)
# guess and encode return value into UTF-8
def utf8(self, content):
(uni, charset) = self.guess_charset(content)
if charset:
return uni.encode('utf-8')
return content
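# Editor's usage sketch (not in the original file; the encodings string
# and raw_bytes below are hypothetical):
#   cm = ContentMagic('utf-8:windows-1251')
#   mime = cm.guess_mime('', 'readme.txt', None)   # -> 'text/plain'
#   text, charset = cm.guess_charset(raw_bytes)    # tries utf-8, then
#                                                   # chardet, then the
#                                                   # configured encodings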


@@ -1,235 +0,0 @@
# -*-python-*-
#
# Copyright (C) 1999-2013 The ViewCVS Group. All Rights Reserved.
#
# By using this file, you agree to the terms and conditions set forth in
# the LICENSE.html file which can be found at the top level of the ViewVC
# distribution or at http://viewvc.org/license-1.html.
#
# For more information, visit http://viewvc.org/
#
# -----------------------------------------------------------------------
#
# Utilities for controlling processes and pipes on win32
#
# -----------------------------------------------------------------------
import os, sys, traceback, thread
try:
import win32api
except ImportError, e:
raise ImportError, str(e) + """
Did you install the Python for Windows Extensions?
http://sourceforge.net/projects/pywin32/
"""
import win32process, win32pipe, win32con
import win32event, win32file, winerror
import pywintypes, msvcrt
# Buffer size for spooling
SPOOL_BYTES = 4096
# File object to write error messages
SPOOL_ERROR = sys.stderr
#SPOOL_ERROR = open("m:/temp/error.txt", "wt")
def CommandLine(command, args):
"""Convert an executable path and a sequence of arguments into a command
line that can be passed to CreateProcess"""
cmd = "\"" + command.replace("\"", "\"\"") + "\""
for arg in args:
cmd = cmd + " \"" + arg.replace("\"", "\"\"") + "\""
return cmd
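# Editor's illustration (not part of the original module): embedded
# quotes are doubled and every argument is wrapped, so
#   CommandLine('C:\\bin\\diff.exe', ['-u', 'a "1".txt', 'b.txt'])
# produces
#   '"C:\\bin\\diff.exe" "-u" "a ""1"".txt" "b.txt"'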
def CreateProcess(cmd, hStdInput, hStdOutput, hStdError):
"""Creates a new process which uses the specified handles for its standard
input, output, and error. The handles must be inheritable. 0 can be passed
as a special handle indicating that the process should inherit the current
process's input, output, or error streams, and None can be passed to discard
the child process's output or to prevent it from reading any input."""
# initialize new process's startup info
si = win32process.STARTUPINFO()
si.dwFlags = win32process.STARTF_USESTDHANDLES
if hStdInput == 0:
si.hStdInput = win32api.GetStdHandle(win32api.STD_INPUT_HANDLE)
else:
si.hStdInput = hStdInput
if hStdOutput == 0:
si.hStdOutput = win32api.GetStdHandle(win32api.STD_OUTPUT_HANDLE)
else:
si.hStdOutput = hStdOutput
if hStdError == 0:
si.hStdError = win32api.GetStdHandle(win32api.STD_ERROR_HANDLE)
else:
si.hStdError = hStdError
# create the process
phandle, pid, thandle, tid = win32process.CreateProcess \
( None, # appName
cmd, # commandLine
None, # processAttributes
None, # threadAttributes
1, # bInheritHandles
win32con.NORMAL_PRIORITY_CLASS, # dwCreationFlags
None, # newEnvironment
None, # currentDirectory
si # startupinfo
)
if hStdInput and hasattr(hStdInput, 'Close'):
hStdInput.Close()
if hStdOutput and hasattr(hStdOutput, 'Close'):
hStdOutput.Close()
if hStdError and hasattr(hStdError, 'Close'):
hStdError.Close()
return phandle, pid, thandle, tid
def CreatePipe(readInheritable, writeInheritable):
"""Create a new pipe specifying whether the read and write ends are
inheritable and whether they should be created for blocking or nonblocking
I/O."""
r, w = win32pipe.CreatePipe(None, SPOOL_BYTES)
if readInheritable:
r = MakeInheritedHandle(r)
if writeInheritable:
w = MakeInheritedHandle(w)
return r, w
def File2FileObject(pipe, mode):
"""Make a C stdio file object out of a win32 file handle"""
if mode.find('r') >= 0:
wmode = os.O_RDONLY
elif mode.find('w') >= 0:
wmode = os.O_WRONLY
if mode.find('b') >= 0:
wmode = wmode | os.O_BINARY
if mode.find('t') >= 0:
wmode = wmode | os.O_TEXT
return os.fdopen(msvcrt.open_osfhandle(pipe.Detach(),wmode),mode)
def FileObject2File(fileObject):
"""Get the win32 file handle from a C stdio file object"""
return win32file._get_osfhandle(fileObject.fileno())
def DuplicateHandle(handle):
"""Duplicates a win32 handle."""
proc = win32api.GetCurrentProcess()
return win32api.DuplicateHandle(proc,handle,proc,0,0,win32con.DUPLICATE_SAME_ACCESS)
def MakePrivateHandle(handle, replace = 1):
"""Turn an inherited handle into a non inherited one. This avoids the
handle duplication that occurs on CreateProcess calls which can create
uncloseable pipes."""
### Could change implementation to use SetHandleInformation()...
flags = win32con.DUPLICATE_SAME_ACCESS
proc = win32api.GetCurrentProcess()
if replace: flags = flags | win32con.DUPLICATE_CLOSE_SOURCE
newhandle = win32api.DuplicateHandle(proc,handle,proc,0,0,flags)
if replace: handle.Detach() # handle was already deleted by the last call
return newhandle
def MakeInheritedHandle(handle, replace = 1):
"""Turn a private handle into an inherited one."""
### Could change implementation to use SetHandleInformation()...
flags = win32con.DUPLICATE_SAME_ACCESS
proc = win32api.GetCurrentProcess()
if replace: flags = flags | win32con.DUPLICATE_CLOSE_SOURCE
newhandle = win32api.DuplicateHandle(proc,handle,proc,0,1,flags)
if replace: handle.Detach() # handle was deleted by the last call
return newhandle
def MakeSpyPipe(readInheritable, writeInheritable, outFiles = None, doneEvent = None):
"""Return read and write handles to a pipe that asynchronously writes all of
its input to the files in the outFiles sequence. doneEvent can be None, or a
win32 event handle that will be set when the write end of the pipe is closed.
"""
if outFiles is None:
return CreatePipe(readInheritable, writeInheritable)
r, writeHandle = CreatePipe(0, writeInheritable)
if readInheritable is None:
readHandle, w = None, None
else:
readHandle, w = CreatePipe(readInheritable, 0)
thread.start_new_thread(SpoolWorker, (r, w, outFiles, doneEvent))
return readHandle, writeHandle
def SpoolWorker(srcHandle, destHandle, outFiles, doneEvent):
"""Thread entry point for implementation of MakeSpyPipe"""
try:
buffer = win32file.AllocateReadBuffer(SPOOL_BYTES)
while 1:
try:
#print >> SPOOL_ERROR, "Calling ReadFile..."; SPOOL_ERROR.flush()
hr, data = win32file.ReadFile(srcHandle, buffer)
#print >> SPOOL_ERROR, "ReadFile returned '%s', '%s'" % (str(hr), str(data)); SPOOL_ERROR.flush()
if hr != 0:
raise "win32file.ReadFile returned %i, '%s'" % (hr, data)
elif len(data) == 0:
break
except pywintypes.error, e:
#print >> SPOOL_ERROR, "ReadFile threw '%s'" % str(e); SPOOL_ERROR.flush()
if e.args[0] == winerror.ERROR_BROKEN_PIPE:
break
else:
raise e
#print >> SPOOL_ERROR, "Writing to %i file objects..." % len(outFiles); SPOOL_ERROR.flush()
for f in outFiles:
f.write(data)
#print >> SPOOL_ERROR, "Done writing to file objects."; SPOOL_ERROR.flush()
#print >> SPOOL_ERROR, "Writing to destination %s" % str(destHandle); SPOOL_ERROR.flush()
if destHandle:
#print >> SPOOL_ERROR, "Calling WriteFile..."; SPOOL_ERROR.flush()
hr, bytes = win32file.WriteFile(destHandle, data)
#print >> SPOOL_ERROR, "WriteFile() passed %i bytes and returned %i, %i" % (len(data), hr, bytes); SPOOL_ERROR.flush()
if hr != 0 or bytes != len(data):
raise "win32file.WriteFile() passed %i bytes and returned %i, %i" % (len(data), hr, bytes)
srcHandle.Close()
if doneEvent:
win32event.SetEvent(doneEvent)
if destHandle:
destHandle.Close()
except:
info = sys.exc_info()
SPOOL_ERROR.writelines(apply(traceback.format_exception, info))
SPOOL_ERROR.flush()
del info
def NullFile(inheritable):
"""Create a null file handle."""
if inheritable:
sa = pywintypes.SECURITY_ATTRIBUTES()
sa.bInheritHandle = 1
else:
sa = None
return win32file.CreateFile("nul",
win32file.GENERIC_READ | win32file.GENERIC_WRITE,
win32file.FILE_SHARE_READ | win32file.FILE_SHARE_WRITE,
sa, win32file.OPEN_EXISTING, 0, None)


@@ -1,170 +0,0 @@
"""Module to analyze Python source code; for syntax coloring tools.
Interface:
tags = fontify(pytext, searchfrom, searchto)
The PYTEXT argument is a string containing Python source code. The
(optional) arguments SEARCHFROM and SEARCHTO may contain a slice in
PYTEXT.
The returned value is a list of tuples, formatted like this:
[('keyword', 0, 6, None),
('keyword', 11, 17, None),
('comment', 23, 53, None),
...
]
The tuple contents are always like this:
(tag, startindex, endindex, sublist)
TAG is one of 'keyword', 'string', 'comment' or 'identifier'
SUBLIST is not used, hence always None.
"""
# Based on FontText.py by Mitchell S. Chapman,
# which was modified by Zachary Roadhouse,
# then un-Tk'd by Just van Rossum.
# Many thanks for regular expression debugging & authoring are due to:
# Tim (the-incredib-ly y'rs) Peters and Cristian Tismer
# So, who owns the copyright? ;-) How about this:
# Copyright 1996-1997:
# Mitchell S. Chapman,
# Zachary Roadhouse,
# Tim Peters,
# Just van Rossum
__version__ = "0.3.1"
import string, re
# This list of keywords is taken from ref/node13.html of the
# Python 1.3 HTML documentation. ("access" is intentionally omitted.)
keywordsList = ["and", "assert", "break", "class", "continue", "def",
"del", "elif", "else", "except", "exec", "finally",
"for", "from", "global", "if", "import", "in", "is",
"lambda", "not", "or", "pass", "print", "raise",
"return", "try", "while",
]
# A regexp for matching Python comments.
commentPat = "#.*"
# A regexp for matching simple quoted strings.
pat = "q[^q\\n]*(\\[\000-\377][^q\\n]*)*q"
quotePat = string.replace(pat, "q", "'") + "|" + string.replace(pat, 'q', '"')
# A regexp for matching multi-line tripled-quoted strings. (Way to go, Tim!)
pat = """
qqq
[^q]*
(
( \\[\000-\377]
| q
( \\[\000-\377]
| [^q]
| q
( \\[\000-\377]
| [^q]
)
)
)
[^q]*
)*
qqq
"""
pat = string.join(string.split(pat), '') # get rid of whitespace
tripleQuotePat = string.replace(pat, "q", "'") + "|" \
+ string.replace(pat, 'q', '"')
# A regexp which matches all and only Python keywords. This will let
# us skip the uninteresting identifier references.
nonKeyPat = "(^|[^a-zA-Z0-9_.\"'])" # legal keyword-preceding characters
keyPat = nonKeyPat + "(" + string.join(keywordsList, "|") + ")" + nonKeyPat
# Our final syntax-matching regexp is the concatenation of the regexps we
# constructed above.
syntaxPat = keyPat + \
"|" + commentPat + \
"|" + tripleQuotePat + \
"|" + quotePat
syntaxRE = re.compile(syntaxPat)
# Finally, we construct a regexp for matching identifiers (with
# optional leading whitespace).
idKeyPat = "[ \t]*[A-Za-z_][A-Za-z_0-9.]*"
idRE = re.compile(idKeyPat)
def fontify(pytext, searchfrom=0, searchto=None):
if searchto is None:
searchto = len(pytext)
tags = []
commentTag = 'comment'
stringTag = 'string'
keywordTag = 'keyword'
identifierTag = 'identifier'
start = 0
end = searchfrom
while 1:
# Look for some syntax token we're interested in. If we find
# nothing, we're done.
matchobj = syntaxRE.search(pytext, end)
if not matchobj:
break
# If we found something outside our search area, it doesn't
# count (and we're done).
start = matchobj.start()
if start >= searchto:
break
match = matchobj.group(0)
end = start + len(match)
c = match[0]
if c == '#':
# We matched a comment.
tags.append((commentTag, start, end, None))
elif c == '"' or c == '\'':
# We matched a string.
tags.append((stringTag, start, end, None))
else:
# We matched a keyword.
if start != searchfrom:
# there's still a redundant char before and after it, strip!
match = match[1:-1]
start = start + 1
else:
# This is the first keyword in the text.
# Only a space at the end.
match = match[:-1]
end = end - 1
tags.append((keywordTag, start, end, None))
# If this was a defining keyword, look ahead to the
# following identifier.
if match in ["def", "class"]:
matchobj = idRE.search(pytext, end)
if matchobj:
start = matchobj.start()
if start == end and start < searchto:
end = start + len(matchobj.group(0))
tags.append((identifierTag, start, end, None))
return tags
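# Editor's worked example (not part of the original module):
#   src = 'def spam():\n    # fry it\n    return "eggs"\n'
#   for tag, start, end, _ in fontify(src):
#       print tag, repr(src[start:end])
# reports 'keyword' for "def", 'identifier' for " spam", 'comment' for
# "# fry it", 'keyword' for "return" and 'string' for '"eggs"'.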
def test(path):
f = open(path)
text = f.read()
f.close()
tags = fontify(text)
for tag, start, end, sublist in tags:
print tag, `text[start:end]`
if __name__ == "__main__":
import sys
test(sys.argv[0])


@@ -1,54 +0,0 @@
TARGETS = python/elx-python java/elx-java
all : $(TARGETS)
CFLAGS = -g -Wpointer-arith -Wwrite-strings -Wshadow -Wall
CPPFLAGS = -Ipython -Ijava -I.
# the scanner depends on tokens generated from python.y
python/scanner.c : python/python.c
# the keywords also need the tokens in python.h
python/py_keywords.c : python/python.c
# we need the scanner tokens in py_scan.h and keywords in py_keywords.h
python/elx-python.o : python/elx-python.c python/py_keywords.c
# we need java.[ch] generated first to get the tokens in java.h
java/j_scan.c : java/java.c java/j_keywords.c
# the keywords also need the tokens in java.h
java/j_keywords.c : java/java.c
# we need the scanner tokens in j_scan.h and keywords in j_keywords.h
java/elx-java.o : java/elx-java.c java/j_keywords.c java/java.c
python/elx-python : python/elx-python.o python/scanner.o python/python.o \
python/py_keywords.o elx-common.o
$(CC) -o $@ $^
java/elx-java : java/elx-java.o java/j_scan.o java/java.o java/j_keywords.o \
elx-common.o
$(CC) -o $@ $^
clean :
rm -f *.o
rm -f python/*.o python/python.[ch] python/py_keywords.[ch]
rm -f java/*.o java/java.[ch] java/j_keywords.[ch] java/j_scan.[ch]
rm -f python/*.output java/*.output
rm -f $(TARGETS)
.SUFFIXES:
.SUFFIXES: .c .y .shilka .o
%.c : %.y
@d="`echo $@ | sed 's/\.c//'`" ; \
echo msta -d -enum -v -o $$d $< ; \
msta -d -enum -v -o $$d $<
%.c : %.shilka
@d="`echo $@ | sed 's,/[^/]*$$,,'`" ; \
f="`echo $< | sed 's,.*/,,'`" ; \
echo shilka -length -no-definitions -interface $$f ; \
(cd $$d && shilka -length -no-definitions -interface $$f)


@@ -1,110 +0,0 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include "elx.h"
#define ELX_ELEMS_EXT ".elx"
#define ELX_SYMBOLS_EXT ".els"
static void usage(const char *progname)
{
fprintf(stderr, "USAGE: %s FILENAME\n", progname);
exit(1);
}
static const char * build_one(const char *base, int len, const char *suffix)
{
int slen = strlen(suffix);
char *fn;
fn = malloc(len + slen + 1);
memcpy(fn, base, len);
memcpy(fn + len, suffix, slen);
fn[len + slen] = '\0';
return fn;
}
elx_context_t *elx_process_args(int argc, const char **argv)
{
elx_context_t *ec;
const char *input_fn;
const char *p;
int len;
/* ### in the future, we can expand this for more options */
if (argc != 2)
{
usage(argv[0]);
/* NOTREACHED */
}
input_fn = argv[1];
p = strrchr(input_fn, '.');
if (p == NULL)
len = strlen(input_fn);
else
len = p - argv[1];
ec = malloc(sizeof(*ec));
ec->input_fn = input_fn;
ec->elx_fn = build_one(input_fn, len, ELX_ELEMS_EXT);
ec->sym_fn = build_one(input_fn, len, ELX_SYMBOLS_EXT);
return ec;
}
void elx_open_files(elx_context_t *ec)
{
const char *fn;
const char *op;
if ((ec->input_fp = fopen(ec->input_fn, "r")) == NULL)
{
fn = ec->input_fn;
op = "reading";
goto error;
}
if ((ec->elx_fp = fopen(ec->elx_fn, "w")) == NULL)
{
fn = ec->elx_fn;
op = "writing";
goto error;
}
if ((ec->sym_fp = fopen(ec->sym_fn, "w")) == NULL)
{
fn = ec->sym_fn;
op = "writing";
goto error;
}
return;
error:
fprintf(stderr, "ERROR: file \"%s\" could not be opened for %s.\n %s\n",
fn, op, strerror(errno));
exit(2);
}
void elx_close_files(elx_context_t *ec)
{
fclose(ec->input_fp);
fclose(ec->elx_fp);
fclose(ec->sym_fp);
}
void elx_issue_token(elx_context_t *ec,
char which, int start, int len,
const char *symbol)
{
fprintf(ec->elx_fp, "%c %d %d\n", which, start, len);
if (ELX_DEFINES_SYM(which))
{
fprintf(ec->sym_fp, "%s %d %s\n", symbol, start, ec->input_fn);
}
}


@@ -1,54 +0,0 @@
#ifndef ELX_H
#define ELX_H
#include <stdio.h>
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
#define ELX_COMMENT 'C' /* a comment */
#define ELX_STRING 'S' /* a string constant */
#define ELX_KEYWORD 'K' /* a language keyword */
#define ELX_GLOBAL_FDEF 'F' /* function defn in global (visible) scope */
#define ELX_LOCAL_FDEF 'L' /* function defn in local (hidden) scope */
#define ELX_METHOD_DEF 'M' /* method definition */
#define ELX_FUNC_REF 'R' /* function reference / call */
#define ELX_DEFINES_SYM(c) ((c) == ELX_GLOBAL_FDEF || (c) == ELX_LOCAL_FDEF \
|| (c) == ELX_METHOD_DEF)
typedef struct
{
/* input filename */
const char *input_fn;
/* output filenames: element extractions, and symbols */
const char *elx_fn;
const char *sym_fn;
/* file pointers for each of the input/output files */
FILE *input_fp;
FILE *elx_fp;
FILE *sym_fp;
} elx_context_t;
elx_context_t *elx_process_args(int argc, const char **argv);
void elx_open_files(elx_context_t *ec);
void elx_close_files(elx_context_t *ec);
void elx_issue_token(elx_context_t *ec,
char which, int start, int len,
const char *symbol);
#ifdef __cplusplus
}
#endif /* __cplusplus */
#endif /* ELX_H */


@@ -1,121 +0,0 @@
#!/usr/bin/env python
#
# generate HTML given an input file and an element file
#
import re
import string
import cgi
import struct
_re_elem = re.compile('([a-zA-Z]) ([0-9]+) ([0-9]+)\n')
CHUNK_SIZE = 98304 # 4096*24
class ElemParser:
"Parse an elements file, extracting the token type, start, and length."
def __init__(self, efile):
self.efile = efile
def get(self):
line = self.efile.readline()
if not line:
return None, None, None
t, s, e = string.split(line)
return t, int(s)-1, int(e)
def unused_get(self):
record = self.efile.read(9)
if not record:
return None, None, None
return struct.unpack('>cii', record)
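# Editor's note (not in the original file): each line of the elements
# file is "TYPE START LENGTH" -- e.g. "K 1 3" for a 3-character keyword
# at offset 1 -- where TYPE is one of the elx element codes and START is
# treated as 1-based; get() shifts it to a 0-based offset for use with
# Writer.copy() below.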
class Writer:
"Generate output, including copying from another input."
def __init__(self, ifile, ofile):
self.ifile = ifile
self.ofile = ofile
self.buf = ifile.read(CHUNK_SIZE)
self.offset = 0
def write(self, data):
self.ofile.write(data)
def copy(self, pos, amt):
"Copy 'amt' bytes from position 'pos' of input to output."
idx = pos - self.offset
self.ofile.write(cgi.escape(buffer(self.buf, idx, amt)))
amt = amt - (len(self.buf) - idx)
while amt > 0:
self._more()
self.ofile.write(cgi.escape(buffer(self.buf, 0, amt)))
amt = amt - len(self.buf)
def flush(self, pos):
"Flush the rest of the input to the output."
idx = pos - self.offset
self.ofile.write(cgi.escape(buffer(self.buf, idx)))
while 1:
buf = self.ifile.read(CHUNK_SIZE)
if not buf:
break
self.ofile.write(cgi.escape(buf))
def _more(self):
self.offset = self.offset + len(self.buf)
self.buf = self.ifile.read(CHUNK_SIZE)
def generate(input, elems, output, genpage=0):
ep = ElemParser(elems)
w = Writer(input, output)
cur = 0
if genpage:
w.write('''\
<html><head><title>ELX Output Page</title>
<style type="text/css">
.elx_C { color: firebrick; font-style: italic; }
.elx_S { color: #bc8f8f; font-weight: bold; }
.elx_K { color: purple; font-weight: bold }
.elx_F { color: blue; font-weight: bold; }
.elx_L { color: blue; font-weight: bold; }
.elx_M { color: blue; font-weight: bold; }
.elx_R { color: blue; font-weight: bold; }
</style>
</head>
<body>
''')
w.write('<pre>')
while 1:
type, start, length = ep.get()
if type is None:
break
if cur < start:
# print out the plain text between 'cur' and 'start'
w.copy(cur, start - cur)
# wrap a bit o' formatting here
w.write('<span class="elx_%s">' % type)
# copy over the token
w.copy(start, length)
# and close up the formatting
w.write('</span>')
cur = start + length
# all done.
w.flush(cur)
w.write('</pre>')
if genpage:
w.write('</body></html>\n')
if __name__ == '__main__':
import sys
generate(open(sys.argv[1]), open(sys.argv[2]), sys.stdout, 1)

Some files were not shown because too many files have changed in this diff.