Compare commits


No commits in common. "master" and "v0.4.2" have entirely different histories.

95 changed files with 1328 additions and 1960 deletions


@ -1,5 +0,0 @@
*
!cmake
!grive
!libgrive
!CMakeLists.txt

.gitignore

@ -14,14 +14,3 @@ bgrive/bgrive
grive/grive
libgrive/btest
*.cmake
debian/debhelper-build-stamp
debian/files
debian/grive.debhelper.log
debian/grive.substvars
debian/grive/
debian/.debhelper
obj-x86_64-linux-gnu/
.idea

CMakeLists.txt

@ -1,27 +1,11 @@
cmake_minimum_required(VERSION 2.8)
project(grive2)
include(GNUInstallDirs)
# Grive version. remember to update it for every new release!
set( GRIVE_VERSION "0.5.3" CACHE STRING "Grive version" )
message(WARNING "Version to build: ${GRIVE_VERSION}")
set( GRIVE_VERSION "0.4.2" )
# common compile options
add_definitions( -DVERSION="${GRIVE_VERSION}" )
add_definitions( -D_FILE_OFFSET_BITS=64 -std=c++0x )
if ( APPLE )
add_definitions( -Doff64_t=off_t )
endif ( APPLE )
find_program(
HAVE_SYSTEMD systemd
PATHS /lib/systemd /usr/lib/systemd
NO_DEFAULT_PATH
)
if ( HAVE_SYSTEMD )
add_subdirectory( systemd )
endif( HAVE_SYSTEMD )
add_definitions( -D_FILE_OFFSET_BITS=64 )
add_subdirectory( libgrive )
add_subdirectory( grive )

Dockerfile

@ -1,25 +0,0 @@
FROM alpine:3.7 as build
RUN apk add make cmake g++ libgcrypt-dev yajl-dev yajl \
boost-dev curl-dev expat-dev cppunit-dev binutils-dev \
pkgconfig
ADD . /grive2
RUN mkdir /grive2/build \
&& cd /grive2/build \
&& cmake .. \
&& make -j4 install
FROM alpine:3.7
RUN apk add yajl libcurl libgcrypt boost-program_options boost-regex libstdc++ boost-system \
&& apk add boost-filesystem --repository=http://dl-cdn.alpinelinux.org/alpine/edge/main
COPY --from=build /usr/local/bin/grive /bin/grive
RUN chmod 777 /bin/grive \
&& mkdir /data
VOLUME /data
WORKDIR /data
ENTRYPOINT grive

README.md

@ -1,14 +1,14 @@
# Grive2 0.5.3
# Grive2 0.4.2
09 Nov 2022, Vitaliy Filippov
28 Dec 2015, Vitaliy Filippov
http://yourcmc.ru/wiki/Grive2
This is the fork of original "Grive" (https://github.com/Grive/grive) Google Drive client
with the support for the new Drive REST API and partial sync.
Grive simply downloads all the files in your Google Drive into the current directory.
After you make some changes to the local files, run
Grive can be considered still beta or pre-beta quality. It simply downloads all the files in your
Google Drive into the current directory. After you make some changes to the local files, run
grive again and it will upload your changes back to your Google Drive. New files created locally
or in Google Drive will be uploaded or downloaded respectively. Deleted files will also be "removed".
Currently Grive will NOT destroy any of your files: it will only move the files to a
@ -16,135 +16,11 @@ directory named .trash or put them in the Google Drive trash. You can always rec
There are a few things that Grive does not do at the moment:
- continuously wait for changes in the file system or in Google Drive and sync them.
A sync is only performed when you run Grive (there are workarounds for almost
continuous sync; see below).
A sync is only performed when you run Grive, and it calculates checksums for all files every time.
- symbolic links support.
- support for Google documents.
These may be added in the future.
Enjoy!
## Usage
When Grive is run for the first time, you should use the "-a" argument to grant
Grive permission to access your Google Drive:
```bash
cd $HOME
mkdir google-drive
cd google-drive
grive -a
```
A URL should be printed. Go to the link. You will need to log in to your Google
account if you haven't done so already. After granting permission to Grive, the
authorization code will be forwarded to the Grive application and you will be
redirected to a localhost web page confirming the authorization.
If everything works fine, Grive will create .grive and .grive\_state files in your
current directory. It will also start downloading files from your Google Drive to
your current directory.
To resync the directory, run `grive` in the folder.
```bash
cd $HOME/google-drive
grive
```
### Exclude specific files and folders from sync: .griveignore
Rules are similar to Git's .gitignore, but may differ slightly due to the different
implementation.
- lines that start with # are comments
- leading and trailing spaces ignored unless escaped with \
- non-empty lines without ! in front are treated as "exclude" patterns
- non-empty lines with ! in front are treated as "include" patterns
and have a priority over all "exclude" ones
- patterns are matched against the filenames relative to the grive root
- a/**/b matches any number of subpaths between a and b, including 0
- **/a matches `a` inside any directory
- b/** matches everything inside `b`, but not b itself
- \* matches any number of any characters except /
- ? matches any character except /
- .griveignore itself isn't ignored by default, but you can include it in itself to ignore
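For illustration only (grive2's actual matcher is implemented in C++ and may differ in details), the glob rules above can be sketched as a translation to regular expressions:

```python
import re

def grive_pattern_to_regex(pattern: str) -> str:
    """Hypothetical sketch of the .griveignore glob rules listed above."""
    out = []
    i = 0
    while i < len(pattern):
        if pattern[i:i + 3] == '**/':
            out.append(r'(?:[^/]+/)*')  # any number of subpaths, including 0
            i += 3
        elif pattern[i:i + 2] == '**':
            out.append(r'.*')           # everything, across directory separators
            i += 2
        elif pattern[i] == '*':
            out.append(r'[^/]*')        # any number of any characters except /
            i += 1
        elif pattern[i] == '?':
            out.append(r'[^/]')         # any single character except /
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return ''.join(out)

# a/**/b matches any number of subpaths between a and b, including 0:
assert re.fullmatch(grive_pattern_to_regex('a/**/b'), 'a/b')
assert re.fullmatch(grive_pattern_to_regex('a/**/b'), 'a/x/y/b')
# b/** matches everything inside b, but not b itself:
assert not re.fullmatch(grive_pattern_to_regex('b/**'), 'b')
```

Patterns are matched against paths relative to the grive root, so the regex is applied with a full match rather than a substring search.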
### Scheduled syncs and syncs on file change events
There are tools which you can use to enable both scheduled syncs and syncs
when a file changes. Together these give you an experience almost like the
Google Drive clients on other platforms (minus the almost instantaneous
download of files changed in Google Drive).
Grive installs such a basic solution which uses inotify-tools together with
systemd timer and services. You can enable it for a folder in your `$HOME`
directory (in this case the `$HOME/google-drive`):
First install `inotify-tools` (the package seems to be named like that in all major
distros) and test that it works by calling `inotifywait -h`.
Prepare a Google Drive folder in your $HOME directory with `grive -a`.
```bash
# 'google-drive' is the name of your Google Drive folder in your $HOME directory
systemctl --user enable grive@$(systemd-escape google-drive).service
systemctl --user start grive@$(systemd-escape google-drive).service
```
You can enable and start this unit for multiple folders in your `$HOME`
directory if you need to sync with multiple Google accounts.
You can also only enable the time based syncing or the changes based syncing
by only directly enabling and starting the corresponding unit:
`grive-changes@$(systemd-escape google-drive).service` or
`grive-timer@$(systemd-escape google-drive).timer`.
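The `systemd-escape` calls above turn a folder name into a valid unit-instance name. As a simplified sketch (the real tool also special-cases things like a leading `.`; see systemd.unit(5)), the escaping works roughly like this:

```python
def systemd_escape(name: str) -> str:
    # Simplified sketch of systemd unit-name escaping:
    # '/' maps to '-'; anything outside ASCII alphanumerics, ':', '_', '.'
    # becomes a C-style \xNN escape.
    out = []
    for ch in name:
        if ch == '/':
            out.append('-')
        elif ch.isascii() and (ch.isalnum() or ch in ':_.'):
            out.append(ch)
        else:
            out.append('\\x%02x' % ord(ch))
    return ''.join(out)

# 'google-drive' contains '-', which must be escaped in the instance name:
assert systemd_escape('google-drive') == 'google\\x2ddrive'
```

This is why the unit is enabled as `grive@google\x2ddrive.service` under the hood rather than with the raw folder name.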
### Shared files
Files and folders which are shared with you don't automatically show up in
your folder. They need to be added explicitly to your Google Drive: go to the
Google Drive website, right click on the file or folder and choose 'Add to My
Drive'.
### Different OAuth2 client to workaround over quota and google approval issues
Google recently started to restrict access for unapproved applications:
https://developers.google.com/drive/api/v3/about-auth?hl=ru
Grive2 is currently awaiting approval but it seems it will take forever.
Also, even if they approve it, the default Client ID supplied with grive may
exceed its quota, in which case grive will fail to sync.
You can supply your own OAuth2 client credentials to work around these problems
by following these steps:
1. Go to https://console.developers.google.com/apis/api/drive.googleapis.com
2. Choose a project (you might need to create one first)
3. Go to https://console.developers.google.com/apis/library/drive.googleapis.com and
"Enable" the Google Drive APIs
4. Go to https://console.cloud.google.com/apis/credentials and click "Create credentials > Help me choose"
5. In the "Find out what credentials you need" dialog, choose:
- Which API are you using: "Google Drive API"
- Where will you be calling the API from: "Other UI (...CLI...)"
- What data will you be accessing: "User Data"
6. In the next steps create a client id (name doesn't matter) and
setup the consent screen (defaults are ok, no need for any URLs)
7. The needed "Client ID" and "Client Secret" are either in the shown download
   or can later be found by clicking on the created credential on
   https://console.developers.google.com/apis/credentials/
8. When you change client ID/secret in an existing Grive folder you must first delete
the old `.grive` configuration file.
9. Call `grive -a --id <client_id> --secret <client_secret>` and follow the steps
to authenticate the OAuth2 client to allow it to access your drive folder.
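For orientation, the URL that step 9 prints is a standard Google OAuth2 consent URL. A rough sketch of how such a URL is assembled (the parameter values here are illustrative; grive's exact redirect URI and scope are defined in its source):

```python
from urllib.parse import urlencode

def make_auth_url(client_id: str) -> str:
    # Illustrative only: Google's standard OAuth2 consent endpoint;
    # the redirect_uri shown is a hypothetical loopback address.
    params = {
        'client_id': client_id,
        'response_type': 'code',
        'redirect_uri': 'http://127.0.0.1:41337/',  # hypothetical port
        'scope': 'https://www.googleapis.com/auth/drive',
    }
    return 'https://accounts.google.com/o/oauth2/auth?' + urlencode(params)

url = make_auth_url('1234-example.apps.googleusercontent.com')
assert url.startswith('https://accounts.google.com/o/oauth2/auth?')
```

The loopback redirect is what lets the authorization code flow back to the locally running grive process after you approve access in the browser.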
## Installation
For the detailed instructions, see http://yourcmc.ru/wiki/Grive2#Installation
### Install dependencies
These may be added in the future, possibly the next release.
You need the following libraries:
@ -163,28 +39,13 @@ There are also some optional dependencies:
On a Debian/Ubuntu/Linux Mint machine just run the following command to install all
these packages:
sudo apt-get install git cmake build-essential libgcrypt20-dev libyajl-dev \
libboost-all-dev libcurl4-openssl-dev libexpat1-dev libcppunit-dev binutils-dev \
debhelper zlib1g-dev dpkg-dev pkg-config
Fedora:
sudo dnf install git cmake libgcrypt-devel gcc-c++ libstdc++ yajl-devel boost-devel libcurl-devel expat-devel binutils zlib
sudo apt-get install git cmake build-essential libgcrypt11-dev libyajl-dev \
libboost-all-dev libcurl4-openssl-dev libexpat1-dev libcppunit-dev binutils-dev
FreeBSD:
pkg install git cmake boost-libs yajl libgcrypt pkgconf cppunit libbfd
### Build Debian packages
On a Debian/Ubuntu/Linux Mint you can use `dpkg-buildpackage` utility from `dpkg-dev` package
to build grive. Just clone the repository, `cd` into it and run
dpkg-buildpackage -j4 --no-sign
### Manual build
Grive uses cmake to build. Basic install sequence is
mkdir build
@ -193,45 +54,22 @@ Grive uses cmake to build. Basic install sequence is
make -j4
sudo make install
Alternatively, you can define your own client_id and client_secret during build:
For the detailed instructions, see http://yourcmc.ru/wiki/Grive2#Installation
mkdir build
cd build
cmake .. "-DAPP_ID:STRING=<client_id>" "-DAPP_SECRET:STRING=<client_secret>"
make -j4
sudo make install
When Grive is run for the first time, you should use the "-a" argument to grant
Grive permission to access your Google Drive. A URL should be printed.
Go to the link. You will need to log in to your Google account if you haven't
done so. After granting the permission to Grive, the browser will show you
an authentication code. Copy and paste it into the standard input of Grive.
If everything works fine, Grive will create .grive and .grive_state files in your
current directory. It will also start downloading files from your Google Drive to
your current directory.
Enjoy!
## Version History
### Grive2 v0.5.3
- Implement Google OAuth loopback IP redirect flow
- Various small fixes
### Grive2 v0.5.1
- Support for .griveignore
- Automatic sync solution based on inotify-tools and systemd
- no-remote-new and upload-only modes
- Ignore regexp does not persist anymore (note that Grive will still track it to not
accidentally delete remote files when changing ignore regexp)
- Added options to limit upload and download speed
- Faster upload of new and changed files. Now Grive uploads files without first calculating
md5 checksum when file is created locally or when its size changes.
- Added -P/--progress-bar option to print ASCII progress bar for each processed file (pull request by @svartkanin)
- Added command-line options to specify your own client_id and client_secret
- Now grive2 skips links, sockets, fifos and other unusual files
- Various small build fixes
### Grive2 v0.5
- Much faster and more correct synchronisation using local modification time and checksum cache (similar to git index)
- Automatic move/rename detection, -m option removed
- force option works again
- Instead of crashing on sync exceptions Grive will give a warning and attempt to sync failed files again during the next run.
- Revision support works again. Grive 0.4.x always created new revisions for all files during sync, regardless of the absence of the --new-rev option.
- Shared files now sync correctly
### Grive2 v0.4.2
- Option to exclude files by perl regexp
@ -251,7 +89,7 @@ Known issues:
First fork release, by Vitaliy Filippov / vitalif at mail*ru
- Support for the new Google Drive REST API (old "Document List" API is shut down by Google 20 April 2015)
- REAL support for partial sync: syncs only one subdirectory with `grive -s subdir`
- REAL support for partial sync: syncs only one subdirectory with `grive -d subdir`
- Major refactoring - a lot of dead code removed, JSON-C is not used anymore, API-specific code is split from non-API-specific
- Some stability fixes from Visa Putkinen https://github.com/visap/grive/commits/visa
- Slightly reduce number of syscalls when reading local files.
@ -267,4 +105,3 @@ New features:
- #87: support for revisions
- #86: ~~partial sync (contributed by justin at tierramedia.com)~~ that's not partial sync,
that's only support for specifying local path on command line


@ -1,6 +1,12 @@
find_library( DL_LIBRARY NAMES dl PATH ${CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES} )
find_library( BFD_LIBRARY NAMES bfd PATH ${CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES} )
if ( BFD_LIBRARY )
if ( DL_LIBRARY AND BFD_LIBRARY )
set( BFD_FOUND TRUE )
endif (DL_LIBRARY AND BFD_LIBRARY)
if ( BFD_FOUND )
message( STATUS "Found libbfd: ${BFD_LIBRARY}")
endif ( BFD_LIBRARY )
endif ( BFD_FOUND )


@ -27,9 +27,6 @@ IF(LIBGCRYPTCONFIG_EXECUTABLE)
EXEC_PROGRAM(${LIBGCRYPTCONFIG_EXECUTABLE} ARGS --cflags RETURN_VALUE _return_VALUE OUTPUT_VARIABLE LIBGCRYPT_CFLAGS)
string(REPLACE "fgrep: warning: fgrep is obsolescent; using grep -F" "" LIBGCRYPT_LIBRARIES "${LIBGCRYPT_LIBRARIES}")
string(STRIP "${LIBGCRYPT_LIBRARIES}" LIBGCRYPT_LIBRARIES)
IF(${LIBGCRYPT_CFLAGS} MATCHES "\n")
SET(LIBGCRYPT_CFLAGS " ")
ENDIF(${LIBGCRYPT_CFLAGS} MATCHES "\n")


@ -1,63 +0,0 @@
#compdef grive
# ------------------------------------------------------------------------------
# Copyright (c) 2015 Github zsh-users - http://github.com/zsh-users
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the zsh-users nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL ZSH-USERS BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# ------------------------------------------------------------------------------
# Description
# -----------
#
# Completion script for Grive (https://github.com/vitalif/grive2)
#
# ------------------------------------------------------------------------------
# Authors
# -------
#
# * Doron Behar <https://github.com/doronbehar>
#
# ------------------------------------------------------------------------------
# -*- mode: zsh; sh-indentation: 2; indent-tabs-mode: nil; sh-basic-offset: 2; -*-
# vim: ft=zsh sw=2 ts=2 et
# ------------------------------------------------------------------------------
local curcontext="$curcontext" state line ret=1
typeset -A opt_args
_arguments -C \
'(-h --help)'{-h,--help}'[Produce help message]' \
'(-v --version)'{-v,--version}'[Display Grive version]' \
'(-a --auth)'{-a,--auth}'[Request authorization token]' \
'(-p --path)'{-p,--path}'[Root directory to sync]' \
'(-s --dir)'{-s,--dir}'[Single subdirectory to sync (remembered for next runs)]' \
'(-V --verbose)'{-V,--verbose}'[Verbose mode. Enable more messages than normal.]' \
'(--log-http)--log-http[Log all HTTP responses in this file for debugging.]' \
'(--new-rev)--new-rev[Create new revisions in server for updated files.]' \
'(-d --debug)'{-d,--debug}'[Enable debug level messages. Implies -v.]' \
'(-l --log)'{-l,--log}'[Set log output filename.]' \
'(-f --force)'{-f,--force}'[Force grive to always download a file from Google Drive instead of uploading it.]' \
'(--dry-run)--dry-run[Only detect which files need to be uploaded/downloaded, without actually performing them.]' \
'(--ignore)--ignore[Perl RegExp to ignore files (matched against relative paths, remembered for next runs)]' \
'*: :_files' && ret=0
return ret

debian/changelog

@ -1,24 +1,3 @@
grive2 (0.5.3) unstable; urgency=medium
* Implement Google OAuth loopback IP redirect flow
* Various small fixes
-- Vitaliy Filippov <vitalif@yourcmc.ru> Wed, 09 Nov 2022 12:42:28 +0300
grive2 (0.5.2+git20210315) unstable; urgency=medium
* Newer dev version
* Add systemd unit files and helper script for automatic syncs
* Add possibility to change client id and secret and save it between runs
-- Vitaliy Filippov <vitalif@yourcmc.ru> Wed, 31 Jul 2016 22:04:53 +0300
grive2 (0.5+git20160114) unstable; urgency=medium
* Newer release, with support for faster sync and rename detection
-- Vitaliy Filippov <vitalif@yourcmc.ru> Sun, 03 Jan 2016 12:51:55 +0300
grive2 (0.4.1+git20151011) unstable; urgency=medium
* Add Debian packaging scripts to the official repository

debian/compat

@ -1 +1 @@
11
7

debian/control

@ -2,7 +2,7 @@ Source: grive2
Section: net
Priority: optional
Maintainer: Vitaliy Filippov <vitalif@mail.ru>
Build-Depends: debhelper, cmake, pkg-config, zlib1g-dev, libcurl4-openssl-dev | libcurl4-gnutls-dev, libboost-filesystem-dev, libboost-program-options-dev, libboost-test-dev, libboost-regex-dev, libexpat1-dev, libgcrypt-dev, libyajl-dev
Build-Depends: debhelper, cmake, pkg-config, zlib1g-dev, libcurl4-openssl-dev, libstdc++6-4.4-dev | libstdc++-4.9-dev | libstdc++-5-dev, libboost-filesystem-dev, libboost-program-options-dev, libboost-test-dev, libboost-regex-dev, libexpat1-dev, binutils-dev, libgcrypt-dev, libyajl-dev
Standards-Version: 3.9.6
Homepage: https://yourcmc.ru/wiki/Grive2

debian/rules

@ -1,7 +1,4 @@
#!/usr/bin/make -f
override_dh_auto_configure:
dh_auto_configure -- -DHAVE_SYSTEMD=1
%:
dh $@ --buildsystem=cmake --parallel --builddirectory=build
dh $@ --buildsystem=cmake --parallel


@ -17,28 +17,13 @@ add_executable( grive_executable
)
target_link_libraries( grive_executable
${Boost_LIBRARIES}
grive
)
set(DEFAULT_APP_ID "615557989097-i93d4d1ojpen0m0dso18ldr6orjkidgf.apps.googleusercontent.com")
set(DEFAULT_APP_SECRET "xiM8Apu_WuRRdheNelJcNtOD")
set(APP_ID ${DEFAULT_APP_ID} CACHE STRING "Application Id")
set(APP_SECRET ${DEFAULT_APP_SECRET} CACHE STRING "Application Secret")
target_compile_definitions ( grive_executable
PRIVATE
-DAPP_ID="${APP_ID}"
-DAPP_SECRET="${APP_SECRET}"
)
set_target_properties( grive_executable
PROPERTIES OUTPUT_NAME grive
)
install(TARGETS grive_executable RUNTIME DESTINATION bin)
if ( ${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD" OR ${CMAKE_SYSTEM_NAME} MATCHES "OpenBSD" )
install(FILES doc/grive.1 DESTINATION man/man1 )
else ( ${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD" OR ${CMAKE_SYSTEM_NAME} MATCHES "OpenBSD" )
install(FILES doc/grive.1 DESTINATION share/man/man1 )
endif( ${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD" OR ${CMAKE_SYSTEM_NAME} MATCHES "OpenBSD" )
install(FILES doc/grive.1 DESTINATION share/man/man1 )


@ -2,7 +2,7 @@
.\" First parameter, NAME, should be all caps
.\" Second parameter, SECTION, should be 1-8, maybe w/ subsection
.\" other parameters are allowed: see man(7), man(1)
.TH "GRIVE" 1 "January 3, 2016"
.TH "GRIVE" 1 "June 19, 2012"
.SH NAME
grive \- Google Drive client for GNU/Linux
@ -26,102 +26,33 @@ Requests authorization token from Google
Enable debug level messages. Implies \-V
.TP
\fB\-\-dry-run\fR
Only detect which files need to be uploaded/downloaded, without actually performing changes
Only detects which files are needed for download or upload without doing it
.TP
\fB\-f, \-\-force\fR
Forces
.I grive
to always download a file from Google Drive instead of uploading it
.TP
\fB\-u, \-\-upload\-only\fR
Forces
.I grive
to not download anything from Google Drive and only upload local changes to server instead
.TP
\fB\-n, \-\-no\-remote\-new\fR
Forces
.I grive
to download only files that are changed in Google Drive and already exist locally
.TP
\fB\-h\fR, \fB\-\-help\fR
Produces help message
.TP
\fB\-\-ignore\fR <perl_regexp>
Ignore files with relative paths matching this Perl Regular Expression.
.TP
\fB\-l\fR <filename>, \fB\-\-log\fR <filename>
Write log output to
.I <filename>
.TP
\fB\-\-log\-http\fR <filename_prefix>
Log all HTTP responses in files named
.I <filename_prefix>YYYY-MM-DD.HHMMSS.txt
for debugging
.TP
\fB\-\-new\-rev\fR
Create new revisions in server for updated files
.TP
\fB\-p\fR <wc_path>, \fB\-\-path\fR <wc_path>
Use
.I <wc_path>
as the working copy root directory
.TP
\fB\-s\fR <subdir>, \fB\-\-dir\fR <subdir>
Sync a single
.I <subdir>
subdirectory. Internally converted to an ignore regexp.
\fB\-l\fR filename, \fB\-\-log\fR filename
Set log output to
.I filename
.TP
\fB\-v\fR, \fB\-\-version\fR
Displays program version
.TP
\fB\-P\fR, \fB\-\-progress-bar\fR
Print ASCII progress bar for each downloaded/uploaded file.
.TP
\fB\-V\fR, \fB\-\-verbose\fR
Verbose mode. Enables more messages than usual.
.SH .griveignore
.SH AUTHOR
.PP
You may create .griveignore in your Grive root and use it to set up
exclusion/inclusion rules.
The software was developed by Nestal Wan.
.PP
Rules are similar to Git's .gitignore, but may differ slightly due to the different
implementation.
.IP \[bu]
lines that start with # are comments
.IP \[bu]
leading and trailing spaces ignored unless escaped with \\
.IP \[bu]
non-empty lines without ! in front are treated as "exclude" patterns
.IP \[bu]
non-empty lines with ! in front are treated as "include" patterns
and have a priority over all "exclude" ones
.IP \[bu]
patterns are matched against the filenames relative to the grive root
.IP \[bu]
a/**/b matches any number of subpaths between a and b, including 0
.IP \[bu]
**/a matches `a` inside any directory
.IP \[bu]
b/** matches everything inside `b`, but not b itself
.IP \[bu]
* matches any number of any characters except /
.IP \[bu]
? matches any character except /
.IP \[bu]
\[char46]griveignore itself isn't ignored by default, but you can include it in itself to ignore
.SH AUTHORS
.PP
Current maintainer is Vitaliy Filippov.
.PP
Original author was Nestal Wan.
This manpage was written by José Luis Segura Lucas (josel.segura@gmx.es)
.PP
The full list of contributors may be found here
.I http://yourcmc.ru/wiki/Grive2#Full_list_of_contributors
.SH REPORT BUGS
.PP
.I https://github.com/vitalif/grive2/issues
.I https://github.com/Grive/grive
.I https://groups.google.com/forum/?fromgroups#!forum/grive-devel


@ -18,7 +18,6 @@
*/
#include "util/Config.hh"
#include "util/ProgressBar.hh"
#include "base/Drive.hh"
#include "drive2/Syncer2.hh"
@ -46,8 +45,8 @@
#include <iostream>
#include <unistd.h>
const std::string default_id = APP_ID ;
const std::string default_secret = APP_SECRET ;
const std::string client_id = "22314510474.apps.googleusercontent.com" ;
const std::string client_secret = "bl4ufi89h-9MkFlypcI7R785" ;
using namespace gr ;
namespace po = boost::program_options;
@ -67,13 +66,12 @@ void InitGCrypt()
void InitLog( const po::variables_map& vm )
{
std::unique_ptr<log::CompositeLog> comp_log( new log::CompositeLog ) ;
std::unique_ptr<LogBase> def_log( new log::DefaultLog );
LogBase* console_log = comp_log->Add( def_log ) ;
std::auto_ptr<log::CompositeLog> comp_log(new log::CompositeLog) ;
LogBase* console_log = comp_log->Add( std::auto_ptr<LogBase>( new log::DefaultLog ) ) ;
if ( vm.count( "log" ) )
{
std::unique_ptr<LogBase> file_log( new log::DefaultLog( vm["log"].as<std::string>() ) ) ;
std::auto_ptr<LogBase> file_log(new log::DefaultLog( vm["log"].as<std::string>() )) ;
file_log->Enable( log::debug ) ;
file_log->Enable( log::verbose ) ;
file_log->Enable( log::info ) ;
@ -98,7 +96,7 @@ void InitLog( const po::variables_map& vm )
console_log->Enable( log::verbose ) ;
console_log->Enable( log::debug ) ;
}
LogBase::Inst( comp_log.release() ) ;
LogBase::Inst( std::auto_ptr<LogBase>(comp_log.release()) ) ;
}
int Main( int argc, char **argv )
@ -111,11 +109,8 @@ int Main( int argc, char **argv )
( "help,h", "Produce help message" )
( "version,v", "Display Grive version" )
( "auth,a", "Request authorization token" )
( "id,i", po::value<std::string>(), "Authentication ID")
( "secret,e", po::value<std::string>(), "Authentication secret")
( "print-url", "Only print url for request")
( "path,p", po::value<std::string>(), "Path to working copy root")
( "dir,s", po::value<std::string>(), "Single subdirectory to sync")
( "path,p", po::value<std::string>(), "Root directory to sync")
( "dir,s", po::value<std::string>(), "Single subdirectory to sync (remembered for next runs)")
( "verbose,V", "Verbose mode. Enable more messages than normal.")
( "log-http", po::value<std::string>(), "Log all HTTP responses in this file for debugging.")
( "new-rev", "Create new revisions in server for updated files.")
@ -123,26 +118,15 @@ int Main( int argc, char **argv )
( "log,l", po::value<std::string>(), "Set log output filename." )
( "force,f", "Force grive to always download a file from Google Drive "
"instead of uploading it." )
( "upload-only,u", "Do not download anything from Google Drive, only upload local changes" )
( "no-remote-new,n", "Download only files that are changed in Google Drive and already exist locally" )
( "dry-run", "Only detect which files need to be uploaded/downloaded, "
"without actually performing them." )
( "upload-speed,U", po::value<unsigned>(), "Limit upload speed in kbytes per second" )
( "download-speed,D", po::value<unsigned>(), "Limit download speed in kbytes per second" )
( "progress-bar,P", "Enable progress bar for upload/download of files")
( "ignore", po::value<std::string>(), "Perl RegExp to ignore files (matched against relative paths, remembered for next runs)." )
( "move,m", po::value<std::vector<std::string> >()->multitoken(), "Syncs, then moves a file (first argument) to new location (second argument) without reuploading or redownloading." )
;
po::variables_map vm;
try
{
po::store( po::parse_command_line( argc, argv, desc ), vm );
}
catch( po::error &e )
{
std::cerr << "Options are incorrect. Use -h for help\n";
return -1;
}
po::notify( vm );
po::store(po::parse_command_line( argc, argv, desc), vm );
po::notify(vm);
// simple commands that doesn't require log or config
if ( vm.count("help") )
@ -158,67 +142,43 @@ int Main( int argc, char **argv )
}
// initialize logging
InitLog( vm ) ;
InitLog(vm) ;
Config config( vm ) ;
Config config(vm) ;
Log( "config file name %1%", config.Filename(), log::verbose );
std::unique_ptr<http::Agent> http( new http::CurlAgent );
std::auto_ptr<http::Agent> http( new http::CurlAgent );
if ( vm.count( "log-http" ) )
http->SetLog( new http::ResponseLog( vm["log-http"].as<std::string>(), ".txt" ) );
std::unique_ptr<ProgressBar> pb;
if ( vm.count( "progress-bar" ) )
{
pb.reset( new ProgressBar() );
http->SetProgressReporter( pb.get() );
}
if ( vm.count( "auth" ) )
{
std::string id = vm.count( "id" ) > 0
? vm["id"].as<std::string>()
: default_id ;
std::string secret = vm.count( "secret" ) > 0
? vm["secret"].as<std::string>()
: default_secret ;
OAuth2 token( http.get(), id, secret ) ;
if ( vm.count("print-url") )
{
std::cout << token.MakeAuthURL() << std::endl ;
return 0 ;
}
OAuth2 token( http.get(), client_id, client_secret ) ;
std::cout
<< "-----------------------\n"
<< "Please open this URL in your browser to authenticate Grive2:\n\n"
<< "Please go to this URL and get an authentication code:\n\n"
<< token.MakeAuthURL()
<< std::endl ;
if ( !token.GetCode() )
{
std::cout << "Authentication failed\n";
return -1;
}
std::cout
<< "\n-----------------------\n"
<< "Please input the authentication code here: " << std::endl ;
std::string code ;
std::cin >> code ;
token.Auth( code ) ;
// save to config
config.Set( "id", Val( id ) ) ;
config.Set( "secret", Val( secret ) ) ;
config.Set( "refresh_token", Val( token.RefreshToken() ) ) ;
config.Save() ;
}
std::string refresh_token ;
std::string id ;
std::string secret ;
try
{
refresh_token = config.Get("refresh_token").Str() ;
id = config.Get("id").Str() ;
secret = config.Get("secret").Str() ;
}
catch ( Exception& e )
{
@ -229,33 +189,37 @@ int Main( int argc, char **argv )
return -1;
}
OAuth2 token( http.get(), refresh_token, id, secret ) ;
OAuth2 token( http.get(), refresh_token, client_id, client_secret ) ;
AuthAgent agent( token, http.get() ) ;
v2::Syncer2 syncer( &agent );
if ( vm.count( "upload-speed" ) > 0 )
agent.SetUploadSpeed( vm["upload-speed"].as<unsigned>() * 1000 );
if ( vm.count( "download-speed" ) > 0 )
agent.SetDownloadSpeed( vm["download-speed"].as<unsigned>() * 1000 );
Drive drive( &syncer, config.GetAll() ) ;
drive.DetectChanges() ;
if ( vm.count( "dry-run" ) == 0 )
{
// The progress bar should just be enabled when actual file transfers take place
if ( pb )
pb->setShowProgressBar( true ) ;
drive.Update() ;
if ( pb )
pb->setShowProgressBar( false ) ;
drive.SaveState() ;
}
else
drive.DryRun() ;
if ( vm.count ( "move" ) > 0 && vm.count( "dry-run" ) == 0 )
{
if (vm["move"].as<std::vector<std::string> >().size() < 2 )
Log( "Not enough arguments for move. Move failed.", log::error );
else
{
bool success = drive.Move( vm["move"].as<std::vector<std::string> >()[0],
vm["move"].as<std::vector<std::string> >()[1] );
if (success)
Log( "Move successful!", log::info );
else
Log( "Move failed.", log::error);
}
}
config.Save() ;
Log( "Finished!", log::info ) ;
return 0 ;


@ -4,31 +4,30 @@ set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake/Modules/")
find_package(LibGcrypt REQUIRED)
find_package(CURL REQUIRED)
find_package(Backtrace)
find_package(EXPAT REQUIRED)
find_package(Boost 1.40.0 COMPONENTS program_options filesystem unit_test_framework regex system REQUIRED)
find_package(BFD)
find_package(CppUnit)
find_package(Iberty)
find_package(ZLIB)
find_package(PkgConfig)
pkg_check_modules(YAJL REQUIRED yajl)
add_definitions(-Wall)
# additional headers if build unit tests
IF ( CPPUNIT_FOUND )
set( OPT_INCS ${CPPUNIT_INCLUDE_DIR} )
ENDIF ( CPPUNIT_FOUND )
# build bfd classes if libbfd and the backtrace library is found
if ( BFD_FOUND AND Backtrace_FOUND )
set( OPT_LIBS ${BFD_LIBRARY} ${Backtrace_LIBRARY} )
# build bfd classes if libbfd is found
if ( BFD_FOUND )
set( OPT_LIBS ${DL_LIBRARY} ${BFD_LIBRARY} )
file( GLOB OPT_SRC
src/bfd/*.cc
)
add_definitions( -DHAVE_BFD )
endif ( BFD_FOUND AND Backtrace_FOUND )
endif ( BFD_FOUND )
if ( IBERTY_FOUND )
set( OPT_LIBS ${OPT_LIBS} ${IBERTY_LIBRARY} )
@ -36,14 +35,21 @@ else ( IBERTY_FOUND )
set( IBERTY_LIBRARY "" )
endif ( IBERTY_FOUND )
if ( ZLIB_FOUND )
set( OPT_LIBS ${OPT_LIBS} ${ZLIB_LIBRARIES} )
endif ( ZLIB_FOUND )
include_directories(
${libgrive_SOURCE_DIR}/src
${libgrive_SOURCE_DIR}/test
${Boost_INCLUDE_DIRS}
${OPT_INCS}
${YAJL_INCLUDE_DIRS}
)
file(GLOB DRIVE_HEADERS
${libgrive_SOURCE_DIR}/src/drive/*.hh
)
file (GLOB PROTOCOL_HEADERS
${libgrive_SOURCE_DIR}/src/protocol/*.hh
)
@ -58,12 +64,14 @@ file (GLOB XML_HEADERS
file (GLOB LIBGRIVE_SRC
src/base/*.cc
src/drive/*.cc
src/drive2/*.cc
src/http/*.cc
src/protocol/*.cc
src/json/*.cc
src/util/*.cc
src/util/log/*.cc
src/xml/*.cc
)
add_definitions(
@@ -78,11 +86,9 @@ target_link_libraries( grive
${YAJL_LIBRARIES}
${CURL_LIBRARIES}
${LIBGCRYPT_LIBRARIES}
${Boost_FILESYSTEM_LIBRARY}
${Boost_PROGRAM_OPTIONS_LIBRARY}
${Boost_REGEX_LIBRARY}
${Boost_SYSTEM_LIBRARY}
${Boost_LIBRARIES}
${IBERTY_LIBRARY}
${EXPAT_LIBRARY}
${OPT_LIBS}
)
@@ -112,7 +118,9 @@ IF ( CPPUNIT_FOUND )
# list of test source files here
file(GLOB TEST_SRC
test/base/*.cc
test/drive/*.cc
test/util/*.cc
test/xml/*.cc
)
add_executable( unittest
@@ -123,6 +131,7 @@ IF ( CPPUNIT_FOUND )
target_link_libraries( unittest
grive
${CPPUNIT_LIBRARY}
${Boost_LIBRARIES}
)
ENDIF ( CPPUNIT_FOUND )
@@ -135,13 +144,9 @@ add_executable( btest ${BTEST_SRC} )
target_link_libraries( btest
grive
${Boost_UNIT_TEST_FRAMEWORK_LIBRARY}
${Boost_LIBRARIES}
)
if ( ${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD" )
set( CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-c++11-narrowing" )
endif ( ${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD" )
if ( WIN32 )
else ( WIN32 )
set_target_properties( btest

View File

@@ -35,16 +35,22 @@
#include <cstdlib>
#include <fstream>
#include <map>
#include <sstream>
// for debugging only
#include <iostream>
namespace gr {
namespace
{
const std::string state_file = ".grive_state" ;
}
Drive::Drive( Syncer *syncer, const Val& options ) :
m_syncer ( syncer ),
m_root ( options["path"].Str() ),
m_state ( m_root, options ),
m_state ( m_root / state_file, options ),
m_options ( options )
{
assert( m_syncer ) ;
@@ -52,7 +58,19 @@ Drive::Drive( Syncer *syncer, const Val& options ) :
void Drive::FromRemote( const Entry& entry )
{
m_state.FromRemote( entry ) ;
// entries from change feed does not have the parent HREF,
// so these checkings are done in normal entries only
Resource *parent = m_state.FindByHref( entry.ParentHref() ) ;
if ( parent != 0 && !parent->IsFolder() )
Log( "warning: entry %1% has parent %2% which is not a folder, ignored",
entry.Title(), parent->Name(), log::verbose ) ;
else if ( parent == 0 || !parent->IsInRootTree() )
Log( "file \"%1%\" parent doesn't exist, ignored", entry.Title(), log::verbose ) ;
else
m_state.FromRemote( entry ) ;
}
void Drive::FromChange( const Entry& entry )
@@ -67,16 +85,49 @@ void Drive::FromChange( const Entry& entry )
void Drive::SaveState()
{
m_state.Write() ;
m_state.Write( m_root / state_file ) ;
}
void Drive::SyncFolders( )
{
Log( "Synchronizing folders", log::info ) ;
std::auto_ptr<Feed> feed = m_syncer->GetFolders() ;
while ( feed->GetNext( m_syncer->Agent() ) )
{
// first, get all collections from the query result
for ( Feed::iterator i = feed->begin() ; i != feed->end() ; ++i )
{
const Entry &e = *i ;
if ( e.IsDir() )
{
if ( e.ParentHrefs().size() != 1 )
Log( "folder \"%1%\" has multiple parents, ignored", e.Title(), log::verbose ) ;
else if ( e.Title().find('/') != std::string::npos )
Log( "folder \"%1%\" contains a slash in its name, ignored", e.Title(), log::verbose ) ;
else
m_state.FromRemote( e ) ;
}
}
}
m_state.ResolveEntry() ;
}
void Drive::DetectChanges()
{
Log( "Reading local directories", log::info ) ;
m_state.FromLocal( m_root ) ;
long prev_stamp = m_state.ChangeStamp() ;
Trace( "previous change stamp is %1%", prev_stamp ) ;
SyncFolders( ) ;
Log( "Reading remote server file list", log::info ) ;
std::unique_ptr<Feed> feed = m_syncer->GetAll() ;
std::auto_ptr<Feed> feed = m_syncer->GetAll() ;
while ( feed->GetNext( m_syncer->Agent() ) )
{
@@ -84,19 +135,12 @@ void Drive::DetectChanges()
feed->begin(), feed->end(),
boost::bind( &Drive::FromRemote, this, _1 ) ) ;
}
m_state.ResolveEntry() ;
}
// pull the changes feed
// FIXME: unused until Grive will use the feed-based sync instead of reading full tree
void Drive::ReadChanges()
{
long prev_stamp = m_state.ChangeStamp() ;
// pull the changes feed
if ( prev_stamp != -1 )
{
Trace( "previous change stamp is %1%", prev_stamp ) ;
Log( "Detecting changes from last sync", log::info ) ;
std::unique_ptr<Feed> feed = m_syncer->GetChanges( prev_stamp+1 ) ;
feed = m_syncer->GetChanges( prev_stamp+1 ) ;
while ( feed->GetNext( m_syncer->Agent() ) )
{
std::for_each(
@@ -106,6 +150,11 @@ void Drive::ReadChanges()
}
}
bool Drive::Move( fs::path old_p, fs::path new_p )
{
return m_state.Move( m_syncer, old_p, new_p, m_options["path"].Str() );
}
void Drive::Update()
{
Log( "Synchronizing files", log::info ) ;

View File

@@ -41,6 +41,7 @@ public :
Drive( Syncer *syncer, const Val& options ) ;
void DetectChanges() ;
bool Move( fs::path old_p, fs::path new_p );
void Update() ;
void DryRun() ;
void SaveState() ;
@@ -48,7 +49,7 @@ public :
struct Error : virtual Exception {} ;
private :
void ReadChanges() ;
void SyncFolders( ) ;
void FromRemote( const Entry& entry ) ;
void FromChange( const Entry& entry ) ;
void UpdateChangeStamp( ) ;

View File

@@ -36,8 +36,7 @@ Entry::Entry( ) :
m_is_dir ( true ),
m_resource_id ( "folder:root" ),
m_change_stamp ( -1 ),
m_is_removed ( false ),
m_size ( 0 )
m_is_removed ( false )
{
}
@@ -66,11 +65,6 @@ std::string Entry::MD5() const
return m_md5 ;
}
u64_t Entry::Size() const
{
return m_size ;
}
DateTime Entry::MTime() const
{
return m_mtime ;

View File

@@ -19,7 +19,6 @@
#pragma once
#include "util/Types.hh"
#include "util/DateTime.hh"
#include "util/FileSystem.hh"
@@ -45,7 +44,6 @@ public :
bool IsDir() const ;
std::string MD5() const ;
DateTime MTime() const ;
u64_t Size() const ;
std::string Name() const ;
@@ -82,7 +80,6 @@ protected :
DateTime m_mtime ;
bool m_is_removed ;
u64_t m_size ;
} ;
} // end of namespace gr

View File

@@ -30,10 +30,6 @@ Feed::Feed( const std::string &url ):
{
}
Feed::~Feed()
{
}
Feed::iterator Feed::begin() const
{
return m_entries.begin() ;

View File

@@ -41,7 +41,6 @@ public :
public :
Feed( const std::string& url );
virtual bool GetNext( http::Agent *http ) = 0 ;
virtual ~Feed() = 0 ;
iterator begin() const ;
iterator end() const ;

View File

@@ -18,7 +18,6 @@
*/
#include "Resource.hh"
#include "ResourceTree.hh"
#include "Entry.hh"
#include "Syncer.hh"
@@ -28,7 +27,6 @@
#include "util/log/Log.hh"
#include "util/OS.hh"
#include "util/File.hh"
#include "http/Error.hh"
#include <boost/exception/all.hpp>
#include <boost/filesystem.hpp>
@@ -47,26 +45,20 @@ namespace gr {
Resource::Resource( const fs::path& root_folder ) :
m_name ( root_folder.string() ),
m_kind ( "folder" ),
m_size ( 0 ),
m_id ( "folder:root" ),
m_href ( "root" ),
m_is_editable( true ),
m_parent ( 0 ),
m_state ( sync ),
m_json ( NULL ),
m_local_exists( true )
m_is_editable( true )
{
}
Resource::Resource( const std::string& name, const std::string& kind ) :
m_name ( name ),
m_kind ( kind ),
m_size ( 0 ),
m_is_editable( true ),
m_parent ( 0 ),
m_state ( unknown ),
m_json ( NULL ),
m_local_exists( false )
m_is_editable( true )
{
}
@@ -83,62 +75,72 @@ void Resource::SetState( State new_state )
boost::bind( &Resource::SetState, _1, new_state ) ) ;
}
void Resource::FromRemoteFolder( const Entry& remote )
void Resource::FromRemoteFolder( const Entry& remote, const DateTime& last_change )
{
fs::path path = Path() ;
if ( !remote.IsEditable() )
Log( "folder %1% is read-only", path, log::verbose ) ;
// already sync
if ( m_local_exists && m_kind == "folder" )
if ( fs::is_directory( path ) )
{
Log( "folder %1% is in sync", path, log::verbose ) ;
m_state = sync ;
}
else if ( m_local_exists && m_kind == "file" )
// remote file created after last sync, so remote is newer
else if ( remote.MTime() > last_change )
{
// TODO: handle type change
Log( "%1% changed from folder to file", path, log::verbose ) ;
m_state = sync ;
}
else if ( m_local_exists && m_kind == "bad" )
{
Log( "%1% inaccessible", path, log::verbose ) ;
m_state = sync ;
}
else if ( remote.MTime().Sec() > m_mtime.Sec() ) // FIXME only seconds are stored in local index
{
// remote folder created after last sync, so remote is newer
Log( "folder %1% is created in remote", path, log::verbose ) ;
SetState( remote_new ) ;
if ( fs::exists( path ) )
{
// TODO: handle type change
Log( "%1% changed from folder to file", path, log::verbose ) ;
m_state = sync ;
}
else
{
// make all children as remote_new, if any
Log( "folder %1% is created in remote", path, log::verbose ) ;
SetState( remote_new ) ;
}
}
else
{
Log( "folder %1% is deleted in local", path, log::verbose ) ;
SetState( local_deleted ) ;
if ( fs::exists( path ) )
{
// TODO: handle type chage
Log( "%1% changed from file to folder", path, log::verbose ) ;
m_state = sync ;
}
else
{
Log( "folder %1% is deleted in local", path, log::verbose ) ;
SetState( local_deleted ) ;
}
}
}
/// Update the state according to information (i.e. Entry) from remote. This function
/// compares the modification time and checksum of both copies and determine which
/// one is newer.
void Resource::FromRemote( const Entry& remote )
void Resource::FromRemote( const Entry& remote, const DateTime& last_change )
{
// sync folder
if ( remote.IsDir() && IsFolder() )
FromRemoteFolder( remote ) ;
FromRemoteFolder( remote, last_change ) ;
else
FromRemoteFile( remote ) ;
FromRemoteFile( remote, last_change ) ;
AssignIDs( remote ) ;
assert( m_state != unknown ) ;
if ( m_state == remote_new || m_state == remote_changed )
m_md5 = remote.MD5() ;
m_mtime = remote.MTime() ;
{
m_md5 = remote.MD5() ;
m_mtime = remote.MTime() ;
}
}
void Resource::AssignIDs( const Entry& remote )
@@ -151,11 +153,10 @@ void Resource::AssignIDs( const Entry& remote )
m_content = remote.ContentSrc() ;
m_is_editable = remote.IsEditable() ;
m_etag = remote.ETag() ;
m_md5 = remote.MD5() ;
}
}
void Resource::FromRemoteFile( const Entry& remote )
void Resource::FromRemoteFile( const Entry& remote, const DateTime& last_change )
{
assert( m_parent != 0 ) ;
@@ -173,21 +174,16 @@ void Resource::FromRemoteFile( const Entry& remote )
m_state = m_parent->m_state ;
}
else if ( m_kind == "bad" )
{
m_state = sync;
}
// local not exists
else if ( !m_local_exists )
else if ( !fs::exists( path ) )
{
Trace( "file %1% change stamp = %2%", Path(), remote.ChangeStamp() ) ;
if ( remote.MTime().Sec() > m_mtime.Sec() || remote.MD5() != m_md5 || remote.ChangeStamp() > 0 )
if ( remote.MTime() > last_change || remote.ChangeStamp() > 0 )
{
Log( "file %1% is created in remote (change %2%)", path,
remote.ChangeStamp(), log::verbose ) ;
m_size = remote.Size();
m_state = remote_new ;
}
else
@@ -196,25 +192,31 @@ void Resource::FromRemoteFile( const Entry& remote )
m_state = local_deleted ;
}
}
// remote checksum unknown, assume the file is not changed in remote
else if ( remote.MD5().empty() )
{
Log( "file %1% has unknown checksum in remote. assumed in sync",
Log( "file %1% has unknown checksum in remote. assuned in sync",
Path(), log::verbose ) ;
m_state = sync ;
}
// if checksum is equal, no need to compare the mtime
else if ( remote.MD5() == m_md5 )
{
Log( "file %1% is already in sync", Path(), log::verbose ) ;
m_state = sync ;
}
// use mtime to check which one is more recent
else if ( remote.Size() != m_size || remote.MD5() != GetMD5() )
else
{
assert( m_state != unknown ) ;
// if remote is modified
if ( remote.MTime().Sec() > m_mtime.Sec() )
if ( remote.MTime() > m_mtime )
{
Log( "file %1% is changed in remote", path, log::verbose ) ;
m_size = remote.Size();
m_state = remote_changed ;
}
@@ -227,105 +229,33 @@ void Resource::FromRemoteFile( const Entry& remote )
else
Trace( "file %1% state is %2%", m_name, m_state ) ;
}
// if checksum is equal, no need to compare the mtime
else
{
Log( "file %1% is already in sync", Path(), log::verbose ) ;
m_state = sync ;
}
}
void Resource::FromDeleted( Val& state )
{
assert( !m_json );
m_json = &state;
if ( state.Has( "ctime" ) )
m_ctime.Assign( state["ctime"].U64(), 0 );
if ( state.Has( "md5" ) )
m_md5 = state["md5"];
if ( state.Has( "srv_time" ) )
m_mtime.Assign( state[ "srv_time" ].U64(), 0 ) ;
if ( state.Has( "size" ) )
m_size = state[ "size" ].U64();
m_state = both_deleted;
}
/// Update the resource with the attributes of local file or directory. This
/// function will propulate the fields in m_entry.
void Resource::FromLocal( Val& state )
void Resource::FromLocal( const DateTime& last_sync )
{
assert( !m_json );
m_json = &state;
fs::path path = Path() ;
//assert( fs::exists( path ) ) ;
// root folder is always in sync
if ( !IsRoot() )
{
fs::path path = Path() ;
FileType ft ;
try
{
os::Stat( path, &m_ctime, (off64_t*)&m_size, &ft ) ;
}
catch ( os::Error &e )
{
// invalid symlink, unreadable file or something else
int const* eno = boost::get_error_info< boost::errinfo_errno >(e);
Log( "Error accessing %1%: %2%; skipping file", path.string(), strerror( *eno ), log::warning );
m_state = sync;
m_kind = "bad";
return;
}
if ( ft == FT_UNKNOWN )
{
// Skip sockets/FIFOs/etc
Log( "File %1% is not a regular file or directory; skipping file", path.string(), log::warning );
m_state = sync;
m_kind = "bad";
return;
}
m_mtime = os::FileCTime( path ) ;
m_name = path.filename().string() ;
m_kind = ft == FT_DIR ? "folder" : "file";
m_local_exists = true;
bool is_changed;
if ( state.Has( "ctime" ) && (u64_t) m_ctime.Sec() <= state["ctime"].U64() &&
( ft == FT_DIR || state.Has( "md5" ) ) )
{
if ( ft != FT_DIR )
m_md5 = state["md5"];
is_changed = false;
}
// follow parent recursively
if ( m_parent->m_state == local_new || m_parent->m_state == local_deleted )
m_state = local_new ;
// if the file is not created after last sync, assume file is
// remote_deleted first, it will be updated to sync/remote_changed
// in FromRemote()
else
{
if ( ft != FT_DIR )
{
// File is changed locally. TODO: Detect conflicts
is_changed = ( state.Has( "size" ) && m_size != state["size"].U64() ) ||
!state.Has( "md5" ) || GetMD5() != state["md5"].Str();
}
else
is_changed = true;
}
if ( state.Has( "srv_time" ) )
m_mtime.Assign( state[ "srv_time" ].U64(), 0 ) ;
// Upload file if it is changed and remove if not.
// State will be updated to sync/remote_changed in FromRemote()
m_state = is_changed ? local_new : remote_deleted;
if ( m_state == local_new )
{
// local_new means this file is changed in local.
// this means we can't delete any of its parents.
// make sure their state is also set to local_new.
Resource *p = m_parent;
while ( p && p->m_state == remote_deleted )
{
p->m_state = local_new;
p = p->m_parent;
}
}
m_state = ( m_mtime > last_sync ? local_new : remote_deleted ) ;
m_name = path.filename().string() ;
m_kind = IsFolder() ? "folder" : "file" ;
m_md5 = IsFolder() ? "" : crypt::MD5::Get( path ) ;
}
assert( m_state != unknown ) ;
@@ -356,7 +286,7 @@ std::string Resource::Kind() const
return m_kind ;
}
DateTime Resource::ServerTime() const
DateTime Resource::MTime() const
{
return m_mtime ;
}
@@ -438,14 +368,14 @@ Resource* Resource::FindChild( const std::string& name )
}
// try to change the state to "sync"
void Resource::Sync( Syncer *syncer, ResourceTree *res_tree, const Val& options )
void Resource::Sync( Syncer *syncer, DateTime& sync_time, const Val& options )
{
assert( m_state != unknown ) ;
assert( !IsRoot() || m_state == sync ) ; // root folder is already synced
try
{
SyncSelf( syncer, res_tree, options ) ;
SyncSelf( syncer, options ) ;
}
catch ( File::Error &e )
{
@@ -458,103 +388,18 @@ void Resource::Sync( Syncer *syncer, ResourceTree *res_tree, const Val& options
Log( "Error syncing %1%: %2%", Path(), e.what(), log::error );
return;
}
catch ( http::Error &e )
{
int *curlcode = boost::get_error_info< http::CurlCode > ( e ) ;
int *httpcode = boost::get_error_info< http::HttpResponseCode > ( e ) ;
std::string msg;
if ( curlcode )
msg = *( boost::get_error_info< http::CurlErrMsg > ( e ) );
else if ( httpcode )
msg = "HTTP " + boost::to_string( *httpcode );
else
msg = e.what();
Log( "Error syncing %1%: %2%", Path(), msg, log::error );
std::string *url = boost::get_error_info< http::Url > ( e );
std::string *resp_hdr = boost::get_error_info< http::HttpResponseHeaders > ( e );
std::string *resp_txt = boost::get_error_info< http::HttpResponseText > ( e );
http::Header *req_hdr = boost::get_error_info< http::HttpRequestHeaders > ( e );
if ( url )
Log( "Request URL: %1%", *url, log::verbose );
if ( req_hdr )
Log( "Request headers: %1%", req_hdr->Str(), log::verbose );
if ( resp_hdr )
Log( "Response headers: %1%", *resp_hdr, log::verbose );
if ( resp_txt )
Log( "Response text: %1%", *resp_txt, log::verbose );
return;
}
// we want the server sync time, so we will take the server time of the last file uploaded to store as the sync time
// m_mtime is updated to server modified time when the file is uploaded
sync_time = std::max(sync_time, m_mtime);
// if myself is deleted, no need to do the childrens
if ( m_state != local_deleted && m_state != remote_deleted )
{
std::for_each( m_child.begin(), m_child.end(),
boost::bind( &Resource::Sync, _1, syncer, res_tree, options ) ) ;
}
boost::bind( &Resource::Sync, _1, syncer, boost::ref(sync_time), options ) ) ;
}
bool Resource::CheckRename( Syncer* syncer, ResourceTree *res_tree )
{
if ( !IsFolder() && ( m_state == local_new || m_state == remote_new ) )
{
bool is_local = m_state == local_new;
State other = is_local ? local_deleted : remote_deleted;
if ( is_local )
{
// First check size index for locally added files
details::SizeRange moved = res_tree->FindBySize( m_size );
bool found = false;
for ( details::SizeMap::iterator i = moved.first ; i != moved.second; i++ )
{
Resource *m = *i;
if ( m->m_state == other )
{
found = true;
break;
}
}
if ( !found )
{
// Don't check md5 sums if there are no deleted files with same size
return false;
}
}
details::MD5Range moved = res_tree->FindByMD5( GetMD5() );
for ( details::MD5Map::iterator i = moved.first ; i != moved.second; i++ )
{
Resource *m = *i;
if ( m->m_state == other )
{
Resource* from = m_state == local_new || m_state == remote_new ? m : this;
Resource* to = m_state == local_new || m_state == remote_new ? this : m;
Log( "sync %1% moved to %2%. moving %3%", from->Path(), to->Path(),
is_local ? "remote" : "local", log::info );
if ( syncer )
{
if ( is_local )
{
syncer->Move( from, to->Parent(), to->Name() );
to->SetIndex( false );
}
else
{
fs::rename( from->Path(), to->Path() );
to->SetIndex( true );
}
to->m_mtime = from->m_mtime;
to->m_json->Set( "srv_time", Val( from->m_mtime.Sec() ) );
from->DeleteIndex();
}
from->m_state = both_deleted;
to->m_state = sync;
return true;
}
}
}
return false;
}
void Resource::SyncSelf( Syncer* syncer, ResourceTree *res_tree, const Val& options )
void Resource::SyncSelf( Syncer* syncer, const Val& options )
{
assert( !IsRoot() || m_state == sync ) ; // root is always sync
assert( IsRoot() || !syncer || m_parent->IsFolder() ) ;
@@ -563,111 +408,69 @@ void Resource::SyncSelf( Syncer* syncer, ResourceTree *res_tree, const Val& opti
const fs::path path = Path() ;
// Detect renames
if ( CheckRename( syncer, res_tree ) )
return;
switch ( m_state )
{
case local_new :
Log( "sync %1% doesn't exist in server, uploading", path, log::info ) ;
// FIXME: (?) do not write new timestamp on failed upload
if ( syncer && syncer->Create( this ) )
{
m_state = sync ;
SetIndex( false );
}
break ;
case local_deleted :
Log( "sync %1% deleted in local. deleting remote", path, log::info ) ;
if ( syncer && !options["no-delete-remote"].Bool() )
{
if ( syncer )
syncer->DeleteRemote( this ) ;
DeleteIndex() ;
}
break ;
case local_changed :
Log( "sync %1% changed in local. uploading", path, log::info ) ;
if ( syncer && syncer->EditContent( this, options["new-rev"].Bool() ) )
{
m_state = sync ;
SetIndex( false );
}
break ;
case remote_new :
if ( options["no-remote-new"].Bool() )
Log( "sync %1% created in remote. skipping", path, log::info ) ;
else
Log( "sync %1% created in remote. creating local", path, log::info ) ;
if ( syncer )
{
Log( "sync %1% created in remote. creating local", path, log::info ) ;
if ( syncer )
{
if ( IsFolder() )
fs::create_directories( path ) ;
else
syncer->Download( this, path ) ;
SetIndex( true ) ;
m_state = sync ;
}
if ( IsFolder() )
fs::create_directories( path ) ;
else
syncer->Download( this, path ) ;
m_state = sync ;
}
break ;
case remote_changed :
assert( !IsFolder() ) ;
if ( options["upload-only"].Bool() )
Log( "sync %1% changed in remote. skipping", path, log::info ) ;
else
Log( "sync %1% changed in remote. downloading", path, log::info ) ;
if ( syncer )
{
Log( "sync %1% changed in remote. downloading", path, log::info ) ;
if ( syncer )
{
syncer->Download( this, path ) ;
SetIndex( true ) ;
m_state = sync ;
}
syncer->Download( this, path ) ;
m_state = sync ;
}
break ;
case remote_deleted :
Log( "sync %1% deleted in remote. deleting local", path, log::info ) ;
if ( syncer )
{
DeleteLocal() ;
DeleteIndex() ;
}
break ;
case both_deleted :
if ( syncer )
DeleteIndex() ;
break ;
case sync :
Log( "sync %1% already in sync", path, log::verbose ) ;
if ( !IsRoot() )
SetIndex( false ) ;
break ;
// shouldn't go here
case unknown :
default :
assert( false ) ;
break ;
default :
break ;
}
if ( syncer && m_json )
{
// Update server time of this file
m_json->Set( "srv_time", Val( m_mtime.Sec() ) );
}
}
void Resource::SetServerTime( const DateTime& time )
{
m_mtime = time ;
}
/// this function doesn't really remove the local file. it renames it.
@@ -675,7 +478,7 @@ void Resource::DeleteLocal()
{
static const boost::format trash_file( "%1%-%2%" ) ;
assert( m_parent != NULL ) ;
assert( m_parent != 0 ) ;
Resource* p = m_parent;
fs::path destdir;
while ( !p->IsRoot() )
@@ -700,38 +503,6 @@ void Resource::DeleteLocal()
}
}
void Resource::DeleteIndex()
{
(*m_parent->m_json)["tree"].Del( Name() );
m_json = NULL;
}
void Resource::SetIndex( bool re_stat )
{
assert( m_parent && m_parent->m_json != NULL );
if ( !m_json )
m_json = &((*m_parent->m_json)["tree"]).Item( Name() );
FileType ft;
if ( re_stat )
os::Stat( Path(), &m_ctime, NULL, &ft );
else
ft = IsFolder() ? FT_DIR : FT_FILE;
m_json->Set( "ctime", Val( m_ctime.Sec() ) );
if ( ft != FT_DIR )
{
m_json->Set( "md5", Val( m_md5 ) );
m_json->Set( "size", Val( m_size ) );
m_json->Del( "tree" );
}
else
{
// add tree item if it does not exist
m_json->Item( "tree" );
m_json->Del( "md5" );
m_json->Del( "size" );
}
}
Resource::iterator Resource::begin() const
{
return m_child.begin() ;
@@ -752,7 +523,7 @@ std::ostream& operator<<( std::ostream& os, Resource::State s )
static const char *state[] =
{
"sync", "local_new", "local_changed", "local_deleted", "remote_new",
"remote_changed", "remote_deleted", "both_deleted"
"remote_changed", "remote_deleted"
} ;
assert( s >= 0 && s < Count(state) ) ;
return os << state[s] ;
@@ -765,32 +536,15 @@ std::string Resource::StateStr() const
return ss.str() ;
}
u64_t Resource::Size() const
{
return m_size ;
}
std::string Resource::MD5() const
{
return m_md5 ;
}
std::string Resource::GetMD5()
{
if ( m_md5.empty() && !IsFolder() && m_local_exists )
{
// MD5 checksum is calculated lazily and only when really needed:
// 1) when a local rename is supposed (when there are a new file and a deleted file of the same size)
// 2) when local ctime is changed, but file size isn't
m_md5 = crypt::MD5::Get( Path() );
}
return m_md5 ;
}
bool Resource::IsRoot() const
{
// Root entry does not show up in file feeds, so we check for empty parent (and self-href)
return !m_parent ;
return m_parent == 0 ;
}
bool Resource::HasID() const

View File

@@ -19,7 +19,6 @@
#pragma once
#include "util/Types.hh"
#include "util/DateTime.hh"
#include "util/Exception.hh"
#include "util/FileSystem.hh"
@@ -30,8 +29,6 @@
namespace gr {
class ResourceTree ;
class Syncer ;
class Val ;
@@ -70,14 +67,12 @@ public :
/// We should download the file.
remote_new,
/// Resource exists in both local & remote, but remote is newer.
/// Resource exists in both local & remote, but remote is newer.
remote_changed,
/// Resource delete in remote, need to delete in local
remote_deleted,
/// Both deleted. State is used to remove leftover files from the index after sync.
both_deleted,
/// invalid value
unknown
@@ -92,7 +87,7 @@ public :
std::string Name() const ;
std::string Kind() const ;
DateTime ServerTime() const ;
DateTime MTime() const ;
std::string SelfHref() const ;
std::string ContentSrc() const ;
std::string ETag() const ;
@@ -109,16 +104,12 @@ public :
bool IsInRootTree() const ;
bool IsRoot() const ;
bool HasID() const ;
u64_t Size() const;
std::string MD5() const ;
std::string GetMD5() ;
void FromRemote( const Entry& remote ) ;
void FromDeleted( Val& state ) ;
void FromLocal( Val& state ) ;
void FromRemote( const Entry& remote, const DateTime& last_change ) ;
void FromLocal( const DateTime& last_sync ) ;
void Sync( Syncer* syncer, ResourceTree *res_tree, const Val& options ) ;
void SetServerTime( const DateTime& time ) ;
void Sync( Syncer* syncer, DateTime& sync_time, const Val& options ) ;
// children access
iterator begin() const ;
@@ -137,23 +128,18 @@ private :
private :
void SetState( State new_state ) ;
void FromRemoteFolder( const Entry& remote ) ;
void FromRemoteFile( const Entry& remote ) ;
void FromRemoteFolder( const Entry& remote, const DateTime& last_change ) ;
void FromRemoteFile( const Entry& remote, const DateTime& last_change ) ;
void DeleteLocal() ;
void DeleteIndex() ;
void SetIndex( bool ) ;
bool CheckRename( Syncer* syncer, ResourceTree *res_tree ) ;
void SyncSelf( Syncer* syncer, ResourceTree *res_tree, const Val& options ) ;
void SyncSelf( Syncer* syncer, const Val& options ) ;
private :
std::string m_name ;
std::string m_kind ;
std::string m_md5 ;
DateTime m_mtime ;
DateTime m_ctime ;
u64_t m_size ;
std::string m_id ;
std::string m_href ;
@@ -166,8 +152,6 @@ private :
std::vector<Resource*> m_child ;
State m_state ;
Val* m_json ;
bool m_local_exists ;
} ;
} // end of namespace gr::v1

View File

@@ -97,21 +97,7 @@ const Resource* ResourceTree::FindByHref( const std::string& href ) const
return i != map.end() ? *i : 0 ;
}
MD5Range ResourceTree::FindByMD5( const std::string& md5 )
{
MD5Map& map = m_set.get<ByMD5>() ;
if ( !md5.empty() )
return map.equal_range( md5 );
return MD5Range( map.end(), map.end() ) ;
}
SizeRange ResourceTree::FindBySize( u64_t size )
{
SizeMap& map = m_set.get<BySize>() ;
return map.equal_range( size );
}
/// Reinsert should be called when the ID/HREF/MD5 were updated
/// Reinsert should be called when the ID/HREF were updated
bool ResourceTree::ReInsert( Resource *coll )
{
Set& s = m_set.get<ByIdentity>() ;
@@ -137,11 +123,11 @@ void ResourceTree::Erase( Resource *coll )
s.erase( s.find( coll ) ) ;
}
void ResourceTree::Update( Resource *coll, const Entry& e )
void ResourceTree::Update( Resource *coll, const Entry& e, const DateTime& last_change )
{
assert( coll != 0 ) ;
coll->FromRemote( e ) ;
coll->FromRemote( e, last_change ) ;
ReInsert( coll ) ;
}

View File

@@ -33,27 +33,22 @@ namespace gr {
namespace details
{
using namespace boost::multi_index ;
struct ByMD5 {} ;
struct ByID {} ;
struct ByHref {} ;
struct ByIdentity {} ;
struct BySize {} ;
typedef multi_index_container<
Resource*,
indexed_by<
hashed_non_unique<tag<ByHref>, const_mem_fun<Resource, std::string, &Resource::SelfHref> >,
hashed_non_unique<tag<ByMD5>, const_mem_fun<Resource, std::string, &Resource::MD5> >,
hashed_non_unique<tag<BySize>, const_mem_fun<Resource, u64_t, &Resource::Size> >,
hashed_non_unique<tag<ByID>, const_mem_fun<Resource, std::string, &Resource::ResourceID> >,
hashed_unique<tag<ByIdentity>, identity<Resource*> >
>
> Folders ;
typedef Folders::index<ByMD5>::type MD5Map ;
typedef Folders::index<ByID>::type IDMap ;
typedef Folders::index<ByHref>::type HrefMap ;
typedef Folders::index<BySize>::type SizeMap ;
typedef Folders::index<ByIdentity>::type Set ;
typedef std::pair<SizeMap::iterator, SizeMap::iterator> SizeRange ;
typedef std::pair<MD5Map::iterator, MD5Map::iterator> MD5Range ;
}
/*! \brief A simple container for storing folders
@@ -73,14 +68,14 @@ public :
Resource* FindByHref( const std::string& href ) ;
const Resource* FindByHref( const std::string& href ) const ;
details::MD5Range FindByMD5( const std::string& md5 ) ;
details::SizeRange FindBySize( u64_t size ) ;
Resource* FindByID( const std::string& id ) ;
bool ReInsert( Resource *coll ) ;
void Insert( Resource *coll ) ;
void Erase( Resource *coll ) ;
void Update( Resource *coll, const Entry& e ) ;
void Update( Resource *coll, const Entry& e, const DateTime& last_change ) ;
Resource* Root() ;
const Resource* Root() const ;

View File

@@ -26,38 +26,29 @@
#include "util/Crypt.hh"
#include "util/File.hh"
#include "util/log/Log.hh"
#include "json/Val.hh"
#include "json/JsonParser.hh"
#include <boost/algorithm/string.hpp>
#include <fstream>
namespace gr {
const std::string state_file = ".grive_state" ;
const std::string ignore_file = ".griveignore" ;
const int MAX_IGN = 65536 ;
const char* regex_escape_chars = ".^$|()[]{}*+?\\";
const boost::regex regex_escape_re( "[.^$|()\\[\\]{}*+?\\\\]" );
inline std::string regex_escape( std::string s )
{
return regex_replace( s, regex_escape_re, "\\\\&", boost::format_sed );
}
State::State( const fs::path& root, const Val& options ) :
m_root ( root ),
State::State( const fs::path& filename, const Val& options ) :
m_res ( options["path"].Str() ),
m_cstamp ( -1 )
{
Read() ;
// the "-f" option will make grive always think remote is newer
m_force = options.Has( "force" ) ? options["force"].Bool() : false ;
std::string m_orig_ign = m_ign;
Read( filename ) ;
bool force = options.Has( "force" ) ? options["force"].Bool() : false ;
if ( options.Has( "ignore" ) && options["ignore"].Str() != m_ign )
{
// also "-f" is implicitly turned on when ignore regexp is changed
// because without it grive would think that previously ignored files are deleted locally
if ( !m_ign.empty() )
force = true;
m_ign = options["ignore"].Str();
}
else if ( options.Has( "dir" ) )
{
const boost::regex trim_path( "^/+|/+$" );
@@ -65,20 +56,24 @@ State::State( const fs::path& root, const Val& options ) :
if ( !m_dir.empty() )
{
// "-s" is internally converted to an ignore regexp
m_dir = regex_escape( m_dir );
size_t pos = 0;
while ( ( pos = m_dir.find( '/', pos ) ) != std::string::npos )
{
m_dir = m_dir.substr( 0, pos ) + "$|" + m_dir;
pos = pos*2 + 3;
}
std::string ign = "^(?!"+m_dir+"(/|$))";
const boost::regex esc( "[.^$|()\\[\\]{}*+?\\\\]" );
std::string ign = "^(?!"+regex_replace( m_dir, esc, "\\\\&", boost::format_sed )+"(/|$))";
if ( !m_ign.empty() && ign != m_ign )
force = true;
m_ign = ign;
}
}
m_ign_changed = m_orig_ign != "" && m_orig_ign != m_ign;
m_ign_re = boost::regex( m_ign.empty() ? "^\\.(grive$|grive_state$|trash)" : ( m_ign+"|^\\.(grive$|grive_state$|trash)" ) );
// the "-f" option will make grive always think remote is newer
if ( force )
{
m_last_change = DateTime() ;
m_last_sync = DateTime::Now() ;
}
m_ign_re = boost::regex( m_ign.empty() ? "^\\.(grive|grive_state|trash)" : ( m_ign+"|^\\.(grive|grive_state|trash)" ) );
Log( "last server change time: %1%; last sync time: %2%", m_last_change, m_last_sync, log::verbose ) ;
}
State::~State()
@@ -89,71 +84,52 @@ State::~State()
/// of local directory.
void State::FromLocal( const fs::path& p )
{
m_res.Root()->FromLocal( m_st ) ;
FromLocal( p, m_res.Root(), m_st.Item( "tree" ) ) ;
FromLocal( p, m_res.Root() ) ;
}
bool State::IsIgnore( const std::string& filename )
{
return regex_search( filename.c_str(), m_ign_re, boost::format_perl );
return regex_search( filename.c_str(), m_ign_re );
}
void State::FromLocal( const fs::path& p, Resource* folder, Val& tree )
void State::FromLocal( const fs::path& p, Resource* folder )
{
assert( folder != 0 ) ;
assert( folder->IsFolder() ) ;
Val::Object leftover = tree.AsObject();
// sync the folder itself
folder->FromLocal( m_last_sync ) ;
for ( fs::directory_iterator i( p ) ; i != fs::directory_iterator() ; ++i )
{
std::string fname = i->path().filename().string() ;
std::string path = ( folder->IsRoot() ? fname : ( folder->RelPath() / fname ).string() );
fs::file_status st = fs::status(i->path());
std::string path = folder->IsRoot() ? fname : ( folder->RelPath() / fname ).string();
if ( IsIgnore( path ) )
Log( "file %1% is ignored by grive", path, log::verbose ) ;
// check for broken symblic links
else if ( st.type() == fs::file_not_found )
Log( "file %1% doesn't exist (broken link?), ignored", i->path(), log::verbose ) ;
else
{
bool is_dir = st.type() == fs::directory_file;
// if the Resource object of the child already exists, it should
// have been so no need to do anything here
Resource *c = folder->FindChild( fname ), *c2 = c ;
if ( !c )
Resource *c = folder->FindChild( fname ) ;
if ( c == 0 )
{
c2 = new Resource( fname, "" ) ;
folder->AddChild( c2 ) ;
c = new Resource( fname, is_dir ? "folder" : "file" ) ;
folder->AddChild( c ) ;
m_res.Insert( c ) ;
}
leftover.erase( fname );
Val& rec = tree.Item( fname );
if ( m_force )
rec.Del( "srv_time" );
c2->FromLocal( rec ) ;
if ( !c )
m_res.Insert( c2 ) ;
if ( c2->IsFolder() )
FromLocal( *i, c2, rec.Item( "tree" ) ) ;
}
}
for( Val::Object::iterator i = leftover.begin(); i != leftover.end(); i++ )
{
std::string path = folder->IsRoot() ? i->first : ( folder->RelPath() / i->first ).string();
if ( IsIgnore( path ) )
Log( "file %1% is ignored by grive", path, log::verbose ) ;
else
{
// Restore state of locally deleted files
Resource *c = folder->FindChild( i->first ), *c2 = c ;
if ( !c )
{
c2 = new Resource( i->first, i->second.Has( "tree" ) ? "folder" : "file" ) ;
folder->AddChild( c2 ) ;
}
Val& rec = tree.Item( i->first );
if ( m_force || m_ign_changed )
rec.Del( "srv_time" );
c2->FromDeleted( rec );
if ( !c )
m_res.Insert( c2 ) ;
c->FromLocal( m_last_sync ) ;
if ( is_dir )
FromLocal( *i, c ) ;
}
}
}
@@ -164,7 +140,7 @@ void State::FromRemote( const Entry& e )
std::string k = e.IsDir() ? "folder" : "file";
// common checkings
if ( !e.IsDir() && ( fn.empty() || e.ContentSrc().empty() ) )
if ( !e.IsDir() && (fn.empty() || e.ContentSrc().empty()) )
Log( "%1% \"%2%\" is a google document, ignored", k, e.Name(), log::verbose ) ;
else if ( fn.find('/') != fn.npos )
@@ -194,9 +170,9 @@ std::size_t State::TryResolveEntry()
assert( !m_unresolved.empty() ) ;
std::size_t count = 0 ;
std::list<Entry>& en = m_unresolved ;
for ( std::list<Entry>::iterator i = en.begin() ; i != en.end() ; )
std::vector<Entry>& en = m_unresolved ;
for ( std::vector<Entry>::iterator i = en.begin() ; i != en.end() ; )
{
if ( Update( *i ) )
{
@@ -216,7 +192,7 @@ void State::FromChange( const Entry& e )
// entries in the change feed is always treated as newer in remote,
// so we override the last sync time to 0
if ( Resource *res = m_res.FindByHref( e.SelfHref() ) )
m_res.Update( res, e ) ;
m_res.Update( res, e, DateTime() ) ;
}
bool State::Update( const Entry& e )
@@ -232,17 +208,11 @@ bool State::Update( const Entry& e )
Log( "%1% is ignored by grive", path, log::verbose ) ;
return true;
}
m_res.Update( res, e ) ;
m_res.Update( res, e, m_last_change ) ;
return true;
}
else if ( Resource *parent = m_res.FindByHref( e.ParentHref() ) )
{
if ( !parent->IsFolder() )
{
// https://github.com/vitalif/grive2/issues/148
Log( "%1% is owned by something that's not a directory: href=%2% name=%3%", e.Name(), e.ParentHref(), parent->RelPath(), log::error );
return true;
}
assert( parent->IsFolder() ) ;
std::string path = parent->IsRoot() ? e.Name() : ( parent->RelPath() / e.Name() ).string();
@@ -255,10 +225,10 @@ bool State::Update( const Entry& e )
// see if the entry already exist in local
std::string name = e.Name() ;
Resource *child = parent->FindChild( name ) ;
if ( child )
if ( child != 0 )
{
// since we are updating the ID and Href, we need to remove it and re-add it.
m_res.Update( child, e ) ;
m_res.Update( child, e, m_last_change ) ;
}
// folder entry exist in google drive, but not local. we should create
@@ -271,7 +241,7 @@ bool State::Update( const Entry& e )
m_res.Insert( child ) ;
// update the state of the resource
m_res.Update( child, e ) ;
m_res.Update( child, e, m_last_change ) ;
}
return true ;
@@ -295,126 +265,68 @@ State::iterator State::end()
return m_res.end() ;
}
void State::Read()
void State::Read( const fs::path& filename )
{
m_last_sync.Assign( 0 ) ;
m_last_change.Assign( 0 ) ;
try
{
File st_file( m_root / state_file ) ;
m_st = ParseJson( st_file );
m_cstamp = m_st["change_stamp"].Int() ;
File file( filename ) ;
Val json = ParseJson( file );
Val last_sync = json["last_sync"] ;
Val last_change = json.Has( "last_change" ) ? json["last_change"] : json["last_sync"] ;
m_last_sync.Assign( last_sync["sec"].Int(), last_sync["nsec"].Int() ) ;
m_last_change.Assign( last_change["sec"].Int(), last_change["nsec"].Int() ) ;
m_ign = json.Has( "ignore_regexp" ) ? json["ignore_regexp"].Str() : std::string();
m_cstamp = json["change_stamp"].Int() ;
}
catch ( Exception& )
{
}
try
{
File ign_file( m_root / ignore_file ) ;
char ign[MAX_IGN] = { 0 };
int s = ign_file.Read( ign, MAX_IGN-1 ) ;
ParseIgnoreFile( ign, s );
}
catch ( Exception& e )
{
}
}
std::vector<std::string> split( const boost::regex& re, const char* str, int len )
void State::Write( const fs::path& filename ) const
{
std::vector<std::string> vec;
boost::cregex_token_iterator i( str, str+len, re, -1, boost::format_perl );
boost::cregex_token_iterator j;
while ( i != j )
{
vec.push_back( *i++ );
}
return vec;
}
bool State::ParseIgnoreFile( const char* buffer, int size )
{
const boost::regex re1( "([^\\\\]|^)[\\t\\r ]+$" );
const boost::regex re2( "^[\\t\\r ]+" );
const boost::regex re4( "([^\\\\](\\\\\\\\)*|^)\\\\\\*" );
const boost::regex re5( "([^\\\\](\\\\\\\\)*|^)\\\\\\?" );
std::string exclude_re, include_re;
std::vector<std::string> lines = split( boost::regex( "[\\n\\r]+" ), buffer, size );
for ( int i = 0; i < (int)lines.size(); i++ )
{
std::string str = regex_replace( regex_replace( lines[i], re1, "$1" ), re2, "" );
if ( str[0] == '#' || !str.size() )
{
continue;
}
bool inc = str[0] == '!';
if ( inc )
{
str = str.substr( 1 );
}
std::vector<std::string> parts = split( boost::regex( "/+" ), str.c_str(), str.size() );
for ( int j = 0; j < (int)parts.size(); j++ )
{
if ( parts[j] == "**" )
{
parts[j] = ".*";
}
else if ( parts[j] == "*" )
{
parts[j] = "[^/]*";
}
else
{
parts[j] = regex_escape( parts[j] );
std::string str1;
while (1)
{
str1 = regex_replace( parts[j], re5, "$1[^/]", boost::format_perl );
str1 = regex_replace( str1, re4, "$1[^/]*", boost::format_perl );
if ( str1.size() == parts[j].size() )
break;
parts[j] = str1;
}
}
}
if ( !inc )
{
str = boost::algorithm::join( parts, "/" ) + "(/|$)";
exclude_re = exclude_re + ( exclude_re.size() > 0 ? "|" : "" ) + str;
}
else
{
str = "";
std::string cur;
for ( int j = 0; j < (int)parts.size(); j++ )
{
cur = cur.size() > 0 ? cur + "/" + parts[j] : "^" + parts[j];
str = ( str.size() > 0 ? str + "|" + cur : cur ) + ( j < (int)parts.size()-1 ? "$" : "(/|$)" );
}
include_re = include_re + ( include_re.size() > 0 ? "|" : "" ) + str;
}
}
if ( exclude_re.size() > 0 )
{
m_ign = "^" + ( include_re.size() > 0 ? "(?!" + include_re + ")" : std::string() ) + "(" + exclude_re + ")$";
return true;
}
return false;
}
void State::Write()
{
m_st.Set( "change_stamp", Val( m_cstamp ) ) ;
m_st.Set( "ignore_regexp", Val( m_ign ) ) ;
Val last_sync ;
last_sync.Add( "sec", Val( (int)m_last_sync.Sec() ) );
last_sync.Add( "nsec", Val( (unsigned)m_last_sync.NanoSec() ) );
Val last_change ;
last_change.Add( "sec", Val( (int)m_last_change.Sec() ) );
last_change.Add( "nsec", Val( (unsigned)m_last_change.NanoSec() ) );
Val result ;
result.Add( "last_sync", last_sync ) ;
result.Add( "last_change", last_change ) ;
result.Add( "change_stamp", Val(m_cstamp) ) ;
result.Add( "ignore_regexp", Val(m_ign) ) ;
fs::path filename = m_root / state_file ;
std::ofstream fs( filename.string().c_str() ) ;
fs << m_st ;
fs << result ;
}
void State::Sync( Syncer *syncer, const Val& options )
{
// set the last sync time from the time returned by the server for the last file synced
// if the sync time hasn't changed (i.e. no files have been uploaded)
// set the last sync time to the time on the client
m_res.Root()->Sync( syncer, &m_res, options ) ;
// ideally because we compare server file times to the last sync time
// the last sync time would always be a server time rather than a client time
// TODO - WARNING - do we use the last sync time to compare to client file times
// need to check if this introduces a new problem
DateTime last_change_time = m_last_change;
m_res.Root()->Sync( syncer, last_change_time, options ) ;
if ( last_change_time == m_last_change )
Trace( "nothing changed at the server side since %1%", m_last_change ) ;
else
{
Trace( "updating last server-side change time to %1%", last_change_time ) ;
m_last_change = last_change_time;
}
m_last_sync = DateTime::Now();
}
long State::ChangeStamp() const
@@ -428,4 +340,67 @@ void State::ChangeStamp( long cstamp )
m_cstamp = cstamp ;
}
bool State::Move( Syncer* syncer, fs::path old_p, fs::path new_p, fs::path grive_root )
{
//Convert paths to canonical representations
//Also seems to remove trailing / at the end of directory paths
old_p = fs::canonical( old_p );
grive_root = fs::canonical( grive_root );
//new_p is a little special because fs::canonical() requires that the path exists
if ( new_p.string()[ new_p.string().size() - 1 ] == '/') //If new_p ends with a /, remove it
new_p = new_p.parent_path();
new_p = fs::canonical( new_p.parent_path() ) / new_p.filename();
//Fails if source file doesn't exist, or if destination file already
//exists and is not a directory, or if the source and destination are exactly the same
if ( (fs::exists(new_p) && !fs::is_directory(new_p) ) || !fs::exists(old_p) || fs::equivalent( old_p, new_p ) )
return false;
//If new path is an existing directory, move the file into the directory
//instead of trying to rename it
if ( fs::is_directory(new_p) ){
new_p = new_p / old_p.filename();
}
//Get the paths relative to grive root.
//Just finds the substring from the end of the grive_root to the end of the path
//+1s are to exclude slash at beginning of relative path
int start = grive_root.string().size() + 1;
int nLen = new_p.string().size() - (grive_root.string().size() + 1);
int oLen = old_p.string().size() - (grive_root.string().size() + 1);
if ( start + nLen != new_p.string().size() || start + oLen != old_p.string().size() )
return false;
fs::path new_p_rootrel( new_p.string().substr( start, nLen ) );
fs::path old_p_rootrel( old_p.string().substr( start, oLen ) );
//Get resources
Resource* res = m_res.Root();
Resource* newParentRes = m_res.Root();
for ( fs::path::iterator it = old_p_rootrel.begin(); it != old_p_rootrel.end(); ++it )
{
if ( *it != "." && *it != ".." && res != 0 )
res = res->FindChild(it->string());
if ( *it == ".." )
res = res->Parent();
}
for ( fs::path::iterator it = new_p_rootrel.begin(); it != new_p_rootrel.end(); ++it )
{
if ( *it != "." && *it != ".." && *it != new_p.filename() && newParentRes != 0 )
newParentRes = newParentRes->FindChild(it->string());
if ( *it == "..")
res = res->Parent();
}
//These conditions should only occur if everything is not up-to-date
if ( res == 0 || newParentRes == 0 || res->GetState() != Resource::sync ||
newParentRes->GetState() != Resource::sync ||
newParentRes->FindChild( new_p.filename().string() ) != 0 )
return false;
fs::rename(old_p, new_p); //Moves local file
syncer->Move(res, newParentRes, new_p.filename().string()); //Moves server file
return true;
}
} // end of namespace gr
View File
@@ -23,13 +23,14 @@
#include "util/DateTime.hh"
#include "util/FileSystem.hh"
#include "json/Val.hh"
#include <memory>
#include <boost/regex.hpp>
namespace gr {
class Val ;
class Entry ;
class Syncer ;
@@ -42,15 +43,15 @@ public :
typedef ResourceTree::iterator iterator ;
public :
explicit State( const fs::path& root, const Val& options ) ;
explicit State( const fs::path& filename, const Val& options ) ;
~State() ;
void FromLocal( const fs::path& p ) ;
void FromRemote( const Entry& e ) ;
void ResolveEntry() ;
void Read() ;
void Write() ;
void Read( const fs::path& filename ) ;
void Write( const fs::path& filename ) const ;
Resource* FindByHref( const std::string& href ) ;
Resource* FindByID( const std::string& id ) ;
@@ -62,10 +63,10 @@ public :
long ChangeStamp() const ;
void ChangeStamp( long cstamp ) ;
bool Move( Syncer* syncer, fs::path old_p, fs::path new_p, fs::path grive_root );
private :
bool ParseIgnoreFile( const char* buffer, int size ) ;
void FromLocal( const fs::path& p, Resource *folder, Val& tree ) ;
void FromLocal( const fs::path& p, Resource *folder ) ;
void FromChange( const Entry& e ) ;
bool Update( const Entry& e ) ;
std::size_t TryResolveEntry() ;
@@ -73,16 +74,14 @@ private :
bool IsIgnore( const std::string& filename ) ;
private :
fs::path m_root ;
ResourceTree m_res ;
DateTime m_last_sync ;
DateTime m_last_change ;
int m_cstamp ;
std::string m_ign ;
boost::regex m_ign_re ;
Val m_st ;
bool m_force ;
bool m_ign_changed ;
std::list<Entry> m_unresolved ;
std::vector<Entry> m_unresolved ;
} ;
} // end of namespace gr
View File
@@ -41,11 +41,11 @@ http::Agent* Syncer::Agent() const
void Syncer::Download( Resource *res, const fs::path& file )
{
http::Download dl( file.string(), http::Download::NoChecksum() ) ;
long r = m_http->Get( res->ContentSrc(), &dl, http::Header(), res->Size() ) ;
long r = m_http->Get( res->ContentSrc(), &dl, http::Header() ) ;
if ( r <= 400 )
{
if ( res->ServerTime() != DateTime() )
os::SetFileTime( file, res->ServerTime() ) ;
if ( res->MTime() != DateTime() )
os::SetFileTime( file, res->MTime() ) ;
else
Log( "encountered zero date time after downloading %1%", file, log::warning ) ;
}
@@ -56,4 +56,9 @@ void Syncer::AssignIDs( Resource *res, const Entry& remote )
res->AssignIDs( remote );
}
void Syncer::AssignMTime( Resource *res, const DateTime& mtime )
{
res->m_mtime = mtime;
}
} // end of namespace gr
View File
@@ -21,7 +21,6 @@
#include "util/FileSystem.hh"
#include <memory>
#include <string>
#include <vector>
#include <iosfwd>
@@ -56,9 +55,9 @@ public :
virtual bool Create( Resource *res ) = 0;
virtual bool Move( Resource* res, Resource* newParent, std::string newFilename ) = 0;
virtual std::unique_ptr<Feed> GetFolders() = 0;
virtual std::unique_ptr<Feed> GetAll() = 0;
virtual std::unique_ptr<Feed> GetChanges( long min_cstamp ) = 0;
virtual std::auto_ptr<Feed> GetFolders() = 0;
virtual std::auto_ptr<Feed> GetAll() = 0;
virtual std::auto_ptr<Feed> GetChanges( long min_cstamp ) = 0;
virtual long GetChangeStamp( long min_cstamp ) = 0;
protected:
@@ -66,6 +65,7 @@ protected:
http::Agent *m_http;
void AssignIDs( Resource *res, const Entry& remote );
void AssignMTime( Resource *res, const DateTime& mtime );
} ;
View File
@@ -49,9 +49,9 @@ SymbolInfo::SymbolInfo( )
m_impl->m_bfd = 0 ;
m_impl->m_symbols = 0 ;
m_impl->m_symbol_count = 0 ;
bfd_init( ) ;
// opening itself
bfd *b = bfd_openr( "/proc/self/exe", 0 ) ;
if ( b == NULL )
@@ -60,13 +60,13 @@ SymbolInfo::SymbolInfo( )
<< bfd_errmsg( bfd_get_error() ) << std::endl ;
return ;
}
if ( bfd_check_format( b, bfd_archive ) )
{
bfd_close( b ) ;
return ;
}
char **matching ;
if ( !bfd_check_format_matches( b, bfd_object, &matching ) )
{
@@ -78,7 +78,7 @@ SymbolInfo::SymbolInfo( )
std::cerr << bfd_get_filename( b ) << ": Matching formats: " ;
for ( char **p = matching ; *p != 0 ; p++ )
std::cerr << " " << *p ;
std::cerr << std::endl ;
std::free( matching ) ;
}
@@ -107,7 +107,7 @@ struct SymbolInfo::BacktraceInfo
const char *m_func_name ;
unsigned int m_lineno ;
unsigned int m_is_found ;
static void Callback( bfd *abfd, asection *section, void* addr ) ;
} ;
@@ -117,24 +117,17 @@ void SymbolInfo::BacktraceInfo::Callback( bfd *abfd, asection *section,
BacktraceInfo *info = (BacktraceInfo *)data ;
if ((section->flags & SEC_ALLOC) == 0)
return ;
// bfd_get_section_vma works up to 7b1cfbcf1a27951fb1b3a212995075dd6fdf985b,
// removed in 7c13bc8c91abf291f0206b6608b31955c5ea70d8 (binutils 2.33.1 or so)
// so it's substituted by its implementation to avoid checking for binutils
// version (which at least on Debian SID it's not that easy because the
// version.h is not included with the official package)
bfd_vma vma = section->vma;
bfd_vma vma = bfd_get_section_vma(abfd, section);
unsigned long address = (unsigned long)(info->m_addr);
if ( address < vma )
return;
// bfd_section_size changed between the two objects described above,
// same rationale applies
bfd_size_type size = section->size;
bfd_size_type size = bfd_section_size(abfd, section);
if ( address > (vma + size))
return ;
const SymbolInfo *pthis = info->m_pthis ;
info->m_is_found = bfd_find_nearest_line( abfd, section,
pthis->m_impl->m_symbols,
@@ -156,7 +149,7 @@ void SymbolInfo::PrintTrace( void *addr, std::ostream& os, std::size_t idx )
{
this, addr, 0, 0, 0, false
} ;
Dl_info sym ;
bfd_map_over_sections( m_impl->m_bfd,
&SymbolInfo::BacktraceInfo::Callback,
@@ -172,7 +165,7 @@ if ( btinfo.m_is_found )
filename.erase( pos, std::strlen( SRC_DIR ) ) ;
#endif
os << "#" << idx << " " << addr << " "
<< filename << ":" << btinfo.m_lineno
<< filename << ":" << btinfo.m_lineno
<< " "
<< (btinfo.m_func_name != 0 ? Demangle(btinfo.m_func_name) : "" )
<< std::endl ;
View File
@@ -54,7 +54,7 @@ public :
private :
struct Impl ;
const std::unique_ptr<Impl> m_impl ;
const std::auto_ptr<Impl> m_impl ;
struct BacktraceInfo ;
friend struct BacktraceInfo ;
View File
@@ -0,0 +1,34 @@
/*
Common URIs for the old "Document List" Google Docs API
Copyright (C) 2012 Wan Wai Ho
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation version 2
of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#pragma once
#include <string>
namespace gr { namespace v1
{
const std::string feed_base = "https://docs.google.com/feeds/default/private/full" ;
const std::string feed_changes = "https://docs.google.com/feeds/default/private/changes" ;
const std::string feed_metadata = "https://docs.google.com/feeds/metadata/default" ;
const std::string root_href =
"https://docs.google.com/feeds/default/private/full/folder%3Aroot" ;
const std::string root_create =
"https://docs.google.com/feeds/upload/create-session/default/private/full" ;
} }
View File
@@ -0,0 +1,86 @@
/*
Item class implementation for the old "Document List" Google Docs API
Copyright (C) 2012 Wan Wai Ho, (C) 2015 Vitaliy Filippov
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation version 2
of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include "Entry1.hh"
#include "CommonUri.hh"
#include "util/Crypt.hh"
#include "util/log/Log.hh"
#include "util/OS.hh"
#include "xml/Node.hh"
#include "xml/NodeSet.hh"
#include <algorithm>
#include <iterator>
namespace gr { namespace v1 {
Entry1::Entry1():
Entry ()
{
m_self_href = root_href;
}
/// construct an entry for remote - Doclist API v3
Entry1::Entry1( const xml::Node& n )
{
Update( n ) ;
}
void Entry1::Update( const xml::Node& n )
{
m_title = n["title"] ;
m_etag = n["@gd:etag"] ;
m_filename = n["docs:suggestedFilename"] ;
m_content_src = n["content"]["@src"] ;
m_self_href = n["link"].Find( "@rel", "self" )["@href"] ;
m_mtime = DateTime( n["updated"] ) ;
m_resource_id = n["gd:resourceId"] ;
m_md5 = n["docs:md5Checksum"] ;
m_is_dir = n["category"].Find( "@scheme", "http://schemas.google.com/g/2005#kind" )["@label"] == "folder" ;
m_is_editable = !n["link"].Find( "@rel", m_is_dir
? "http://schemas.google.com/g/2005#resumable-create-media" : "http://schemas.google.com/g/2005#resumable-edit-media" )
["@href"].empty() ;
// changestamp only appear in change feed entries
xml::NodeSet cs = n["docs:changestamp"]["@value"] ;
m_change_stamp = cs.empty() ? -1 : std::atoi( cs.front().Value().c_str() ) ;
if ( m_change_stamp != -1 )
{
m_self_href = n["link"].Find( "@rel", "http://schemas.google.com/docs/2007#alt-self" )["@href"] ;
}
m_parent_hrefs.clear( ) ;
xml::NodeSet parents = n["link"].Find( "@rel", "http://schemas.google.com/docs/2007#parent" ) ;
for ( xml::NodeSet::iterator i = parents.begin() ; i != parents.end() ; ++i )
{
std::string href = (*i)["@href"];
if ( href == root_href )
href = "root"; // API-independent root href
m_parent_hrefs.push_back( href ) ;
}
// convert to lower case for easy comparison
std::transform( m_md5.begin(), m_md5.end(), m_md5.begin(), tolower ) ;
m_is_removed = !n["gd:deleted"].empty() || !n["docs:removed"].empty() ;
}
} } // end of namespace gr::v1
View File
@@ -0,0 +1,42 @@
/*
Item class implementation for the old "Document List" Google Docs API
Copyright (C) 2012 Wan Wai Ho, (C) 2015 Vitaliy Filippov
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation version 2
of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#pragma once
#include "base/Entry.hh"
namespace gr {
namespace xml
{
class Node ;
}
namespace v1 {
class Entry1: public Entry
{
public :
Entry1( ) ;
explicit Entry1( const xml::Node& n ) ;
private :
void Update( const xml::Node& entry ) ;
} ;
} } // end of namespace gr::v1
View File
@@ -0,0 +1,64 @@
/*
Item list ("Feed") implementation for the old "Document List" Google Docs API
Copyright (C) 2012 Wan Wai Ho, (C) 2015 Vitaliy Filippov
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation version 2
of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include "CommonUri.hh"
#include "Feed1.hh"
#include "Entry1.hh"
#include "http/Agent.hh"
#include "http/Header.hh"
#include "http/XmlResponse.hh"
#include "xml/NodeSet.hh"
#include <boost/format.hpp>
#include <cassert>
namespace gr { namespace v1 {
Feed1::Feed1( const std::string &url ):
Feed( url )
{
}
bool Feed1::GetNext( http::Agent *http )
{
http::XmlResponse xrsp ;
if ( m_next.empty() )
return false;
http->Get( m_next, &xrsp, http::Header() ) ;
xml::Node m_root = xrsp.Response() ;
xml::NodeSet xe = m_root["entry"] ;
m_entries.clear() ;
for ( xml::NodeSet::iterator i = xe.begin() ; i != xe.end() ; ++i )
{
m_entries.push_back( Entry1( *i ) );
}
xml::NodeSet nss = m_root["link"].Find( "@rel", "next" ) ;
m_next = nss.empty() ? std::string( "" ) : nss["@href"];
return true;
}
} } // end of namespace gr::v1
View File
@@ -0,0 +1,40 @@
/*
Item list ("Feed") implementation for the old "Document List" Google Docs API
Copyright (C) 2012 Wan Wai Ho, (C) 2015 Vitaliy Filippov
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation version 2
of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#pragma once
#include "base/Feed.hh"
#include "xml/Node.hh"
#include "xml/NodeSet.hh"
#include <vector>
#include <string>
namespace gr { namespace v1 {
class Feed1: public Feed
{
public :
Feed1( const std::string& url ) ;
bool GetNext( http::Agent *http ) ;
} ;
} } // end of namespace gr::v1
View File
@@ -0,0 +1,271 @@
/*
Syncer implementation for the old "Document List" Google Docs API
Copyright (C) 2012 Wan Wai Ho, (C) 2015 Vitaliy Filippov
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation version 2
of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include "base/Resource.hh"
#include "CommonUri.hh"
#include "Entry1.hh"
#include "Feed1.hh"
#include "Syncer1.hh"
#include "http/Agent.hh"
#include "http/Header.hh"
#include "http/StringResponse.hh"
#include "http/XmlResponse.hh"
#include "xml/Node.hh"
#include "xml/NodeSet.hh"
#include "xml/String.hh"
#include "xml/TreeBuilder.hh"
#include "util/File.hh"
#include "util/OS.hh"
#include "util/log/Log.hh"
#include <boost/exception/all.hpp>
#include <cassert>
// for debugging
#include <iostream>
namespace gr { namespace v1 {
// hard coded XML file
const std::string xml_meta =
"<?xml version='1.0' encoding='UTF-8'?>\n"
"<entry xmlns=\"http://www.w3.org/2005/Atom\" xmlns:docs=\"http://schemas.google.com/docs/2007\">"
"<category scheme=\"http://schemas.google.com/g/2005#kind\" "
"term=\"http://schemas.google.com/docs/2007#%1%\"/>"
"<title>%2%</title>"
"</entry>" ;
Syncer1::Syncer1( http::Agent *http ):
Syncer( http )
{
assert( http != 0 ) ;
}
void Syncer1::DeleteRemote( Resource *res )
{
http::StringResponse str ;
try
{
http::Header hdr ;
hdr.Add( "If-Match: " + res->ETag() ) ;
// don't know why, but an update before deleting seems to work always
http::XmlResponse xml ;
m_http->Get( res->SelfHref(), &xml, hdr ) ;
AssignIDs( res, Entry1( xml.Response() ) ) ;
m_http->Request( "DELETE", res->SelfHref(), NULL, &str, hdr ) ;
}
catch ( Exception& e )
{
// don't rethrow here. there are some cases that I don't know why
// the delete will fail.
Trace( "Exception %1% %2%",
boost::diagnostic_information(e),
str.Response() ) ;
}
}
bool Syncer1::EditContent( Resource *res, bool new_rev )
{
assert( res->Parent() ) ;
assert( res->Parent()->GetState() == Resource::sync ) ;
if ( !res->IsEditable() )
{
Log( "Cannot upload %1%: file read-only. %2%", res->Name(), res->StateStr(), log::warning ) ;
return false ;
}
return Upload( res, feed_base + "/" + res->ResourceID() + ( new_rev ? "?new-revision=true" : "" ), false ) ;
}
bool Syncer1::Create( Resource *res )
{
assert( res->Parent() ) ;
assert( res->Parent()->IsFolder() ) ;
assert( res->Parent()->GetState() == Resource::sync ) ;
if ( res->IsFolder() )
{
std::string uri = feed_base ;
if ( !res->Parent()->IsRoot() )
uri += ( "/" + m_http->Escape( res->Parent()->ResourceID() ) + "/contents" ) ;
std::string meta = (boost::format( xml_meta )
% "folder"
% xml::Escape( res->Name() )
).str() ;
http::Header hdr ;
hdr.Add( "Content-Type: application/atom+xml" ) ;
http::XmlResponse xml ;
m_http->Post( uri, meta, &xml, hdr ) ;
AssignIDs( res, Entry1( xml.Response() ) ) ;
return true ;
}
else if ( res->Parent()->IsEditable() )
{
return Upload( res, root_create + (res->Parent()->ResourceID() == "folder:root"
? "" : "/" + res->Parent()->ResourceID() + "/contents") + "?convert=false", true ) ;
}
else
{
Log( "parent of %1% does not exist: cannot upload", res->Name(), log::warning ) ;
return false ;
}
}
bool Syncer1::Upload( Resource *res,
const std::string& link,
bool post )
{
File file( res->Path() ) ;
std::ostringstream xcontent_len ;
xcontent_len << "X-Upload-Content-Length: " << file.Size() ;
http::Header hdr ;
hdr.Add( "Content-Type: application/atom+xml" ) ;
hdr.Add( "X-Upload-Content-Type: application/octet-stream" ) ;
hdr.Add( xcontent_len.str() ) ;
hdr.Add( "If-Match: " + res->ETag() ) ;
hdr.Add( "Expect:" ) ;
std::string meta = (boost::format( xml_meta )
% res->Kind()
% xml::Escape( res->Name() )
).str() ;
bool retrying = false;
while ( true )
{
if ( retrying )
{
file.Seek( 0, SEEK_SET );
os::Sleep( 5 );
}
try
{
http::StringResponse str ;
if ( post )
m_http->Post( link, meta, &str, hdr ) ;
else
m_http->Put( link, meta, &str, hdr ) ;
}
catch ( Exception &e )
{
std::string const *info = boost::get_error_info<xml::TreeBuilder::ExpatApiError>(e);
if ( info && (*info == "XML_Parse") )
{
Log( "Error parsing pre-upload response XML, retrying whole upload in 5s",
log::warning );
retrying = true;
continue;
}
else
{
throw e;
}
}
http::Header uphdr ;
uphdr.Add( "Expect:" ) ;
uphdr.Add( "Accept:" ) ;
// the content upload URL is in the "Location" HTTP header
std::string uplink = m_http->RedirLocation() ;
http::XmlResponse xml ;
long http_code = 0;
try
{
http_code = m_http->Put( uplink, &file, &xml, uphdr ) ;
}
catch ( Exception &e )
{
std::string const *info = boost::get_error_info<xml::TreeBuilder::ExpatApiError>(e);
if ( info && (*info == "XML_Parse") )
{
Log( "Error parsing response XML, retrying whole upload in 5s",
log::warning );
retrying = true;
continue;
}
else
{
throw e;
}
}
if ( http_code == 410 || http_code == 412 )
{
Log( "request failed with %1%, body: %2%, retrying whole upload in 5s", http_code, m_http->LastError(), log::warning ) ;
retrying = true;
continue;
}
if ( retrying )
Log( "upload succeeded on retry", log::warning );
Entry1 responseEntry = Entry1( xml.Response() );
AssignIDs( res, responseEntry ) ;
AssignMTime( res, responseEntry.MTime() );
break;
}
return true ;
}
std::auto_ptr<Feed> Syncer1::GetFolders()
{
return std::auto_ptr<Feed>( new Feed1( feed_base + "/-/folder?max-results=50&showroot=true" ) );
}
std::auto_ptr<Feed> Syncer1::GetAll()
{
return std::auto_ptr<Feed>( new Feed1( feed_base + "?showfolders=true&showroot=true" ) );
}
std::string ChangesFeed( int changestamp )
{
boost::format feed( feed_changes + "?start-index=%1%" ) ;
return changestamp > 0 ? ( feed % changestamp ).str() : feed_changes ;
}
std::auto_ptr<Feed> Syncer1::GetChanges( long min_cstamp )
{
return std::auto_ptr<Feed>( new Feed1( ChangesFeed( min_cstamp ) ) );
}
long Syncer1::GetChangeStamp( long min_cstamp )
{
http::XmlResponse xrsp ;
m_http->Get( ChangesFeed( min_cstamp ), &xrsp, http::Header() ) ;
return std::atoi( xrsp.Response()["docs:largestChangestamp"]["@value"].front().Value().c_str() );
}
} } // end of namespace gr::v1
View File
@@ -0,0 +1,52 @@
/*
Syncer implementation for the old "Document List" Google Docs API
Copyright (C) 2012 Wan Wai Ho, (C) 2015 Vitaliy Filippov
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation version 2
of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#pragma once
#include "base/Syncer.hh"
namespace gr {
class Feed;
namespace v1 {
class Syncer1: public Syncer
{
public :
Syncer1( http::Agent *http );
void DeleteRemote( Resource *res );
bool EditContent( Resource *res, bool new_rev );
bool Create( Resource *res );
std::auto_ptr<Feed> GetFolders();
std::auto_ptr<Feed> GetAll();
std::auto_ptr<Feed> GetChanges( long min_cstamp );
long GetChangeStamp( long min_cstamp );
private :
bool Upload( Resource *res, const std::string& link, bool post);
} ;
} } // end of namespace gr::v1

View File

@ -44,7 +44,6 @@ void Entry2::Update( const Val& item )
// changestamp only appears in change feed entries
m_change_stamp = is_chg ? item["id"].Int() : -1 ;
m_is_removed = is_chg && item["deleted"].Bool() ;
m_size = 0 ;
const Val& file = is_chg && !m_is_removed ? item["file"] : item;
@ -76,7 +75,6 @@ void Entry2::Update( const Val& item )
else
{
m_md5 = file["md5Checksum"] ;
m_size = file["fileSize"].U64() ;
m_content_src = file["downloadUrl"] ;
// convert to lower case for easy comparison
std::transform( m_md5.begin(), m_md5.end(), m_md5.begin(), tolower ) ;

View File

@ -36,17 +36,13 @@ Feed2::Feed2( const std::string& url ):
{
}
Feed2::~Feed2()
{
}
bool Feed2::GetNext( http::Agent *http )
{
if ( m_next.empty() )
return false ;
http::ValResponse out ;
http->Get( m_next, &out, http::Header(), 0 ) ;
http->Get( m_next, &out, http::Header() ) ;
Val m_content = out.Response() ;
Val::Array items = m_content["items"].AsArray() ;

View File

@ -31,7 +31,6 @@ class Feed2: public Feed
{
public :
Feed2( const std::string& url ) ;
~Feed2() ;
bool GetNext( http::Agent *http ) ;
} ;

View File

@ -70,7 +70,7 @@ bool Syncer2::EditContent( Resource *res, bool new_rev )
return false ;
}
return Upload( res, new_rev ) ;
return Upload( res ) ;
}
bool Syncer2::Create( Resource *res )
@ -86,7 +86,7 @@ bool Syncer2::Create( Resource *res )
return false ;
}
return Upload( res, false );
return Upload( res );
}
bool Syncer2::Move( Resource* res, Resource* newParentRes, std::string newFilename )
@ -121,15 +121,12 @@ bool Syncer2::Move( Resource* res, Resource* newParentRes, std::string newFilena
http::Header hdr2 ;
hdr2.Add( "Content-Type: application/json" );
http::ValResponse vrsp ;
// Don't change modified date because we're only moving
long http_code = m_http->Put(
feeds::files + "/" + res->ResourceID() + "?modifiedDateBehavior=noChange" + addRemoveParents,
json_meta, &vrsp, hdr2
) ;
long http_code = 0;
//Don't change modified date because we're only moving
http_code = m_http->Put( feeds::files + "/" + res->ResourceID() + "?modifiedDateBehavior=noChange" + addRemoveParents, json_meta, &vrsp, hdr2 ) ;
valr = vrsp.Response();
assert( http_code == 200 && !( valr["id"].Str().empty() ) );
assert( !( valr["id"].Str().empty() ) );
}
return true;
}
@ -140,7 +137,7 @@ std::string to_string( uint64_t n )
return s.str();
}
bool Syncer2::Upload( Resource *res, bool new_rev )
bool Syncer2::Upload( Resource *res )
{
Val meta;
meta.Add( "title", Val( res->Name() ) );
@ -170,7 +167,7 @@ bool Syncer2::Upload( Resource *res, bool new_rev )
else
http_code = m_http->Put( feeds::files + "/" + res->ResourceID(), json_meta, &vrsp, hdr2 ) ;
valr = vrsp.Response();
assert( http_code == 200 && !( valr["id"].Str().empty() ) );
assert( !( valr["id"].Str().empty() ) );
}
else
{
@ -196,8 +193,7 @@ bool Syncer2::Upload( Resource *res, bool new_rev )
http::ValResponse vrsp;
m_http->Request(
res->ResourceID().empty() ? "POST" : "PUT",
upload_base + ( res->ResourceID().empty() ? "" : "/" + res->ResourceID() ) +
"?uploadType=multipart&newRevision=" + ( new_rev ? "true" : "false" ),
upload_base + ( res->ResourceID().empty() ? "" : "/" + res->ResourceID() ) + "?uploadType=multipart",
&multipart, &vrsp, hdr
) ;
valr = vrsp.Response() ;
@ -206,19 +202,19 @@ bool Syncer2::Upload( Resource *res, bool new_rev )
Entry2 responseEntry = Entry2( valr ) ;
AssignIDs( res, responseEntry ) ;
res->SetServerTime( responseEntry.MTime() );
AssignMTime( res, responseEntry.MTime() ) ;
return true ;
}
std::unique_ptr<Feed> Syncer2::GetFolders()
std::auto_ptr<Feed> Syncer2::GetFolders()
{
return std::unique_ptr<Feed>( new Feed2( feeds::files + "?maxResults=100000&q=trashed%3dfalse+and+mimeType%3d%27" + mime_types::folder + "%27" ) );
return std::auto_ptr<Feed>( new Feed2( feeds::files + "?maxResults=1000&q=%27me%27+in+readers+and+trashed%3dfalse+and+mimeType%3d%27" + mime_types::folder + "%27" ) );
}
std::unique_ptr<Feed> Syncer2::GetAll()
std::auto_ptr<Feed> Syncer2::GetAll()
{
return std::unique_ptr<Feed>( new Feed2( feeds::files + "?maxResults=999999999&q=trashed%3dfalse" ) );
return std::auto_ptr<Feed>( new Feed2( feeds::files + "?maxResults=1000&q=%27me%27+in+readers+and+trashed%3dfalse" ) );
}
std::string ChangesFeed( long changestamp, int maxResults = 1000 )
@ -227,15 +223,15 @@ std::string ChangesFeed( long changestamp, int maxResults = 1000 )
return ( changestamp > 0 ? feed % maxResults % changestamp : feed % maxResults ).str() ;
}
std::unique_ptr<Feed> Syncer2::GetChanges( long min_cstamp )
std::auto_ptr<Feed> Syncer2::GetChanges( long min_cstamp )
{
return std::unique_ptr<Feed>( new Feed2( ChangesFeed( min_cstamp ) ) );
return std::auto_ptr<Feed>( new Feed2( ChangesFeed( min_cstamp ) ) );
}
long Syncer2::GetChangeStamp( long min_cstamp )
{
http::ValResponse res ;
m_http->Get( ChangesFeed( min_cstamp, 1 ), &res, http::Header(), 0 ) ;
m_http->Get( ChangesFeed( min_cstamp, 1 ), &res, http::Header() ) ;
return std::atoi( res.Response()["largestChangeId"].Str().c_str() );
}

View File

@ -39,14 +39,14 @@ public :
bool Create( Resource *res );
bool Move( Resource* res, Resource* newParent, std::string newFilename );
std::unique_ptr<Feed> GetFolders();
std::unique_ptr<Feed> GetAll();
std::unique_ptr<Feed> GetChanges( long min_cstamp );
std::auto_ptr<Feed> GetFolders();
std::auto_ptr<Feed> GetAll();
std::auto_ptr<Feed> GetChanges( long min_cstamp );
long GetChangeStamp( long min_cstamp );
private :
bool Upload( Resource *res, bool new_rev );
bool Upload( Resource *res );
} ;
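The `Syncer2` factory methods above migrate from `std::auto_ptr` to `std::unique_ptr`. The difference matters for returning ownership: `unique_ptr` is move-only, so the silent copy-that-steals behavior of `auto_ptr` disappears. A sketch of the factory pattern with a hypothetical stand-in for the `Feed` hierarchy:

```cpp
#include <memory>
#include <string>

// Stand-in for the Feed classes; illustrative only.
struct DemoFeed
{
    std::string url;
    explicit DemoFeed(const std::string& u) : url(u) {}
};

// Factory in the style of Syncer2::GetChanges(): the caller
// receives sole ownership via the move-only unique_ptr.
std::unique_ptr<DemoFeed> MakeFeed(const std::string& url)
{
    return std::unique_ptr<DemoFeed>(new DemoFeed(url));
}
```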

View File

@ -25,11 +25,6 @@ namespace gr {
namespace http {
Agent::Agent()
{
mMaxUpload = mMaxDownload = 0;
}
long Agent::Put(
const std::string& url,
const std::string& data,
@ -52,10 +47,9 @@ long Agent::Put(
long Agent::Get(
const std::string& url,
DataStream *dest,
const Header& hdr,
u64_t downloadFileBytes )
const Header& hdr )
{
return Request( "GET", url, NULL, dest, hdr, downloadFileBytes );
return Request( "GET", url, NULL, dest, hdr );
}
long Agent::Post(
@ -70,14 +64,4 @@ long Agent::Post(
return Request( "POST", url, &s, dest, h );
}
void Agent::SetUploadSpeed( unsigned kbytes )
{
mMaxUpload = kbytes;
}
void Agent::SetDownloadSpeed( unsigned kbytes )
{
mMaxDownload = kbytes;
}
} } // end of namespace

View File

@ -22,7 +22,6 @@
#include <string>
#include "ResponseLog.hh"
#include "util/Types.hh"
#include "util/Progress.hh"
namespace gr {
@ -35,11 +34,7 @@ class Header ;
class Agent
{
protected:
unsigned mMaxUpload, mMaxDownload ;
public :
Agent() ;
virtual ~Agent() {}
virtual ResponseLog* GetLog() const = 0 ;
@ -60,8 +55,7 @@ public :
virtual long Get(
const std::string& url,
DataStream *dest,
const Header& hdr,
u64_t downloadFileBytes = 0 ) ;
const Header& hdr ) ;
virtual long Post(
const std::string& url,
@ -74,11 +68,7 @@ public :
const std::string& url,
SeekStream *in,
DataStream *dest,
const Header& hdr,
u64_t downloadFileBytes = 0 ) = 0 ;
virtual void SetUploadSpeed( unsigned kbytes ) ;
virtual void SetDownloadSpeed( unsigned kbytes ) ;
const Header& hdr ) = 0 ;
virtual std::string LastError() const = 0 ;
virtual std::string LastErrorHeaders() const = 0 ;
@ -87,8 +77,6 @@ public :
virtual std::string Escape( const std::string& str ) = 0 ;
virtual std::string Unescape( const std::string& str ) = 0 ;
virtual void SetProgressReporter( Progress* ) = 0;
} ;
} } // end of namespace

View File

@ -28,10 +28,14 @@
#include <boost/throw_exception.hpp>
// dependent libraries
#include <curl/curl.h>
#include <algorithm>
#include <cassert>
#include <cstring>
#include <limits>
#include <sstream>
#include <streambuf>
#include <iostream>
@ -65,13 +69,12 @@ struct CurlAgent::Impl
std::string error_headers ;
std::string error_data ;
DataStream *dest ;
u64_t total_download, total_upload ;
} ;
static struct curl_slist* SetHeader( CURL* handle, const Header& hdr );
CurlAgent::CurlAgent() : Agent(),
m_pimpl( new Impl ), m_pb( 0 )
CurlAgent::CurlAgent() :
m_pimpl( new Impl )
{
m_pimpl->curl = ::curl_easy_init();
}
@ -84,15 +87,10 @@ void CurlAgent::Init()
::curl_easy_setopt( m_pimpl->curl, CURLOPT_HEADERFUNCTION, &CurlAgent::HeaderCallback ) ;
::curl_easy_setopt( m_pimpl->curl, CURLOPT_HEADERDATA, this ) ;
::curl_easy_setopt( m_pimpl->curl, CURLOPT_HEADER, 0L ) ;
if ( mMaxUpload > 0 )
::curl_easy_setopt( m_pimpl->curl, CURLOPT_MAX_SEND_SPEED_LARGE, mMaxUpload ) ;
if ( mMaxDownload > 0 )
::curl_easy_setopt( m_pimpl->curl, CURLOPT_MAX_RECV_SPEED_LARGE, mMaxDownload ) ;
m_pimpl->error = false;
m_pimpl->error_headers = "";
m_pimpl->error_data = "";
m_pimpl->dest = NULL;
m_pimpl->total_download = m_pimpl->total_upload = 0;
}
CurlAgent::~CurlAgent()
@ -110,11 +108,6 @@ void CurlAgent::SetLog(ResponseLog *log)
m_log.reset( log );
}
void CurlAgent::SetProgressReporter(Progress *progress)
{
m_pb = progress;
}
std::size_t CurlAgent::HeaderCallback( void *ptr, size_t size, size_t nmemb, CurlAgent *pthis )
{
char *str = static_cast<char*>(ptr) ;
@ -135,7 +128,7 @@ std::size_t CurlAgent::HeaderCallback( void *ptr, size_t size, size_t nmemb, Cur
if ( pos != line.npos )
{
std::size_t end_pos = line.find( "\r\n", pos ) ;
pthis->m_pimpl->location = line.substr( pos+loc.size(), end_pos - loc.size() ) ;
pthis->m_pimpl->location = line.substr( loc.size(), end_pos - loc.size() ) ;
}
return size*nmemb ;
@ -146,7 +139,6 @@ std::size_t CurlAgent::Receive( void* ptr, size_t size, size_t nmemb, CurlAgent
assert( pthis != 0 ) ;
if ( pthis->m_log.get() )
pthis->m_log->Write( (const char*)ptr, size*nmemb );
if ( pthis->m_pimpl->error && pthis->m_pimpl->error_data.size() < 65536 )
{
// Do not feed error responses to destination stream
@ -156,22 +148,6 @@ std::size_t CurlAgent::Receive( void* ptr, size_t size, size_t nmemb, CurlAgent
return pthis->m_pimpl->dest->Write( static_cast<char*>(ptr), size * nmemb ) ;
}
int CurlAgent::progress_callback( CurlAgent *pthis, curl_off_t totalDownload, curl_off_t finishedDownload, curl_off_t totalUpload, curl_off_t finishedUpload )
{
// Only report download progress when set explicitly
if ( pthis->m_pb )
{
totalDownload = pthis->m_pimpl->total_download;
if ( !totalUpload )
totalUpload = pthis->m_pimpl->total_upload;
pthis->m_pb->reportProgress(
totalDownload > 0 ? totalDownload : totalUpload,
totalDownload > 0 ? finishedDownload : finishedUpload
);
}
return 0;
}
long CurlAgent::ExecCurl(
const std::string& url,
DataStream *dest,
@ -189,12 +165,6 @@ long CurlAgent::ExecCurl(
struct curl_slist *slist = SetHeader( m_pimpl->curl, hdr ) ;
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
#if LIBCURL_VERSION_NUM >= 0x072000
curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, progress_callback);
curl_easy_setopt(curl, CURLOPT_XFERINFODATA, this);
#endif
CURLcode curl_code = ::curl_easy_perform(curl);
curl_slist_free_all(slist);
@ -229,13 +199,11 @@ long CurlAgent::Request(
const std::string& url,
SeekStream *in,
DataStream *dest,
const Header& hdr,
u64_t downloadFileBytes )
const Header& hdr )
{
Trace("HTTP %1% \"%2%\"", method, url ) ;
Init() ;
m_pimpl->total_download = downloadFileBytes ;
CURL *curl = m_pimpl->curl ;
// set common options

View File

@ -24,8 +24,6 @@
#include <memory>
#include <string>
#include <curl/curl.h>
namespace gr {
class DataStream ;
@ -45,15 +43,13 @@ public :
ResponseLog* GetLog() const ;
void SetLog( ResponseLog *log ) ;
void SetProgressReporter( Progress *progress ) ;
long Request(
const std::string& method,
const std::string& url,
SeekStream *in,
DataStream *dest,
const Header& hdr,
u64_t downloadFileBytes = 0 ) ;
const Header& hdr ) ;
std::string LastError() const ;
std::string LastErrorHeaders() const ;
@ -63,8 +59,6 @@ public :
std::string Escape( const std::string& str ) ;
std::string Unescape( const std::string& str ) ;
static int progress_callback( CurlAgent *pthis, curl_off_t totalDownload, curl_off_t finishedDownload, curl_off_t totalUpload, curl_off_t finishedUpload );
private :
static std::size_t HeaderCallback( void *ptr, size_t size, size_t nmemb, CurlAgent *pthis ) ;
static std::size_t Receive( void* ptr, size_t size, size_t nmemb, CurlAgent *pthis ) ;
@ -78,9 +72,8 @@ private :
private :
struct Impl ;
std::unique_ptr<Impl> m_pimpl ;
std::unique_ptr<ResponseLog> m_log ;
Progress* m_pb ;
std::auto_ptr<Impl> m_pimpl ;
std::auto_ptr<ResponseLog> m_log ;
} ;
} } // end of namespace

View File

@ -20,6 +20,7 @@
#include "Download.hh"
// #include "util/SignalHandler.hh"
#include "Error.hh"
#include "util/Crypt.hh"
// boost headers

View File

@ -48,7 +48,7 @@ public :
private :
File m_file ;
std::unique_ptr<crypt::MD5> m_crypt ;
std::auto_ptr<crypt::MD5> m_crypt ;
} ;
} } // end of namespace

View File

@ -22,7 +22,6 @@
#include <algorithm>
#include <iterator>
#include <ostream>
#include <sstream>
namespace gr { namespace http {
@ -35,13 +34,6 @@ void Header::Add( const std::string& str )
m_vec.push_back( str ) ;
}
std::string Header::Str() const
{
std::ostringstream s ;
s << *this ;
return s.str() ;
}
Header::iterator Header::begin() const
{
return m_vec.begin() ;

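The removed `Header::Str()` above serializes the header by routing it through `operator<<` into a `std::ostringstream`. That idiom is worth noting on its own; a self-contained sketch with a toy header type (not the real `gr::http::Header`):

```cpp
#include <ostream>
#include <sstream>
#include <string>
#include <vector>

// Toy stand-in for gr::http::Header; illustrative only.
struct DemoHeader
{
    std::vector<std::string> lines;
};

std::ostream& operator<<(std::ostream& os, const DemoHeader& h)
{
    for (const std::string& l : h.lines)
        os << l << "\r\n";
    return os;
}

// Same shape as the removed Header::Str(): stream the object
// through operator<< and return the accumulated buffer.
std::string Str(const DemoHeader& h)
{
    std::ostringstream s;
    s << h;
    return s.str();
}
```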
View File

@ -37,7 +37,6 @@ public :
Header() ;
void Add( const std::string& str ) ;
std::string Str() const ;
iterator begin() const ;
iterator end() const ;

View File

@ -30,7 +30,7 @@ XmlResponse::XmlResponse() : m_tb( new xml::TreeBuilder )
void XmlResponse::Clear()
{
m_tb.reset(new xml::TreeBuilder);
m_tb.reset(new xml::TreeBuilder);
}
std::size_t XmlResponse::Write( const char *data, std::size_t count )

View File

@ -20,10 +20,15 @@
#pragma once
#include "util/DataStream.hh"
#include "xml/TreeBuilder.hh"
#include <memory>
namespace gr { namespace xml
{
class Node ;
class TreeBuilder ;
} }
namespace gr { namespace http {
class XmlResponse : public DataStream
@ -39,7 +44,7 @@ public :
xml::Node Response() const ;
private :
std::unique_ptr<xml::TreeBuilder> m_tb ;
std::auto_ptr<xml::TreeBuilder> m_tb ;
} ;
} } // end of namespace

View File

@ -50,7 +50,7 @@ public :
private :
struct Impl ;
std::unique_ptr<Impl> m_impl ;
std::auto_ptr<Impl> m_impl ;
} ;
} // end of namespace

View File

@ -51,7 +51,7 @@ private :
private :
struct Impl ;
std::unique_ptr<Impl> m_impl ;
std::auto_ptr<Impl> m_impl ;
} ;
std::string WriteJson( const Val& val );

View File

@ -91,18 +91,6 @@ const Val& Val::operator[]( const std::string& key ) const
throw ;
}
Val& Val::operator[]( const std::string& key )
{
Object& obj = As<Object>() ;
Object::iterator i = obj.find(key) ;
if ( i != obj.end() )
return i->second ;
// shut off compiler warning
BOOST_THROW_EXCEPTION(Error() << NoKey_(key)) ;
throw ;
}
const Val& Val::operator[]( std::size_t index ) const
{
const Array& ar = As<Array>() ;
@ -116,14 +104,12 @@ const Val& Val::operator[]( std::size_t index ) const
std::string Val::Str() const
{
if ( Type() == int_type )
return boost::to_string( As<long long>() );
return As<std::string>() ;
}
Val::operator std::string() const
{
return Str();
return As<std::string>() ;
}
int Val::Int() const
@ -133,13 +119,6 @@ int Val::Int() const
return static_cast<int>(As<long long>()) ;
}
unsigned long long Val::U64() const
{
if ( Type() == string_type )
return strtoull( As<std::string>().c_str(), NULL, 10 );
return static_cast<unsigned long long>(As<long long>()) ;
}
double Val::Double() const
{
if ( Type() == string_type )
@ -157,38 +136,17 @@ const Val::Array& Val::AsArray() const
return As<Array>() ;
}
Val::Array& Val::AsArray()
{
return As<Array>() ;
}
const Val::Object& Val::AsObject() const
{
return As<Object>() ;
}
Val::Object& Val::AsObject()
{
return As<Object>() ;
}
bool Val::Has( const std::string& key ) const
{
const Object& obj = As<Object>() ;
return obj.find(key) != obj.end() ;
}
bool Val::Del( const std::string& key )
{
Object& obj = As<Object>() ;
return obj.erase(key) > 0 ;
}
Val& Val::Item( const std::string& key )
{
return As<Object>()[key];
}
bool Val::Get( const std::string& key, Val& val ) const
{
const Object& obj = As<Object>() ;
@ -207,16 +165,6 @@ void Val::Add( const std::string& key, const Val& value )
As<Object>().insert( std::make_pair(key, value) ) ;
}
void Val::Set( const std::string& key, const Val& value )
{
Object& obj = As<Object>();
Object::iterator i = obj.find(key);
if (i == obj.end())
obj.insert(std::make_pair(key, value));
else
i->second = value;
}
void Val::Add( const Val& json )
{
As<Array>().push_back( json ) ;

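The removed `Val::Set()` above complements `Val::Add()`: `Add` uses map `insert`, which keeps an existing value untouched, while `Set` overwrites it. The distinction is easy to demonstrate with a plain `std::map` standing in for `Val`'s `Object` type:

```cpp
#include <map>
#include <string>

typedef std::map<std::string, std::string> Object;

// Insert-or-do-nothing, like Val::Add(key, value):
// std::map::insert keeps the existing entry on key collision.
void Add(Object& obj, const std::string& key, const std::string& value)
{
    obj.insert(std::make_pair(key, value));
}

// Insert-or-update, like the removed Val::Set(key, value).
void Set(Object& obj, const std::string& key, const std::string& value)
{
    Object::iterator i = obj.find(key);
    if (i == obj.end())
        obj.insert(std::make_pair(key, value));
    else
        i->second = value;
}
```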
View File

@ -94,29 +94,24 @@ public :
TypeEnum Type() const ;
const Val& operator[]( const std::string& key ) const ;
const Val& operator[]( std::size_t index ) const ;
// shortcuts for As<>()
std::string Str() const ;
int Int() const ;
unsigned long long U64() const ;
long Long() const ;
double Double() const ;
bool Bool() const ;
const Array& AsArray() const ;
Array& AsArray() ;
const Object& AsObject() const ;
Object& AsObject() ;
// shortcuts for objects
Val& operator[]( const std::string& key ) ; // get updatable ref or throw
const Val& operator[]( const std::string& key ) const ; // get const ref or throw
Val& Item( const std::string& key ) ; // insert if not exists and get
bool Has( const std::string& key ) const ; // check if exists
bool Get( const std::string& key, Val& val ) const ; // get or return false
void Add( const std::string& key, const Val& val ) ; // insert or do nothing
void Set( const std::string& key, const Val& val ) ; // insert or update
bool Del( const std::string& key ); // delete or do nothing
bool Has( const std::string& key ) const ;
bool Get( const std::string& key, Val& val ) const ;
void Add( const std::string& key, const Val& val ) ;
// shortcuts for array (and array of objects)
const Val& operator[]( std::size_t index ) const ;
void Add( const Val& json ) ;
std::vector<Val> Select( const std::string& key ) const ;
@ -130,7 +125,7 @@ private :
template <typename T>
struct Impl ;
std::unique_ptr<Base> m_base ;
std::auto_ptr<Base> m_base ;
private :
void Select( const Object& obj, const std::string& key, std::vector<Val>& result ) const ;
@ -194,29 +189,35 @@ Val& Val::Assign( const T& t )
template <typename T>
const T& Val::As() const
{
const Impl<T> *impl = dynamic_cast<const Impl<T> *>( m_base.get() ) ;
if ( !impl )
try
{
const Impl<T> *impl = &dynamic_cast<const Impl<T>&>( *m_base ) ;
return impl->val ;
}
catch ( std::exception& e )
{
TypeEnum dest = Type2Enum<T>::type ;
BOOST_THROW_EXCEPTION(
Error() << SrcType_( Type() ) << DestType_( dest )
Error() << SrcType_(Type()) << DestType_(dest)
) ;
}
return impl->val ;
}
template <typename T>
T& Val::As()
{
Impl<T> *impl = dynamic_cast<Impl<T> *>( m_base.get() ) ;
if ( !impl )
try
{
Impl<T> *impl = &dynamic_cast<Impl<T>&>( *m_base ) ;
return impl->val ;
}
catch ( std::exception& e )
{
TypeEnum dest = Type2Enum<T>::type ;
BOOST_THROW_EXCEPTION(
Error() << SrcType_( Type() ) << DestType_( dest )
Error() << SrcType_(Type()) << DestType_(dest)
) ;
}
return impl->val ;
}
template <typename T>

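The `As<>()` hunks above replace the reference form of `dynamic_cast` (which throws `std::bad_cast` and was caught) with the pointer form (which yields a null pointer to test). Both behaviors on a toy hierarchy, as a sketch:

```cpp
#include <typeinfo>

struct Base { virtual ~Base() {} };
struct DerivedA : Base {};
struct DerivedB : Base {};

// Pointer form: failure is an ordinary null check, no exception.
bool IsA(Base& b)
{
    return dynamic_cast<DerivedA*>(&b) != 0;
}

// Reference form: failure surfaces as std::bad_cast.
bool IsAViaThrow(Base& b)
{
    try
    {
        (void)dynamic_cast<DerivedA&>(b);
        return true;
    }
    catch (std::bad_cast&)
    {
        return false;
    }
}
```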
View File

@ -32,7 +32,6 @@ namespace gr {
using namespace http ;
AuthAgent::AuthAgent( OAuth2& auth, Agent *real_agent ) :
Agent(),
m_auth ( auth ),
m_agent ( real_agent )
{
@ -48,21 +47,6 @@ void AuthAgent::SetLog( http::ResponseLog *log )
return m_agent->SetLog( log );
}
void AuthAgent::SetProgressReporter( Progress *progress )
{
m_agent->SetProgressReporter( progress );
}
void AuthAgent::SetUploadSpeed( unsigned kbytes )
{
m_agent->SetUploadSpeed( kbytes );
}
void AuthAgent::SetDownloadSpeed( unsigned kbytes )
{
m_agent->SetDownloadSpeed( kbytes );
}
http::Header AuthAgent::AppendHeader( const http::Header& hdr ) const
{
http::Header h(hdr) ;
@ -76,18 +60,16 @@ long AuthAgent::Request(
const std::string& url,
SeekStream *in,
DataStream *dest,
const http::Header& hdr,
u64_t downloadFileBytes )
const http::Header& hdr )
{
long response;
Header auth;
m_interval = 0;
do
{
auth = AppendHeader( hdr );
if ( in )
in->Seek( 0, 0 );
response = m_agent->Request( method, url, in, dest, auth, downloadFileBytes );
response = m_agent->Request( method, url, in, dest, auth );
} while ( CheckRetry( response ) );
return CheckHttpResponse( response, url, auth );
}
@ -128,17 +110,7 @@ bool AuthAgent::CheckRetry( long response )
os::Sleep( 5 ) ;
return true ;
}
// HTTP 403 is the result of API rate limiting. attempt exponential backoff and try again
else if ( response == 429 || ( response == 403 && (
m_agent->LastError().find("\"reason\": \"userRateLimitExceeded\",") != std::string::npos ||
m_agent->LastError().find("\"reason\": \"rateLimitExceeded\",") != std::string::npos ) ) )
{
m_interval = m_interval <= 0 ? 1 : ( m_interval < 64 ? m_interval*2 : 120 );
Log( "request failed due to rate limiting: %1% (body: %2%). retrying in %3% seconds",
response, m_agent->LastError(), m_interval, log::warning ) ;
os::Sleep( m_interval ) ;
return true ;
}
// HTTP 401 Unauthorized. the auth token has been expired. refresh it
else if ( response == 401 )
{

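The rate-limit branch above doubles the retry interval while it stays under 64 seconds, then pins it at 120 seconds. The interval update is a one-liner worth isolating; this sketch reproduces exactly that ternary:

```cpp
// Next sleep interval for rate-limited retries, as in AuthAgent:
// start at 1 s, double while under 64 s, then stay at 120 s.
int NextBackoff(int interval)
{
    return interval <= 0 ? 1 : (interval < 64 ? interval * 2 : 120);
}
```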
View File

@ -44,8 +44,7 @@ public :
const std::string& url,
SeekStream *in,
DataStream *dest,
const http::Header& hdr,
u64_t downloadFileBytes = 0 ) ;
const http::Header& hdr ) ;
std::string LastError() const ;
std::string LastErrorHeaders() const ;
@ -55,11 +54,6 @@ public :
std::string Escape( const std::string& str ) ;
std::string Unescape( const std::string& str ) ;
void SetUploadSpeed( unsigned kbytes ) ;
void SetDownloadSpeed( unsigned kbytes ) ;
void SetProgressReporter( Progress *progress ) ;
private :
http::Header AppendHeader( const http::Header& hdr ) const ;
bool CheckRetry( long response ) ;
@ -71,7 +65,6 @@ private :
private :
OAuth2& m_auth ;
http::Agent* m_agent ;
int m_interval ;
} ;
} // end of namespace

View File

@ -25,13 +25,6 @@
#include "http/Header.hh"
#include "util/log/Log.hh"
#include <netinet/in.h>
#include <sys/socket.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
// for debugging
#include <iostream>
@ -44,8 +37,8 @@ OAuth2::OAuth2(
const std::string& refresh_code,
const std::string& client_id,
const std::string& client_secret ) :
m_refresh( refresh_code ),
m_agent( agent ),
m_refresh( refresh_code ),
m_client_id( client_id ),
m_client_secret( client_secret )
{
@ -57,29 +50,18 @@ OAuth2::OAuth2(
const std::string& client_id,
const std::string& client_secret ) :
m_agent( agent ),
m_port( 0 ),
m_socket( -1 ),
m_client_id( client_id ),
m_client_secret( client_secret )
{
}
OAuth2::~OAuth2()
{
if ( m_socket >= 0 )
{
close( m_socket );
m_socket = -1;
}
}
bool OAuth2::Auth( const std::string& auth_code )
void OAuth2::Auth( const std::string& auth_code )
{
std::string post =
"code=" + auth_code +
"&client_id=" + m_client_id +
"&client_secret=" + m_client_secret +
"&redirect_uri=http%3A%2F%2Flocalhost:" + std::to_string( m_port ) + "%2Fauth" +
"&redirect_uri=" + "urn:ietf:wg:oauth:2.0:oob" +
"&grant_type=authorization_code" ;
http::ValResponse resp ;
@ -95,120 +77,24 @@ bool OAuth2::Auth( const std::string& auth_code )
{
Log( "Failed to obtain auth token: HTTP %1%, body: %2%",
code, m_agent->LastError(), log::error ) ;
return false;
BOOST_THROW_EXCEPTION( AuthFailed() );
}
return true;
}
std::string OAuth2::MakeAuthURL()
{
if ( !m_port )
{
sockaddr_storage addr = { 0 };
addr.ss_family = AF_INET;
m_socket = socket( AF_INET, SOCK_STREAM, 0 );
if ( m_socket < 0 )
throw std::runtime_error( std::string("socket: ") + strerror(errno) );
if ( bind( m_socket, (sockaddr*)&addr, sizeof( addr ) ) < 0 )
{
close( m_socket );
m_socket = -1;
throw std::runtime_error( std::string("bind: ") + strerror(errno) );
}
socklen_t len = sizeof( addr );
if ( getsockname( m_socket, (sockaddr *)&addr, &len ) == -1 )
{
close( m_socket );
m_socket = -1;
throw std::runtime_error( std::string("getsockname: ") + strerror(errno) );
}
m_port = ntohs(((sockaddr_in*)&addr)->sin_port);
if ( listen( m_socket, 128 ) < 0 )
{
close( m_socket );
m_socket = -1;
m_port = 0;
throw std::runtime_error( std::string("listen: ") + strerror(errno) );
}
}
return "https://accounts.google.com/o/oauth2/auth"
"?scope=" + m_agent->Escape( "https://www.googleapis.com/auth/drive" ) +
"&redirect_uri=http%3A%2F%2Flocalhost:" + std::to_string( m_port ) + "%2Fauth" +
"?scope=" +
m_agent->Escape( "https://www.googleapis.com/auth/userinfo.email" ) + "+" +
m_agent->Escape( "https://www.googleapis.com/auth/userinfo.profile" ) + "+" +
m_agent->Escape( "https://docs.google.com/feeds/" ) + "+" +
m_agent->Escape( "https://docs.googleusercontent.com/" ) + "+" +
m_agent->Escape( "https://spreadsheets.google.com/feeds/" ) +
"&redirect_uri=urn:ietf:wg:oauth:2.0:oob"
"&response_type=code"
"&client_id=" + m_client_id ;
}
bool OAuth2::GetCode( )
{
sockaddr_storage addr = { 0 };
int peer_fd = -1;
while ( peer_fd < 0 )
{
socklen_t peer_addr_size = sizeof( addr );
peer_fd = accept( m_socket, (sockaddr*)&addr, &peer_addr_size );
if ( peer_fd == -1 && errno != EAGAIN && errno != EINTR )
throw std::runtime_error( std::string("accept: ") + strerror(errno) );
}
fcntl( peer_fd, F_SETFL, fcntl( peer_fd, F_GETFL, 0 ) | O_NONBLOCK );
struct pollfd pfd = (struct pollfd){
.fd = peer_fd,
.events = POLLIN|POLLRDHUP,
};
char buf[4096];
std::string request;
while ( true )
{
pfd.revents = 0;
poll( &pfd, 1, -1 );
if ( pfd.revents & POLLRDHUP )
break;
int r = 1;
while ( r > 0 )
{
r = read( peer_fd, buf, sizeof( buf ) );
if ( r > 0 )
request += std::string( buf, r );
else if ( r == 0 )
break;
else if ( errno != EAGAIN && errno != EINTR )
throw std::runtime_error( std::string("read: ") + strerror(errno) );
}
if ( r == 0 || ( r < 0 && request.find( "\n" ) > 0 ) ) // GET ... HTTP/1.1\r\n
break;
}
bool ok = false;
if ( request.substr( 0, 10 ) == "GET /auth?" )
{
std::string line = request;
int p = line.find( "\n" );
if ( p > 0 )
line = line.substr( 0, p );
p = line.rfind( " " );
if ( p > 0 )
line = line.substr( 0, p );
p = line.find( "code=" );
if ( p > 0 )
line = line.substr( p+5 );
p = line.find( "&" );
if ( p > 0 )
line = line.substr( 0, p );
ok = Auth( line );
}
std::string response = ( ok
? "Authenticated successfully. Please close the page"
: "Authentication error. Please try again" );
response = "HTTP/1.1 200 OK\r\n"
"Content-Type: text/html; charset=utf-8\r\n"
"Connection: close\r\n"
"\r\n"+
response+
"\r\n";
write( peer_fd, response.c_str(), response.size() );
close( peer_fd );
return ok;
}
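`GetCode()` above slices the `code=` query parameter out of the loopback redirect's request line by hand. A condensed, simplified sketch of just that parsing step (the real code also trims the trailing "HTTP/1.1" token before searching):

```cpp
#include <string>

// Pull the value of "code=" out of a loopback redirect request line,
// e.g. "GET /auth?code=abc&scope=drive HTTP/1.1". Simplified sketch.
std::string ExtractCode(const std::string& request)
{
    // keep only the first request line
    std::string line = request.substr(0, request.find_first_of("\r\n"));
    std::size_t p = line.find("code=");
    if (p == std::string::npos)
        return "";
    line = line.substr(p + 5);
    // the parameter ends at '&' or at the space before "HTTP/1.1"
    return line.substr(0, line.find_first_of("& "));
}
```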
void OAuth2::Refresh( )
{
std::string post =

View File

@ -41,15 +41,13 @@ public :
const std::string& refresh_code,
const std::string& client_id,
const std::string& client_secret ) ;
~OAuth2( ) ;
std::string Str() const ;
std::string MakeAuthURL() ;
bool Auth( const std::string& auth_code ) ;
void Auth( const std::string& auth_code ) ;
void Refresh( ) ;
bool GetCode( ) ;
std::string RefreshToken( ) const ;
std::string AccessToken( ) const ;
@ -61,9 +59,7 @@ private :
std::string m_access ;
std::string m_refresh ;
http::Agent* m_agent ;
int m_port ;
int m_socket ;
const std::string m_client_id ;
const std::string m_client_secret ;
} ;

View File

@ -23,7 +23,7 @@
namespace gr {
ConcatStream::ConcatStream() :
m_size( 0 ), m_pos( 0 ), m_cur( 0 )
m_cur( 0 ), m_size( 0 ), m_pos( 0 )
{
}
@ -63,13 +63,13 @@ off_t ConcatStream::Seek( off_t offset, int whence )
offset += m_pos;
else if ( whence == 2 )
offset += Size();
if ( (u64_t)offset > Size() )
if ( offset > Size() )
offset = Size();
m_cur = 0;
m_pos = offset;
if ( m_streams.size() )
{
while ( (u64_t)offset > m_sizes[m_cur] )
while ( offset > m_sizes[m_cur] )
m_cur++;
m_streams[m_cur]->Seek( offset - ( m_cur > 0 ? m_sizes[m_cur-1] : 0 ), 0 );
}
@ -90,7 +90,7 @@ void ConcatStream::Append( SeekStream *stream )
{
if ( stream )
{
u64_t size = stream->Size();
off_t size = stream->Size();
if ( size > 0 )
{
// "fix" stream size at the moment of adding so further changes of underlying files

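`ConcatStream::Seek()` above resolves whence-relative offsets and clamps the result to the concatenated size (the hunk also switches the comparisons to unsigned `u64_t`). The offset arithmetic alone, as a sketch (the negative clamp here is an added safety check, not in the original, which relies on its callers):

```cpp
#include <stdint.h>

typedef uint64_t u64_t;

// Resolve a Seek() offset the way ConcatStream does:
// whence 0 = absolute, 1 = relative to pos, 2 = relative to end,
// then clamp into [0, size].
u64_t ResolveSeek(int64_t offset, int whence, u64_t pos, u64_t size)
{
    if (whence == 1)
        offset += pos;
    else if (whence == 2)
        offset += size;
    if (offset < 0)
        offset = 0;
    if ((u64_t)offset > size)
        offset = size;
    return (u64_t)offset;
}
```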
View File

@ -41,9 +41,9 @@ public :
private :
std::vector<SeekStream*> m_streams ;
std::vector<u64_t> m_sizes ;
u64_t m_size, m_pos ;
size_t m_cur ;
std::vector<off_t> m_sizes ;
off_t m_size, m_pos ;
int m_cur ;
} ;
} // end of namespace

View File

@ -38,10 +38,6 @@ const std::string default_root_folder = ".";
Config::Config( const po::variables_map& vm )
{
if ( vm.count( "id" ) > 0 )
m_cmd.Add( "id", Val( vm["id"].as<std::string>() ) ) ;
if ( vm.count( "secret" ) > 0 )
m_cmd.Add( "secret", Val( vm["secret"].as<std::string>() ) ) ;
m_cmd.Add( "new-rev", Val(vm.count("new-rev") > 0) ) ;
m_cmd.Add( "force", Val(vm.count("force") > 0 ) ) ;
m_cmd.Add( "path", Val(vm.count("path") > 0
@ -50,11 +46,8 @@ Config::Config( const po::variables_map& vm )
m_cmd.Add( "dir", Val(vm.count("dir") > 0
? vm["dir"].as<std::string>()
: "" ) ) ;
if ( vm.count( "ignore" ) > 0 )
if ( vm.count("ignore") > 0 )
m_cmd.Add( "ignore", Val( vm["ignore"].as<std::string>() ) );
m_cmd.Add( "no-remote-new", Val( vm.count( "no-remote-new" ) > 0 || vm.count( "upload-only" ) > 0 ) );
m_cmd.Add( "upload-only", Val( vm.count( "upload-only" ) > 0 ) );
m_cmd.Add( "no-delete-remote", Val( vm.count( "no-delete-remote" ) > 0 ) );
m_path = GetPath( fs::path(m_cmd["path"].Str()) ) ;
m_file = Read( ) ;
@ -84,7 +77,7 @@ void Config::Save( )
void Config::Set( const std::string& key, const Val& value )
{
m_file.Set( key, value ) ;
m_file.Add( key, value ) ;
}
Val Config::Get( const std::string& key ) const

View File

@ -24,6 +24,7 @@
#include "MemMap.hh"
#include <iomanip>
#include <sstream>
// dependent libraries
#include <gcrypt.h>

View File

@ -50,7 +50,7 @@ public :
private :
struct Impl ;
std::unique_ptr<Impl> m_impl ;
std::auto_ptr<Impl> m_impl ;
} ;
} } // end of namespace gr

View File

@ -33,6 +33,7 @@
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <iomanip>
#include <time.h>

View File

@ -26,6 +26,7 @@
#include <cstdlib>
#include <iterator>
#include <sstream>
namespace gr {

View File

@ -33,10 +33,6 @@
#include <sys/types.h>
#include <fcntl.h>
#if defined(__FreeBSD__) || defined(__OpenBSD__)
#include <unistd.h>
#endif
#ifdef WIN32
#include <io.h>
typedef int ssize_t ;

View File

@ -178,7 +178,7 @@ public :
private :
typedef impl::FuncImpl<Type> Impl ;
std::unique_ptr<Impl> m_pimpl ;
std::auto_ptr<Impl> m_pimpl ;
} ;
} // end of namespace

View File

@ -39,12 +39,12 @@
namespace gr { namespace os {
void Stat( const fs::path& filename, DateTime *t, off_t *size, FileType *ft )
DateTime FileCTime( const fs::path& filename )
{
Stat( filename.string(), t, size, ft ) ;
return FileCTime( filename.string() ) ;
}
void Stat( const std::string& filename, DateTime *t, off64_t *size, FileType *ft )
DateTime FileCTime( const std::string& filename )
{
struct stat s = {} ;
if ( ::stat( filename.c_str(), &s ) != 0 )
@ -57,18 +57,11 @@ void Stat( const std::string& filename, DateTime *t, off64_t *size, FileType *ft
) ;
}
if ( t )
{
#if defined __NetBSD__ || ( defined __APPLE__ && defined __DARWIN_64_BIT_INO_T )
*t = DateTime( s.st_ctimespec.tv_sec, s.st_ctimespec.tv_nsec ) ;
#if defined __APPLE__ && defined __DARWIN_64_BIT_INO_T
return DateTime( s.st_ctimespec.tv_sec, s.st_ctimespec.tv_nsec ) ;
#else
*t = DateTime( s.st_ctim.tv_sec, s.st_ctim.tv_nsec);
return DateTime( s.st_ctim.tv_sec, s.st_ctim.tv_nsec);
#endif
}
if ( size )
*size = s.st_size;
if ( ft )
*ft = S_ISDIR( s.st_mode ) ? FT_DIR : ( S_ISREG( s.st_mode ) ? FT_FILE : FT_UNKNOWN ) ;
}
void SetFileTime( const fs::path& filename, const DateTime& t )

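The reworked `os::Stat()` above reports the file type through an out-parameter by classifying `st_mode` with `S_ISDIR`/`S_ISREG`, falling back to `FT_UNKNOWN`. Just the classification step, as a sketch:

```cpp
#include <sys/stat.h>

enum FileType { FT_FILE = 1, FT_DIR = 2, FT_UNKNOWN = 3 };

// Classify a stat() st_mode the way os::Stat() fills its *ft out-param.
FileType ClassifyMode(mode_t mode)
{
    return S_ISDIR(mode) ? FT_DIR : (S_ISREG(mode) ? FT_FILE : FT_UNKNOWN);
}
```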
View File

@ -29,18 +29,12 @@ namespace gr {
class DateTime ;
class Path ;
enum FileType { FT_FILE = 1, FT_DIR = 2, FT_UNKNOWN = 3 } ;
#ifndef off64_t
#define off64_t off_t
#endif
namespace os
{
struct Error : virtual Exception {} ;
void Stat( const std::string& filename, DateTime *t, off64_t *size, FileType *ft ) ;
void Stat( const fs::path& filename, DateTime *t, off64_t *size, FileType *ft ) ;
DateTime FileCTime( const std::string& filename ) ;
DateTime FileCTime( const fs::path& filename ) ;
void SetFileTime( const std::string& filename, const DateTime& t ) ;
void SetFileTime( const fs::path& filename, const DateTime& t ) ;

View File

@@ -1,14 +0,0 @@
#pragma once
#include "util/Types.hh"
namespace gr {
class Progress
{
public:
virtual void reportProgress(u64_t total, u64_t processed) = 0;
};
}
;

View File

@@ -1,87 +0,0 @@
#include <sys/ioctl.h>
#include <math.h>
#include <unistd.h>
#include <stdio.h>
#include "ProgressBar.hh"
namespace gr
{
ProgressBar::ProgressBar(): showProgressBar(false), last(1000)
{
}
ProgressBar::~ProgressBar()
{
}
void ProgressBar::setShowProgressBar(bool showProgressBar)
{
this->showProgressBar = showProgressBar;
}
unsigned short int ProgressBar::determineTerminalSize()
{
struct winsize w;
ioctl(STDOUT_FILENO, TIOCGWINSZ, &w);
return w.ws_col;
}
void ProgressBar::printBytes(u64_t bytes)
{
if (bytes >= 1024*1024*1024)
printf("%.3f GB", (double)bytes/1024/1024/1024);
else if (bytes >= 1024*1024)
printf("%.3f MB", (double)bytes/1024/1024);
else
printf("%.3f KB", (double)bytes/1024);
}
void ProgressBar::reportProgress(u64_t total, u64_t processed)
{
if (showProgressBar && total)
{
// libcurl seems to process more bytes than the actual file size :)
if (processed > total)
processed = total;
double fraction = (double)processed/total;
int point = fraction*1000;
if (this->last < 1000 || point != this->last)
{
// do not print 100% progress multiple times (it will duplicate the progressbar)
this->last = point;
// 10 for prefix of percent and 26 for suffix of file size
int availableSize = determineTerminalSize() - 36;
int totalDots;
if (availableSize > 100)
totalDots = 100;
else if (availableSize < 0)
totalDots = 10;
else
totalDots = availableSize;
int dotz = round(fraction * totalDots);
int count = 0;
// delete previous output line
printf("\r [%3.0f%%] [", fraction * 100);
for (; count < dotz - 1; count++)
putchar('=');
putchar('>');
for (; count < totalDots - 1; count++)
putchar(' ');
printf("] ");
printBytes(processed);
putchar('/');
printBytes(total);
printf("\33[K\r");
if (point == 1000)
putchar('\n');
fflush(stdout);
}
}
}
}
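The bar-width calculation inside `reportProgress()` above is the easiest part to get wrong, so here it is extracted as a pure function (a sketch; `BarDots` is not a name used by grive): clamp the drawable area to [10, 100] columns after reserving 36 columns for the percent prefix and the byte-count suffix.

```cpp
// Usable progress-bar width for a given terminal width, mirroring the
// clamping logic in ProgressBar::reportProgress() above.
int BarDots( int terminalCols )
{
	int available = terminalCols - 36 ;	// 10 for the percent prefix, 26 for the size suffix
	if ( available > 100 )
		return 100 ;
	if ( available < 0 )
		return 10 ;
	return available ;
}
```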

View File

@@ -1,25 +0,0 @@
#pragma once
#include "util/Progress.hh"
namespace gr {
class ProgressBar: public Progress
{
public:
ProgressBar();
virtual ~ProgressBar();
void reportProgress(u64_t total, u64_t processed);
void setShowProgressBar(bool showProgressBar);
private:
static void printBytes(u64_t bytes);
static unsigned short int determineTerminalSize();
bool showProgressBar;
int last;
};
}
;

View File

@@ -51,7 +51,7 @@ off_t StringStream::Seek( off_t offset, int whence )
offset += m_pos;
else if ( whence == 2 )
offset += Size();
if ( (u64_t)offset > Size() )
if ( offset > Size() )
offset = Size();
m_pos = (size_t)offset;
return m_pos;
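The one-line change above casts `offset` to `u64_t` before comparing it with the unsigned `Size()`, making the signed-to-unsigned conversion explicit rather than implicit. A self-contained sketch of the clamping (`ClampSeek` is a hypothetical helper; the negative-offset guard is an addition not present in `StringStream::Seek`):

```cpp
#include <cstddef>
#include <sys/types.h>

// whence follows the SEEK_SET(0) / SEEK_CUR(1) / SEEK_END(2) convention.
off_t ClampSeek( off_t offset, std::size_t pos, std::size_t size, int whence )
{
	if ( whence == 1 )
		offset += pos ;
	else if ( whence == 2 )
		offset += size ;

	if ( offset < 0 )
		offset = 0 ;	// defensive guard, not in the original
	else if ( static_cast<std::size_t>(offset) > size )
		offset = size ;
	return offset ;
}
```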

View File

@@ -39,7 +39,7 @@ CompositeLog::~CompositeLog()
std::for_each( m_logs.begin(), m_logs.end(), Destroy() ) ;
}
LogBase* CompositeLog::Add( std::unique_ptr<LogBase>& log )
LogBase* CompositeLog::Add( std::auto_ptr<LogBase> log )
{
m_logs.push_back( log.get() ) ;
return log.release() ;

View File

@@ -32,7 +32,7 @@ public :
CompositeLog() ;
~CompositeLog() ;
LogBase* Add( std::unique_ptr<LogBase>& log ) ;
LogBase* Add( std::auto_ptr<LogBase> log ) ;
void Log( const log::Fmt& msg, log::Serverity s ) ;

View File

@@ -40,12 +40,12 @@ public :
}
} ;
LogBase* LogBase::Inst( LogBase *log )
LogBase* LogBase::Inst( std::auto_ptr<LogBase> log )
{
static std::unique_ptr<LogBase> inst( new MockLog ) ;
static std::auto_ptr<LogBase> inst( new MockLog ) ;
if ( log != 0 )
inst.reset( log ) ;
if ( log.get() != 0 )
inst = log ;
assert( inst.get() != 0 ) ;
return inst.get() ;

View File

@@ -65,7 +65,7 @@ public :
virtual bool Enable( log::Serverity s, bool enable = true ) = 0 ;
virtual bool IsEnabled( log::Serverity s ) const = 0 ;
static LogBase* Inst( LogBase *log = 0 ) ;
static LogBase* Inst( std::auto_ptr<LogBase> log = std::auto_ptr<LogBase>() ) ;
virtual ~LogBase() ;
protected :
@@ -115,12 +115,6 @@ void Log(
LogBase::Inst()->Log( log::Fmt(fmt) % p1 % p2 % p3 % p4, s ) ;
}
template <typename P1, typename P2, typename P3, typename P4, typename P5>
void Log( const std::string& fmt, const P1& p1, const P2& p2, const P3& p3, const P4& p4, const P5& p5, log::Serverity s = log::info )
{
LogBase::Inst()->Log( log::Fmt(fmt) % p1 % p2 % p3 % p4 % p5, s ) ;
}
void Trace( const std::string& str ) ;
template <typename P1>

View File

@@ -23,6 +23,7 @@
#include "Node.hh"
#include "util/log/Log.hh"
#include <expat.h>
#include <cassert>
#include <iostream>

View File

@@ -55,7 +55,7 @@ private :
private :
struct Impl ;
std::unique_ptr<Impl> m_impl ;
std::auto_ptr<Impl> m_impl ;
} ;
} } // end of namespace

View File

@@ -21,6 +21,7 @@
#include "util/log/DefaultLog.hh"
#include "drive/EntryTest.hh"
#include "base/ResourceTest.hh"
#include "base/ResourceTreeTest.hh"
#include "base/StateTest.hh"
@@ -28,15 +29,16 @@
#include "util/FunctionTest.hh"
#include "util/ConfigTest.hh"
#include "util/SignalHandlerTest.hh"
//#include "xml/NodeTest.hh"
#include "xml/NodeTest.hh"
int main( int argc, char **argv )
{
using namespace grut ;
gr::LogBase::Inst( new gr::log::DefaultLog ) ;
gr::LogBase::Inst( std::auto_ptr<gr::LogBase>(new gr::log::DefaultLog) ) ;
CppUnit::TextUi::TestRunner runner;
runner.addTest( Entry1Test::suite( ) ) ;
runner.addTest( StateTest::suite( ) ) ;
runner.addTest( ResourceTest::suite( ) ) ;
runner.addTest( ResourceTreeTest::suite( ) ) ;
@@ -44,7 +46,7 @@ int main( int argc, char **argv )
runner.addTest( FunctionTest::suite( ) ) ;
runner.addTest( ConfigTest::suite( ) ) ;
runner.addTest( SignalHandlerTest::suite( ) ) ;
//runner.addTest( NodeTest::suite( ) ) ;
runner.addTest( NodeTest::suite( ) ) ;
runner.run();
return 0 ;

View File

@@ -23,15 +23,15 @@
#include "base/Resource.hh"
#include "drive2/Entry2.hh"
#include "json/Val.hh"
#include "drive/Entry1.hh"
#include "xml/Node.hh"
#include <iostream>
namespace grut {
using namespace gr ;
using namespace gr::v2 ;
using namespace gr::v1 ;
ResourceTest::ResourceTest( )
{
@@ -39,8 +39,8 @@ ResourceTest::ResourceTest( )
void ResourceTest::TestRootPath()
{
std::string rootFolder = "/home/usr/grive/grive";
Resource root( rootFolder ) ;
std::string rootFolder = "/home/usr/grive/grive";
Resource root(rootFolder) ;
CPPUNIT_ASSERT( root.IsRoot() ) ;
GRUT_ASSERT_EQUAL( root.Path(), fs::path( rootFolder ) ) ;
}
@@ -51,23 +51,19 @@ void ResourceTest::TestNormal( )
Resource subject( "entry.xml", "file" ) ;
root.AddChild( &subject ) ;
GRUT_ASSERT_EQUAL( subject.IsRoot(), false ) ;
GRUT_ASSERT_EQUAL( subject.Path(), fs::path( TEST_DATA ) / "entry.xml" ) ;
Val st;
st.Add( "srv_time", Val( DateTime( "2012-05-09T16:13:22.401Z" ).Sec() ) );
subject.FromLocal( st ) ;
subject.FromLocal( DateTime() ) ;
GRUT_ASSERT_EQUAL( subject.MD5(), "c0742c0a32b2c909b6f176d17a6992d0" ) ;
GRUT_ASSERT_EQUAL( subject.StateStr(), "local_new" ) ;
Val entry;
entry.Set( "modifiedDate", Val( std::string( "2012-05-09T16:13:22.401Z" ) ) );
entry.Set( "md5Checksum", Val( std::string( "DIFFERENT" ) ) );
xml::Node entry = xml::Node::Element( "entry" ) ;
entry.AddElement( "updated" ).AddText( "2012-05-09T16:13:22.401Z" ) ;
Entry2 remote( entry ) ;
GRUT_ASSERT_EQUAL( "different", remote.MD5() ) ;
subject.FromRemote( remote ) ;
Entry1 remote( entry ) ;
subject.FromRemote( remote, DateTime() ) ;
GRUT_ASSERT_EQUAL( "local_changed", subject.StateStr() ) ;
}
} // end of namespace grut

View File

@@ -19,7 +19,6 @@
#include "json/Val.hh"
#include <boost/test/unit_test.hpp>
#include <iostream>
using namespace gr ;
@@ -34,11 +33,11 @@ BOOST_FIXTURE_TEST_SUITE( ValTest, Fixture )
BOOST_AUTO_TEST_CASE( TestSimpleTypes )
{
BOOST_CHECK_EQUAL( Val::Null().Type(), Val::null_type ) ;
BOOST_CHECK( Val::Null().Is<void>() ) ;
Val null ;
BOOST_CHECK_EQUAL( null.Type(), Val::null_type ) ;
BOOST_CHECK( null.Is<void>() ) ;
Val i( 100 ) ;
BOOST_CHECK_EQUAL( i.Str(), "100" );
BOOST_CHECK_EQUAL( i.As<long long>(), 100 ) ;
BOOST_CHECK_EQUAL( i.Type(), Val::int_type ) ;
}

View File

@@ -0,0 +1 @@
{ "change_stamp": "", "rtree": { "name": ".", "id": "folder:root", "href": "https:\/\/docs.google.com\/feeds\/default\/private\/full\/folder%3Aroot", "md5": "", "kind": "folder", "mtime": { "sec": 0, "nsec": 0 }, "child": [ { "name": "entry.xml", "id": "", "href": "", "md5": "c0742c0a32b2c909b6f176d17a6992d0", "kind": "file", "mtime": { "sec": 1336796872, "nsec": 404985662 }, "child": [ ] } ] } }

View File

@@ -0,0 +1,59 @@
/*
grive: a GPL program to sync a local directory with Google Drive
Copyright (C) 2012 Wan Wai Ho
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation version 2
of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include "EntryTest.hh"
#include "Assert.hh"
#include "drive/Entry1.hh"
#include "xml/Node.hh"
#include "xml/NodeSet.hh"
#include "xml/TreeBuilder.hh"
#include <iostream>
namespace grut {
using namespace gr ;
using namespace gr::v1 ;
Entry1Test::Entry1Test( )
{
}
void Entry1Test::TestXml( )
{
xml::Node root = xml::TreeBuilder::ParseFile( TEST_DATA "entry.xml" ) ;
CPPUNIT_ASSERT( !root["entry"].empty() ) ;
Entry1 subject( root["entry"].front() ) ;
GRUT_ASSERT_EQUAL( "snes", subject.Title() ) ;
GRUT_ASSERT_EQUAL( "\"WxYPGE8CDyt7ImBk\"", subject.ETag() ) ;
GRUT_ASSERT_EQUAL( "https://docs.google.com/feeds/default/private/full/folder%3A0B5KhdsbryVeGMl83OEV1ZVc3cUE",
subject.SelfHref() ) ;
GRUT_ASSERT_EQUAL( 1U, subject.ParentHrefs().size() ) ;
GRUT_ASSERT_EQUAL( "https://docs.google.com/feeds/default/private/full/folder%3A0B5KhdsbryVeGNEZjdUxzZHl3Sjg",
subject.ParentHrefs().front() ) ;
GRUT_ASSERT_EQUAL( true, subject.IsDir() ) ;
}
} // end of namespace grut

View File

@@ -0,0 +1,41 @@
/*
grive: a GPL program to sync a local directory with Google Drive
Copyright (C) 2012 Wan Wai Ho
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation version 2
of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#pragma once
#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>
namespace grut {
class Entry1Test : public CppUnit::TestFixture
{
public :
Entry1Test( ) ;
// declare suite function
CPPUNIT_TEST_SUITE( Entry1Test ) ;
CPPUNIT_TEST( TestXml ) ;
CPPUNIT_TEST_SUITE_END();
private :
void TestXml( ) ;
} ;
} // end of namespace

View File

@@ -1,27 +0,0 @@
SET(GRIVE_SYNC_SH_BINARY "${CMAKE_INSTALL_FULL_LIBEXECDIR}/grive/grive-sync.sh")
CONFIGURE_FILE(grive-changes@.service.in grive-changes@.service @ONLY)
CONFIGURE_FILE(grive-timer@.service.in grive-timer@.service @ONLY)
install(
FILES
grive@.service
${CMAKE_BINARY_DIR}/systemd/grive-changes@.service
${CMAKE_BINARY_DIR}/systemd/grive-timer@.service
DESTINATION
lib/systemd/user
)
install(
FILES
grive-timer@.timer
DESTINATION
lib/systemd/user
)
install(
PROGRAMS
grive-sync.sh
DESTINATION
${CMAKE_INSTALL_FULL_LIBEXECDIR}/grive
)

View File

@@ -1,11 +0,0 @@
[Unit]
Description=Google drive sync (changed files)
[Service]
ExecStart=@GRIVE_SYNC_SH_BINARY@ listen "%i"
Type=simple
Restart=always
RestartSec=30
[Install]
WantedBy=default.target

View File

@@ -1,122 +0,0 @@
#!/bin/bash
# Copyright (C) 2009 Przemyslaw Pawelczyk <przemoc@gmail.com>
# (C) 2017 Jan Schulz <jasc@gmx.net>
##
## This script is licensed under the terms of the MIT license.
## https://opensource.org/licenses/MIT
# Fail on all errors
set -o pipefail
# We always start in the current users home directory so that names always start there
# We always start in the current user's home directory so that names always start there
cd ~
### ARGUMENT PARSING ###
SCRIPT="${0}"
DIRECTORY=$(systemd-escape --unescape -- "$2")
if [[ -z "$DIRECTORY" ]] || [[ ! -d "$DIRECTORY" ]] ; then
echo "Need a directory name in the current user's home directory as second argument. Aborting."
exit 1
fi
if [[ -z "${1}" ]] ; then
echo "Need a command as first argument. Aborting."
exit 1
else
if [[ "sync" == "${1}" ]] ; then
COMMAND=sync
elif [[ "listen" == "${1}" ]] ; then
COMMAND=listen
else
echo "Unknown command. Aborting."
exit 1
fi
fi
### LOCKFILE BOILERPLATE ###
LOCKFILE="/run/user/"$(id -u)"/"$(basename "$0")"_"${DIRECTORY//\//_}""
LOCKFD=99
# PRIVATE
_lock() { flock -"$1" "$LOCKFD"; }
_no_more_locking() { _lock u; _lock xn && rm -f "$LOCKFILE"; }
_prepare_locking() { eval "exec "$LOCKFD">\""$LOCKFILE"\""; trap _no_more_locking EXIT; }
# ON START
_prepare_locking
# PUBLIC
exlock_now() { _lock xn; } # obtain an exclusive lock immediately or fail
exlock() { _lock x; } # obtain an exclusive lock
shlock() { _lock s; } # obtain a shared lock
unlock() { _lock u; } # drop a lock
### SYNC SCRIPT ###
# Idea: only let one script run, but if the sync script is called a second time
# make sure we sync a second time, too
sync_directory() {
_directory="${1}"
reset_timer_and_exit() { echo "Retriggered google drive sync ('${_directory}')" && touch -m $LOCKFILE && exit; }
exlock_now || reset_timer_and_exit
if ping -c1 -W1 -q accounts.google.com >/dev/null 2>&1; then
true
# pass
else
echo "Google drive server not reachable, NOT syncing..."
unlock
exit 0
fi
TIME_AT_START=0
TIME_AT_END=1
while [[ "${TIME_AT_START}" -lt "${TIME_AT_END}" ]]; do
echo "Syncing '${_directory}'..."
TIME_AT_START="$(stat -c %Y "$LOCKFILE")"
grive -p "${_directory}" 2>&1 | grep -v -E "^Reading local directories$|^Reading remote server file list$|^Synchronizing files$|^Finished!$"
TIME_AT_END="$(stat -c %Y "$LOCKFILE")"
echo "Sync of '${_directory}' done."
done
# always exit ok, so that we never go into a wrong systemd state
unlock
exit 0
}
### LISTEN TO CHANGES IN DIRECTORY ###
listen_directory() {
_directory="${1}"
type inotifywait >/dev/null 2>&1 || { echo >&2 "I require inotifywait but it's not installed. Aborting."; exit 1; }
echo "Listening for changes in '${_directory}'"
while true #run indefinitely
do
# Use a different call to not need to change exit into return
inotifywait -q -r -e modify,attrib,close_write,move,create,delete --exclude ".grive_state|.grive" "${_directory}" > /dev/null 2>&1 && ${SCRIPT} sync $(systemd-escape "${_directory}")
#echo ${SCRIPT} "${_directory}"
done
# always exit ok, so that we never go into a wrong systemd state
exit 0
}
if [[ "${COMMAND}" == listen ]] ; then
listen_directory "${DIRECTORY}"
else
sync_directory "${DIRECTORY}"
fi
# always exit ok, so that we never go into a wrong systemd state
exit 0
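The script above serializes syncs through `flock(1)` on a per-directory lock file. The same primitive is available to a native client through `flock(2)`; a sketch of the "exclusive lock or bail out" step that `exlock_now` performs (`TryExclusiveLock` is a hypothetical helper, not part of grive):

```cpp
#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>

// Open (or create) the lock file and try to take an exclusive lock without
// blocking; returns false if another holder already has it (EWOULDBLOCK).
// The fd must stay open for as long as the lock should be held.
bool TryExclusiveLock( const char *path, int *outFd )
{
	int fd = ::open( path, O_RDWR | O_CREAT, 0644 ) ;
	if ( fd < 0 )
		return false ;
	if ( ::flock( fd, LOCK_EX | LOCK_NB ) != 0 )
	{
		::close( fd ) ;
		return false ;
	}
	*outFd = fd ;
	return true ;
}
```

Note that `flock` locks are tied to the open file description, so even a second descriptor opened by the same process is denied while the first holds the lock.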

View File

@@ -1,6 +0,0 @@
[Unit]
Description=Google drive sync (executed by timer unit)
After=network-online.target
[Service]
ExecStart=@GRIVE_SYNC_SH_BINARY@ sync "%i"

View File

@@ -1,11 +0,0 @@
[Unit]
Description=Google drive sync (fixed intervals)
[Timer]
OnCalendar=*:0/5
OnBootSec=3min
OnUnitActiveSec=5min
Unit=grive-timer@%i.service
[Install]
WantedBy=timers.target

View File

@@ -1,13 +0,0 @@
[Unit]
Description=Google drive sync (main)
Requires=grive-timer@%i.timer grive-changes@%i.service
# dummy service
[Service]
Type=oneshot
ExecStart=/bin/true
# This service shall be considered active after start
RemainAfterExit=yes
[Install]
WantedBy=default.target