Compare commits

...

92 commits

Author SHA1 Message Date
Markus Unterwaditzer
c93cffdf72 Merge branch '0.16-maintenance' 2018-06-13 18:54:51 +02:00
Markus Unterwaditzer
42564de75c Merge branch '0.16-maintenance' 2018-06-13 18:53:25 +02:00
Markus Unterwaditzer
6e0e674fe3 Merge branch '0.16-maintenance' 2018-06-13 18:50:01 +02:00
Markus Unterwaditzer
648cd1ae98 Merge branch '0.16-maintenance' 2018-06-13 18:39:33 +02:00
Markus Unterwaditzer
aee513a39f Merge branch '0.16-maintenance' 2018-06-13 18:12:13 +02:00
Markus Unterwaditzer
556ec88578 remove useless normalization, see #745 2018-06-13 15:18:34 +02:00
Markus Unterwaditzer
579b2ca5d9 stylefix 2018-06-07 21:56:44 +02:00
Markus Unterwaditzer
511f427a77 remove dead code 2018-06-07 18:35:24 +02:00
Markus Unterwaditzer
07cbd58aaf cheap fix for google storage for now 2018-06-07 18:27:47 +02:00
Markus Unterwaditzer
8c67763a1b fix link in docs 2018-06-07 18:20:56 +02:00
Markus Unterwaditzer
c31e27a88a bump cache version again 2018-06-07 00:30:53 +02:00
Markus Unterwaditzer
9324fa4a74
Implement http storage in rust (#730)
* Port http storage to rust (#729)

* Port http storage to rust

* implement rest of parameters as far as possible

* stylefixes

* rustup

* fix invalid timestamp

* fix header file

* Fix compilation errors

* basic impl of dav

* dockerize xandikos

* add xandikos build

* Fix circleci build

* Fix circleci config

* fix nextcloud port

* stylefix

* implement upload, upload, delete in rust

* fix exc handling

* python stylefixes

* move caldav.list to rust

* fix exc again (fastmail)

* stylefixes

* add basic logging, fix fastmail

* stylefixes

* fix tests for etag=None (icloud)

* overwrite busted cargo-install-update

* install clippy from git

* fix rustfmt

* rustfmt

* clear cache
2018-06-06 14:16:25 +02:00
Markus Unterwaditzer
f401078c57 Bump minimal shippai version 2018-04-26 22:30:34 +02:00
Markus Unterwaditzer
12bf226a41 Update shippai 2018-04-24 20:58:35 +02:00
Markus Unterwaditzer
a61d51bc8f Migrate to newer shippai version 2018-04-24 20:17:59 +02:00
Markus Unterwaditzer
ec79d8b18e Don't install shippai devel version 2018-04-24 15:28:25 +02:00
Markus Unterwaditzer
4f3fd09f87 fix syntax error in makefile 2018-04-24 13:53:11 +02:00
Romain
b5eefc9bf5 Add double quote in exemple config files (#732)
* nextcloud.rst : add double quote to not forget them

Add double quote to not forget them, and avoid the message :
warning: Soon, all strings have to be in double quotes. Please replace UserName with "UserName"

* fastmail.rst : add double quote to not forget them

Add double quote to not forget them, and avoid the message :
warning: Soon, all strings have to be in double quotes. Please replace UserName with "UserName"

* icloud.rst : add double quote to not forget them

Add double quote to not forget them, and avoid the message :
warning: Soon, all strings have to be in double quotes. Please replace UserName with "UserName"

* todoman.rst : add double quote to not forget them

Add double quote to not forget them, and avoid the message :
warning: Soon, all strings have to be in double quotes. Please replace UserName with "UserName"

* xandikos.rst : add double quote to not forget them

Add double quote to not forget them, and avoid the message :
warning: Soon, all strings have to be in double quotes. Please replace UserName with "UserName"

* davmail.rst : add double quote to not forget them

Add double quote to not forget them, and avoid the message :
warning: Soon, all strings have to be in double quotes. Please replace UserName with "UserName"

* partial-sync.rst : add double quote to not forget them

Add double quote to not forget them, and avoid the message :
warning: Soon, all strings have to be in double quotes. Please replace UserName with "UserName"
2018-04-24 11:21:03 +02:00
Markus Unterwaditzer
59e822707d Fix hypothesis devel URL 2018-04-24 11:20:49 +02:00
Markus Unterwaditzer
8cedf13fdf
Reenable davical (#728) 2018-03-28 16:55:24 +02:00
Markus Unterwaditzer
d26258807e replace ring with sha2 crate 2018-03-21 20:53:59 +01:00
Markus Unterwaditzer
003ee86a2d update rust-atomicwrites 2018-03-21 19:43:46 +01:00
Markus Unterwaditzer
07eff1b418 rustup 2018-03-20 13:42:10 +01:00
Markus Unterwaditzer
73714afcdb Remove unnecessary build dep 2018-03-20 13:42:01 +01:00
Markus Unterwaditzer
69f4e4f3bc fix circleci build 2018-03-18 20:12:15 +01:00
Markus Unterwaditzer
379086eb04 install less in ci 2018-03-18 01:10:06 +01:00
Markus Unterwaditzer
cba48f1d9e let build fail if not properly formatted 2018-03-17 20:37:35 +01:00
Markus Unterwaditzer
53d55fced4 Remove unused imports 2018-03-16 18:30:37 +01:00
Markus Unterwaditzer
168d999359 Remove useless makefile target 2018-03-16 18:27:05 +01:00
Markus Unterwaditzer
50c1151921 Make docs build independent of app 2018-03-16 18:11:55 +01:00
Markus Unterwaditzer
85bc7ed169
Implement filesystem storage in rust (#724)
* Implement filesystem storage in rust

* Fix circleci

* stylefixes
2018-03-15 21:07:45 +01:00
Markus Unterwaditzer
06d59f59a5
Refactor rust errors (#722)
Refactor rust errors
2018-03-03 22:43:28 +01:00
Markus Unterwaditzer
3f41f9cf41 Install click-log devel version 2018-02-16 20:38:09 +01:00
Markus Unterwaditzer
cd2fd53e48 Credit packagecloud
Because we asked packagecloud for more bandwidth, they asked us to
credit them in the README
2018-02-16 19:39:49 +01:00
Markus Unterwaditzer
ba3c27322f ensure nightly in rustup 2018-02-14 22:40:19 +01:00
Markus Unterwaditzer
e35e23238e Re-add nightly? 2018-02-14 22:08:01 +01:00
Markus Unterwaditzer
2ceafac27a Remove nightly flag 2018-02-14 21:02:57 +01:00
Markus Unterwaditzer
916fc4eb30
Skip external storage tests if no creds (#718) 2018-02-14 20:43:33 +01:00
Markus Unterwaditzer
7e9fa7463e
Add iCloud to circleci (#717)
fix #714
2018-02-14 20:42:32 +01:00
Markus Unterwaditzer
535911c9fd Remove unsupported zesty 2018-02-14 19:44:53 +01:00
Markus Unterwaditzer
8f2734c33e
Singlefile storage in rust (#698)
* Singlefile storage in rust

* add NOW

* Avoid global item
2018-02-14 19:15:11 +01:00
Markus Unterwaditzer
4d3860d449
Test radicale and xandikos again (#715) 2018-02-10 16:11:06 +01:00
Markus Unterwaditzer
9c3a2b48e9 Unify badges 2018-02-09 20:53:14 +01:00
Markus Unterwaditzer
2a2457e364
CI refactor (#713)
* Switch to CircleCI

* add circleci badge
2018-02-09 20:50:48 +01:00
Hugo Osvaldo Barrera
855f29cc35 Update link to official Arch package (#710)
There's now an official Arch package
2018-02-06 09:25:33 +01:00
Markus Unterwaditzer
cc37e6a312 Merge branch '0.16-maintenance' 2018-02-05 17:01:46 +01:00
Markus Unterwaditzer
01573f0d66 Merge branch '0.16-maintenance' 2018-02-05 15:54:17 +01:00
Markus Unterwaditzer
c1aec4527c Remove useless path change 2018-01-23 23:16:37 +01:00
Markus Unterwaditzer
b1ec9c26c7 Fix unused formatting string 2018-01-22 01:02:44 +01:00
Markus Unterwaditzer
82f47737a0 Revert use of hypothesis 2018-01-21 23:23:08 +01:00
Markus Unterwaditzer
45d76c889c Remove remotestorage leftovers 2018-01-21 20:51:30 +01:00
Markus Unterwaditzer
c92b4f38eb Update copyright year 2018-01-21 00:11:24 +01:00
Markus Unterwaditzer
47b2a43a0e Disable davical 2018-01-19 11:18:46 +01:00
Markus Unterwaditzer
2d0527ecf0 Skip davical test skipper 2018-01-19 11:17:58 +01:00
Markus Unterwaditzer
991076d12a stylefixes 2018-01-18 23:30:47 +01:00
Markus Unterwaditzer
f58f06d2b5 Remove hypothesis from system test 2018-01-18 23:25:49 +01:00
Markus Unterwaditzer
b1cddde635 Remove baikal and owncloud from docs, see #489 2018-01-18 23:18:42 +01:00
Markus Unterwaditzer
41f64e2dca
Dockerize nextcloud (#704)
* Dockerize nextcloud

* Remove ownCloud and baikal, fix #489

* Remove branch from travis conf
2018-01-18 23:10:53 +01:00
Markus Unterwaditzer
401c441acb Add slowest tests to testrun 2018-01-15 21:23:09 +01:00
Markus Unterwaditzer
f1310883b9 Screw git hooks 2018-01-05 18:25:00 +01:00
Markus Unterwaditzer
afa8031eec Improve handling of malformed items 2018-01-05 18:14:32 +01:00
Markus Unterwaditzer
50604f24f1 Add simple doc for todoman 2018-01-05 16:34:26 +01:00
Amanda Hickman
cd6cb92b59 Little spelling fix (#695)
* Fixed spelling of "occurred"

* Fix spelling of occurred.

* fixed one lingering misspelling
2018-01-03 15:52:55 +01:00
Markus Unterwaditzer
39c2df99eb Update legalities 2017-12-25 21:50:29 +01:00
Markus Unterwaditzer
7fdff404e6 No wheels 2017-12-04 20:16:29 +01:00
Markus Unterwaditzer
1bdde25c0c Fix etesync build 2017-12-04 19:52:02 +01:00
Markus Unterwaditzer
b32932bd13 Relax recurrence tests 2017-12-03 14:00:21 +01:00
Markus Unterwaditzer
22d009b824 Remove unnecessary filter 2017-11-27 19:52:15 +01:00
Markus Unterwaditzer
792dbc171f Fix missing XML header, see #688 2017-11-25 14:15:14 +01:00
Markus Unterwaditzer
5700c4688b
rustup (#686)
* rustup

* rust-vobject upgrade
2017-11-07 21:58:17 +01:00
Markus Unterwaditzer
3984f547ce
Update nextcloud (#684) 2017-11-05 15:59:42 +01:00
Markus Unterwaditzer
9769dab02e
Update owncloud (#685) 2017-11-05 15:59:34 +01:00
Markus Unterwaditzer
bd2e09a84b Small refactor in dav.py 2017-10-26 02:22:18 +02:00
Markus Unterwaditzer
f7b6e67095 Ignore new flake8 linters 2017-10-26 01:41:43 +02:00
Markus Unterwaditzer
a2c509adf5 rustup, fix broken struct export 2017-10-25 22:36:28 +02:00
Markus Unterwaditzer
28fdf42238 Fix #681 2017-10-21 17:23:41 +02:00
Markus Unterwaditzer
0d3b028b17 Cache rust artifacts 2017-10-19 23:47:20 +02:00
Markus Unterwaditzer
f8e65878d8 Update rust installation instructions 2017-10-19 23:41:43 +02:00
Markus Unterwaditzer
75e83cd0f6 Commit cargo.lock 2017-10-19 23:27:29 +02:00
Malte Kiefer
96a8ab35c3 fixed typo (#678)
fixed typo
2017-10-13 19:34:37 +02:00
Markus Unterwaditzer
619373a8e8 Rust: new item module 2017-10-11 13:53:10 +02:00
Markus Unterwaditzer
cbb15e1895 Move all target to top again 2017-10-11 13:28:00 +02:00
Markus Unterwaditzer
325304c50f Lazy-load component in item 2017-10-11 12:01:52 +02:00
Markus Unterwaditzer
bdbfc360ff Move item hashing into rust 2017-10-10 00:52:58 +02:00
Markus Unterwaditzer
c17fa308fb Adapt virtualenv steps to always select python3 2017-10-06 18:32:17 +02:00
Markus Unterwaditzer
81f7472e3a Update installation instructions for Rust dependencies 2017-10-06 18:30:10 +02:00
Markus Unterwaditzer
69543b8615 Install rust on readthedocs 2017-10-05 17:45:19 +02:00
Markus Unterwaditzer
1b7cb4e656 Use rust-vobject (#675)
Use rust-vobject
2017-10-04 22:41:18 +02:00
Markus Unterwaditzer
7bdb22a207 Fix Ubuntu package name of Python 3. 2017-10-03 22:48:13 +02:00
Markus Unterwaditzer
cb41a9df28 Add fast_finish to Travis 2017-10-03 20:59:43 +02:00
Markus Unterwaditzer
33f96f5eca Fix broken link 2017-10-03 13:13:44 +02:00
Markus Unterwaditzer
178ac237ad Fix installation link 2017-10-03 11:29:51 +02:00
94 changed files with 5744 additions and 1826 deletions

.circleci/config.yml

@@ -0,0 +1,243 @@
version: 2
references:
basic_env: &basic_env
CI: true
restore_caches: &restore_caches
restore_cache:
keys:
- cache3-{{ arch }}-{{ .Branch }}
save_caches: &save_caches
save_cache:
key: cache3-{{ arch }}-{{ .Branch }}
paths:
- "rust/target/"
- "~/.cargo/"
- "~/.cache/pip/"
- "~/.rustup/"
basic_setup: &basic_setup
run: . scripts/circleci-install.sh
jobs:
nextcloud:
docker:
- image: circleci/python:3.6
environment:
<<: *basic_env
NEXTCLOUD_HOST: localhost:80
DAV_SERVER: nextcloud
- image: nextcloud
environment:
SQLITE_DATABASE: nextcloud
NEXTCLOUD_ADMIN_USER: asdf
NEXTCLOUD_ADMIN_PASSWORD: asdf
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: wget -O - --retry-connrefused http://localhost:80/
- run: make -e storage-test
fastmail:
docker:
- image: circleci/python:3.6
environment:
<<: *basic_env
DAV_SERVER: fastmail
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: make -e storage-test
icloud:
docker:
- image: circleci/python:3.6
environment:
<<: *basic_env
DAV_SERVER: icloud
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: make -e storage-test
davical:
docker:
- image: circleci/python:3.6
environment:
<<: *basic_env
DAV_SERVER: davical
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: make -e storage-test
xandikos:
docker:
- image: circleci/python:3.6
environment:
<<: *basic_env
DAV_SERVER: xandikos
- image: vdirsyncer/xandikos:0.0.1
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: wget -O - --retry-connrefused http://localhost:5001/
- run: make -e storage-test
style:
docker:
- image: circleci/python:3.6
environment:
<<: *basic_env
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-style
- *save_caches
- run: make -e style
py34-minimal:
docker:
- image: circleci/python:3.4
environment:
<<: *basic_env
REQUIREMENTS: minimal
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: make -e test
py34-release:
docker:
- image: circleci/python:3.4
environment:
<<: *basic_env
REQUIREMENTS: release
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: make -e test
py34-devel:
docker:
- image: circleci/python:3.4
environment:
<<: *basic_env
REQUIREMENTS: devel
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: make -e test
py36-minimal:
docker:
- image: circleci/python:3.6
environment:
<<: *basic_env
REQUIREMENTS: minimal
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: make -e test
py36-release:
docker:
- image: circleci/python:3.6
environment:
<<: *basic_env
REQUIREMENTS: release
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: make -e test
py36-devel:
docker:
- image: circleci/python:3.6
environment:
<<: *basic_env
REQUIREMENTS: devel
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: make -e test
rust:
docker:
- image: circleci/python:3.6
environment:
<<: *basic_env
REQUIREMENTS: release
steps:
- checkout
- *restore_caches
- *basic_setup
- run: make -e install-dev install-test
- *save_caches
- run: make -e rust-test
workflows:
version: 2
test_all:
jobs:
- nextcloud
- fastmail
- icloud
- davical
- xandikos
- style
- py34-minimal
- py34-release
- py34-devel
- py36-minimal
- py36-release
- py36-devel
- rust
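The config above avoids repetition through YAML anchors (`&basic_env`, `&restore_caches`) and the merge key `<<: *basic_env`. A minimal Python sketch of the merge-key semantics, using plain dicts in place of parsed YAML (key names are taken from the config; the `merge_key` helper itself is hypothetical):

```python
# Plain-dict model of YAML's merge key (`<<:`): keys from the referenced
# anchor are copied in first, then explicit keys override them.

basic_env = {"CI": "true"}  # stands in for the &basic_env anchor

def merge_key(anchor, **explicit):
    """Mimic `<<: *anchor` followed by job-specific keys."""
    env = dict(anchor)      # inherited keys from the anchor
    env.update(explicit)    # explicit keys take precedence
    return env

nextcloud_env = merge_key(basic_env,
                          NEXTCLOUD_HOST="localhost:80",
                          DAV_SERVER="nextcloud")
fastmail_env = merge_key(basic_env, DAV_SERVER="fastmail")
```

Every job's `environment` block expands the same way, which is why changing `basic_env` in one place affects all thirteen jobs.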

.gitignore

@@ -13,4 +13,6 @@ env
dist
docs/_build/
vdirsyncer/version.py
vdirsyncer/_native*
.hypothesis
codecov.sh

.gitmodules

@@ -1,9 +0,0 @@
[submodule "tests/storage/servers/baikal"]
path = tests/storage/servers/baikal
url = https://github.com/vdirsyncer/baikal-testserver
[submodule "tests/storage/servers/owncloud"]
path = tests/storage/servers/owncloud
url = https://github.com/vdirsyncer/owncloud-testserver
[submodule "tests/storage/servers/nextcloud"]
path = tests/storage/servers/nextcloud
url = https://github.com/vdirsyncer/nextcloud-testserver


@@ -1,120 +0,0 @@
{
"branches": {
"only": [
"auto",
"master",
"/^.*-maintenance$/"
]
},
"cache": "pip",
"dist": "trusty",
"git": {
"submodules": false
},
"install": [
". scripts/travis-install.sh",
"pip install -U pip setuptools",
"pip install wheel",
"make -e install-dev",
"make -e install-$BUILD"
],
"language": "python",
"matrix": {
"include": [
{
"env": "BUILD=style",
"python": "3.6"
},
{
"env": "BUILD=test DAV_SERVER=radicale REQUIREMENTS=devel ",
"python": "3.4"
},
{
"env": "BUILD=test DAV_SERVER=xandikos REQUIREMENTS=devel ",
"python": "3.4"
},
{
"env": "BUILD=test DAV_SERVER=radicale REQUIREMENTS=release ",
"python": "3.4"
},
{
"env": "BUILD=test DAV_SERVER=xandikos REQUIREMENTS=release ",
"python": "3.4"
},
{
"env": "BUILD=test DAV_SERVER=radicale REQUIREMENTS=minimal ",
"python": "3.4"
},
{
"env": "BUILD=test DAV_SERVER=xandikos REQUIREMENTS=minimal ",
"python": "3.4"
},
{
"env": "BUILD=test DAV_SERVER=radicale REQUIREMENTS=devel ",
"python": "3.5"
},
{
"env": "BUILD=test DAV_SERVER=xandikos REQUIREMENTS=devel ",
"python": "3.5"
},
{
"env": "BUILD=test DAV_SERVER=radicale REQUIREMENTS=release ",
"python": "3.5"
},
{
"env": "BUILD=test DAV_SERVER=xandikos REQUIREMENTS=release ",
"python": "3.5"
},
{
"env": "BUILD=test DAV_SERVER=radicale REQUIREMENTS=minimal ",
"python": "3.5"
},
{
"env": "BUILD=test DAV_SERVER=xandikos REQUIREMENTS=minimal ",
"python": "3.5"
},
{
"env": "BUILD=test DAV_SERVER=radicale REQUIREMENTS=devel ",
"python": "3.6"
},
{
"env": "BUILD=test DAV_SERVER=xandikos REQUIREMENTS=devel ",
"python": "3.6"
},
{
"env": "BUILD=test DAV_SERVER=radicale REQUIREMENTS=release ",
"python": "3.6"
},
{
"env": "BUILD=test DAV_SERVER=xandikos REQUIREMENTS=release ",
"python": "3.6"
},
{
"env": "BUILD=test DAV_SERVER=fastmail REQUIREMENTS=release ",
"if": "NOT (type IN (pull_request))",
"python": "3.6"
},
{
"env": "BUILD=test DAV_SERVER=radicale REQUIREMENTS=minimal ",
"python": "3.6"
},
{
"env": "BUILD=test DAV_SERVER=xandikos REQUIREMENTS=minimal ",
"python": "3.6"
},
{
"env": "BUILD=test ETESYNC_TESTS=true REQUIREMENTS=latest",
"python": "3.6"
},
{
"env": "BUILD=test",
"language": "generic",
"os": "osx"
}
]
},
"script": [
"make -e $BUILD"
],
"sudo": true
}


@@ -14,5 +14,9 @@ In alphabetical order:
- Michael Adler
- Thomas Weißschuh
Additionally `FastMail sponsored a paid account for testing
<https://github.com/pimutils/vdirsyncer/issues/571>`_. Thanks!
Special thanks goes to:
* `FastMail <https://github.com/pimutils/vdirsyncer/issues/571>`_ sponsors a
paid account for testing their servers.
* `Packagecloud <https://packagecloud.io/>`_ provide repositories for
vdirsyncer's Debian packages.


@@ -9,6 +9,13 @@ Package maintainers and users who have to manually update their installation
may want to subscribe to `GitHub's tag feed
<https://github.com/pimutils/vdirsyncer/tags.atom>`_.
Version 0.17.0
==============
- Fix bug where collection discovery under DAV-storages would produce invalid
XML. See :gh:`688`.
- ownCloud and Baikal are no longer tested.
Version 0.16.6
==============


@@ -1,4 +1,4 @@
Copyright (c) 2014-2016 by Markus Unterwaditzer & contributors. See
Copyright (c) 2014-2018 by Markus Unterwaditzer & contributors. See
AUTHORS.rst for more details.
Some rights reserved.
@@ -31,3 +31,10 @@ LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE AND DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
== etesync ==
I, Tom Hacohen, hereby grant a license for EteSync's journal-manager
(https://github.com/etesync/journal-manager) to be used as a dependency in
vdirsyncer's test suite for the purpose of testing vdirsyncer without having
the copyleft section of the AGPL apply to it (vdirsyncer).


@@ -1,7 +1,7 @@
# See the documentation on how to run the tests:
# https://vdirsyncer.pimutils.org/en/stable/contributing.html
# Which DAV server to run the tests against (radicale, xandikos, skip, owncloud, nextcloud, ...)
# Which DAV server to run the tests against (radicale, xandikos, skip, nextcloud, ...)
export DAV_SERVER := skip
# release (install release versions of dependencies)
@@ -20,9 +20,15 @@ export ETESYNC_TESTS := false
# systemwide.
export CI := false
# Enable debug symbols and backtrace printing for rust lib
export RUST_BACKTRACE := $(CI)
# Whether to generate coverage data while running tests.
export COVERAGE := $(CI)
# Log everything
export RUST_LOG := vdirsyncer_rustext=debug
# Additional arguments that should be passed to py.test.
PYTEST_ARGS =
@@ -36,8 +42,7 @@ ifeq ($(COVERAGE), true)
endif
ifeq ($(ETESYNC_TESTS), true)
TEST_EXTRA_PACKAGES += git+https://github.com/etesync/journal-manager
TEST_EXTRA_PACKAGES += django djangorestframework wsgi_intercept drf-nested-routers
TEST_EXTRA_PACKAGES += django-etesync-journal django djangorestframework wsgi_intercept drf-nested-routers
endif
PYTEST = py.test $(PYTEST_ARGS)
@@ -45,23 +50,34 @@ PYTEST = py.test $(PYTEST_ARGS)
export TESTSERVER_BASE := ./tests/storage/servers/
CODECOV_PATH = /tmp/codecov.sh
ifeq ($(CI), true)
test:
curl -s https://codecov.io/bash > $(CODECOV_PATH)
$(PYTEST) tests/unit/
bash $(CODECOV_PATH) -c -F unit
$(PYTEST) tests/system/
bash $(CODECOV_PATH) -c -F system
$(PYTEST) tests/storage/
bash $(CODECOV_PATH) -c -F storage
else
test:
$(PYTEST)
endif
all:
$(error Take a look at https://vdirsyncer.pimutils.org/en/stable/tutorial.html#installation)
ifeq ($(CI), true)
codecov.sh:
curl -s https://codecov.io/bash > $@
else
codecov.sh:
echo > $@
endif
rust-test:
cd rust/ && cargo test --release
test: unit-test system-test storage-test
unit-test: codecov.sh
$(PYTEST) tests/unit/
bash codecov.sh -c -F unit
system-test: codecov.sh
$(PYTEST) tests/system/
bash codecov.sh -c -F system
storage-test: codecov.sh
$(PYTEST) tests/storage/
bash codecov.sh -c -F storage
install-servers:
set -ex; \
for server in $(DAV_SERVER); do \
@@ -75,24 +91,24 @@ install-test: install-servers
pip install -Ur test-requirements.txt
set -xe && if [ "$$REQUIREMENTS" = "devel" ]; then \
pip install -U --force-reinstall \
git+https://github.com/DRMacIver/hypothesis \
'git+https://github.com/HypothesisWorks/hypothesis#egg=hypothesis&subdirectory=hypothesis-python' \
git+https://github.com/kennethreitz/requests \
git+https://github.com/pytest-dev/pytest; \
fi
[ -z "$(TEST_EXTRA_PACKAGES)" ] || pip install $(TEST_EXTRA_PACKAGES)
install-style: install-docs
pip install -U flake8 flake8-import-order 'flake8-bugbear>=17.3.0' autopep8
pip install -U flake8 flake8-import-order 'flake8-bugbear>=17.3.0'
rustup component add rustfmt-preview
cargo install --force --git https://github.com/rust-lang-nursery/rust-clippy clippy
style:
flake8
! git grep -i syncroniz */*
! git grep -i 'text/icalendar' */*
sphinx-build -W -b html ./docs/ ./docs/_build/html/
python3 scripts/make_travisconf.py | diff -b .travis.yml -
travis-conf:
python3 scripts/make_travisconf.py > .travis.yml
cd rust/ && cargo +nightly clippy
cd rust/ && cargo +nightly fmt --all -- --check
install-docs:
pip install -Ur docs-requirements.txt
@@ -104,33 +120,26 @@ linkcheck:
sphinx-build -W -b linkcheck ./docs/ ./docs/_build/linkcheck/
release:
python setup.py sdist bdist_wheel upload
python setup.py sdist upload
release-deb:
sh scripts/release-deb.sh debian jessie
sh scripts/release-deb.sh debian stretch
sh scripts/release-deb.sh ubuntu trusty
sh scripts/release-deb.sh ubuntu xenial
sh scripts/release-deb.sh ubuntu zesty
install-dev:
pip install -e .
pip install -ve .
[ "$(ETESYNC_TESTS)" = "false" ] || pip install -Ue .[etesync]
set -xe && if [ "$(REQUIREMENTS)" = "devel" ]; then \
pip install -U --force-reinstall \
git+https://github.com/mitsuhiko/click \
git+https://github.com/click-contrib/click-log \
git+https://github.com/kennethreitz/requests; \
elif [ "$(REQUIREMENTS)" = "minimal" ]; then \
pip install -U --force-reinstall $$(python setup.py --quiet minimal_requirements); \
fi
install-git-hooks: install-style
echo "make style-autocorrect" > .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
style-autocorrect:
git diff --cached --name-only | egrep '\.py$$' | xargs --no-run-if-empty autopep8 -ri
ssh-submodule-urls:
git submodule foreach "\
echo -n 'Old: '; \
@@ -139,4 +148,16 @@ ssh-submodule-urls:
echo -n 'New URL: '; \
git remote get-url origin"
.PHONY: docs
install-rust:
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain nightly
rustup update nightly
rust/vdirsyncer_rustext.h:
cd rust/ && cargo build # hack to work around cbindgen bugs
CARGO_EXPAND_TARGET_DIR=rust/target/ cbindgen -c rust/cbindgen.toml rust/ > $@
docker/xandikos:
docker build -t vdirsyncer/xandikos:0.0.1 $@
docker push vdirsyncer/xandikos:0.0.1
.PHONY: docs rust/vdirsyncer_rustext.h docker/xandikos
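The Makefile above splits the old monolithic `test` target into `unit-test`, `system-test`, and `storage-test`, each uploading its coverage under a separate codecov flag. A hedged Python sketch of that driver loop (the default command tuples are stand-ins for `py.test` and `bash codecov.sh`, not the project's actual invocation):

```python
import subprocess

# Mirrors tests/unit/, tests/system/ and tests/storage/ in the Makefile.
SUITES = ["unit", "system", "storage"]

def run_suites(pytest_cmd=("py.test",), upload_cmd=("bash", "codecov.sh")):
    """Run each suite, then upload its coverage under a matching flag,
    as the unit-test/system-test/storage-test targets do."""
    done = []
    for suite in SUITES:
        subprocess.run([*pytest_cmd, f"tests/{suite}/"], check=True)
        # `-c` clears accumulated coverage, `-F` tags the upload per suite
        subprocess.run([*upload_cmd, "-c", "-F", suite], check=True)
        done.append(suite)
    return done
```

Flagged uploads let codecov report unit, system, and storage coverage independently instead of one merged number.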


@@ -20,8 +20,8 @@ It aims to be for calendars and contacts what `OfflineIMAP
.. _programs: https://vdirsyncer.pimutils.org/en/latest/tutorials/
.. image:: https://travis-ci.org/pimutils/vdirsyncer.svg?branch=master
:target: https://travis-ci.org/pimutils/vdirsyncer
.. image:: https://circleci.com/gh/pimutils/vdirsyncer.svg?style=shield
:target: https://circleci.com/gh/pimutils/vdirsyncer
.. image:: https://codecov.io/github/pimutils/vdirsyncer/coverage.svg?branch=master
:target: https://codecov.io/github/pimutils/vdirsyncer?branch=master
@@ -29,6 +29,9 @@ It aims to be for calendars and contacts what `OfflineIMAP
.. image:: https://badge.waffle.io/pimutils/vdirsyncer.svg?label=ready&title=Ready
:target: https://waffle.io/pimutils/vdirsyncer
.. image:: https://img.shields.io/badge/deb-packagecloud.io-844fec.svg
:target: https://packagecloud.io/pimutils/vdirsyncer
Links of interest
=================


@@ -43,7 +43,7 @@ fileext = ".vcf"
[storage bob_contacts_remote]
type = "carddav"
url = "https://owncloud.example.com/remote.php/carddav/"
url = "https://nextcloud.example.com/"
#username =
# The password can also be fetched from the system password storage, netrc or a
# custom command. See http://vdirsyncer.pimutils.org/en/stable/keyring.html
@@ -65,6 +65,6 @@ fileext = ".ics"
[storage bob_calendar_remote]
type = "caldav"
url = "https://owncloud.example.com/remote.php/caldav/"
url = "https://nextcloud.example.com/"
#username =
#password =

docker-compose.yml

@@ -0,0 +1,18 @@
version: '2'
services:
nextcloud:
image: nextcloud
ports:
- '5000:80'
environment:
- SQLITE_DATABASE=nextcloud
- NEXTCLOUD_ADMIN_USER=asdf
- NEXTCLOUD_ADMIN_PASSWORD=asdf
xandikos:
build:
context: .
dockerfile: docker/xandikos/Dockerfile
ports:
- '5001:5001'


@@ -0,0 +1,13 @@
# Original file copyright 2017 Jelmer Vernooij
FROM ubuntu:latest
RUN apt-get update && apt-get -y install xandikos locales
EXPOSE 8080
RUN locale-gen en_US.UTF-8
ENV PYTHONIOENCODING=utf-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
CMD xandikos -d /tmp/dav -l 0.0.0.0 -p 5001 --autocreate


@@ -515,19 +515,10 @@ leads to an error.
of the normalized item content.
:param url: URL to the ``.ics`` file.
:param username: Username for authentication.
:param password: Password for authentication.
:param verify: Verify SSL certificate, default True. This can also be a
local path to a self-signed SSL certificate. See :ref:`ssl-tutorial`
for more information.
:param verify_fingerprint: Optional. SHA1 or MD5 fingerprint of the
expected server certificate. See :ref:`ssl-tutorial` for more
information.
:param auth: Optional. Either ``basic``, ``digest`` or ``guess``. The
default is preemptive Basic auth, sending credentials even if server
didn't request them. This saves from an additional roundtrip per
request. Consider setting ``guess`` if this causes issues with your
server.
:param username: Username for HTTP basic authentication.
:param password: Password for HTTP basic authentication.
:param useragent: Default ``vdirsyncer``.
:param verify_cert: Add one new root certificate file in PEM format. Useful
for servers with self-signed certificates.
:param auth_cert: Optional. Either a path to a certificate with a client
certificate and the key or a list of paths to the files with them.
:param useragent: Default ``vdirsyncer``.


@@ -10,9 +10,9 @@ OS/distro packages
The following packages are user-contributed and were up-to-date at the time of
writing:
- `ArchLinux (AUR) <https://aur.archlinux.org/packages/vdirsyncer>`_
- `ArchLinux <https://www.archlinux.org/packages/community/any/vdirsyncer/>`_
- `Ubuntu and Debian, x86_64-only
<https://packagecloud.io/pimutils/vdirsyncer/install>`_ (packages also exist
<https://packagecloud.io/pimutils/vdirsyncer>`_ (packages also exist
in the official repositories but may be out of date)
- `GNU Guix <https://www.gnu.org/software/guix/package-list.html#vdirsyncer>`_
- `OS X (homebrew) <http://braumeister.org/formula/vdirsyncer>`_
@@ -44,12 +44,17 @@ following things are installed:
- Python 3.4+ and pip.
- ``libxml`` and ``libxslt``
- ``zlib``
- Linux or OS X. **Windows is not supported, see :gh:`535`.**
- `Rust <https://www.rust-lang.org/>`_, the programming language, together with
its package manager ``cargo``.
- Linux or OS X. **Windows is not supported**, see :gh:`535`.
On Linux systems, using the distro's package manager is the best
way to do this, for example, using Ubuntu::
On Linux systems, using the distro's package manager is the best way to do
this, for example, using Ubuntu (last tried on Trusty)::
sudo apt-get install libxml2 libxslt1.1 zlib1g python
sudo apt-get install python3 python3-pip libffi-dev
Rust may need to be installed separately, as the packages in Ubuntu are usually
out-of-date. I recommend `rustup <https://rustup.rs/>`_ for that.
Then you have several options. The following text applies for most Python
software by the way.
@@ -59,11 +64,14 @@ The dirty, easy way
The easiest way to install vdirsyncer at this point would be to run::
pip install --user --ignore-installed vdirsyncer
pip3 install -v --user --ignore-installed vdirsyncer
- ``--user`` is to install without root rights (into your home directory)
- ``--ignore-installed`` is to work around Debian's potentially broken packages
(see :ref:`debian-urllib3`).
(see :ref:`debian-urllib3`). You can try to omit it if you run into other
problems related to certificates, for example.
Your executable is then in ``~/.local/bin/``.
This method has a major flaw though: Pip doesn't keep track of the files it
installs. Vdirsyncer's files would be located somewhere in
@@ -79,9 +87,9 @@ There is a way to install Python software without scattering stuff across
your filesystem: virtualenv_. There are a lot of resources on how to use it,
the simplest possible way would look something like::
virtualenv ~/vdirsyncer_env
~/vdirsyncer_env/bin/pip install vdirsyncer
alias vdirsyncer="~/vdirsyncer_env/bin/vdirsyncer
virtualenv --python python3 ~/vdirsyncer_env
~/vdirsyncer_env/bin/pip install -v vdirsyncer
alias vdirsyncer="$HOME/vdirsyncer_env/bin/vdirsyncer"
You'll have to put the last line into your ``.bashrc`` or ``.bash_profile``.
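For comparison with the `virtualenv` commands above, Python 3's standard-library `venv` module builds the same kind of isolated environment. A minimal sketch (the temporary path and `with_pip=False` are choices made here to keep the example self-contained, not part of the tutorial):

```python
import os
import tempfile
import venv

# Create an isolated environment, roughly what
# `virtualenv --python python3 ~/vdirsyncer_env` produces.
env_dir = os.path.join(tempfile.mkdtemp(), "vdirsyncer_env")
venv.EnvBuilder(with_pip=False).create(env_dir)  # with_pip=True would also bootstrap pip

# Executables (python, and after a `pip install`, any entry points)
# land in bin/ on POSIX, Scripts\ on Windows.
bindir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isdir(os.path.join(env_dir, bindir)))
```

Either way, the point is the same: the installed files stay under one directory that can be deleted in one go.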


@@ -32,15 +32,15 @@ Paste this into your vdirsyncer config::
[storage holidays_public]
type = "http"
# The URL to your iCalendar file.
url = ...
url = "..."
[storage holidays_private]
type = "caldav"
# The direct URL to your calendar.
url = ...
url = "..."
# The credentials to your CalDAV server
username = ...
password = ...
username = "..."
password = "..."
Then run ``vdirsyncer discover holidays`` and ``vdirsyncer sync holidays``, and
your previously created calendar should be filled with events.
@@ -66,7 +66,3 @@ For such purposes you can set the ``partial_sync`` parameter to ``ignore``::
partial_sync = ignore
See :ref:`the config docs <partial_sync_def>` for more information.
.. _nextCloud: https://nextcloud.com/
.. _Baikal: http://sabre.io/baikal/
.. _DAViCal: http://www.davical.org/


@@ -53,7 +53,8 @@ pairs of storages should actually be synchronized is defined in :ref:`pair
section <pair_config>`. This format is copied from OfflineIMAP, where storages
are called repositories and pairs are called accounts.
The following example synchronizes ownCloud's addressbooks to ``~/.contacts/``::
The following example synchronizes addressbooks from a :doc:`NextCloud
<tutorials/nextcloud>` to ``~/.contacts/``::
[pair my_contacts]
@@ -70,7 +71,7 @@ The following example synchronizes ownCloud's addressbooks to ``~/.contacts/``::
type = "carddav"
# We can simplify this URL here as well. In theory it shouldn't matter.
url = "https://owncloud.example.com/remote.php/carddav/"
url = "https://nextcloud.example.com/"
username = "bob"
password = "asdf"
@@ -162,13 +163,13 @@ let's switch to a different base example. This time we'll synchronize calendars:
[storage my_calendars_remote]
type = "caldav"
url = "https://owncloud.example.com/remote.php/caldav/"
url = "https://nextcloud.example.com/"
username = "bob"
password = "asdf"
Run ``vdirsyncer discover`` for discovery. Then you can use ``vdirsyncer
metasync`` to synchronize the ``color`` property between your local calendars
in ``~/.calendars/`` and your ownCloud. Locally the color is just represented
in ``~/.calendars/`` and your NextCloud. Locally the color is just represented
as a file called ``color`` within the calendar folder.
.. _collections_tutorial:


@ -1,10 +0,0 @@
======
Baikal
======
Vdirsyncer is continuously tested against the latest version of Baikal_.
- Baikal up to ``0.2.7`` also uses an old version of SabreDAV, with the same
issue as ownCloud, see :gh:`160`. This issue is fixed in later versions.
.. _Baikal: http://baikal-server.com/


@ -86,7 +86,7 @@ Crontab
At the end we create a crontab entry, so that vdirsyncer automatically
syncs our contacts every 30 minutes::
contab -e
crontab -e
At the end of that file enter this line::


@ -17,7 +17,7 @@ Exchange server you might get confronted with weird errors of all sorts
type = "caldav"
url = "http://localhost:1080/users/user@example.com/calendar/"
username = "user@example.com"
password = ...
password = "..."
- Older versions of DavMail handle URLs case-insensitively. See :gh:`144`.
- DavMail is handling malformed data on the Exchange server very poorly. In


@ -11,13 +11,13 @@ the settings to use::
[storage cal]
type = "caldav"
url = "https://caldav.messagingengine.com/"
username = ...
password = ...
username = "..."
password = "..."
[storage card]
type = "carddav"
url = "https://carddav.messagingengine.com/"
username = ...
password = ...
username = "..."
password = "..."
.. _FastMail: https://www.fastmail.com/


@ -11,14 +11,14 @@ Vdirsyncer is regularly tested against iCloud_.
[storage cal]
type = "caldav"
url = "https://caldav.icloud.com/"
username = ...
password = ...
username = "..."
password = "..."
[storage card]
type = "carddav"
url = "https://contacts.icloud.com/"
username = ...
password = ...
username = "..."
password = "..."
Problems:


@ -52,12 +52,10 @@ Servers
.. toctree::
:maxdepth: 1
baikal
davmail
fastmail
google
icloud
nextcloud
owncloud
radicale
xandikos


@ -1,14 +1,14 @@
=========
nextCloud
NextCloud
=========
Vdirsyncer is continuously tested against the latest version of nextCloud_::
Vdirsyncer is continuously tested against the latest version of NextCloud_::
[storage cal]
type = "caldav"
url = "https://nextcloud.example.com/"
username = ...
password = ...
username = "..."
password = "..."
[storage card]
type = "carddav"
@ -17,4 +17,4 @@ Vdirsyncer is continuously tested against the latest version of nextCloud_::
- WebCAL-subscriptions can't be discovered by vdirsyncer. See `this relevant
issue <https://github.com/nextcloud/calendar/issues/63>`_.
.. _nextCloud: https://nextcloud.com/
.. _NextCloud: https://nextcloud.com/


@ -1,26 +0,0 @@
.. _owncloud_setup:
========
ownCloud
========
Vdirsyncer is continuously tested against the latest version of ownCloud_::
[storage cal]
type = "caldav"
url = "https://example.com/remote.php/dav/"
username = ...
password = ...
[storage card]
type = "carddav"
url = "https://example.com/remote.php/dav/"
username = ...
password = ...
- *Versions older than 7.0.0:* ownCloud uses SabreDAV, which had problems
detecting collisions and race-conditions. The problems were reported and are
fixed in SabreDAV's repo, and the corresponding fix is also in ownCloud since
7.0.0. See :gh:`16` for more information.
.. _ownCloud: https://owncloud.org/


@ -10,4 +10,61 @@ todoman_ is a CLI task manager supporting :doc:`vdir </vdir>`. Its interface is
similar to that of Taskwarrior or the todo.txt CLI app. You can use
:storage:`filesystem` with it.
.. _todoman: https://hugo.barrera.io/journal/2015/03/30/introducing-todoman/
.. _todoman: http://todoman.readthedocs.io/
Setting up vdirsyncer
=====================
For this tutorial we will use NextCloud.
Assuming a config like this::
[general]
status_path = "~/.vdirsyncer/status/"
[pair calendars]
conflict_resolution = "b wins"
a = "calendars_local"
b = "calendars_dav"
collections = ["from b"]
metadata = ["color", "displayname"]
[storage calendars_local]
type = "filesystem"
path = "~/.calendars/"
fileext = ".ics"
[storage calendars_dav]
type = "caldav"
url = "https://nextcloud.example.net/"
username = "..."
password = "..."
``vdirsyncer sync`` will then synchronize the calendars of your NextCloud_
instance to subfolders of ``~/.calendars/``.
.. _NextCloud: https://nextcloud.com/
Setting up todoman
==================
Write this to ``~/.config/todoman/todoman.conf``::
[main]
path = ~/.calendars/*
The glob_ pattern in ``path`` will match all subfolders in ``~/.calendars/``,
which are exactly the tasklists we want. Now you can use ``todoman`` as
described in its documentation_ and run ``vdirsyncer sync`` to synchronize
the changes to NextCloud.
.. _glob: https://en.wikipedia.org/wiki/Glob_(programming)
.. _documentation: http://todoman.readthedocs.io/
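As a sanity check, the matching behaviour of such a glob pattern can be
reproduced with Python's standard library (the directory names below are
made up for illustration):

```python
import glob
import os
import tempfile

# Stand-in for ~/.calendars/ -- a temporary directory with two tasklists.
root = tempfile.mkdtemp()
for name in ("personal", "work"):
    os.makedirs(os.path.join(root, name))

# Equivalent of todoman's ``path = ~/.calendars/*`` after tilde expansion:
tasklists = sorted(glob.glob(os.path.join(root, "*")))
print([os.path.basename(p) for p in tasklists])  # prints ['personal', 'work']
```

Each match is one tasklist directory, so every new collection synchronized
by vdirsyncer is picked up by todoman automatically.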
Other clients
=============
The following client applications also synchronize over CalDAV:
- The Tasks-app found on iOS
- `OpenTasks for Android <https://github.com/dmfs/opentasks>`_
- The `Tasks <https://apps.nextcloud.com/apps/tasks>`_-app for NextCloud's web UI


@ -11,13 +11,13 @@ point vdirsyncer against the root of Xandikos like this::
[storage cal]
type = "caldav"
url = "https://xandikos.example.com/"
username = ...
password = ...
username = "..."
password = "..."
[storage card]
type = "carddav"
url = "https://xandikos.example.com/"
username = ...
password = ...
username = "..."
password = "..."
.. _Xandikos: https://github.com/jelmer/xandikos

rust/.gitignore (new file)

@ -0,0 +1 @@
target/

rust/Cargo.lock (generated file; diff suppressed because it is too large)

rust/Cargo.toml (new file)

@ -0,0 +1,23 @@
[package]
name = "vdirsyncer-rustext"
version = "0.1.0"
authors = ["Markus Unterwaditzer <markus@unterwaditzer.net>"]
[lib]
name = "vdirsyncer_rustext"
crate-type = ["cdylib"]
[dependencies]
vobject = "0.4.2"
sha2 = "0.7.0"
failure = "0.1"
shippai = "0.2.3"
atomicwrites = "0.2.0"
uuid = { version = "0.6", features = ["v4"] }
libc = "0.2"
log = "0.4"
reqwest = "0.8"
quick-xml = "0.12.0"
url = "1.7"
chrono = "0.4.0"
env_logger = "0.5"

rust/cbindgen.toml (new file)

@ -0,0 +1,4 @@
language = "C"
[parse]
expand = ["vdirsyncer-rustext"]

rust/src/errors.rs (new file)

@ -0,0 +1,59 @@
use failure;
pub type Fallible<T> = Result<T, failure::Error>;
shippai_export!();
#[derive(Debug, Fail, Shippai)]
pub enum Error {
#[fail(display = "The item cannot be parsed")]
ItemUnparseable,
#[fail(display = "Unexpected version {}, expected {}", found, expected)]
UnexpectedVobjectVersion { found: String, expected: String },
#[fail(display = "Unexpected component {}, expected {}", found, expected)]
UnexpectedVobject { found: String, expected: String },
#[fail(display = "Item '{}' not found", href)]
ItemNotFound { href: String },
#[fail(display = "The href '{}' is already taken", href)]
ItemAlreadyExisting { href: String },
#[fail(
display = "A wrong etag for '{}' was provided. Another client's requests might \
conflict with vdirsyncer.",
href
)]
WrongEtag { href: String },
#[fail(
display = "The mtime for '{}' has unexpectedly changed. Please close other programs\
accessing this file.",
filepath
)]
MtimeMismatch { filepath: String },
#[fail(
display = "The item '{}' has been rejected by the server because the vobject type was unexpected",
href
)]
UnsupportedVobject { href: String },
#[fail(display = "This storage is read-only.")]
ReadOnly,
}
pub unsafe fn export_result<V>(
res: Result<V, failure::Error>,
c_err: *mut *mut ShippaiError,
) -> Option<V> {
match res {
Ok(v) => Some(v),
Err(e) => {
*c_err = Box::into_raw(Box::new(e.into()));
None
}
}
}

rust/src/item.rs (new file)

@ -0,0 +1,256 @@
use vobject;
use sha2::{Digest, Sha256};
use std::fmt::Write;
use errors::*;
#[derive(Clone)]
pub enum Item {
Parsed(vobject::Component),
Unparseable(String), // FIXME: maybe use https://crates.io/crates/terminated
}
impl Item {
pub fn from_raw(raw: String) -> Self {
match vobject::parse_component(&raw) {
Ok(x) => Item::Parsed(x),
// Don't chain vobject error here because it cannot be stored/cloned FIXME
_ => Item::Unparseable(raw),
}
}
pub fn from_component(component: vobject::Component) -> Self {
Item::Parsed(component)
}
/// Global identifier of the item, across storages, doesn't change after a modification of the
/// item.
pub fn get_uid(&self) -> Option<String> {
// FIXME: Cache
if let Item::Parsed(ref c) = *self {
let mut stack: Vec<&vobject::Component> = vec![c];
while let Some(vobj) = stack.pop() {
if let Some(prop) = vobj.get_only("UID") {
return Some(prop.value_as_string());
}
stack.extend(vobj.subcomponents.iter());
}
}
None
}
pub fn with_uid(&self, uid: &str) -> Fallible<Self> {
if let Item::Parsed(ref component) = *self {
let mut new_component = component.clone();
change_uid(&mut new_component, uid);
Ok(Item::from_raw(vobject::write_component(&new_component)))
} else {
Err(Error::ItemUnparseable.into())
}
}
/// Raw unvalidated content of the item
pub fn get_raw(&self) -> String {
match *self {
Item::Parsed(ref component) => vobject::write_component(component),
Item::Unparseable(ref x) => x.to_owned(),
}
}
/// Component of item if parseable
pub fn get_component(&self) -> Fallible<&vobject::Component> {
match *self {
Item::Parsed(ref component) => Ok(component),
_ => Err(Error::ItemUnparseable.into()),
}
}
/// Component of item if parseable
pub fn into_component(self) -> Fallible<vobject::Component> {
match self {
Item::Parsed(component) => Ok(component),
_ => Err(Error::ItemUnparseable.into()),
}
}
/// Used for etags
pub fn get_hash(&self) -> Fallible<String> {
// FIXME: cache
if let Item::Parsed(ref component) = *self {
Ok(hash_component(component))
} else {
Err(Error::ItemUnparseable.into())
}
}
/// Used for generating hrefs and matching up items during synchronization. This is either the
/// UID or the hash of the item's content.
pub fn get_ident(&self) -> Fallible<String> {
if let Some(x) = self.get_uid() {
return Ok(x);
}
// We hash the item instead of directly using its raw content, because
// 1. The raw content might be really large, e.g. when it's a contact
// with a picture, which bloats the status file.
//
// 2. The status file would contain really sensitive information.
self.get_hash()
}
pub fn is_parseable(&self) -> bool {
if let Item::Parsed(_) = *self {
true
} else {
false
}
}
}
fn change_uid(c: &mut vobject::Component, uid: &str) {
let mut stack = vec![c];
while let Some(component) = stack.pop() {
match component.name.as_ref() {
"VEVENT" | "VTODO" | "VJOURNAL" | "VCARD" => {
if !uid.is_empty() {
component.set(vobject::Property::new("UID", uid));
} else {
component.remove("UID");
}
}
_ => (),
}
stack.extend(component.subcomponents.iter_mut());
}
}
fn hash_component(c: &vobject::Component) -> String {
let mut new_c = c.clone();
{
let mut stack = vec![&mut new_c];
while let Some(component) = stack.pop() {
// PRODID is changed by radicale for some reason after upload
component.remove("PRODID");
// Sometimes METHOD:PUBLISH is added by WebCAL providers, for us it doesn't make a difference
component.remove("METHOD");
// X-RADICALE-NAME is used by radicale, because hrefs don't really exist in their filesystem backend
component.remove("X-RADICALE-NAME");
// These are from the VCARD specification and are supposed to change when
// the item does -- however, we can determine that ourselves
component.remove("REV");
component.remove("LAST-MODIFIED");
component.remove("CREATED");
// Some iCalendar HTTP calendars generate the DTSTAMP at request time, so
// this property always changes when the rest of the item didn't. Some do
// the same with the UID.
//
// - Google's read-only calendar links
// - http://www.feiertage-oesterreich.at/
component.remove("DTSTAMP");
component.remove("UID");
if component.name == "VCALENDAR" {
// CALSCALE's default value is gregorian
let calscale = component.get_only("CALSCALE").map(|x| x.value_as_string());
if let Some(x) = calscale {
if x == "GREGORIAN" {
component.remove("CALSCALE");
}
}
// Apparently this is set by Horde?
// https://github.com/pimutils/vdirsyncer/issues/318
// Also Google sets those properties
component.remove("X-WR-CALNAME");
component.remove("X-WR-TIMEZONE");
component.subcomponents.retain(|c| c.name != "VTIMEZONE");
}
stack.extend(component.subcomponents.iter_mut());
}
}
// FIXME: Possible optimization: Stream component to hasher instead of allocating new string
let raw = vobject::write_component(&new_c);
let mut lines: Vec<_> = raw.lines().collect();
lines.sort();
let mut hasher = Sha256::default();
hasher.input(lines.join("\r\n").as_bytes());
let digest = hasher.result();
let mut rv = String::new();
for &byte in digest.as_ref() {
// Zero-pad each byte so e.g. 0x0a renders as "0a" rather than "a".
write!(&mut rv, "{:02x}", byte).unwrap();
}
rv
}
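The normalize-then-hash idea implemented by ``hash_component`` above can be
sketched in Python. This is a deliberately simplified illustration — it
treats the item as plain text lines and skips all the vobject- and
VCALENDAR-specific handling — not the actual algorithm:

```python
import hashlib

# Simplified subset of the properties the Rust code strips before hashing;
# servers commonly rewrite these, so they must not affect the hash.
IGNORED = {"PRODID", "DTSTAMP", "UID", "REV", "LAST-MODIFIED", "CREATED", "METHOD"}

def hash_item(raw):
    kept = []
    for line in raw.splitlines():
        # The property name is everything before the first ':' or ';'.
        name = line.split(":", 1)[0].split(";", 1)[0].upper()
        if name not in IGNORED:
            kept.append(line)
    kept.sort()  # property order is not significant
    return hashlib.sha256("\r\n".join(kept).encode("utf-8")).hexdigest()
```

Two items that differ only in a volatile property (say, ``PRODID``) or in
property order hash identically, which is exactly what makes the hash usable
for change detection during synchronization.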
pub mod exports {
use super::Item;
use errors::*;
use std::ffi::{CStr, CString};
use std::os::raw::c_char;
use std::ptr;
const EMPTY_STRING: *const c_char = b"\0" as *const u8 as *const c_char;
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_get_uid(c: *mut Item) -> *const c_char {
match (*c).get_uid() {
Some(x) => CString::new(x).unwrap().into_raw(),
None => EMPTY_STRING,
}
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_get_raw(c: *mut Item) -> *const c_char {
CString::new((*c).get_raw()).unwrap().into_raw()
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_item_from_raw(s: *const c_char) -> *mut Item {
let cstring = CStr::from_ptr(s);
Box::into_raw(Box::new(Item::from_raw(
cstring.to_str().unwrap().to_owned(),
)))
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_free_item(c: *mut Item) {
let _: Box<Item> = Box::from_raw(c);
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_with_uid(
c: *mut Item,
uid: *const c_char,
err: *mut *mut ShippaiError,
) -> *mut Item {
let uid_cstring = CStr::from_ptr(uid);
if let Some(x) = export_result((*c).with_uid(uid_cstring.to_str().unwrap()), err) {
Box::into_raw(Box::new(x))
} else {
ptr::null_mut()
}
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_get_hash(
c: *mut Item,
err: *mut *mut ShippaiError,
) -> *const c_char {
if let Some(x) = export_result((*c).get_hash(), err) {
CString::new(x).unwrap().into_raw()
} else {
ptr::null_mut()
}
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_item_is_parseable(c: *mut Item) -> bool {
(*c).is_parseable()
}
}

rust/src/lib.rs (new file)

@ -0,0 +1,40 @@
#![cfg_attr(feature = "cargo-clippy", allow(single_match))]
extern crate atomicwrites;
#[macro_use]
extern crate failure;
#[macro_use]
extern crate shippai;
extern crate libc;
extern crate uuid;
extern crate vobject;
#[macro_use]
extern crate log;
extern crate chrono;
extern crate env_logger;
extern crate quick_xml;
extern crate reqwest;
extern crate sha2;
extern crate url;
pub mod errors;
mod item;
mod storage;
pub mod exports {
use std::ffi::CStr;
use std::os::raw::c_char;
pub use super::item::exports::*;
pub use super::storage::exports::*;
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_free_str(s: *const c_char) {
CStr::from_ptr(s);
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_init_logger() {
::env_logger::init();
}
}

rust/src/storage/dav/mod.rs (new file)

@ -0,0 +1,465 @@
mod parser;
use chrono;
use std::collections::BTreeSet;
use std::io::{BufReader, Read};
use std::str::FromStr;
use quick_xml;
use reqwest;
use reqwest::header::{ContentType, ETag, EntityTag, IfMatch, IfNoneMatch};
use url::Url;
use super::http::{handle_http_error, send_request, HttpConfig};
use super::utils::generate_href;
use super::Storage;
use errors::*;
use item::Item;
#[inline]
fn propfind() -> reqwest::Method {
reqwest::Method::Extension("PROPFIND".to_owned())
}
#[inline]
fn report() -> reqwest::Method {
reqwest::Method::Extension("REPORT".to_owned())
}
static CALDAV_DT_FORMAT: &'static str = "%Y%m%dT%H%M%SZ";
struct DavStorage {
pub url: String,
pub http_config: HttpConfig,
pub http: Option<reqwest::Client>,
}
impl DavStorage {
pub fn new(url: &str, http_config: HttpConfig) -> Self {
DavStorage {
url: format!("{}/", url.trim_right_matches('/')),
http_config,
http: None,
}
}
}
impl DavStorage {
#[inline]
pub fn get_http(&mut self) -> Fallible<reqwest::Client> {
if let Some(ref http) = self.http {
return Ok(http.clone());
}
let client = self.http_config.clone().into_connection()?.build()?;
self.http = Some(client.clone());
Ok(client)
}
#[inline]
pub fn send_request(&mut self, request: reqwest::Request) -> Fallible<reqwest::Response> {
let url = request.url().to_string();
handle_http_error(&url, send_request(&self.get_http()?, request)?)
}
pub fn get(&mut self, href: &str) -> Fallible<(Item, String)> {
let base = Url::parse(&self.url)?;
let url = base.join(href)?;
if href != url.path() {
Err(Error::ItemNotFound {
href: href.to_owned(),
})?;
}
let request = self.get_http()?.get(url).build()?;
let mut response = self.send_request(request)?;
let mut s = String::new();
response.read_to_string(&mut s)?;
let etag = match response.headers().get::<ETag>() {
Some(x) => format!("\"{}\"", x.tag()),
None => Err(DavError::EtagNotFound)?,
};
Ok((Item::from_raw(s), etag))
}
pub fn list<'a>(
&'a mut self,
mimetype_contains: &'a str,
) -> Fallible<Box<Iterator<Item = (String, String)> + 'a>> {
let mut headers = reqwest::header::Headers::new();
headers.set(ContentType::xml());
headers.set_raw("Depth", "1");
let request = self
.get_http()?
.request(propfind(), &self.url)
.headers(headers)
.body(
r#"<?xml version="1.0" encoding="utf-8" ?>
<D:propfind xmlns:D="DAV:">
<D:prop>
<D:resourcetype/>
<D:getcontenttype/>
<D:getetag/>
</D:prop>
</D:propfind>"#,
)
.build()?;
let response = self.send_request(request)?;
self.parse_prop_response(response, mimetype_contains)
}
fn parse_prop_response<'a>(
&'a mut self,
response: reqwest::Response,
mimetype_contains: &'a str,
) -> Fallible<Box<Iterator<Item = (String, String)> + 'a>> {
let buf_reader = BufReader::new(response);
let xml_reader = quick_xml::Reader::from_reader(buf_reader);
let mut parser = parser::ListingParser::new(xml_reader);
let base = Url::parse(&self.url)?;
let mut seen_hrefs = BTreeSet::new();
Ok(Box::new(
parser
.get_all_responses()?
.into_iter()
.filter_map(move |response| {
if response.has_collection_tag {
return None;
}
if !response.mimetype?.contains(mimetype_contains) {
return None;
}
let href = base.join(&response.href?).ok()?.path().to_owned();
if seen_hrefs.contains(&href) {
return None;
}
seen_hrefs.insert(href.clone());
Some((href, response.etag?))
}),
))
}
fn put(
&mut self,
href: &str,
item: &Item,
mimetype: &str,
etag: Option<&str>,
) -> Fallible<(String, String)> {
let base = Url::parse(&self.url)?;
let url = base.join(href)?;
let mut request = self.get_http()?.request(reqwest::Method::Put, url);
request.header(ContentType(reqwest::mime::Mime::from_str(mimetype)?));
if let Some(etag) = etag {
request.header(IfMatch::Items(vec![EntityTag::new(
false,
etag.trim_matches('"').to_owned(),
)]));
} else {
request.header(IfNoneMatch::Any);
}
let raw = item.get_raw();
let response = send_request(&self.get_http()?, request.body(raw).build()?)?;
match (etag, response.status()) {
(Some(_), reqwest::StatusCode::PreconditionFailed) => Err(Error::WrongEtag {
href: href.to_owned(),
})?,
(None, reqwest::StatusCode::PreconditionFailed) => Err(Error::ItemAlreadyExisting {
href: href.to_owned(),
})?,
_ => (),
}
let response = assert_multistatus_success(handle_http_error(href, response)?)?;
// The server may not return an etag under certain conditions:
//
// An origin server MUST NOT send a validator header field (Section
// 7.2), such as an ETag or Last-Modified field, in a successful
// response to PUT unless the request's representation data was saved
// without any transformation applied to the body (i.e., the
// resource's new representation data is identical to the
// representation data received in the PUT request) and the validator
// field value reflects the new representation.
//
// -- https://tools.ietf.org/html/rfc7231#section-4.3.4
//
// In such cases we return a constant etag. The next synchronization
// will then detect an etag change and will download the new item.
let etag = match response.headers().get::<ETag>() {
Some(x) => format!("\"{}\"", x.tag()),
None => "".to_owned(),
};
Ok((response.url().path().to_owned(), etag))
}
fn delete(&mut self, href: &str, etag: &str) -> Fallible<()> {
let base = Url::parse(&self.url)?;
let url = base.join(href)?;
let request = self
.get_http()?
.request(reqwest::Method::Delete, url)
.header(IfMatch::Items(vec![EntityTag::new(
false,
etag.trim_matches('"').to_owned(),
)]))
.build()?;
let response = send_request(&self.get_http()?, request)?;
if response.status() == reqwest::StatusCode::PreconditionFailed {
Err(Error::WrongEtag {
href: href.to_owned(),
})?;
}
assert_multistatus_success(handle_http_error(href, response)?)?;
Ok(())
}
}
fn assert_multistatus_success(r: reqwest::Response) -> Fallible<reqwest::Response> {
// TODO
Ok(r)
}
struct CarddavStorage {
inner: DavStorage,
}
impl CarddavStorage {
pub fn new(url: &str, http_config: HttpConfig) -> Self {
CarddavStorage {
inner: DavStorage::new(url, http_config),
}
}
}
impl Storage for CarddavStorage {
fn list<'a>(&'a mut self) -> Fallible<Box<Iterator<Item = (String, String)> + 'a>> {
self.inner.list("vcard")
}
fn get(&mut self, href: &str) -> Fallible<(Item, String)> {
self.inner.get(href)
}
fn upload(&mut self, item: Item) -> Fallible<(String, String)> {
let href = format!("{}.vcf", generate_href(&item.get_ident()?));
self.inner.put(&href, &item, "text/vcard", None)
}
fn update(&mut self, href: &str, item: Item, etag: &str) -> Fallible<String> {
self.inner
.put(&href, &item, "text/vcard", Some(etag))
.map(|x| x.1)
}
fn delete(&mut self, href: &str, etag: &str) -> Fallible<()> {
self.inner.delete(href, etag)
}
}
struct CaldavStorage {
inner: DavStorage,
start_date: Option<chrono::DateTime<chrono::Utc>>, // FIXME: store as Option<(start, end)>
end_date: Option<chrono::DateTime<chrono::Utc>>,
item_types: Vec<&'static str>,
}
impl CaldavStorage {
pub fn new(
url: &str,
http_config: HttpConfig,
start_date: Option<chrono::DateTime<chrono::Utc>>,
end_date: Option<chrono::DateTime<chrono::Utc>>,
item_types: Vec<&'static str>,
) -> Self {
CaldavStorage {
inner: DavStorage::new(url, http_config),
start_date,
end_date,
item_types,
}
}
#[inline]
fn get_caldav_filters(&self) -> Vec<String> {
let mut item_types = self.item_types.clone();
let mut timefilter = "".to_owned();
if let (Some(start), Some(end)) = (self.start_date, self.end_date) {
timefilter = format!(
"<C:time-range start=\"{}\" end=\"{}\" />",
start.format(CALDAV_DT_FORMAT),
end.format(CALDAV_DT_FORMAT)
);
if item_types.is_empty() {
item_types.push("VTODO");
item_types.push("VEVENT");
}
}
item_types
.into_iter()
.map(|item_type| {
format!(
"<C:comp-filter name=\"VCALENDAR\">\
<C:comp-filter name=\"{}\">{}</C:comp-filter>\
</C:comp-filter>",
item_type, timefilter
)
})
.collect()
}
}
impl Storage for CaldavStorage {
fn list<'a>(&'a mut self) -> Fallible<Box<Iterator<Item = (String, String)> + 'a>> {
let filters = self.get_caldav_filters();
if filters.is_empty() {
// If we don't have any filters (which is the default), taking the
// risk of sending a calendar-query is not necessary. There doesn't
// seem to be a widely-usable way to send calendar-queries with the
// same semantics as a PROPFIND request... so why not use PROPFIND
// instead?
//
// See https://github.com/dmfs/tasks/issues/118 for backstory.
self.inner.list("text/calendar")
} else {
let mut rv = vec![];
let mut headers = reqwest::header::Headers::new();
headers.set(ContentType::xml());
headers.set_raw("Depth", "1");
for filter in filters {
let data =
format!(
"<?xml version=\"1.0\" encoding=\"utf-8\" ?>\
<C:calendar-query xmlns:D=\"DAV:\" xmlns:C=\"urn:ietf:params:xml:ns:caldav\">\
<D:prop><D:getcontenttype/><D:getetag/></D:prop>\
<C:filter>{}</C:filter>\
</C:calendar-query>", filter);
let request = self
.inner
.get_http()?
.request(report(), &self.inner.url)
.headers(headers.clone())
.body(data)
.build()?;
let response = self.inner.send_request(request)?;
rv.extend(self.inner.parse_prop_response(response, "text/calendar")?);
}
Ok(Box::new(rv.into_iter()))
}
}
fn get(&mut self, href: &str) -> Fallible<(Item, String)> {
self.inner.get(href)
}
fn upload(&mut self, item: Item) -> Fallible<(String, String)> {
let href = format!("{}.ics", generate_href(&item.get_ident()?));
self.inner.put(&href, &item, "text/calendar", None)
}
fn update(&mut self, href: &str, item: Item, etag: &str) -> Fallible<String> {
self.inner
.put(href, &item, "text/calendar", Some(etag))
.map(|x| x.1)
}
fn delete(&mut self, href: &str, etag: &str) -> Fallible<()> {
self.inner.delete(href, etag)
}
}
pub mod exports {
use super::super::http::init_http_config;
use super::*;
#[derive(Debug, Fail, Shippai)]
pub enum DavError {
#[fail(display = "Server did not return etag.")]
EtagNotFound,
}
use std::ffi::CStr;
use std::os::raw::c_char;
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_init_carddav(
url: *const c_char,
username: *const c_char,
password: *const c_char,
useragent: *const c_char,
verify_cert: *const c_char,
auth_cert: *const c_char,
) -> *mut Box<Storage> {
let url = CStr::from_ptr(url);
Box::into_raw(Box::new(Box::new(CarddavStorage::new(
url.to_str().unwrap(),
init_http_config(username, password, useragent, verify_cert, auth_cert),
))))
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_init_caldav(
url: *const c_char,
username: *const c_char,
password: *const c_char,
useragent: *const c_char,
verify_cert: *const c_char,
auth_cert: *const c_char,
start_date: i64,
end_date: i64,
include_vevent: bool,
include_vjournal: bool,
include_vtodo: bool,
) -> *mut Box<Storage> {
let url = CStr::from_ptr(url);
let parse_date = |i| {
if i > 0 {
Some(chrono::DateTime::from_utc(
chrono::NaiveDateTime::from_timestamp(i, 0),
chrono::Utc,
))
} else {
None
}
};
let mut item_types = vec![];
if include_vevent {
item_types.push("VEVENT");
}
if include_vjournal {
item_types.push("VJOURNAL");
}
if include_vtodo {
item_types.push("VTODO");
}
Box::into_raw(Box::new(Box::new(CaldavStorage::new(
url.to_str().unwrap(),
init_http_config(username, password, useragent, verify_cert, auth_cert),
parse_date(start_date),
parse_date(end_date),
item_types,
))))
}
}
use exports::DavError;


@ -0,0 +1,110 @@
use quick_xml;
use quick_xml::events::Event;
use errors::*;
use std::io::BufRead;
#[derive(Debug)]
pub struct Response {
pub href: Option<String>,
pub etag: Option<String>,
pub mimetype: Option<String>,
pub has_collection_tag: bool,
}
impl Response {
pub fn new() -> Self {
Response {
href: None,
etag: None,
has_collection_tag: false,
mimetype: None,
}
}
}
pub struct ListingParser<T: BufRead> {
reader: quick_xml::Reader<T>,
ns_buf: Vec<u8>,
}
impl<T: BufRead> ListingParser<T> {
pub fn new(mut reader: quick_xml::Reader<T>) -> Self {
reader.expand_empty_elements(true);
reader.trim_text(true);
reader.check_end_names(true);
reader.check_comments(false);
ListingParser {
reader,
ns_buf: vec![],
}
}
fn next_response(&mut self) -> Fallible<Option<Response>> {
let mut buf = vec![];
#[derive(Debug, Clone, Copy)]
enum State {
Outer,
Response,
Href,
ContentType,
Etag,
};
let mut state = State::Outer;
let mut current_response = Response::new();
loop {
match self
.reader
.read_namespaced_event(&mut buf, &mut self.ns_buf)?
{
(ns, Event::Start(ref e)) => {
match (state, ns, e.local_name()) {
(State::Outer, Some(b"DAV:"), b"response") => state = State::Response,
(State::Response, Some(b"DAV:"), b"href") => state = State::Href,
(State::Response, Some(b"DAV:"), b"getetag") => state = State::Etag,
(State::Response, Some(b"DAV:"), b"getcontenttype") => {
state = State::ContentType
}
(State::Response, Some(b"DAV:"), b"collection") => {
current_response.has_collection_tag = true;
}
_ => (),
}
debug!("State: {:?}", state);
}
(_, Event::Text(e)) => {
let txt = e.unescape_and_decode(&self.reader)?;
match state {
State::Href => current_response.href = Some(txt),
State::ContentType => current_response.mimetype = Some(txt),
State::Etag => current_response.etag = Some(txt),
_ => continue,
}
state = State::Response;
}
(ns, Event::End(e)) => match (state, ns, e.local_name()) {
(State::Response, Some(b"DAV:"), b"response") => {
return Ok(Some(current_response))
}
_ => (),
},
(_, Event::Eof) => return Ok(None),
_ => (),
}
}
}
pub fn get_all_responses(&mut self) -> Fallible<Vec<Response>> {
let mut rv = vec![];
while let Some(x) = self.next_response()? {
rv.push(x);
}
Ok(rv)
}
}
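For reference, a hand-written, abridged example of the kind of multistatus
response this parser consumes (hrefs and etag values are made up)::

    <?xml version="1.0" encoding="utf-8"?>
    <D:multistatus xmlns:D="DAV:">
      <D:response>
        <D:href>/calendars/user/personal/event1.ics</D:href>
        <D:propstat>
          <D:prop>
            <D:resourcetype/>
            <D:getcontenttype>text/calendar; charset=utf-8</D:getcontenttype>
            <D:getetag>"deadbeef"</D:getetag>
          </D:prop>
          <D:status>HTTP/1.1 200 OK</D:status>
        </D:propstat>
      </D:response>
    </D:multistatus>

The parser only tracks the ``DAV:`` local names it cares about, so the
intermediate ``propstat``/``prop`` wrappers are simply passed through without
a state change.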

rust/src/storage/exports.rs (new file)

@ -0,0 +1,196 @@
pub use super::dav::exports::*;
pub use super::filesystem::exports::*;
pub use super::http::exports::*;
pub use super::singlefile::exports::*;
use super::Storage;
use errors::*;
use item::Item;
use std::ffi::{CStr, CString};
use std::os::raw::c_char;
use std::ptr;
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_storage_free(storage: *mut Box<Storage>) {
let _: Box<Box<Storage>> = Box::from_raw(storage);
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_storage_list(
storage: *mut Box<Storage>,
err: *mut *mut ShippaiError,
) -> *mut VdirsyncerStorageListing {
if let Some(x) = export_result((**storage).list(), err) {
Box::into_raw(Box::new(VdirsyncerStorageListing {
iterator: x,
href: None,
etag: None,
}))
} else {
ptr::null_mut()
}
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_storage_get(
storage: *mut Box<Storage>,
c_href: *const c_char,
err: *mut *mut ShippaiError,
) -> *mut VdirsyncerStorageGetResult {
let href = CStr::from_ptr(c_href);
// `get` returns (item, etag); name the second element accordingly.
if let Some((item, etag)) = export_result((**storage).get(href.to_str().unwrap()), err) {
Box::into_raw(Box::new(VdirsyncerStorageGetResult {
item: Box::into_raw(Box::new(item)),
etag: CString::new(etag).unwrap().into_raw(),
}))
} else {
ptr::null_mut()
}
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_storage_upload(
storage: *mut Box<Storage>,
item: *mut Item,
err: *mut *mut ShippaiError,
) -> *mut VdirsyncerStorageUploadResult {
if let Some((href, etag)) = export_result((**storage).upload((*item).clone()), err) {
Box::into_raw(Box::new(VdirsyncerStorageUploadResult {
href: CString::new(href).unwrap().into_raw(),
etag: CString::new(etag).unwrap().into_raw(),
}))
} else {
ptr::null_mut()
}
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_storage_update(
storage: *mut Box<Storage>,
c_href: *const c_char,
item: *mut Item,
c_etag: *const c_char,
err: *mut *mut ShippaiError,
) -> *const c_char {
let href = CStr::from_ptr(c_href);
let etag = CStr::from_ptr(c_etag);
let res = (**storage).update(
href.to_str().unwrap(),
(*item).clone(),
etag.to_str().unwrap(),
);
if let Some(etag) = export_result(res, err) {
CString::new(etag).unwrap().into_raw()
} else {
ptr::null_mut()
}
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_storage_delete(
storage: *mut Box<Storage>,
c_href: *const c_char,
c_etag: *const c_char,
err: *mut *mut ShippaiError,
) {
let href = CStr::from_ptr(c_href);
let etag = CStr::from_ptr(c_etag);
let res = (**storage).delete(href.to_str().unwrap(), etag.to_str().unwrap());
let _ = export_result(res, err);
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_storage_buffered(storage: *mut Box<Storage>) {
(**storage).buffered();
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_storage_flush(
storage: *mut Box<Storage>,
err: *mut *mut ShippaiError,
) {
let _ = export_result((**storage).flush(), err);
}
pub struct VdirsyncerStorageListing {
iterator: Box<Iterator<Item = (String, String)>>,
href: Option<String>,
etag: Option<String>,
}
impl VdirsyncerStorageListing {
pub fn advance(&mut self) -> bool {
match self.iterator.next() {
Some((href, etag)) => {
self.href = Some(href);
self.etag = Some(etag);
true
}
None => {
self.href = None;
self.etag = None;
false
}
}
}
pub fn get_href(&mut self) -> Option<String> {
self.href.take()
}
pub fn get_etag(&mut self) -> Option<String> {
self.etag.take()
}
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_free_storage_listing(listing: *mut VdirsyncerStorageListing) {
let _: Box<VdirsyncerStorageListing> = Box::from_raw(listing);
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_advance_storage_listing(
listing: *mut VdirsyncerStorageListing,
) -> bool {
(*listing).advance()
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_storage_listing_get_href(
listing: *mut VdirsyncerStorageListing,
) -> *const c_char {
CString::new((*listing).get_href().unwrap())
.unwrap()
.into_raw()
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_storage_listing_get_etag(
listing: *mut VdirsyncerStorageListing,
) -> *const c_char {
CString::new((*listing).get_etag().unwrap())
.unwrap()
.into_raw()
}
#[repr(C)]
pub struct VdirsyncerStorageGetResult {
pub item: *mut Item,
pub etag: *const c_char,
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_free_storage_get_result(res: *mut VdirsyncerStorageGetResult) {
let _: Box<VdirsyncerStorageGetResult> = Box::from_raw(res);
}
#[repr(C)]
pub struct VdirsyncerStorageUploadResult {
pub href: *const c_char,
pub etag: *const c_char,
}
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_free_storage_upload_result(
res: *mut VdirsyncerStorageUploadResult,
) {
let _: Box<VdirsyncerStorageUploadResult> = Box::from_raw(res);
}

rust/src/storage/filesystem.rs
@@ -0,0 +1,220 @@
use super::Storage;
use errors::*;
use failure;
use libc;
use std::fs;
use std::io;
use std::io::{Read, Write};
use std::os::unix::fs::MetadataExt;
use std::path::{Path, PathBuf};
use std::process::Command;
use super::utils;
use item::Item;
use atomicwrites::{AllowOverwrite, AtomicFile, DisallowOverwrite};
pub struct FilesystemStorage {
path: PathBuf,
fileext: String,
post_hook: Option<String>,
}
impl FilesystemStorage {
pub fn new<P: AsRef<Path>>(path: P, fileext: &str, post_hook: Option<String>) -> Self {
FilesystemStorage {
path: path.as_ref().to_owned(),
fileext: fileext.into(),
post_hook,
}
}
fn get_href(&self, ident: Option<&str>) -> String {
let href_base = match ident {
Some(x) => utils::generate_href(x),
None => utils::random_href(),
};
format!("{}{}", href_base, self.fileext)
}
fn get_filepath(&self, href: &str) -> PathBuf {
self.path.join(href)
}
fn run_post_hook<S: AsRef<::std::ffi::OsStr>>(&self, fpath: S) {
if let Some(ref cmd) = self.post_hook {
let status = match Command::new(cmd).arg(fpath).status() {
Ok(x) => x,
Err(e) => {
warn!("Failed to run external hook: {}", e);
return;
}
};
if !status.success() {
if let Some(code) = status.code() {
warn!("External hook exited with error code {}.", code);
} else {
warn!("External hook was killed.");
}
}
}
}
}
#[inline]
fn handle_io_error(href: &str, e: io::Error) -> failure::Error {
match e.kind() {
io::ErrorKind::NotFound => Error::ItemNotFound {
href: href.to_owned(),
}.into(),
io::ErrorKind::AlreadyExists => Error::ItemAlreadyExisting {
href: href.to_owned(),
}.into(),
_ => e.into(),
}
}
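`handle_io_error` translates low-level `io::ErrorKind`s into the crate's domain errors so callers can react to "not found" and "already exists" uniformly. A self-contained sketch of the same mapping, using plain strings in place of the `failure`-based `Error` enum:

```rust
use std::io;

// Same mapping idea as `handle_io_error` above, but returning a plain
// String instead of the crate's failure::Error.
fn describe(href: &str, e: &io::Error) -> String {
    match e.kind() {
        io::ErrorKind::NotFound => format!("ItemNotFound: {}", href),
        io::ErrorKind::AlreadyExists => format!("ItemAlreadyExisting: {}", href),
        // Everything else is passed through unchanged.
        _ => e.to_string(),
    }
}

fn main() {
    let e = io::Error::new(io::ErrorKind::NotFound, "no such file");
    assert_eq!(describe("a.ics", &e), "ItemNotFound: a.ics");
    let other = io::Error::new(io::ErrorKind::PermissionDenied, "denied");
    assert_eq!(describe("b.ics", &other), "denied");
}
```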
pub mod exports {
use super::*;
use std::ffi::CStr;
use std::os::raw::c_char;
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_init_filesystem(
path: *const c_char,
fileext: *const c_char,
post_hook: *const c_char,
) -> *mut Box<Storage> {
let path_c = CStr::from_ptr(path);
let fileext_c = CStr::from_ptr(fileext);
let post_hook_c = CStr::from_ptr(post_hook);
let post_hook_str = post_hook_c.to_str().unwrap();
Box::into_raw(Box::new(Box::new(FilesystemStorage::new(
path_c.to_str().unwrap(),
fileext_c.to_str().unwrap(),
if post_hook_str.is_empty() {
None
} else {
Some(post_hook_str.to_owned())
},
))))
}
}
#[inline]
fn etag_from_file(metadata: &fs::Metadata) -> String {
format!(
"{}.{};{}",
metadata.mtime(),
metadata.mtime_nsec(),
metadata.ino()
)
}
impl Storage for FilesystemStorage {
fn list<'a>(&'a mut self) -> Fallible<Box<Iterator<Item = (String, String)> + 'a>> {
let mut rv: Vec<(String, String)> = vec![];
for entry_res in fs::read_dir(&self.path)? {
let entry = entry_res?;
let metadata = entry.metadata()?;
if !metadata.is_file() {
continue;
}
let fname: String = match entry.file_name().into_string() {
Ok(x) => x,
Err(_) => continue,
};
if !fname.ends_with(&self.fileext) {
continue;
}
rv.push((fname, etag_from_file(&metadata)));
}
Ok(Box::new(rv.into_iter()))
}
fn get(&mut self, href: &str) -> Fallible<(Item, String)> {
let fpath = self.get_filepath(href);
let mut f = match fs::File::open(fpath) {
Ok(x) => x,
Err(e) => return Err(handle_io_error(href, e)),
};
let mut s = String::new();
f.read_to_string(&mut s)?;
Ok((Item::from_raw(s), etag_from_file(&f.metadata()?)))
}
fn upload(&mut self, item: Item) -> Fallible<(String, String)> {
#[inline]
fn inner(s: &mut FilesystemStorage, item: &Item, href: &str) -> io::Result<String> {
let filepath = s.get_filepath(href);
let af = AtomicFile::new(&filepath, DisallowOverwrite);
let content = item.get_raw();
af.write(|f| f.write_all(content.as_bytes()))?;
let new_etag = etag_from_file(&fs::metadata(&filepath)?);
s.run_post_hook(filepath);
Ok(new_etag)
}
let ident = item.get_ident()?;
let mut href = self.get_href(Some(&ident));
let etag = match inner(self, &item, &href) {
Ok(x) => x,
Err(ref e) if e.raw_os_error() == Some(libc::ENAMETOOLONG) => {
href = self.get_href(None);
match inner(self, &item, &href) {
Ok(x) => x,
Err(e) => Err(handle_io_error(&href, e))?,
}
}
Err(e) => Err(handle_io_error(&href, e))?,
};
Ok((href, etag))
}
fn update(&mut self, href: &str, item: Item, etag: &str) -> Fallible<String> {
let filepath = self.get_filepath(href);
let metadata = match fs::metadata(&filepath) {
Ok(x) => x,
Err(e) => Err(handle_io_error(href, e))?,
};
let actual_etag = etag_from_file(&metadata);
if actual_etag != etag {
Err(Error::WrongEtag {
href: href.to_owned(),
})?;
}
let af = AtomicFile::new(&filepath, AllowOverwrite);
let content = item.get_raw();
af.write(|f| f.write_all(content.as_bytes()))?;
let new_etag = etag_from_file(&fs::metadata(filepath)?);
Ok(new_etag)
}
fn delete(&mut self, href: &str, etag: &str) -> Fallible<()> {
let filepath = self.get_filepath(href);
let metadata = match fs::metadata(&filepath) {
Ok(x) => x,
Err(e) => Err(handle_io_error(href, e))?,
};
let actual_etag = etag_from_file(&metadata);
if actual_etag != etag {
Err(Error::WrongEtag {
href: href.to_owned(),
})?;
}
fs::remove_file(filepath)?;
Ok(())
}
}
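The filesystem backend never stores etags anywhere; `etag_from_file` derives them from metadata (mtime seconds, mtime nanoseconds, inode), so both in-place edits and replace-by-rename produce a new etag. A minimal, unix-only sketch of the same scheme against a hypothetical temp file:

```rust
use std::fs::{self, File};
use std::io::Write;
use std::os::unix::fs::MetadataExt;

// Same format as `etag_from_file` above: "<mtime>.<mtime_nsec>;<inode>".
fn etag_for(metadata: &fs::Metadata) -> String {
    format!("{}.{};{}", metadata.mtime(), metadata.mtime_nsec(), metadata.ino())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("etag_demo.ics");
    File::create(&path)?.write_all(b"BEGIN:VCALENDAR\nEND:VCALENDAR\n")?;
    let etag = etag_for(&fs::metadata(&path)?);
    // An unchanged file yields an unchanged etag.
    assert_eq!(etag, etag_for(&fs::metadata(&path)?));
    fs::remove_file(&path)?;
    println!("etag: {}", etag);
    Ok(())
}
```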

rust/src/storage/http.rs
@@ -0,0 +1,230 @@
use std::collections::BTreeMap;
use std::fs::File;
use std::io::Read;
use std::ffi::CStr;
use std::os::raw::c_char;
use reqwest;
use super::singlefile::split_collection;
use super::Storage;
use errors::*;
use item::Item;
type ItemCache = BTreeMap<String, (Item, String)>;
pub type Username = String;
pub type Password = String;
pub type Auth = (Username, Password);
/// Wrapper around `Client::execute` that logs the request and response
#[inline]
pub fn send_request(
client: &reqwest::Client,
request: reqwest::Request,
) -> Fallible<reqwest::Response> {
debug!("> {} {}", request.method(), request.url());
for header in request.headers().iter() {
debug!("> {}: {}", header.name(), header.value_string());
}
debug!("> {:?}", request.body());
debug!("> ---");
let response = client.execute(request)?;
debug!("< {:?}", response.status());
for header in response.headers().iter() {
debug!("< {}: {}", header.name(), header.value_string());
}
Ok(response)
}
#[derive(Clone)]
pub struct HttpConfig {
pub auth: Option<Auth>,
pub useragent: Option<String>,
pub verify_cert: Option<String>,
pub auth_cert: Option<String>,
}
impl HttpConfig {
pub fn into_connection(self) -> Fallible<reqwest::ClientBuilder> {
let mut headers = reqwest::header::Headers::new();
if let Some((username, password)) = self.auth {
headers.set(reqwest::header::Authorization(reqwest::header::Basic {
username,
password: Some(password),
}));
}
if let Some(useragent) = self.useragent {
headers.set(reqwest::header::UserAgent::new(useragent));
}
let mut client = reqwest::Client::builder();
client.default_headers(headers);
if let Some(verify_cert) = self.verify_cert {
let mut buf = Vec::new();
File::open(verify_cert)?.read_to_end(&mut buf)?;
let cert = reqwest::Certificate::from_pem(&buf)?;
client.add_root_certificate(cert);
}
// TODO: auth_cert https://github.com/sfackler/rust-native-tls/issues/27
Ok(client)
}
}
pub struct HttpStorage {
url: String,
// href -> (item, etag)
items_cache: Option<ItemCache>,
http_config: HttpConfig,
}
impl HttpStorage {
pub fn new(url: String, http_config: HttpConfig) -> Self {
HttpStorage {
url,
items_cache: None,
http_config,
}
}
fn get_items(&mut self) -> Fallible<&mut ItemCache> {
if self.items_cache.is_none() {
self.list()?;
}
Ok(self.items_cache.as_mut().unwrap())
}
}
impl Storage for HttpStorage {
fn list<'a>(&'a mut self) -> Fallible<Box<Iterator<Item = (String, String)> + 'a>> {
let client = self.http_config.clone().into_connection()?.build()?;
let mut response = handle_http_error(&self.url, client.get(&self.url).send()?)?;
let s = response.text()?;
let mut new_cache = BTreeMap::new();
for component in split_collection(&s)? {
let mut item = Item::from_component(component);
item = item.with_uid(&item.get_hash()?)?;
let ident = item.get_ident()?;
let hash = item.get_hash()?;
new_cache.insert(ident, (item, hash));
}
self.items_cache = Some(new_cache);
Ok(Box::new(self.items_cache.as_ref().unwrap().iter().map(
|(href, &(_, ref etag))| (href.clone(), etag.clone()),
)))
}
fn get(&mut self, href: &str) -> Fallible<(Item, String)> {
match self.get_items()?.get(href) {
Some(&(ref item, ref etag)) => Ok((item.clone(), etag.clone())),
None => Err(Error::ItemNotFound {
href: href.to_owned(),
})?,
}
}
fn upload(&mut self, _item: Item) -> Fallible<(String, String)> {
Err(Error::ReadOnly)?
}
fn update(&mut self, _href: &str, _item: Item, _etag: &str) -> Fallible<String> {
Err(Error::ReadOnly)?
}
fn delete(&mut self, _href: &str, _etag: &str) -> Fallible<()> {
Err(Error::ReadOnly)?
}
}
pub mod exports {
use super::*;
use std::ffi::CStr;
use std::os::raw::c_char;
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_init_http(
url: *const c_char,
username: *const c_char,
password: *const c_char,
useragent: *const c_char,
verify_cert: *const c_char,
auth_cert: *const c_char,
) -> *mut Box<Storage> {
let url = CStr::from_ptr(url);
Box::into_raw(Box::new(Box::new(HttpStorage::new(
url.to_str().unwrap().to_owned(),
init_http_config(username, password, useragent, verify_cert, auth_cert),
))))
}
}
pub fn handle_http_error(href: &str, mut r: reqwest::Response) -> Fallible<reqwest::Response> {
if !r.status().is_success() {
debug!("< Error response, dumping body:");
debug!("< {:?}", r.text());
}
match r.status() {
reqwest::StatusCode::NotFound => Err(Error::ItemNotFound {
href: href.to_owned(),
})?,
reqwest::StatusCode::UnsupportedMediaType => Err(Error::UnsupportedVobject {
href: href.to_owned(),
})?,
_ => Ok(r.error_for_status()?),
}
}
pub unsafe fn init_http_config(
username: *const c_char,
password: *const c_char,
useragent: *const c_char,
verify_cert: *const c_char,
auth_cert: *const c_char,
) -> HttpConfig {
let username = CStr::from_ptr(username);
let password = CStr::from_ptr(password);
let username_dec = username.to_str().unwrap();
let password_dec = password.to_str().unwrap();
let useragent = CStr::from_ptr(useragent);
let useragent_dec = useragent.to_str().unwrap();
let verify_cert = CStr::from_ptr(verify_cert);
let verify_cert_dec = verify_cert.to_str().unwrap();
let auth_cert = CStr::from_ptr(auth_cert);
let auth_cert_dec = auth_cert.to_str().unwrap();
let auth = if !username_dec.is_empty() && !password_dec.is_empty() {
Some((username_dec.to_owned(), password_dec.to_owned()))
} else {
None
};
HttpConfig {
auth,
useragent: if useragent_dec.is_empty() {
None
} else {
Some(useragent_dec.to_owned())
},
verify_cert: if verify_cert_dec.is_empty() {
None
} else {
Some(verify_cert_dec.to_owned())
},
auth_cert: if auth_cert_dec.is_empty() {
None
} else {
Some(auth_cert_dec.to_owned())
},
}
}

rust/src/storage/mod.rs
@@ -0,0 +1,54 @@
mod dav;
pub mod exports;
mod filesystem;
mod http;
mod singlefile;
mod utils;
use errors::Fallible;
use item::Item;
type ItemAndEtag = (Item, String);
pub trait Storage {
/// Returns an iterator of `(href, etag)` pairs.
fn list<'a>(&'a mut self) -> Fallible<Box<Iterator<Item = (String, String)> + 'a>>;
/// Fetch a single item.
///
/// Returns `(item, etag)`. Fails with `Error::ItemNotFound` if the item
/// can't be found.
fn get(&mut self, href: &str) -> Fallible<ItemAndEtag>;
/// Upload a new item.
///
/// In cases where the new etag cannot be atomically determined (i.e. in the same
/// "transaction" as the upload itself), this method may return `None` as etag. This
/// special case only exists because of DAV. Avoid this situation whenever possible.
///
/// Returns `(href, etag)`
fn upload(&mut self, item: Item) -> Fallible<(String, String)>;
/// Update an item.
///
/// The etag may be `None` in some cases; see `upload`.
///
/// Returns `etag`
fn update(&mut self, href: &str, item: Item, etag: &str) -> Fallible<String>;
/// Delete an item by href.
fn delete(&mut self, href: &str, etag: &str) -> Fallible<()>;
/// Enter buffered mode for storages that support it.
///
/// Uploads, updates and deletions may not be effective until `flush` is explicitly called.
///
/// Use this if you will potentially write a lot of data to the storage; it
/// improves performance for storages that implement it.
fn buffered(&mut self) {}
/// Write back all changes to the collection.
fn flush(&mut self) -> Fallible<()> {
Ok(())
}
}
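The `Storage` trait above is a CRUD API keyed by `href` and guarded by etags: `upload` refuses to overwrite, while `update` and `delete` require the caller's etag to match. A self-contained sketch of that contract with deliberately simplified types (`String` items and errors, a toy etag instead of a content hash — none of this is the crate's `Item`/`Fallible` machinery):

```rust
use std::collections::BTreeMap;

// Simplified stand-in for the Storage contract: href -> (item, etag).
struct MemStorage {
    items: BTreeMap<String, (String, String)>,
}

// Toy "etag": length plus first byte. A real storage derives it from
// file metadata or a content hash.
fn etag_of(item: &str) -> String {
    format!("{}-{}", item.len(), item.bytes().next().unwrap_or(0))
}

impl MemStorage {
    fn new() -> Self {
        MemStorage { items: BTreeMap::new() }
    }

    // Like `upload`: fails if the href already exists.
    fn upload(&mut self, href: &str, item: &str) -> Result<String, String> {
        if self.items.contains_key(href) {
            return Err(format!("ItemAlreadyExisting: {}", href));
        }
        let etag = etag_of(item);
        self.items.insert(href.to_owned(), (item.to_owned(), etag.clone()));
        Ok(etag)
    }

    // Like `update`: the caller's etag must match the stored one.
    fn update(&mut self, href: &str, item: &str, etag: &str) -> Result<String, String> {
        match self.items.get_mut(href) {
            None => Err(format!("ItemNotFound: {}", href)),
            Some(entry) if entry.1 != etag => Err(format!("WrongEtag: {}", href)),
            Some(entry) => {
                let new_etag = etag_of(item);
                *entry = (item.to_owned(), new_etag.clone());
                Ok(new_etag)
            }
        }
    }
}

fn main() {
    let mut s = MemStorage::new();
    let etag = s.upload("a.ics", "BEGIN:VCALENDAR").unwrap();
    // A stale etag is rejected, analogous to Error::WrongEtag.
    assert!(s.update("a.ics", "new body", "stale").is_err());
    assert!(s.update("a.ics", "new body", &etag).is_ok());
    println!("ok");
}
```

The etag check is what lets two sync runs race safely: a writer holding an outdated etag fails loudly instead of silently clobbering the other side's change.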

rust/src/storage/singlefile.rs
@@ -0,0 +1,370 @@
use super::Storage;
use errors::*;
use std::collections::btree_map::Entry::*;
use std::collections::{BTreeMap, BTreeSet};
use std::fs::{metadata, File};
use std::io::{Read, Write};
use std::path::{Path, PathBuf};
use std::time::SystemTime;
use vobject;
use atomicwrites::{AllowOverwrite, AtomicFile};
use item::Item;
type ItemCache = BTreeMap<String, (Item, String)>;
pub struct SinglefileStorage {
path: PathBuf,
// href -> (item, etag)
items_cache: Option<(ItemCache, SystemTime)>,
buffered_mode: bool,
dirty_cache: bool,
}
impl SinglefileStorage {
pub fn new<P: AsRef<Path>>(path: P) -> Self {
SinglefileStorage {
path: path.as_ref().to_owned(),
items_cache: None,
buffered_mode: false,
dirty_cache: false,
}
}
fn get_items(&mut self) -> Fallible<&mut ItemCache> {
if self.items_cache.is_none() {
self.list()?;
}
Ok(&mut self.items_cache.as_mut().unwrap().0)
}
fn write_back(&mut self) -> Fallible<()> {
self.dirty_cache = true;
if self.buffered_mode {
return Ok(());
}
self.flush()?;
Ok(())
}
}
pub mod exports {
use super::*;
use std::ffi::CStr;
use std::os::raw::c_char;
#[no_mangle]
pub unsafe extern "C" fn vdirsyncer_init_singlefile(path: *const c_char) -> *mut Box<Storage> {
let cstring = CStr::from_ptr(path);
Box::into_raw(Box::new(Box::new(SinglefileStorage::new(
cstring.to_str().unwrap(),
))))
}
}
impl Storage for SinglefileStorage {
fn list<'a>(&'a mut self) -> Fallible<Box<Iterator<Item = (String, String)> + 'a>> {
let mut new_cache = BTreeMap::new();
let mtime = metadata(&self.path)?.modified()?;
let mut f = File::open(&self.path)?;
let mut s = String::new();
f.read_to_string(&mut s)?;
for component in split_collection(&s)? {
let item = Item::from_component(component);
let hash = item.get_hash()?;
let ident = item.get_ident()?;
new_cache.insert(ident, (item, hash));
}
self.items_cache = Some((new_cache, mtime));
self.dirty_cache = false;
Ok(Box::new(self.items_cache.as_ref().unwrap().0.iter().map(
|(href, &(_, ref etag))| (href.clone(), etag.clone()),
)))
}
fn get(&mut self, href: &str) -> Fallible<(Item, String)> {
match self.get_items()?.get(href) {
Some(&(ref item, ref etag)) => Ok((item.clone(), etag.clone())),
None => Err(Error::ItemNotFound {
href: href.to_owned(),
})?,
}
}
fn upload(&mut self, item: Item) -> Fallible<(String, String)> {
let hash = item.get_hash()?;
let href = item.get_ident()?;
match self.get_items()?.entry(href.clone()) {
Occupied(_) => Err(Error::ItemAlreadyExisting { href: href.clone() })?,
Vacant(vc) => vc.insert((item, hash.clone())),
};
self.write_back()?;
Ok((href, hash))
}
fn update(&mut self, href: &str, item: Item, etag: &str) -> Fallible<String> {
let hash = match self.get_items()?.entry(href.to_owned()) {
Occupied(mut oc) => {
if oc.get().1 == etag {
let hash = item.get_hash()?;
oc.insert((item, hash.clone()));
hash
} else {
Err(Error::WrongEtag {
href: href.to_owned(),
})?
}
}
Vacant(_) => Err(Error::ItemNotFound {
href: href.to_owned(),
})?,
};
self.write_back()?;
Ok(hash)
}
fn delete(&mut self, href: &str, etag: &str) -> Fallible<()> {
match self.get_items()?.entry(href.to_owned()) {
Occupied(oc) => {
if oc.get().1 == etag {
oc.remove();
} else {
Err(Error::WrongEtag {
href: href.to_owned(),
})?
}
}
Vacant(_) => Err(Error::ItemNotFound {
href: href.to_owned(),
})?,
}
self.write_back()?;
Ok(())
}
fn buffered(&mut self) {
self.buffered_mode = true;
}
fn flush(&mut self) -> Fallible<()> {
if !self.dirty_cache {
return Ok(());
}
let (items, mtime) = self.items_cache.take().unwrap();
let af = AtomicFile::new(&self.path, AllowOverwrite);
let content = join_collection(items.into_iter().map(|(_, (item, _))| item))?;
let path = &self.path;
let write_inner = |f: &mut File| -> Fallible<()> {
f.write_all(content.as_bytes())?;
let real_mtime = metadata(path)?.modified()?;
if mtime != real_mtime {
Err(Error::MtimeMismatch {
filepath: path.to_string_lossy().into_owned(),
})?;
}
Ok(())
};
af.write::<(), ::failure::Compat<::failure::Error>, _>(|f| {
write_inner(f).map_err(|e| e.compat())
})?;
self.dirty_cache = false;
Ok(())
}
}
pub fn split_collection(mut input: &str) -> Fallible<Vec<vobject::Component>> {
let mut rv = vec![];
while !input.is_empty() {
let (component, remainder) =
vobject::read_component(input).map_err(::failure::SyncFailure::new)?;
input = remainder;
match component.name.as_ref() {
"VCALENDAR" => rv.extend(split_vcalendar(component)?),
"VCARD" => rv.push(component),
"VADDRESSBOOK" => for vcard in component.subcomponents {
if vcard.name != "VCARD" {
Err(Error::UnexpectedVobject {
found: vcard.name.clone(),
expected: "VCARD".to_owned(),
})?;
}
rv.push(vcard);
},
_ => Err(Error::UnexpectedVobject {
found: component.name.clone(),
expected: "VCALENDAR | VCARD | VADDRESSBOOK".to_owned(),
})?,
}
}
Ok(rv)
}
/// Split one VCALENDAR component into multiple VCALENDAR components, one per UID
#[inline]
fn split_vcalendar(mut vcalendar: vobject::Component) -> Fallible<Vec<vobject::Component>> {
vcalendar.props.remove("METHOD");
let mut timezones = BTreeMap::new(); // tzid => component
let mut subcomponents = vec![];
for component in vcalendar.subcomponents.drain(..) {
match component.name.as_ref() {
"VTIMEZONE" => {
let tzid = match component.get_only("TZID") {
Some(x) => x.value_as_string().clone(),
None => continue,
};
timezones.insert(tzid, component);
}
"VTODO" | "VEVENT" | "VJOURNAL" => subcomponents.push(component),
_ => Err(Error::UnexpectedVobject {
found: component.name.clone(),
expected: "VTIMEZONE | VTODO | VEVENT | VJOURNAL".to_owned(),
})?,
};
}
let mut by_uid = BTreeMap::new();
let mut no_uid = vec![];
for component in subcomponents {
let uid = component.get_only("UID").cloned();
let mut wrapper = match uid
.as_ref()
.and_then(|u| by_uid.remove(&u.value_as_string()))
{
Some(x) => x,
None => vcalendar.clone(),
};
let mut required_tzids = BTreeSet::new();
for props in component.props.values() {
for prop in props {
if let Some(x) = prop.params.get("TZID") {
required_tzids.insert(x.to_owned());
}
}
}
for tzid in required_tzids {
if let Some(tz) = timezones.get(&tzid) {
wrapper.subcomponents.push(tz.clone());
}
}
wrapper.subcomponents.push(component);
match uid {
Some(p) => {
by_uid.insert(p.value_as_string(), wrapper);
}
None => no_uid.push(wrapper),
}
}
Ok(by_uid
.into_iter()
.map(|(_, v)| v)
.chain(no_uid.into_iter())
.collect())
}
fn join_collection<I: Iterator<Item = Item>>(item_iter: I) -> Fallible<String> {
let mut items = item_iter.peekable();
let item_name = match items.peek() {
Some(x) => x.get_component()?.name.clone(),
None => return Ok("".to_owned()),
};
let wrapper_name = match item_name.as_ref() {
"VCARD" => "VADDRESSBOOK",
"VCALENDAR" => "VCALENDAR",
_ => Err(Error::UnexpectedVobject {
found: item_name.clone(),
expected: "VCARD | VCALENDAR".to_owned(),
})?,
};
let mut wrapper = vobject::Component::new(wrapper_name);
let mut version: Option<vobject::Property> = None;
for item in items {
let mut c = item.into_component()?;
if c.name != item_name {
return Err(Error::UnexpectedVobject {
found: c.name,
expected: item_name.clone(),
}.into());
}
if item_name == wrapper_name {
wrapper.subcomponents.extend(c.subcomponents.drain(..));
match (version.as_ref(), c.get_only("VERSION")) {
(Some(x), Some(y)) if x.raw_value != y.raw_value => {
return Err(Error::UnexpectedVobjectVersion {
expected: x.raw_value.clone(),
found: y.raw_value.clone(),
}.into());
}
(None, Some(_)) => (),
_ => continue,
}
version = c.get_only("VERSION").cloned();
} else {
wrapper.subcomponents.push(c);
}
}
if let Some(v) = version {
wrapper.set(v);
}
Ok(vobject::write_component(&wrapper))
}
#[cfg(test)]
mod tests {
use super::*;
fn check_roundtrip(raw: &str) {
let components = split_collection(raw).unwrap();
let raw2 = join_collection(components.into_iter().map(Item::from_component)).unwrap();
assert_eq!(
Item::from_raw(raw.to_owned()).get_hash().unwrap(),
Item::from_raw(raw2.to_owned()).get_hash().unwrap()
);
}
#[test]
fn test_wrapper_properties_roundtrip() {
let raw = r#"BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
X-WR-CALNAME:markus.unterwaditzer@runtastic.com
X-WR-TIMEZONE:Europe/Vienna
VERSION:2.0
CALSCALE:GREGORIAN
BEGIN:VEVENT
DTSTART;TZID=Europe/Vienna:20171012T153000
DTEND;TZID=Europe/Vienna:20171012T170000
DTSTAMP:20171009T085029Z
UID:test@test.com
STATUS:CONFIRMED
SUMMARY:Test
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR"#;
check_roundtrip(raw);
}
}

rust/src/storage/utils.rs
@@ -0,0 +1,24 @@
use uuid::Uuid;
fn is_href_safe(ident: &str) -> bool {
for c in ident.chars() {
match c {
'_' | '.' | '-' | '+' => (),
_ if c.is_alphanumeric() => (),
_ => return false,
}
}
true
}
pub fn generate_href(ident: &str) -> String {
if is_href_safe(ident) {
ident.to_owned()
} else {
random_href()
}
}
pub fn random_href() -> String {
format!("{}", Uuid::new_v4())
}
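These helpers decide whether an item's UID can double as its filename: safe idents are kept verbatim, everything else falls back to a random UUID. A sketch of the same whitelist, with a fixed placeholder standing in for the `uuid`-based fallback (the crate dependency is omitted here):

```rust
// Mirrors `is_href_safe` above: only alphanumerics plus a small
// whitelist of punctuation may appear verbatim in a filename.
fn is_href_safe(ident: &str) -> bool {
    ident.chars().all(|c| c.is_alphanumeric() || "_.-+".contains(c))
}

// Like `generate_href`, but with a fixed placeholder where the real
// code calls Uuid::new_v4().
fn generate_href(ident: &str) -> String {
    if is_href_safe(ident) {
        ident.to_owned()
    } else {
        "00000000-0000-4000-8000-000000000000".to_owned()
    }
}

fn main() {
    assert_eq!(generate_href("event-42.ics"), "event-42.ics");
    // Slashes are unsafe in a filename, so the ident is replaced.
    assert_ne!(generate_href("../etc/passwd"), "../etc/passwd");
    println!("ok");
}
```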

rust/vdirsyncer_rustext.h
@@ -0,0 +1,146 @@
#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>
typedef struct Box_Storage Box_Storage;
typedef struct Item Item;
typedef struct ShippaiError ShippaiError;
typedef struct VdirsyncerStorageListing VdirsyncerStorageListing;
typedef struct {
Item *item;
const char *etag;
} VdirsyncerStorageGetResult;
typedef struct {
const char *href;
const char *etag;
} VdirsyncerStorageUploadResult;
extern const uint8_t SHIPPAI_VARIANT_DavError_EtagNotFound;
extern const uint8_t SHIPPAI_VARIANT_Error_ItemAlreadyExisting;
extern const uint8_t SHIPPAI_VARIANT_Error_ItemNotFound;
extern const uint8_t SHIPPAI_VARIANT_Error_ItemUnparseable;
extern const uint8_t SHIPPAI_VARIANT_Error_MtimeMismatch;
extern const uint8_t SHIPPAI_VARIANT_Error_ReadOnly;
extern const uint8_t SHIPPAI_VARIANT_Error_UnexpectedVobject;
extern const uint8_t SHIPPAI_VARIANT_Error_UnexpectedVobjectVersion;
extern const uint8_t SHIPPAI_VARIANT_Error_UnsupportedVobject;
extern const uint8_t SHIPPAI_VARIANT_Error_WrongEtag;
void shippai_free_failure(ShippaiError *t);
void shippai_free_str(char *t);
const char *shippai_get_debug(ShippaiError *t);
const char *shippai_get_display(ShippaiError *t);
uint8_t shippai_get_variant_DavError(ShippaiError *t);
uint8_t shippai_get_variant_Error(ShippaiError *t);
bool shippai_is_error_DavError(ShippaiError *t);
bool shippai_is_error_Error(ShippaiError *t);
bool vdirsyncer_advance_storage_listing(VdirsyncerStorageListing *listing);
void vdirsyncer_free_item(Item *c);
void vdirsyncer_free_storage_get_result(VdirsyncerStorageGetResult *res);
void vdirsyncer_free_storage_listing(VdirsyncerStorageListing *listing);
void vdirsyncer_free_storage_upload_result(VdirsyncerStorageUploadResult *res);
void vdirsyncer_free_str(const char *s);
const char *vdirsyncer_get_hash(Item *c, ShippaiError **err);
const char *vdirsyncer_get_raw(Item *c);
const char *vdirsyncer_get_uid(Item *c);
Box_Storage *vdirsyncer_init_caldav(const char *url,
const char *username,
const char *password,
const char *useragent,
const char *verify_cert,
const char *auth_cert,
int64_t start_date,
int64_t end_date,
bool include_vevent,
bool include_vjournal,
bool include_vtodo);
Box_Storage *vdirsyncer_init_carddav(const char *url,
const char *username,
const char *password,
const char *useragent,
const char *verify_cert,
const char *auth_cert);
Box_Storage *vdirsyncer_init_filesystem(const char *path,
const char *fileext,
const char *post_hook);
Box_Storage *vdirsyncer_init_http(const char *url,
const char *username,
const char *password,
const char *useragent,
const char *verify_cert,
const char *auth_cert);
void vdirsyncer_init_logger(void);
Box_Storage *vdirsyncer_init_singlefile(const char *path);
Item *vdirsyncer_item_from_raw(const char *s);
bool vdirsyncer_item_is_parseable(Item *c);
void vdirsyncer_storage_buffered(Box_Storage *storage);
void vdirsyncer_storage_delete(Box_Storage *storage,
const char *c_href,
const char *c_etag,
ShippaiError **err);
void vdirsyncer_storage_flush(Box_Storage *storage, ShippaiError **err);
void vdirsyncer_storage_free(Box_Storage *storage);
VdirsyncerStorageGetResult *vdirsyncer_storage_get(Box_Storage *storage,
const char *c_href,
ShippaiError **err);
VdirsyncerStorageListing *vdirsyncer_storage_list(Box_Storage *storage, ShippaiError **err);
const char *vdirsyncer_storage_listing_get_etag(VdirsyncerStorageListing *listing);
const char *vdirsyncer_storage_listing_get_href(VdirsyncerStorageListing *listing);
const char *vdirsyncer_storage_update(Box_Storage *storage,
const char *c_href,
Item *item,
const char *c_etag,
ShippaiError **err);
VdirsyncerStorageUploadResult *vdirsyncer_storage_upload(Box_Storage *storage,
Item *item,
ShippaiError **err);
Item *vdirsyncer_with_uid(Item *c, const char *uid, ShippaiError **err);

@@ -0,0 +1,11 @@
echo "export PATH=$HOME/.cargo/bin/:$HOME/.local/bin/:$PATH" >> $BASH_ENV
. $BASH_ENV
make install-rust
sudo apt-get install -y cmake
pip install --user virtualenv
virtualenv ~/env
echo ". ~/env/bin/activate" >> $BASH_ENV
. $BASH_ENV

@@ -8,9 +8,12 @@ ARG distrover
RUN apt-get update
RUN apt-get install -y build-essential fakeroot debhelper git
RUN apt-get install -y python3-all python3-pip
RUN apt-get install -y python3-all python3-dev python3-pip
RUN apt-get install -y ruby ruby-dev
RUN apt-get install -y python-all python-pip
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
RUN apt-get install -y libssl-dev libffi-dev
ENV PATH="/root/.cargo/bin/:${PATH}"
RUN gem install fpm
@@ -24,7 +27,7 @@ RUN mkdir /vdirsyncer/pkgs/
RUN basename *.tar.gz .tar.gz | cut -d'-' -f2 | sed -e 's/\.dev/~/g' | tee version
RUN (echo -n *.tar.gz; echo '[google]') | tee requirements.txt
RUN . /vdirsyncer/env/bin/activate; fpm -s virtualenv -t deb \
RUN . /vdirsyncer/env/bin/activate; fpm --verbose -s virtualenv -t deb \
-n "vdirsyncer-latest" \
-v "$(cat version)" \
--prefix /opt/venvs/vdirsyncer-latest \

@@ -1,78 +0,0 @@
import itertools
import json
import sys
python_versions = ("3.4", "3.5", "3.6")
latest_python = "3.6"
cfg = {}
cfg['sudo'] = True
cfg['dist'] = 'trusty'
cfg['language'] = 'python'
cfg['cache'] = 'pip'
cfg['git'] = {
'submodules': False
}
cfg['branches'] = {
'only': ['auto', 'master', '/^.*-maintenance$/']
}
cfg['install'] = """
. scripts/travis-install.sh
pip install -U pip setuptools
pip install wheel
make -e install-dev
make -e install-$BUILD
""".strip().splitlines()
cfg['script'] = ["make -e $BUILD"]
matrix = []
cfg['matrix'] = {'include': matrix}
matrix.append({
'python': latest_python,
'env': 'BUILD=style'
})
for python, requirements in itertools.product(python_versions,
("devel", "release", "minimal")):
dav_servers = ("radicale", "xandikos")
if python == latest_python and requirements == "release":
dav_servers += ("fastmail",)
for dav_server in dav_servers:
job = {
'python': python,
'env': ("BUILD=test "
"DAV_SERVER={dav_server} "
"REQUIREMENTS={requirements} "
.format(dav_server=dav_server,
requirements=requirements))
}
build_prs = dav_server not in ("fastmail", "davical", "icloud")
if not build_prs:
job['if'] = 'NOT (type IN (pull_request))'
matrix.append(job)
matrix.append({
'python': latest_python,
'env': ("BUILD=test "
"ETESYNC_TESTS=true "
"REQUIREMENTS=latest")
})
matrix.append({
'language': 'generic',
'os': 'osx',
'env': 'BUILD=test'
})
json.dump(cfg, sys.stdout, sort_keys=True, indent=2)

@@ -1,10 +0,0 @@
#!/bin/sh
# The OS X VM doesn't have any Python support at all
# See https://github.com/travis-ci/travis-ci/issues/2312
if [ "$TRAVIS_OS_NAME" = "osx" ]; then
brew update
brew install python3
virtualenv -p python3 $HOME/osx-py3
. $HOME/osx-py3/bin/activate
fi

@@ -1,14 +1,11 @@
[wheel]
universal = 1
[tool:pytest]
norecursedirs = tests/storage/servers/*
addopts = --tb=short
addopts = --tb=short --duration 3
[flake8]
# E731: Use a def instead of lambda expr
# E743: Ambiguous function definition
ignore = E731, E743
select = C,E,F,W,B,B9
exclude = .eggs, tests/storage/servers/owncloud/, tests/storage/servers/nextcloud/, tests/storage/servers/baikal/, build/
exclude = .eggs/, tests/storage/servers/nextcloud/, build/, vdirsyncer/_native*
application-package-names = tests,vdirsyncer

@@ -7,8 +7,10 @@ how to package vdirsyncer.
'''
import os
from setuptools import Command, find_packages, setup
milksnake = 'milksnake'
requirements = [
# https://github.com/mitsuhiko/click/issues/200
@@ -32,10 +34,35 @@ requirements = [
'requests_toolbelt >=0.4.0',
# https://github.com/untitaker/python-atomicwrites/commit/4d12f23227b6a944ab1d99c507a69fdbc7c9ed6d # noqa
'atomicwrites>=0.1.7'
'atomicwrites>=0.1.7',
milksnake,
'shippai >= 0.2.3',
]
def build_native(spec):
cmd = ['cargo', 'build']
if os.environ.get('RUST_BACKTRACE', 'false') in ('true', '1', 'full'):
dylib_folder = 'target/debug'
else:
dylib_folder = 'target/release'
cmd.append('--release')
build = spec.add_external_build(cmd=cmd, path='./rust/')
spec.add_cffi_module(
module_path='vdirsyncer._native',
dylib=lambda: build.find_dylib('vdirsyncer_rustext',
in_path=dylib_folder),
header_filename='rust/vdirsyncer_rustext.h',
# Rust bug: If thread-local storage is used, this flag is necessary
# (mitsuhiko)
rtld_flags=['NOW', 'NODELETE']
)
class PrintRequirements(Command):
description = 'Prints minimal requirements'
user_options = []
@@ -75,7 +102,10 @@ setup(
},
# Build dependencies
setup_requires=['setuptools_scm != 1.12.0'],
setup_requires=[
'setuptools_scm != 1.12.0',
milksnake,
],
# Other
packages=find_packages(exclude=['tests.*', 'tests']),
@@ -101,4 +131,7 @@
'Topic :: Internet',
'Topic :: Utilities',
],
milksnake_tasks=[build_native],
zip_safe=False,
platforms='any'
)

@@ -3,9 +3,11 @@
Test suite for vdirsyncer.
'''
import random
import hypothesis.strategies as st
from vdirsyncer.vobject import normalize_item
from vdirsyncer.vobject import Item
import urllib3
import urllib3.exceptions
@@ -18,7 +20,7 @@ def blow_up(*a, **kw):
def assert_item_equals(a, b):
assert normalize_item(a) == normalize_item(b)
assert a.hash == b.hash
VCARD_TEMPLATE = u'''BEGIN:VCARD
@@ -55,6 +57,7 @@ END:VCALENDAR'''
BARE_EVENT_TEMPLATE = u'''BEGIN:VEVENT
DTSTART:19970714T170000Z
DTEND:19970715T035959Z
DTSTAMP:19970610T172345Z
SUMMARY:Bastille Day Party
X-SOMETHING:{r}
UID:{uid}
@@ -109,3 +112,10 @@ uid_strategy = st.text(
)),
min_size=1
).filter(lambda x: x.strip() == x)
def format_item(uid=None, item_template=VCARD_TEMPLATE):
# assert that special chars are handled correctly.
r = random.random()
uid = uid or r
return Item(item_template.format(r=r, uid=uid))
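A self-contained sketch of how the new shared `format_item` helper behaves. `MiniItem` is a stand-in for `vdirsyncer.vobject.Item`, and the template is trimmed to the substituted fields:

```python
import random

class MiniItem:
    """Stand-in for vdirsyncer.vobject.Item (which parses raw vCard text)."""
    def __init__(self, raw):
        self.raw = raw
        # Crude UID extraction, for illustration only.
        self.uid = next(line.split(':', 1)[1]
                        for line in raw.splitlines()
                        if line.startswith('UID:'))

VCARD_TEMPLATE = 'BEGIN:VCARD\nX-SOMETHING:{r}\nUID:{uid}\nEND:VCARD'

def format_item(uid=None, item_template=VCARD_TEMPLATE):
    # A random component exercises special-character handling.
    r = random.random()
    uid = uid or r
    return MiniItem(item_template.format(r=r, uid=uid))

item = format_item(uid='abc')
```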


@@ -1,13 +1,8 @@
# -*- coding: utf-8 -*-
import random
import uuid
import textwrap
from urllib.parse import quote as urlquote, unquote as urlunquote
import hypothesis.strategies as st
from hypothesis import given
import pytest
@@ -16,7 +11,7 @@ from vdirsyncer.storage.base import normalize_meta_value
from vdirsyncer.vobject import Item
from .. import EVENT_TEMPLATE, TASK_TEMPLATE, VCARD_TEMPLATE, \
assert_item_equals, normalize_item, printable_characters_strategy
assert_item_equals, format_item
def get_server_mixin(server_name):
@@ -25,12 +20,6 @@ def get_server_mixin(server_name):
return x.ServerMixin
def format_item(item_template, uid=None):
# assert that special chars are handled correctly.
r = random.random()
return Item(item_template.format(r=r, uid=uid or r))
class StorageTests(object):
storage_class = None
supports_collections = True
@@ -62,7 +51,7 @@ class StorageTests(object):
'VCARD': VCARD_TEMPLATE,
}[item_type]
return lambda **kw: format_item(template, **kw)
return lambda **kw: format_item(item_template=template, **kw)
@pytest.fixture
def requires_collections(self):
@@ -143,6 +132,8 @@ class StorageTests(object):
def test_delete(self, s, get_item):
href, etag = s.upload(get_item())
if etag is None:
_, etag = s.get(href)
s.delete(href, etag)
assert not list(s.list())
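Some servers (e.g. iCloud, per the merged commits) return no etag from `upload`, which is why the test now falls back to `get()`. The pattern in isolation, with an illustrative `FakeStorage`:

```python
class FakeStorage:
    """Mimics a storage whose upload() does not return an etag."""
    def upload(self, item):
        self._item = item
        return 'href.ics', None  # some servers omit the etag here
    def get(self, href):
        return self._item, '"etag-1"'

s = FakeStorage()
href, etag = s.upload('BEGIN:VCARD...')
if etag is None:
    # Fall back to a GET to learn the item's etag.
    _, etag = s.get(href)
```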
@@ -160,6 +151,8 @@ class StorageTests(object):
def test_has(self, s, get_item):
assert not s.has('asd')
href, etag = s.upload(get_item())
if etag is None:
_, etag = s.get(href)
assert s.has(href)
assert not s.has('asd')
s.delete(href, etag)
@@ -246,38 +239,6 @@ class StorageTests(object):
assert len(items) == 2
assert len(set(items)) == 2
def test_specialchars(self, monkeypatch, requires_collections,
get_storage_args, get_item):
if getattr(self, 'dav_server', '') == 'radicale':
pytest.skip('Radicale is fundamentally broken.')
if getattr(self, 'dav_server', '') in ('icloud', 'fastmail'):
pytest.skip('iCloud and FastMail reject this name.')
monkeypatch.setattr('vdirsyncer.utils.generate_href', lambda x: x)
uid = u'test @ foo ät bar град сатану'
collection = 'test @ foo ät bar'
s = self.storage_class(**get_storage_args(collection=collection))
item = get_item(uid=uid)
href, etag = s.upload(item)
item2, etag2 = s.get(href)
if etag is not None:
assert etag2 == etag
assert_item_equals(item2, item)
(_, etag3), = s.list()
assert etag2 == etag3
# etesync uses UUIDs for collection names
if self.storage_class.storage_name.startswith('etesync'):
return
assert collection in urlunquote(s.collection)
if self.storage_class.storage_name.endswith('dav'):
assert urlquote(uid, '/@:') in href
def test_metadata(self, requires_metadata, s):
if not getattr(self, 'dav_server', ''):
assert not s.get_meta('color')
@@ -297,18 +258,16 @@ class StorageTests(object):
assert rv == x
assert isinstance(rv, str)
@given(value=st.one_of(
st.none(),
printable_characters_strategy
))
@pytest.mark.parametrize('value', [
'fööbör',
'ананасовое перо'
])
def test_metadata_normalization(self, requires_metadata, s, value):
x = s.get_meta('displayname')
assert x == normalize_meta_value(x)
if not getattr(self, 'dav_server', None):
# ownCloud replaces "" with "unnamed"
s.set_meta('displayname', value)
assert s.get_meta('displayname') == normalize_meta_value(value)
s.set_meta('displayname', value)
assert s.get_meta('displayname') == normalize_meta_value(value)
def test_recurring_events(self, s, item_type):
if item_type != 'VEVENT':
@@ -354,4 +313,60 @@ class StorageTests(object):
href, etag = s.upload(item)
item2, etag2 = s.get(href)
assert normalize_item(item) == normalize_item(item2)
assert item2.raw.count('BEGIN:VEVENT') == 2
assert 'RRULE' in item2.raw
def test_buffered(self, get_storage_args, get_item, requires_collections):
args = get_storage_args()
s1 = self.storage_class(**args)
s2 = self.storage_class(**args)
s1.upload(get_item())
assert sorted(list(s1.list())) == sorted(list(s2.list()))
s1.buffered()
s1.upload(get_item())
s1.flush()
assert sorted(list(s1.list())) == sorted(list(s2.list()))
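`test_buffered` exercises a `buffered()`/`flush()` pair; a toy model of the contract it appears to assume (the internals are guessed, only the method names come from the test):

```python
class BufferedUploads:
    """Toy storage: writes apply immediately unless buffering is active."""
    def __init__(self):
        self.items = []
        self._pending = None
    def buffered(self):
        self._pending = []          # start queueing writes
    def upload(self, item):
        target = self._pending if self._pending is not None else self.items
        target.append(item)
    def flush(self):
        self.items.extend(self._pending)  # apply queued writes
        self._pending = None
    def list(self):
        return list(self.items)

s = BufferedUploads()
s.upload('a')      # visible immediately
s.buffered()
s.upload('b')      # queued, not yet visible
s.flush()          # now visible
```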
def test_retain_timezones(self, item_type, s):
if item_type != 'VEVENT':
pytest.skip('This storage instance doesn\'t support iCalendar.')
item = Item(textwrap.dedent('''
BEGIN:VCALENDAR
PRODID:-//ownCloud calendar v1.4.0
VERSION:2.0
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20161004T110533
DTSTAMP:20161004T110533
LAST-MODIFIED:20161004T110533
UID:y2lmgz48mg
SUMMARY:Test
CLASS:PUBLIC
STATUS:CONFIRMED
DTSTART;TZID=Europe/Berlin:20161014T101500
DTEND;TZID=Europe/Berlin:20161014T114500
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
DTSTART:20160327T030000
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20161030T020000
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
END:VTIMEZONE
END:VCALENDAR
''').strip())
href, etag = s.upload(item)
item2, _ = s.get(href)
assert 'VTIMEZONE' in item2.raw
assert item2.hash == item.hash


@@ -1,19 +1,7 @@
# -*- coding: utf-8 -*-
import uuid
import os
import pytest
import requests
import requests.exceptions
from tests import assert_item_equals
from vdirsyncer import exceptions
from vdirsyncer.vobject import Item
from .. import StorageTests, get_server_mixin
@@ -24,14 +12,6 @@ ServerMixin = get_server_mixin(dav_server)
class DAVStorageTests(ServerMixin, StorageTests):
dav_server = dav_server
@pytest.mark.skipif(dav_server == 'radicale',
reason='Radicale is very tolerant.')
def test_dav_broken_item(self, s):
item = Item(u'HAHA:YES')
with pytest.raises((exceptions.Error, requests.exceptions.HTTPError)):
s.upload(item)
assert not list(s.list())
def test_dav_empty_get_multi_performance(self, s, monkeypatch):
def breakdown(*a, **kw):
raise AssertionError('Expected not to be called.')
@@ -43,14 +23,3 @@ class DAVStorageTests(ServerMixin, StorageTests):
finally:
# Make sure monkeypatch doesn't interfere with DAV server teardown
monkeypatch.undo()
def test_dav_unicode_href(self, s, get_item, monkeypatch):
if self.dav_server == 'radicale':
pytest.skip('Radicale is unable to deal with unicode hrefs')
monkeypatch.setattr(s, '_get_href',
lambda item: item.ident + s.fileext)
item = get_item(uid=u'град сатану' + str(uuid.uuid4()))
href, etag = s.upload(item)
item2, etag2 = s.get(href)
assert_item_equals(item, item2)


@@ -5,12 +5,8 @@ from textwrap import dedent
import pytest
import requests
import requests.exceptions
from tests import EVENT_TEMPLATE, TASK_TEMPLATE, VCARD_TEMPLATE
from vdirsyncer import exceptions
from vdirsyncer.storage.dav import CalDAVStorage
from . import DAVStorageTests, dav_server
@@ -28,34 +24,11 @@ class TestCalDAVStorage(DAVStorageTests):
s = self.storage_class(item_types=(item_type,), **get_storage_args())
try:
s.upload(format_item(VCARD_TEMPLATE))
except (exceptions.Error, requests.exceptions.HTTPError):
s.upload(format_item(item_template=VCARD_TEMPLATE))
except Exception:
pass
assert not list(s.list())
# The `arg` param is not named `item_types` because that would hit
# https://bitbucket.org/pytest-dev/pytest/issue/745/
@pytest.mark.parametrize('arg,calls_num', [
(('VTODO',), 1),
(('VEVENT',), 1),
(('VTODO', 'VEVENT'), 2),
(('VTODO', 'VEVENT', 'VJOURNAL'), 3),
((), 1)
])
def test_item_types_performance(self, get_storage_args, arg, calls_num,
monkeypatch):
s = self.storage_class(item_types=arg, **get_storage_args())
old_parse = s._parse_prop_responses
calls = []
def new_parse(*a, **kw):
calls.append(None)
return old_parse(*a, **kw)
monkeypatch.setattr(s, '_parse_prop_responses', new_parse)
list(s.list())
assert len(calls) == calls_num
@pytest.mark.xfail(dav_server == 'radicale',
reason='Radicale doesn\'t support timeranges.')
def test_timerange_correctness(self, get_storage_args):
@@ -64,7 +37,7 @@ class TestCalDAVStorage(DAVStorageTests):
s = self.storage_class(start_date=start_date, end_date=end_date,
**get_storage_args())
too_old_item = format_item(dedent(u'''
too_old_item = format_item(item_template=dedent(u'''
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//hacksw/handcal//NONSGML v1.0//EN
@@ -78,7 +51,7 @@ class TestCalDAVStorage(DAVStorageTests):
END:VCALENDAR
''').strip())
too_new_item = format_item(dedent(u'''
too_new_item = format_item(item_template=dedent(u'''
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//hacksw/handcal//NONSGML v1.0//EN
@@ -92,7 +65,7 @@ class TestCalDAVStorage(DAVStorageTests):
END:VCALENDAR
''').strip())
good_item = format_item(dedent(u'''
good_item = format_item(item_template=dedent(u'''
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//hacksw/handcal//NONSGML v1.0//EN
@@ -113,40 +86,19 @@ class TestCalDAVStorage(DAVStorageTests):
(actual_href, _), = s.list()
assert actual_href == expected_href
def test_invalid_resource(self, monkeypatch, get_storage_args):
calls = []
args = get_storage_args(collection=None)
def request(session, method, url, **kwargs):
assert url == args['url']
calls.append(None)
r = requests.Response()
r.status_code = 200
r._content = b'Hello World.'
return r
monkeypatch.setattr('requests.sessions.Session.request', request)
with pytest.raises(ValueError):
s = self.storage_class(**args)
list(s.list())
assert len(calls) == 1
@pytest.mark.skipif(dav_server == 'icloud',
reason='iCloud only accepts VEVENT')
def test_item_types_general(self, s):
event = s.upload(format_item(EVENT_TEMPLATE))[0]
task = s.upload(format_item(TASK_TEMPLATE))[0]
s.item_types = ('VTODO', 'VEVENT')
def test_item_types_general(self, get_storage_args):
args = get_storage_args()
s = self.storage_class(**args)
event = s.upload(format_item(item_template=EVENT_TEMPLATE))[0]
task = s.upload(format_item(item_template=TASK_TEMPLATE))[0]
def l():
return set(href for href, etag in s.list())
assert l() == {event, task}
s.item_types = ('VTODO',)
assert l() == {task}
s.item_types = ('VEVENT',)
assert l() == {event}
s.item_types = ()
assert l() == {event, task}
for item_types, expected_items in [
(('VTODO', 'VEVENT'), {event, task}),
(('VTODO',), {task}),
(('VEVENT',), {event}),
]:
args['item_types'] = item_types
s = self.storage_class(**args)
assert set(href for href, etag in s.list()) == expected_items

@@ -1 +0,0 @@
Subproject commit 6c8c379f1ee8bf4ab0ac54fc4eec3e4a6349c237


@@ -7,16 +7,18 @@ try:
# Those credentials are configured through the Travis UI
'username': os.environ['DAVICAL_USERNAME'].strip(),
'password': os.environ['DAVICAL_PASSWORD'].strip(),
'url': 'https://brutus.lostpackets.de/davical-test/caldav.php/',
'url': 'https://caesar.lostpackets.de/davical-test/caldav.php/',
}
except KeyError as e:
pytestmark = pytest.mark.skip('Missing envkey: {}'.format(str(e)))
caldav_args = None
@pytest.mark.flaky(reruns=5)
class ServerMixin(object):
@pytest.fixture
def davical_args(self):
if caldav_args is None:
pytest.skip('Missing envkeys for davical')
if self.storage_class.fileext == '.ics':
return dict(caldav_args)
elif self.storage_class.fileext == '.vcf':


@@ -3,15 +3,19 @@ import os
import pytest
username = os.environ.get('FASTMAIL_USERNAME', '').strip()
password = os.environ.get('FASTMAIL_PASSWORD', '').strip()
class ServerMixin(object):
@pytest.fixture
def get_storage_args(self, slow_create_collection):
if not username:
pytest.skip('Fastmail credentials not available')
def inner(collection='test'):
args = {
'username': os.environ['FASTMAIL_USERNAME'],
'password': os.environ['FASTMAIL_PASSWORD']
}
args = {'username': username, 'password': password}
if self.storage_class.fileext == '.ics':
args['url'] = 'https://caldav.messagingengine.com/'


@@ -2,6 +2,9 @@ import os
import pytest
username = os.environ.get('ICLOUD_USERNAME', '').strip()
password = os.environ.get('ICLOUD_PASSWORD', '').strip()
class ServerMixin(object):
@@ -12,11 +15,11 @@ class ServerMixin(object):
# See https://github.com/pimutils/vdirsyncer/pull/593#issuecomment-285941615 # noqa
pytest.skip('iCloud doesn\'t support anything else than VEVENT')
if not username:
pytest.skip('iCloud credentials not available')
def inner(collection='test'):
args = {
'username': os.environ['ICLOUD_USERNAME'],
'password': os.environ['ICLOUD_PASSWORD']
}
args = {'username': username, 'password': password}
if self.storage_class.fileext == '.ics':
args['url'] = 'https://caldav.icloud.com/'


@@ -1 +0,0 @@
mysteryshack


@@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
import os
import subprocess
import time
import shutil
import pytest
import requests
testserver_repo = os.path.dirname(__file__)
make_sh = os.path.abspath(os.path.join(testserver_repo, 'make.sh'))
def wait():
for i in range(100):
try:
requests.get('http://127.0.0.1:6767/', verify=False)
except Exception as e:
# Don't know exact exception class, don't care.
# Also, https://github.com/kennethreitz/requests/issues/2192
if 'connection refused' not in str(e).lower():
raise
time.sleep(2 ** i)
else:
return True
return False
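The removed `wait()` polled with exponential backoff (`2 ** i` seconds) until the server came up. The same backoff shape, decoupled from the network (`probe` stands in for the HTTP request):

```python
def wait_until_up(probe, max_tries=5, sleep=lambda s: None):
    """Retry `probe` with exponential backoff; True once it succeeds."""
    for i in range(max_tries):
        if probe():
            return True
        sleep(2 ** i)  # 1s, 2s, 4s, ... between attempts
    return False

# Example: a server that only answers on the third probe.
attempts = []
def probe():
    attempts.append(None)
    return len(attempts) >= 3

slept = []
ok = wait_until_up(probe, sleep=slept.append)
```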
class ServerMixin(object):
@pytest.fixture(scope='session')
def setup_mysteryshack_server(self, xprocess):
def preparefunc(cwd):
return wait, ['sh', make_sh, 'testserver']
subprocess.check_call(['sh', make_sh, 'testserver-config'])
xprocess.ensure('mysteryshack_server', preparefunc)
return subprocess.check_output([
os.path.join(
testserver_repo,
'mysteryshack/target/debug/mysteryshack'
),
'-c', '/tmp/mysteryshack/config',
'user',
'authorize',
'testuser',
'https://example.com',
self.storage_class.scope + ':rw'
]).strip().decode()
@pytest.fixture
def get_storage_args(self, monkeypatch, setup_mysteryshack_server):
from requests import Session
monkeypatch.setitem(os.environ, 'OAUTHLIB_INSECURE_TRANSPORT', 'true')
old_request = Session.request
def request(self, method, url, **kw):
url = url.replace('https://', 'http://')
return old_request(self, method, url, **kw)
monkeypatch.setattr(Session, 'request', request)
shutil.rmtree('/tmp/mysteryshack/testuser/data', ignore_errors=True)
shutil.rmtree('/tmp/mysteryshack/testuser/meta', ignore_errors=True)
def inner(**kw):
kw['account'] = 'testuser@127.0.0.1:6767'
kw['access_token'] = setup_mysteryshack_server
if self.storage_class.fileext == '.ics':
kw.setdefault('collection', 'test')
return kw
return inner


@@ -1,18 +0,0 @@
#!/bin/sh
set -ex
cd "$(dirname "$0")"
. ./variables.sh
if [ "$CI" = "true" ]; then
curl -sL https://static.rust-lang.org/rustup.sh -o ~/rust-installer/rustup.sh
sh ~/rust-installer/rustup.sh --prefix=~/rust --spec=stable -y --disable-sudo 2> /dev/null
fi
if [ ! -d mysteryshack ]; then
git clone https://github.com/untitaker/mysteryshack
fi
pip install pytest-xprocess
cd mysteryshack
make debug-build # such that first test doesn't hang too long w/o output


@@ -1,9 +0,0 @@
#!/bin/sh
set -e
# pytest-xprocess doesn't allow us to CD into a particular directory before
# launching a command, so we do it here.
cd "$(dirname "$0")"
. ./variables.sh
cd mysteryshack
exec make "$@"


@@ -1 +0,0 @@
export PATH="$PATH:$HOME/.cargo/bin/"

@@ -1 +0,0 @@
Subproject commit a27144ddcf39a3283179a4f7ce1ab22b2e810205


@@ -0,0 +1,29 @@
import os
import requests
import pytest
port = os.environ.get('NEXTCLOUD_HOST', None) or 'localhost:5000'
user = os.environ.get('NEXTCLOUD_USER', None) or 'asdf'
pwd = os.environ.get('NEXTCLOUD_PASS', None) or 'asdf'
class ServerMixin(object):
storage_class = None
wsgi_teardown = None
@pytest.fixture
def get_storage_args(self, item_type,
slow_create_collection):
def inner(collection='test'):
args = {
'username': user,
'password': pwd,
'url': 'http://{}/remote.php/dav/'.format(port)
}
if collection is not None:
args = slow_create_collection(self.storage_class, args,
collection)
return args
return inner

@@ -1 +0,0 @@
Subproject commit bb4fcc6f524467d58c95f1dcec8470fdfcd65adf


@@ -1,35 +1,15 @@
import pytest
from xandikos.web import XandikosApp, XandikosBackend, WellknownRedirector
import wsgi_intercept
import wsgi_intercept.requests_intercept
class ServerMixin(object):
@pytest.fixture
def get_storage_args(self, request, tmpdir, slow_create_collection):
tmpdir.mkdir('xandikos')
backend = XandikosBackend(path=str(tmpdir))
cup = '/user/'
backend.create_principal(cup, create_defaults=True)
app = XandikosApp(backend, cup)
app = WellknownRedirector(app, '/')
wsgi_intercept.requests_intercept.install()
wsgi_intercept.add_wsgi_intercept('127.0.0.1', 8080, lambda: app)
def teardown():
wsgi_intercept.remove_wsgi_intercept('127.0.0.1', 8080)
wsgi_intercept.requests_intercept.uninstall()
request.addfinalizer(teardown)
def inner(collection='test'):
url = 'http://127.0.0.1:8080/'
args = {'url': url, 'collection': collection}
url = 'http://127.0.0.1:5001/'
args = {'url': url}
if collection is not None:
args = self.storage_class.create_collection(**args)
args = slow_create_collection(self.storage_class, args,
collection)
return args
return inner


@@ -1,13 +0,0 @@
#!/bin/sh
set -e
pip install wsgi_intercept
if [ "$REQUIREMENTS" = "release" ] || [ "$REQUIREMENTS" = "minimal" ]; then
pip install -U xandikos
elif [ "$REQUIREMENTS" = "devel" ]; then
pip install -U git+https://github.com/jelmer/xandikos
else
echo "Invalid REQUIREMENTS value"
false
fi


@@ -1,13 +1,11 @@
# -*- coding: utf-8 -*-
import subprocess
import pytest
from vdirsyncer.storage.filesystem import FilesystemStorage
from vdirsyncer.vobject import Item
from . import StorageTests
from tests import format_item
class TestFilesystemStorage(StorageTests):
@@ -29,54 +27,22 @@ class TestFilesystemStorage(StorageTests):
f.write('stub')
self.storage_class(str(tmpdir) + '/hue', '.txt')
def test_broken_data(self, tmpdir):
s = self.storage_class(str(tmpdir), '.txt')
class BrokenItem(object):
raw = u'Ц, Ш, Л, ж, Д, З, Ю'.encode('utf-8')
uid = 'jeezus'
ident = uid
with pytest.raises(TypeError):
s.upload(BrokenItem)
assert not tmpdir.listdir()
def test_ident_with_slash(self, tmpdir):
s = self.storage_class(str(tmpdir), '.txt')
s.upload(Item(u'UID:a/b/c'))
s.upload(format_item('a/b/c'))
item_file, = tmpdir.listdir()
assert '/' not in item_file.basename and item_file.isfile()
def test_too_long_uid(self, tmpdir):
s = self.storage_class(str(tmpdir), '.txt')
item = Item(u'UID:' + u'hue' * 600)
item = format_item('hue' * 600)
href, etag = s.upload(item)
assert item.uid not in href
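Both tests depend on hrefs being sanitized: a `/` in the ident or an overlong UID must not leak into the filename. A plausible sketch of such a generator (the real `vdirsyncer.utils.generate_href` may differ in its rules and hash choice):

```python
import hashlib

def safe_href(ident, fileext='.txt', max_len=64):
    """Return a filesystem-safe href; hash the ident when it is unsafe."""
    if '/' in ident or len(ident) > max_len or not ident.strip():
        # Replace unsafe idents with a fixed-length hex digest.
        ident = hashlib.sha256(ident.encode('utf-8')).hexdigest()
    return ident + fileext

href = safe_href('a/b/c')
```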
def test_post_hook_inactive(self, tmpdir, monkeypatch):
def check_call_mock(*args, **kwargs):
assert False
monkeypatch.setattr(subprocess, 'call', check_call_mock)
s = self.storage_class(str(tmpdir), '.txt', post_hook=None)
s.upload(Item(u'UID:a/b/c'))
def test_post_hook_active(self, tmpdir, monkeypatch):
calls = []
exe = 'foo'
def check_call_mock(l, *args, **kwargs):
calls.append(True)
assert len(l) == 2
assert l[0] == exe
monkeypatch.setattr(subprocess, 'call', check_call_mock)
s = self.storage_class(str(tmpdir), '.txt', post_hook=exe)
s.upload(Item(u'UID:a/b/c'))
assert calls
def test_post_hook_active(self, tmpdir):
s = self.storage_class(str(tmpdir), '.txt', post_hook='rm')
s.upload(format_item('a/b/c'))
assert not list(s.list())
def test_ignore_git_dirs(self, tmpdir):
tmpdir.mkdir('.git').mkdir('foo')


@@ -1,123 +0,0 @@
# -*- coding: utf-8 -*-
import pytest
from requests import Response
from tests import normalize_item
from vdirsyncer.exceptions import UserError
from vdirsyncer.storage.http import HttpStorage, prepare_auth
def test_list(monkeypatch):
collection_url = 'http://127.0.0.1/calendar/collection.ics'
items = [
(u'BEGIN:VEVENT\n'
u'SUMMARY:Eine Kurzinfo\n'
u'DESCRIPTION:Beschreibung des Termines\n'
u'END:VEVENT'),
(u'BEGIN:VEVENT\n'
u'SUMMARY:Eine zweite Küèrzinfo\n'
u'DESCRIPTION:Beschreibung des anderen Termines\n'
u'BEGIN:VALARM\n'
u'ACTION:AUDIO\n'
u'TRIGGER:19980403T120000\n'
u'ATTACH;FMTTYPE=audio/basic:http://host.com/pub/ssbanner.aud\n'
u'REPEAT:4\n'
u'DURATION:PT1H\n'
u'END:VALARM\n'
u'END:VEVENT')
]
responses = [
u'\n'.join([u'BEGIN:VCALENDAR'] + items + [u'END:VCALENDAR'])
] * 2
def get(self, method, url, *a, **kw):
assert method == 'GET'
assert url == collection_url
r = Response()
r.status_code = 200
assert responses
r._content = responses.pop().encode('utf-8')
r.headers['Content-Type'] = 'text/calendar'
r.encoding = 'ISO-8859-1'
return r
monkeypatch.setattr('requests.sessions.Session.request', get)
s = HttpStorage(url=collection_url)
found_items = {}
for href, etag in s.list():
item, etag2 = s.get(href)
assert item.uid is not None
assert etag2 == etag
found_items[normalize_item(item)] = href
expected = set(normalize_item(u'BEGIN:VCALENDAR\n' + x + '\nEND:VCALENDAR')
for x in items)
assert set(found_items) == expected
for href, etag in s.list():
item, etag2 = s.get(href)
assert item.uid is not None
assert etag2 == etag
assert found_items[normalize_item(item)] == href
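This now-removed `test_list` (the logic moved to Rust in #730) expected the storage to split a downloaded VCALENDAR into one item per top-level VEVENT. That splitting step looks roughly like the following simplified sketch; real parsing also handles wrapping each component back into a VCALENDAR:

```python
def split_vcalendar(raw, component='VEVENT'):
    """Extract top-level components of the given type from an iCalendar blob."""
    items, current = [], None
    for line in raw.splitlines():
        if line == 'BEGIN:' + component:
            current = [line]
        elif line == 'END:' + component:
            current.append(line)
            items.append('\n'.join(current))
            current = None
        elif current is not None:
            current.append(line)  # includes nested blocks such as VALARM
    return items

raw = ('BEGIN:VCALENDAR\n'
       'BEGIN:VEVENT\nSUMMARY:one\nEND:VEVENT\n'
       'BEGIN:VEVENT\nSUMMARY:two\n'
       'BEGIN:VALARM\nACTION:AUDIO\nEND:VALARM\n'
       'END:VEVENT\n'
       'END:VCALENDAR')
```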
def test_readonly_param():
url = 'http://example.com/'
with pytest.raises(ValueError):
HttpStorage(url=url, read_only=False)
a = HttpStorage(url=url, read_only=True).read_only
b = HttpStorage(url=url, read_only=None).read_only
assert a is b is True
def test_prepare_auth():
assert prepare_auth(None, '', '') is None
assert prepare_auth(None, 'user', 'pwd') == ('user', 'pwd')
assert prepare_auth('basic', 'user', 'pwd') == ('user', 'pwd')
with pytest.raises(ValueError) as excinfo:
assert prepare_auth('basic', '', 'pwd')
assert 'you need to specify username and password' in \
str(excinfo.value).lower()
from requests.auth import HTTPDigestAuth
assert isinstance(prepare_auth('digest', 'user', 'pwd'),
HTTPDigestAuth)
with pytest.raises(ValueError) as excinfo:
prepare_auth('ladida', 'user', 'pwd')
assert 'unknown authentication method' in str(excinfo.value).lower()
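`prepare_auth` (also removed with the Rust port) encodes a small dispatch table; its behavior, as pinned down by the assertions above, can be sketched without requests (plain tuples stand in for `HTTPDigestAuth` and friends):

```python
def prepare_auth(auth, username, password):
    """Sketch of the dispatch the removed tests exercised."""
    if username and password:
        if auth in (None, 'basic'):
            return (username, password)            # plain HTTP basic auth
        elif auth == 'digest':
            return ('digest', username, password)  # stands in for HTTPDigestAuth
        else:
            raise ValueError('Unknown authentication method: {}'.format(auth))
    elif auth:
        raise ValueError('You need to specify username and password')
    return None
```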
def test_prepare_auth_guess(monkeypatch):
import requests_toolbelt.auth.guess
assert isinstance(prepare_auth('guess', 'user', 'pwd'),
requests_toolbelt.auth.guess.GuessAuth)
monkeypatch.delattr(requests_toolbelt.auth.guess, 'GuessAuth')
with pytest.raises(UserError) as excinfo:
prepare_auth('guess', 'user', 'pwd')
assert 'requests_toolbelt is too old' in str(excinfo.value).lower()
def test_verify_false_disallowed():
with pytest.raises(ValueError) as excinfo:
HttpStorage(url='http://example.com', verify=False)
assert 'forbidden' in str(excinfo.value).lower()
assert 'consider setting verify_fingerprint' in str(excinfo.value).lower()


@@ -1,81 +0,0 @@
# -*- coding: utf-8 -*-
import pytest
from requests import Response
import vdirsyncer.storage.http
from vdirsyncer.storage.base import Storage
from vdirsyncer.storage.singlefile import SingleFileStorage
from . import StorageTests
class CombinedStorage(Storage):
'''A subclass of HttpStorage to make testing easier. It supports writes via
SingleFileStorage.'''
_repr_attributes = ('url', 'path')
storage_name = 'http_and_singlefile'
def __init__(self, url, path, **kwargs):
if kwargs.get('collection', None) is not None:
raise ValueError()
super(CombinedStorage, self).__init__(**kwargs)
self.url = url
self.path = path
self._reader = vdirsyncer.storage.http.HttpStorage(url=url)
self._reader._ignore_uids = False
self._writer = SingleFileStorage(path=path)
def list(self, *a, **kw):
return self._reader.list(*a, **kw)
def get(self, *a, **kw):
self.list()
return self._reader.get(*a, **kw)
def upload(self, *a, **kw):
return self._writer.upload(*a, **kw)
def update(self, *a, **kw):
return self._writer.update(*a, **kw)
def delete(self, *a, **kw):
return self._writer.delete(*a, **kw)
class TestHttpStorage(StorageTests):
storage_class = CombinedStorage
supports_collections = False
supports_metadata = False
@pytest.fixture(autouse=True)
def setup_tmpdir(self, tmpdir, monkeypatch):
self.tmpfile = str(tmpdir.ensure('collection.txt'))
def _request(method, url, *args, **kwargs):
assert method == 'GET'
assert url == 'http://localhost:123/collection.txt'
assert 'vdirsyncer' in kwargs['headers']['User-Agent']
r = Response()
r.status_code = 200
try:
with open(self.tmpfile, 'rb') as f:
r._content = f.read()
except IOError:
r._content = b''
r.headers['Content-Type'] = 'text/calendar'
r.encoding = 'utf-8'
return r
monkeypatch.setattr(vdirsyncer.storage.http, 'request', _request)
@pytest.fixture
def get_storage_args(self):
def inner(collection=None):
assert collection is None
return {'url': 'http://localhost:123/collection.txt',
'path': self.tmpfile}
return inner


@@ -1,9 +1,7 @@
import pytest
import json
from textwrap import dedent
import hypothesis.strategies as st
from hypothesis import given
from vdirsyncer import exceptions
from vdirsyncer.storage.base import Storage
@@ -176,7 +174,8 @@ def test_null_collection_with_named_collection(tmpdir, runner):
assert 'HAHA' in bar.read()
@given(a_requires=st.booleans(), b_requires=st.booleans())
@pytest.mark.parametrize('a_requires,b_requires',
[(x, y) for x in (0, 1) for y in (0, 1)])
def test_collection_required(a_requires, b_requires, tmpdir, runner,
monkeypatch):


@@ -62,7 +62,7 @@ def test_repair_uids(storage, runner, repair_uids):
assert 'UID or href is unsafe, assigning random UID' in result.output
assert not f.exists()
new_f, = storage.listdir()
s = new_f.read()
s = new_f.read().strip()
assert s.startswith('BEGIN:VCARD')
assert s.endswith('END:VCARD')


@@ -4,11 +4,10 @@ import json
import sys
from textwrap import dedent
import hypothesis.strategies as st
from hypothesis import example, given
import pytest
from tests import format_item
def test_simple_run(tmpdir, runner):
runner.write_with_general(dedent('''
@@ -37,10 +36,12 @@ def test_simple_run(tmpdir, runner):
result = runner.invoke(['sync'])
assert not result.exception
tmpdir.join('path_a/haha.txt').write('UID:haha')
item = format_item('haha')
tmpdir.join('path_a/haha.txt').write(item.raw)
result = runner.invoke(['sync'])
assert 'Copying (uploading) item haha to my_b' in result.output
assert tmpdir.join('path_b/haha.txt').read() == 'UID:haha'
assert tmpdir.join('path_b/haha.txt').read().splitlines() == \
item.raw.splitlines()
def test_sync_inexistant_pair(tmpdir, runner):
@@ -109,7 +110,8 @@ def test_empty_storage(tmpdir, runner):
result = runner.invoke(['sync'])
assert not result.exception
tmpdir.join('path_a/haha.txt').write('UID:haha')
item = format_item('haha')
tmpdir.join('path_a/haha.txt').write(item.raw)
result = runner.invoke(['sync'])
assert not result.exception
tmpdir.join('path_b/haha.txt').remove()
@@ -152,7 +154,7 @@ def test_collections_cache_invalidation(tmpdir, runner):
collections = ["a", "b", "c"]
''').format(str(tmpdir)))
foo.join('a/itemone.txt').write('UID:itemone')
foo.join('a/itemone.txt').write(format_item('itemone').raw)
result = runner.invoke(['discover'])
assert not result.exception
@@ -271,25 +273,13 @@ def test_multiple_pairs(tmpdir, runner):
# XXX: https://github.com/pimutils/vdirsyncer/issues/617
@pytest.mark.skipif(sys.platform == 'darwin',
reason='This test inexplicably fails')
@given(collections=st.sets(
st.text(
st.characters(
blacklist_characters=set(
u'./\x00' # Invalid chars on POSIX filesystems
),
# Surrogates can't be encoded to utf-8 in Python
blacklist_categories=set(['Cs'])
),
min_size=1,
max_size=50
),
min_size=1
))
@example(collections=[u'persönlich'])
@example(collections={'a', 'A'})
@example(collections={'\ufffe'})
@pytest.mark.xfail(sys.platform == 'darwin',
reason='This test inexplicably fails')
@pytest.mark.parametrize('collections', [
{'persönlich'},
{'a', 'A'},
{'\ufffe'},
])
def test_create_collections(subtest, collections):
@subtest
@@ -347,9 +337,10 @@ def test_ident_conflict(tmpdir, runner):
foo = tmpdir.mkdir('foo')
tmpdir.mkdir('bar')
foo.join('one.txt').write('UID:1')
foo.join('two.txt').write('UID:1')
foo.join('three.txt').write('UID:1')
item = format_item('1')
foo.join('one.txt').write(item.raw)
foo.join('two.txt').write(item.raw)
foo.join('three.txt').write(item.raw)
result = runner.invoke(['discover'])
assert not result.exception
@@ -403,17 +394,16 @@ def test_no_configured_pairs(tmpdir, runner, cmd):
assert result.exception.code == 5
@pytest.mark.parametrize('resolution,expect_foo,expect_bar', [
(['command', 'cp'], 'UID:lol\nfööcontent', 'UID:lol\nfööcontent')
])
def test_conflict_resolution(tmpdir, runner, resolution, expect_foo,
expect_bar):
def test_conflict_resolution(tmpdir, runner):
item_a = format_item('lol')
item_b = format_item('lol')
runner.write_with_general(dedent('''
[pair foobar]
a = "foo"
b = "bar"
collections = null
conflict_resolution = {val}
conflict_resolution = ["command", "cp"]
[storage foo]
type = "filesystem"
@@ -424,14 +414,14 @@ def test_conflict_resolution(tmpdir, runner, resolution, expect_foo,
type = "filesystem"
fileext = ".txt"
path = "{base}/bar"
'''.format(base=str(tmpdir), val=json.dumps(resolution))))
'''.format(base=str(tmpdir))))
foo = tmpdir.join('foo')
bar = tmpdir.join('bar')
fooitem = foo.join('lol.txt').ensure()
fooitem.write('UID:lol\nfööcontent')
fooitem.write(item_a.raw)
baritem = bar.join('lol.txt').ensure()
baritem.write('UID:lol\nbööcontent')
baritem.write(item_b.raw)
r = runner.invoke(['discover'])
assert not r.exception
@@ -439,8 +429,8 @@ def test_conflict_resolution(tmpdir, runner, resolution, expect_foo,
r = runner.invoke(['sync'])
assert not r.exception
assert fooitem.read() == expect_foo
assert baritem.read() == expect_bar
assert fooitem.read().splitlines() == item_a.raw.splitlines()
assert baritem.read().splitlines() == item_a.raw.splitlines()
@pytest.mark.parametrize('partial_sync', ['error', 'ignore', 'revert', None])
@@ -471,11 +461,12 @@ def test_partial_sync(tmpdir, runner, partial_sync):
foo = tmpdir.mkdir('foo')
bar = tmpdir.mkdir('bar')
foo.join('other.txt').write('UID:other')
bar.join('other.txt').write('UID:other')
item = format_item('other')
foo.join('other.txt').write(item.raw)
bar.join('other.txt').write(item.raw)
baritem = bar.join('lol.txt')
baritem.write('UID:lol')
baritem.write(format_item('lol').raw)
r = runner.invoke(['discover'])
assert not r.exception


@@ -1,6 +1,9 @@
import os
import pytest
from io import StringIO
from textwrap import dedent
from vdirsyncer.cli.config import _resolve_conflict_via_command
from vdirsyncer.cli.config import Config, _resolve_conflict_via_command
from vdirsyncer.vobject import Item
@@ -22,3 +25,26 @@ def test_conflict_resolution_command():
a, b, ['~/command'], 'a', 'b',
_check_call=check_call
).raw == a.raw
def test_config_reader_invalid_collections():
s = StringIO(dedent('''
[general]
status_path = "foo"
[storage foo]
type = "memory"
[storage bar]
type = "memory"
[pair foobar]
a = "foo"
b = "bar"
collections = [["a", "b", "c", "d"]]
''').strip())
with pytest.raises(ValueError) as excinfo:
Config.from_fileobject(s)
assert 'Expected list of format' in str(excinfo.value)
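The new test rejects a four-element `collections` entry. The validation it exercises presumably accepts either a bare collection name or an `["alias", "name_a", "name_b"]` triple, roughly:

```python
def validate_collection_entry(entry):
    """Accept 'name' or ['alias', 'a_name', 'b_name']; reject anything else."""
    if isinstance(entry, str):
        return entry, entry, entry
    if isinstance(entry, list) and len(entry) == 3 \
            and all(isinstance(x, str) for x in entry):
        return tuple(entry)
    raise ValueError(
        'Expected list of format ["config_name", "name_a", "name_b"], '
        'got: {!r}'.format(entry))
```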


@@ -8,7 +8,7 @@ from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule
import pytest
from tests import blow_up, uid_strategy
from tests import blow_up, format_item, uid_strategy
from vdirsyncer.storage.memory import MemoryStorage, _random_string
from vdirsyncer.sync import sync as _sync
@@ -49,7 +49,7 @@ def test_missing_status():
a = MemoryStorage()
b = MemoryStorage()
status = {}
item = Item(u'asdf')
item = format_item('asdf')
a.upload(item)
b.upload(item)
sync(a, b, status)
@@ -62,8 +62,8 @@ def test_missing_status_and_different_items():
b = MemoryStorage()
status = {}
item1 = Item(u'UID:1\nhaha')
item2 = Item(u'UID:1\nhoho')
item1 = format_item('1')
item2 = format_item('1')
a.upload(item1)
b.upload(item2)
with pytest.raises(SyncConflict):
@@ -79,8 +79,8 @@ def test_read_only_and_prefetch():
b.read_only = True
status = {}
item1 = Item(u'UID:1\nhaha')
item2 = Item(u'UID:2\nhoho')
item1 = format_item('1')
item2 = format_item('2')
a.upload(item1)
a.upload(item2)
@@ -95,7 +95,8 @@ def test_partial_sync_error():
b = MemoryStorage()
status = {}
a.upload(Item('UID:0'))
item = format_item('0')
a.upload(item)
b.read_only = True
with pytest.raises(PartialSync):
@@ -107,13 +108,13 @@ def test_partial_sync_ignore():
b = MemoryStorage()
status = {}
item0 = Item('UID:0\nhehe')
item0 = format_item('0')
a.upload(item0)
b.upload(item0)
b.read_only = True
item1 = Item('UID:1\nhaha')
item1 = format_item('1')
a.upload(item1)
sync(a, b, status, partial_sync='ignore')
@@ -128,23 +129,25 @@ def test_partial_sync_ignore2():
b = MemoryStorage()
status = {}
href, etag = a.upload(Item('UID:0'))
item = format_item('0')
href, etag = a.upload(item)
a.read_only = True
sync(a, b, status, partial_sync='ignore', force_delete=True)
assert items(b) == items(a) == {'UID:0'}
assert items(b) == items(a) == {item.raw}
b.items.clear()
sync(a, b, status, partial_sync='ignore', force_delete=True)
sync(a, b, status, partial_sync='ignore', force_delete=True)
assert items(a) == {'UID:0'}
assert items(a) == {item.raw}
assert not b.items
a.read_only = False
a.update(href, Item('UID:0\nupdated'), etag)
new_item = format_item('0')
a.update(href, new_item, etag)
a.read_only = True
sync(a, b, status, partial_sync='ignore', force_delete=True)
assert items(b) == items(a) == {'UID:0\nupdated'}
assert items(b) == items(a) == {new_item.raw}
def test_upload_and_update():
@ -152,22 +155,22 @@ def test_upload_and_update():
b = MemoryStorage(fileext='.b')
status = {}
item = Item(u'UID:1') # new item 1 in a
item = format_item('1') # new item 1 in a
a.upload(item)
sync(a, b, status)
assert items(b) == items(a) == {item.raw}
item = Item(u'UID:1\nASDF:YES') # update of item 1 in b
item = format_item('1') # update of item 1 in b
b.update('1.b', item, b.get('1.b')[1])
sync(a, b, status)
assert items(b) == items(a) == {item.raw}
item2 = Item(u'UID:2') # new item 2 in b
item2 = format_item('2') # new item 2 in b
b.upload(item2)
sync(a, b, status)
assert items(b) == items(a) == {item.raw, item2.raw}
item2 = Item(u'UID:2\nASDF:YES') # update of item 2 in a
item2 = format_item('2') # update of item 2 in a
a.update('2.a', item2, a.get('2.a')[1])
sync(a, b, status)
assert items(b) == items(a) == {item.raw, item2.raw}
@ -178,9 +181,9 @@ def test_deletion():
b = MemoryStorage(fileext='.b')
status = {}
item = Item(u'UID:1')
item = format_item('1')
a.upload(item)
item2 = Item(u'UID:2')
item2 = format_item('2')
a.upload(item2)
sync(a, b, status)
b.delete('1.b', b.get('1.b')[1])
@ -200,14 +203,14 @@ def test_insert_hash():
b = MemoryStorage()
status = {}
item = Item('UID:1')
item = format_item('1')
href, etag = a.upload(item)
sync(a, b, status)
for d in status['1']:
del d['hash']
a.update(href, Item('UID:1\nHAHA:YES'), etag)
a.update(href, format_item('1'), etag) # new item content
sync(a, b, status)
assert 'hash' in status['1'][0] and 'hash' in status['1'][1]
@ -215,7 +218,7 @@ def test_insert_hash():
def test_already_synced():
a = MemoryStorage(fileext='.a')
b = MemoryStorage(fileext='.b')
item = Item(u'UID:1')
item = format_item('1')
a.upload(item)
b.upload(item)
status = {
@ -243,14 +246,14 @@ def test_already_synced():
def test_conflict_resolution_both_etags_new(winning_storage):
a = MemoryStorage()
b = MemoryStorage()
item = Item(u'UID:1')
item = format_item('1')
href_a, etag_a = a.upload(item)
href_b, etag_b = b.upload(item)
status = {}
sync(a, b, status)
assert status
item_a = Item(u'UID:1\nitem a')
item_b = Item(u'UID:1\nitem b')
item_a = format_item('1')
item_b = format_item('1')
a.update(href_a, item_a, etag_a)
b.update(href_b, item_b, etag_b)
with pytest.raises(SyncConflict):
@ -264,13 +267,14 @@ def test_conflict_resolution_both_etags_new(winning_storage):
def test_updated_and_deleted():
a = MemoryStorage()
b = MemoryStorage()
href_a, etag_a = a.upload(Item(u'UID:1'))
item = format_item('1')
href_a, etag_a = a.upload(item)
status = {}
sync(a, b, status, force_delete=True)
(href_b, etag_b), = b.list()
b.delete(href_b, etag_b)
updated = Item(u'UID:1\nupdated')
updated = format_item('1')
a.update(href_a, updated, etag_a)
sync(a, b, status, force_delete=True)
@ -280,8 +284,8 @@ def test_updated_and_deleted():
def test_conflict_resolution_invalid_mode():
a = MemoryStorage()
b = MemoryStorage()
item_a = Item(u'UID:1\nitem a')
item_b = Item(u'UID:1\nitem b')
item_a = format_item('1')
item_b = format_item('1')
a.upload(item_a)
b.upload(item_b)
with pytest.raises(ValueError):
@ -291,7 +295,7 @@ def test_conflict_resolution_invalid_mode():
def test_conflict_resolution_new_etags_without_changes():
a = MemoryStorage()
b = MemoryStorage()
item = Item(u'UID:1')
item = format_item('1')
href_a, etag_a = a.upload(item)
href_b, etag_b = b.upload(item)
status = {'1': (href_a, 'BOGUS_a', href_b, 'BOGUS_b')}
@ -326,7 +330,7 @@ def test_uses_get_multi(monkeypatch):
a = MemoryStorage()
b = MemoryStorage()
item = Item(u'UID:1')
item = format_item('1')
expected_href, etag = a.upload(item)
sync(a, b, {})
@ -336,8 +340,8 @@ def test_uses_get_multi(monkeypatch):
def test_empty_storage_dataloss():
a = MemoryStorage()
b = MemoryStorage()
a.upload(Item(u'UID:1'))
a.upload(Item(u'UID:2'))
for i in '12':
a.upload(format_item(i))
status = {}
sync(a, b, status)
with pytest.raises(StorageEmpty):
@ -350,22 +354,24 @@ def test_empty_storage_dataloss():
def test_no_uids():
a = MemoryStorage()
b = MemoryStorage()
a.upload(Item(u'ASDF'))
b.upload(Item(u'FOOBAR'))
item_a = format_item('')
item_b = format_item('')
a.upload(item_a)
b.upload(item_b)
status = {}
sync(a, b, status)
assert items(a) == items(b) == {u'ASDF', u'FOOBAR'}
assert items(a) == items(b) == {item_a.raw, item_b.raw}
def test_changed_uids():
a = MemoryStorage()
b = MemoryStorage()
href_a, etag_a = a.upload(Item(u'UID:A-ONE'))
href_b, etag_b = b.upload(Item(u'UID:B-ONE'))
href_a, etag_a = a.upload(format_item('a1'))
href_b, etag_b = b.upload(format_item('b1'))
status = {}
sync(a, b, status)
a.update(href_a, Item(u'UID:A-TWO'), etag_a)
a.update(href_a, format_item('a2'), etag_a)
sync(a, b, status)
@ -383,34 +389,37 @@ def test_partial_sync_revert():
a = MemoryStorage(instance_name='a')
b = MemoryStorage(instance_name='b')
status = {}
a.upload(Item(u'UID:1'))
b.upload(Item(u'UID:2'))
item1 = format_item('1')
item2 = format_item('2')
a.upload(item1)
b.upload(item2)
b.read_only = True
sync(a, b, status, partial_sync='revert')
assert len(status) == 2
assert items(a) == {'UID:1', 'UID:2'}
assert items(b) == {'UID:2'}
assert items(a) == {item1.raw, item2.raw}
assert items(b) == {item2.raw}
sync(a, b, status, partial_sync='revert')
assert len(status) == 1
assert items(a) == {'UID:2'}
assert items(b) == {'UID:2'}
assert items(a) == {item2.raw}
assert items(b) == {item2.raw}
# Check that updates get reverted
a.items[next(iter(a.items))] = ('foo', Item('UID:2\nupdated'))
assert items(a) == {'UID:2\nupdated'}
item2_up = format_item('2')
a.items[next(iter(a.items))] = ('foo', item2_up)
assert items(a) == {item2_up.raw}
sync(a, b, status, partial_sync='revert')
assert len(status) == 1
assert items(a) == {'UID:2\nupdated'}
assert items(a) == {item2_up.raw}
sync(a, b, status, partial_sync='revert')
assert items(a) == {'UID:2'}
assert items(a) == {item2.raw}
# Check that deletions get reverted
a.items.clear()
sync(a, b, status, partial_sync='revert', force_delete=True)
sync(a, b, status, partial_sync='revert', force_delete=True)
assert items(a) == {'UID:2'}
assert items(a) == {item2.raw}
@pytest.mark.parametrize('sync_inbetween', (True, False))
@ -418,13 +427,16 @@ def test_ident_conflict(sync_inbetween):
a = MemoryStorage()
b = MemoryStorage()
status = {}
href_a, etag_a = a.upload(Item(u'UID:aaa'))
href_b, etag_b = a.upload(Item(u'UID:bbb'))
item_a = format_item('aaa')
item_b = format_item('bbb')
href_a, etag_a = a.upload(item_a)
href_b, etag_b = a.upload(item_b)
if sync_inbetween:
sync(a, b, status)
a.update(href_a, Item(u'UID:xxx'), etag_a)
a.update(href_b, Item(u'UID:xxx'), etag_b)
item_x = format_item('xxx')
a.update(href_a, item_x, etag_a)
a.update(href_b, item_x, etag_b)
with pytest.raises(IdentConflict):
sync(a, b, status)
@ -441,7 +453,8 @@ def test_moved_href():
a = MemoryStorage()
b = MemoryStorage()
status = {}
href, etag = a.upload(Item(u'UID:haha'))
item = format_item('haha')
href, etag = a.upload(item)
sync(a, b, status)
b.items['lol'] = b.items.pop('haha')
@ -454,7 +467,7 @@ def test_moved_href():
sync(a, b, status)
assert len(status) == 1
assert items(a) == items(b) == {'UID:haha'}
assert items(a) == items(b) == {item.raw}
assert status['haha'][1]['href'] == 'lol'
old_status = deepcopy(status)
@ -463,7 +476,7 @@ def test_moved_href():
sync(a, b, status)
assert old_status == status
assert items(a) == items(b) == {'UID:haha'}
assert items(a) == items(b) == {item.raw}
def test_bogus_etag_change():
@ -476,26 +489,31 @@ def test_bogus_etag_change():
a = MemoryStorage()
b = MemoryStorage()
status = {}
href_a, etag_a = a.upload(Item(u'UID:ASDASD'))
sync(a, b, status)
assert len(status) == len(list(a.list())) == len(list(b.list())) == 1
item = format_item('ASDASD')
href_a, etag_a = a.upload(item)
sync(a, b, status)
assert len(status) == 1
assert items(a) == items(b) == {item.raw}
new_item = format_item('ASDASD')
(href_b, etag_b), = b.list()
a.update(href_a, Item(u'UID:ASDASD'), etag_a)
b.update(href_b, Item(u'UID:ASDASD\nACTUALCHANGE:YES'), etag_b)
a.update(href_a, item, etag_a)
b.update(href_b, new_item, etag_b)
b.delete = b.update = b.upload = blow_up
sync(a, b, status)
assert len(status) == 1
assert items(a) == items(b) == {u'UID:ASDASD\nACTUALCHANGE:YES'}
assert items(a) == items(b) == {new_item.raw}
def test_unicode_hrefs():
a = MemoryStorage()
b = MemoryStorage()
status = {}
href, etag = a.upload(Item(u'UID:äää'))
item = format_item('äää')
href, etag = a.upload(item)
sync(a, b, status)
@ -565,7 +583,7 @@ class SyncMachine(RuleBasedStateMachine):
uid=uid_strategy,
etag=st.text())
def upload(self, storage, uid, etag):
item = Item(u'UID:{}'.format(uid))
item = Item('BEGIN:VCARD\r\nUID:{}\r\nEND:VCARD'.format(uid))
storage.items[uid] = (etag, item)
@rule(storage=Storage, href=st.text())
@ -643,8 +661,8 @@ def test_rollback(error_callback):
b = MemoryStorage()
status = {}
a.items['0'] = ('', Item('UID:0'))
b.items['1'] = ('', Item('UID:1'))
a.items['0'] = ('', format_item('0'))
b.items['1'] = ('', format_item('1'))
b.upload = b.update = b.delete = action_failure
@ -668,7 +686,7 @@ def test_duplicate_hrefs():
a = MemoryStorage()
b = MemoryStorage()
a.list = lambda: [('a', 'a')] * 3
a.items['a'] = ('a', Item('UID:a'))
a.items['a'] = ('a', format_item('a'))
status = {}
sync(a, b, status)
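Throughout this file, literal `Item(u'UID:…')` constructions are replaced by a `format_item` helper imported from `tests`. The diff itself does not show that helper; a minimal sketch of what it plausibly does, based on the full-vCard template used in `SyncMachine.upload` above (the `X-NONCE` property name is an assumption, and the real helper presumably wraps the string in `vdirsyncer.vobject.Item`):

```python
import random
import string


def format_item(uid):
    # Wrap the given UID in a minimal, parseable vCard instead of a bare
    # "UID:x" fragment. A random property makes two calls with the same
    # UID produce distinct raw content, which the conflict tests above
    # (e.g. test_missing_status_and_different_items) rely on.
    nonce = ''.join(random.choice(string.ascii_lowercase) for _ in range(8))
    return (
        'BEGIN:VCARD\r\n'
        'UID:{uid}\r\n'
        'X-NONCE:{nonce}\r\n'
        'END:VCARD'
    ).format(uid=uid, nonce=nonce)
```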


@ -2,7 +2,7 @@ from vdirsyncer import exceptions
def test_user_error_problems():
e = exceptions.UserError('A few problems occured', problems=[
e = exceptions.UserError('A few problems occurred', problems=[
'Problem one',
'Problem two',
'Problem three'
@ -11,4 +11,4 @@ def test_user_error_problems():
assert 'one' in str(e)
assert 'two' in str(e)
assert 'three' in str(e)
assert 'problems occured' in str(e)
assert 'problems occurred' in str(e)


@ -38,7 +38,7 @@ def test_repair_uids(uid):
@settings(perform_health_check=False) # Using the random module for UIDs
def test_repair_unsafe_uids(uid):
s = MemoryStorage()
item = Item(u'BEGIN:VCARD\nUID:{}\nEND:VCARD'.format(uid))
item = Item(u'BEGIN:VCARD\nUID:123\nEND:VCARD').with_uid(uid)
href, etag = s.upload(item)
assert s.get(href)[0].uid == uid
assert not href_safe(uid)


@ -9,12 +9,23 @@ from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule
import pytest
from tests import BARE_EVENT_TEMPLATE, EVENT_TEMPLATE, \
EVENT_WITH_TIMEZONE_TEMPLATE, VCARD_TEMPLATE, normalize_item, \
EVENT_WITH_TIMEZONE_TEMPLATE, VCARD_TEMPLATE, \
uid_strategy
import vdirsyncer.vobject as vobject
@pytest.fixture
def check_roundtrip(benchmark):
def inner(split):
joined = benchmark(lambda: vobject.join_collection(split))
split2 = benchmark(lambda: list(vobject.split_collection(joined)))
assert [vobject.Item(item).hash for item in split] == \
[vobject.Item(item).hash for item in split2]
return inner
_simple_split = [
VCARD_TEMPLATE.format(r=123, uid=123),
VCARD_TEMPLATE.format(r=345, uid=345),
@ -28,11 +39,13 @@ _simple_joined = u'\r\n'.join(
)
def test_split_collection_simple(benchmark):
def test_split_collection_simple(benchmark, check_roundtrip):
check_roundtrip(_simple_split)
given = benchmark(lambda: list(vobject.split_collection(_simple_joined)))
assert [normalize_item(item) for item in given] == \
[normalize_item(item) for item in _simple_split]
assert [vobject.Item(item).hash for item in given] == \
[vobject.Item(item).hash for item in _simple_split]
assert [x.splitlines() for x in given] == \
[x.splitlines() for x in _simple_split]
@ -46,9 +59,10 @@ def test_split_collection_multiple_wrappers(benchmark):
for x in _simple_split
)
given = benchmark(lambda: list(vobject.split_collection(joined)))
check_roundtrip(given)
assert [normalize_item(item) for item in given] == \
[normalize_item(item) for item in _simple_split]
assert [vobject.Item(item).hash for item in given] == \
[vobject.Item(item).hash for item in _simple_split]
assert [x.splitlines() for x in given] == \
[x.splitlines() for x in _simple_split]
@ -56,7 +70,7 @@ def test_split_collection_multiple_wrappers(benchmark):
def test_join_collection_simple(benchmark):
given = benchmark(lambda: vobject.join_collection(_simple_split))
assert normalize_item(given) == normalize_item(_simple_joined)
assert vobject.Item(given).hash == vobject.Item(_simple_joined).hash
assert given.splitlines() == _simple_joined.splitlines()
@ -123,12 +137,12 @@ def test_split_collection_timezones():
[timezone, u'END:VCALENDAR']
)
given = set(normalize_item(item)
given = set(vobject.Item(item).hash
for item in vobject.split_collection(full))
expected = set(
normalize_item(u'\r\n'.join((
vobject.Item(u'\r\n'.join((
u'BEGIN:VCALENDAR', item, timezone, u'END:VCALENDAR'
)))
))).hash
for item in items
)
@ -146,11 +160,11 @@ def test_split_contacts():
with_wrapper.splitlines()
def test_hash_item():
def test_hash_item2():
a = EVENT_TEMPLATE.format(r=1, uid=1)
b = u'\n'.join(line for line in a.splitlines()
if u'PRODID' not in line)
assert vobject.hash_item(a) == vobject.hash_item(b)
assert vobject.Item(a).hash == vobject.Item(b).hash
def test_multiline_uid(benchmark):
@ -223,7 +237,7 @@ def test_replace_uid(template, uid):
item = vobject.Item(template.format(r=123, uid=123)).with_uid(uid)
assert item.uid == uid
if uid:
assert item.raw.count('\nUID:{}'.format(uid)) == 1
assert item.raw.count('\nUID:') == 1
else:
assert '\nUID:' not in item.raw
@ -235,7 +249,7 @@ def test_broken_item():
assert 'Parsing error at line 1' in str(excinfo.value)
item = vobject.Item('END:FOO')
assert item.parsed is None
assert not item.is_parseable
def test_multiple_items():
@ -351,3 +365,88 @@ def test_component_contains():
with pytest.raises(ValueError):
42 in item
def test_hash_item():
item1 = vobject.Item(
'BEGIN:FOO\r\n'
'X-RADICALE-NAME:YES\r\n'
'END:FOO\r\n'
)
item2 = vobject.Item(
'BEGIN:FOO\r\n'
'X-RADICALE-NAME:NO\r\n'
'END:FOO\r\n'
)
assert item1.hash == item2.hash
item2 = vobject.Item(
'BEGIN:FOO\r\n'
'X-RADICALE-NAME:NO\r\n'
'OTHER-PROP:YAY\r\n'
'END:FOO\r\n'
)
assert item1.hash != item2.hash
def test_hash_item_timezones():
item1 = vobject.Item(
'BEGIN:VCALENDAR\r\n'
'HELLO:HAHA\r\n'
'BEGIN:VTIMEZONE\r\n'
'PROP:YES\r\n'
'END:VTIMEZONE\r\n'
'END:VCALENDAR\r\n'
)
item2 = vobject.Item(
'BEGIN:VCALENDAR\r\n'
'HELLO:HAHA\r\n'
'END:VCALENDAR\r\n'
)
assert item1.hash == item2.hash
def test_hash_item_line_wrapping():
item1 = vobject.Item(
'BEGIN:VCALENDAR\r\n'
'PROP:a\r\n'
' b\r\n'
' c\r\n'
'END:VCALENDAR\r\n'
)
item2 = vobject.Item(
'BEGIN:VCALENDAR\r\n'
'PROP:abc\r\n'
'END:VCALENDAR\r\n'
)
assert item1.hash == item2.hash
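The line-wrapping test above requires the hash to normalize iCalendar/vCard line folding (RFC 5545: a CRLF followed by a space or tab continues the previous content line). A rough sketch of that unfolding step, assuming the hash is computed over unfolded content:

```python
def unfold_lines(raw):
    # RFC 5545 / RFC 6350 folding: a physical line starting with a
    # space or tab continues the previous content line. Unfolding
    # removes the CRLF plus the single leading whitespace character.
    return raw.replace('\r\n ', '').replace('\r\n\t', '')


folded = 'BEGIN:VCALENDAR\r\nPROP:a\r\n b\r\n c\r\nEND:VCALENDAR\r\n'
flat = 'BEGIN:VCALENDAR\r\nPROP:abc\r\nEND:VCALENDAR\r\n'
assert unfold_lines(folded) == flat
```

This is why `item1` and `item2` in `test_hash_item_line_wrapping` hash equal: after unfolding, their content lines are identical.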
def test_wrapper_properties(check_roundtrip):
raws = [dedent('''
BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
X-WR-CALNAME:hans.gans@gmail.com
X-WR-TIMEZONE:Europe/Vienna
BEGIN:VEVENT
DTSTART;TZID=Europe/Vienna:20171012T153000
DTEND;TZID=Europe/Vienna:20171012T170000
DTSTAMP:20171009T085029Z
UID:test@test.com
STATUS:CONFIRMED
SUMMARY:Test
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
''').strip()]
check_roundtrip(raws)


@ -63,8 +63,9 @@ def _validate_collections_param(collections):
elif isinstance(collection, list):
e = ValueError(
'Expected list of format '
'["config_name", "storage_a_name", "storage_b_name"]'
.format(len(collection)))
'["config_name", "storage_a_name", "storage_b_name"], but '
'found {!r} instead.'
.format(collection))
if len(collection) != 3:
raise e
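For reference, a `collections` value that satisfies this three-element check pairs a config-side name with the collection names on each storage, along these lines (illustrative names, in the same config syntax as the test fixture above):

```ini
[pair foobar]
a = "foo"
b = "bar"
collections = [["from_a", "collection_on_a", "collection_on_b"]]
```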


@ -146,9 +146,9 @@ def handle_cli_error(status_name=None, e=None):
import traceback
tb = traceback.format_tb(tb)
if status_name:
msg = 'Unknown error occured for {}'.format(status_name)
msg = 'Unknown error occurred for {}'.format(status_name)
else:
msg = 'Unknown error occured'
msg = 'Unknown error occurred'
msg += ': {}\nUse `-vdebug` to see the full traceback.'.format(e)
@ -244,6 +244,9 @@ def save_status(base_path, pair, collection=None, data_type=None, data=None):
def storage_class_from_config(config):
config = dict(config)
if 'type' not in config:
raise exceptions.UserError('Missing parameter "type"')
storage_name = config.pop('type')
try:
cls = storage_names[storage_name]


@ -79,3 +79,11 @@ class UnsupportedMetadataError(Error, NotImplementedError):
class CollectionRequired(Error):
'''`collection = null` is not allowed.'''
class VobjectParseError(Error, ValueError):
'''The parsed vobject is invalid.'''
class UnsupportedVobjectError(Error, ValueError):
'''The server rejected the vobject because of its type.'''

vdirsyncer/native.py (new file, 39 lines)

@ -0,0 +1,39 @@
import shippai
from . import exceptions
from ._native import ffi, lib
lib.vdirsyncer_init_logger()
errors = shippai.Shippai(ffi, lib)
def string_rv(c_str):
try:
return ffi.string(c_str).decode('utf-8')
finally:
lib.vdirsyncer_free_str(c_str)
def item_rv(c):
return ffi.gc(c, lib.vdirsyncer_free_item)
def get_error_pointer():
return ffi.new("ShippaiError **")
def check_error(e):
try:
errors.check_exception(e[0])
except errors.Error.ItemNotFound as e:
raise exceptions.NotFoundError(e)
except errors.Error.ItemAlreadyExisting as e:
raise exceptions.AlreadyExistingError(e)
except errors.Error.WrongEtag as e:
raise exceptions.WrongEtagError(e)
except errors.Error.ReadOnly as e:
raise exceptions.ReadOnlyError(e)
except errors.Error.UnsupportedVobject as e:
raise exceptions.UnsupportedVobjectError(e)


@ -40,7 +40,7 @@ def repair_storage(storage, repair_unsafe_uid):
def repair_item(href, item, seen_uids, repair_unsafe_uid):
if item.parsed is None:
if not item.is_parseable:
raise IrreparableItem()
new_item = item


@ -0,0 +1,72 @@
from .. import native
from ..vobject import Item
from functools import partial
class RustStorageMixin:
_native_storage = None
def _native(self, name):
return partial(
getattr(native.lib, 'vdirsyncer_storage_{}'.format(name)),
self._native_storage
)
def list(self):
e = native.get_error_pointer()
listing = self._native('list')(e)
native.check_error(e)
listing = native.ffi.gc(listing,
native.lib.vdirsyncer_free_storage_listing)
while native.lib.vdirsyncer_advance_storage_listing(listing):
href = native.string_rv(
native.lib.vdirsyncer_storage_listing_get_href(listing))
etag = native.string_rv(
native.lib.vdirsyncer_storage_listing_get_etag(listing))
yield href, etag
def get(self, href):
href = href.encode('utf-8')
e = native.get_error_pointer()
result = self._native('get')(href, e)
native.check_error(e)
result = native.ffi.gc(result,
native.lib.vdirsyncer_free_storage_get_result)
item = native.item_rv(result.item)
etag = native.string_rv(result.etag)
return Item(None, _native=item), etag
# FIXME: implement get_multi
def upload(self, item):
e = native.get_error_pointer()
result = self._native('upload')(item._native, e)
native.check_error(e)
result = native.ffi.gc(
result, native.lib.vdirsyncer_free_storage_upload_result)
href = native.string_rv(result.href)
etag = native.string_rv(result.etag)
return href, etag or None
def update(self, href, item, etag):
href = href.encode('utf-8')
etag = etag.encode('utf-8')
e = native.get_error_pointer()
etag = self._native('update')(href, item._native, etag, e)
native.check_error(e)
return native.string_rv(etag) or None
def delete(self, href, etag):
href = href.encode('utf-8')
etag = etag.encode('utf-8')
e = native.get_error_pointer()
self._native('delete')(href, etag, e)
native.check_error(e)
def buffered(self):
self._native('buffered')()
def flush(self):
e = native.get_error_pointer()
self._native('flush')(e)
native.check_error(e)


@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
import contextlib
import functools
from .. import exceptions
@ -198,26 +197,6 @@ class Storage(metaclass=StorageMeta):
'''
raise NotImplementedError()
@contextlib.contextmanager
def at_once(self):
'''A contextmanager that buffers all writes.
Essentially, this::
s.upload(...)
s.update(...)
becomes this::
with s.at_once():
s.upload(...)
s.update(...)
Note that this removes guarantees about which exceptions are returned
when.
'''
yield
def get_meta(self, key):
'''Get metadata value for collection/storage.
@ -240,6 +219,14 @@ class Storage(metaclass=StorageMeta):
raise NotImplementedError('This storage does not support metadata.')
def buffered(self):
'''See documentation in rust/storage/mod.rs'''
pass
def flush(self):
'''See documentation in rust/storage/mod.rs'''
pass
def normalize_meta_value(value):
# `None` is returned by iCloud for empty properties.
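This change replaces the `at_once()` context manager with an explicit `buffered()`/`flush()` pair. A toy storage illustrating the new protocol (names from the diff; the journal-sync bookkeeping mirrors the `_writing_op` decorator and `EtesyncStorage` below, but the class itself is invented for illustration):

```python
class BufferedStorage:
    """Sketch of the buffered()/flush() protocol replacing at_once()."""

    def __init__(self):
        self._buffered = False
        self.journal_syncs = 0
        self.items = {}

    def _sync_journal(self):
        self.journal_syncs += 1

    def upload(self, href, item):
        # Mirrors _writing_op: unbuffered writes sync the journal
        # before and after each operation.
        if not self._buffered:
            self._sync_journal()
        self.items[href] = item
        if not self._buffered:
            self._sync_journal()

    def buffered(self):
        # Start buffering: individual writes stop syncing the journal.
        self._buffered = True

    def flush(self):
        # Commit everything buffered so far in a single journal sync.
        self._sync_journal()


s = BufferedStorage()
s.upload('a', 'ITEM-A')   # unbuffered: syncs before and after
assert s.journal_syncs == 2

s.buffered()
s.upload('b', 'ITEM-B')   # buffered: no journal sync per write
s.upload('c', 'ITEM-C')
s.flush()                 # one sync covering both writes
assert s.journal_syncs == 3
```

Unlike `at_once()`, nothing here restores the unbuffered state on error, which matches the note removed from the old docstring about weakened exception guarantees.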


@ -11,10 +11,10 @@ import requests
from requests.exceptions import HTTPError
from .base import Storage, normalize_meta_value
from .. import exceptions, http, utils
from ._rust import RustStorageMixin
from .. import exceptions, http, native, utils
from ..http import USERAGENT, prepare_auth, \
prepare_client_cert, prepare_verify
from ..vobject import Item
dav_logger = logging.getLogger(__name__)
@ -33,61 +33,6 @@ _path_reserved_chars = frozenset(_generate_path_reserved_chars())
del _generate_path_reserved_chars
def _contains_quoted_reserved_chars(x):
for y in _path_reserved_chars:
if y in x:
dav_logger.debug('Unsafe character: {!r}'.format(y))
return True
return False
def _assert_multistatus_success(r):
# Xandikos returns a multistatus on PUT.
try:
root = _parse_xml(r.content)
except InvalidXMLResponse:
return
for status in root.findall('.//{DAV:}status'):
parts = status.text.strip().split()
try:
st = int(parts[1])
except (ValueError, IndexError):
continue
if st < 200 or st >= 400:
raise HTTPError('Server error: {}'.format(st))
def _normalize_href(base, href):
'''Normalize the href to be a path only relative to hostname and
schema.'''
orig_href = href
if not href:
raise ValueError(href)
x = urlparse.urljoin(base, href)
x = urlparse.urlsplit(x).path
# Encoding issues:
# - https://github.com/owncloud/contacts/issues/581
# - https://github.com/Kozea/Radicale/issues/298
old_x = None
while old_x is None or x != old_x:
if _contains_quoted_reserved_chars(x):
break
old_x = x
x = urlparse.unquote(x)
x = urlparse.quote(x, '/@%:')
if orig_href == x:
dav_logger.debug('Already normalized: {!r}'.format(x))
else:
dav_logger.debug('Normalized URL from {!r} to {!r}'
.format(orig_href, x))
return x
class InvalidXMLResponse(exceptions.InvalidResponse):
pass
@ -126,27 +71,13 @@ def _merge_xml(items):
return rv
def _fuzzy_matches_mimetype(strict, weak):
# different servers give different getcontenttypes:
# "text/vcard", "text/x-vcard", "text/x-vcard; charset=utf-8",
# "text/directory;profile=vCard", "text/directory",
# "text/vcard; charset=utf-8"
if strict is None or weak is None:
return True
mediatype, subtype = strict.split('/')
if subtype in weak:
return True
return False
class Discover(object):
_namespace = None
_resourcetype = None
_homeset_xml = None
_homeset_tag = None
_well_known_uri = None
_collection_xml = b"""
_collection_xml = b"""<?xml version="1.0" encoding="utf-8" ?>
<d:propfind xmlns:d="DAV:">
<d:prop>
<d:resourcetype />
@ -376,10 +307,6 @@ class DAVSession(object):
self._session = requests.session()
@utils.cached_property
def parsed_url(self):
return urlparse.urlparse(self.url)
def request(self, method, path, **kwargs):
url = self.url
if path:
@ -396,7 +323,7 @@ class DAVSession(object):
}
class DAVStorage(Storage):
class DAVStorage(RustStorageMixin, Storage):
# the file extension of items. Useful for testing against radicale.
fileext = None
# mimetype of items
@ -440,203 +367,6 @@ class DAVStorage(Storage):
d = cls.discovery_class(session, kwargs)
return d.create(collection)
def _normalize_href(self, *args, **kwargs):
return _normalize_href(self.session.url, *args, **kwargs)
def _get_href(self, item):
href = utils.generate_href(item.ident)
return self._normalize_href(href + self.fileext)
def _is_item_mimetype(self, mimetype):
return _fuzzy_matches_mimetype(self.item_mimetype, mimetype)
def get(self, href):
((actual_href, item, etag),) = self.get_multi([href])
assert href == actual_href
return item, etag
def get_multi(self, hrefs):
hrefs = set(hrefs)
href_xml = []
for href in hrefs:
if href != self._normalize_href(href):
raise exceptions.NotFoundError(href)
href_xml.append('<D:href>{}</D:href>'.format(href))
if not href_xml:
return ()
data = self.get_multi_template \
.format(hrefs='\n'.join(href_xml)).encode('utf-8')
response = self.session.request(
'REPORT',
'',
data=data,
headers=self.session.get_default_headers()
)
root = _parse_xml(response.content) # etree only can handle bytes
rv = []
hrefs_left = set(hrefs)
for href, etag, prop in self._parse_prop_responses(root):
raw = prop.find(self.get_multi_data_query)
if raw is None:
dav_logger.warning('Skipping {}, the item content is missing.'
.format(href))
continue
raw = raw.text or u''
if isinstance(raw, bytes):
raw = raw.decode(response.encoding)
if isinstance(etag, bytes):
etag = etag.decode(response.encoding)
try:
hrefs_left.remove(href)
except KeyError:
if href in hrefs:
dav_logger.warning('Server sent item twice: {}'
.format(href))
else:
dav_logger.warning('Server sent unsolicited item: {}'
.format(href))
else:
rv.append((href, Item(raw), etag))
for href in hrefs_left:
raise exceptions.NotFoundError(href)
return rv
def _put(self, href, item, etag):
headers = self.session.get_default_headers()
headers['Content-Type'] = self.item_mimetype
if etag is None:
headers['If-None-Match'] = '*'
else:
headers['If-Match'] = etag
response = self.session.request(
'PUT',
href,
data=item.raw.encode('utf-8'),
headers=headers
)
_assert_multistatus_success(response)
# The server may not return an etag under certain conditions:
#
# An origin server MUST NOT send a validator header field (Section
# 7.2), such as an ETag or Last-Modified field, in a successful
# response to PUT unless the request's representation data was saved
# without any transformation applied to the body (i.e., the
# resource's new representation data is identical to the
# representation data received in the PUT request) and the validator
# field value reflects the new representation.
#
# -- https://tools.ietf.org/html/rfc7231#section-4.3.4
#
# In such cases we return a constant etag. The next synchronization
# will then detect an etag change and will download the new item.
etag = response.headers.get('etag', None)
href = self._normalize_href(response.url)
return href, etag
def update(self, href, item, etag):
if etag is None:
raise ValueError('etag must be given and must not be None.')
href, etag = self._put(self._normalize_href(href), item, etag)
return etag
def upload(self, item):
href = self._get_href(item)
return self._put(href, item, None)
def delete(self, href, etag):
href = self._normalize_href(href)
headers = self.session.get_default_headers()
headers.update({
'If-Match': etag
})
self.session.request(
'DELETE',
href,
headers=headers
)
def _parse_prop_responses(self, root, handled_hrefs=None):
if handled_hrefs is None:
handled_hrefs = set()
for response in root.iter('{DAV:}response'):
href = response.find('{DAV:}href')
if href is None:
dav_logger.error('Skipping response, href is missing.')
continue
href = self._normalize_href(href.text)
if href in handled_hrefs:
# Servers that send duplicate hrefs:
# - Zimbra
# https://github.com/pimutils/vdirsyncer/issues/88
# - Davmail
# https://github.com/pimutils/vdirsyncer/issues/144
dav_logger.warning('Skipping identical href: {!r}'
.format(href))
continue
props = response.findall('{DAV:}propstat/{DAV:}prop')
if props is None or not len(props):
dav_logger.debug('Skipping {!r}, properties are missing.'
.format(href))
continue
else:
props = _merge_xml(props)
if props.find('{DAV:}resourcetype/{DAV:}collection') is not None:
dav_logger.debug('Skipping {!r}, is collection.'.format(href))
continue
etag = getattr(props.find('{DAV:}getetag'), 'text', '')
if not etag:
dav_logger.debug('Skipping {!r}, etag property is missing.'
.format(href))
continue
contenttype = getattr(props.find('{DAV:}getcontenttype'),
'text', None)
if not self._is_item_mimetype(contenttype):
dav_logger.debug('Skipping {!r}, {!r} != {!r}.'
.format(href, contenttype,
self.item_mimetype))
continue
handled_hrefs.add(href)
yield href, etag, props
def list(self):
headers = self.session.get_default_headers()
headers['Depth'] = '1'
data = '''<?xml version="1.0" encoding="utf-8" ?>
<D:propfind xmlns:D="DAV:">
<D:prop>
<D:resourcetype/>
<D:getcontenttype/>
<D:getetag/>
</D:prop>
</D:propfind>
'''.encode('utf-8')
# We use a PROPFIND request instead of addressbook-query due to issues
# with Zimbra. See https://github.com/pimutils/vdirsyncer/issues/83
response = self.session.request('PROPFIND', '', data=data,
headers=headers)
root = _parse_xml(response.content)
rv = self._parse_prop_responses(root)
for href, etag, _prop in rv:
yield href, etag
def get_meta(self, key):
try:
tagname, namespace = self._property_table[key]
@ -734,7 +464,7 @@ class CalDAVStorage(DAVStorage):
if not isinstance(item_types, (list, tuple)):
raise exceptions.UserError('item_types must be a list.')
self.item_types = tuple(item_types)
self.item_types = tuple(x.upper() for x in item_types)
if (start_date is None) != (end_date is None):
raise exceptions.UserError('If start_date is given, '
'end_date has to be given too.')
@ -749,81 +479,22 @@ class CalDAVStorage(DAVStorage):
if isinstance(end_date, (bytes, str))
else end_date)
@staticmethod
def _get_list_filters(components, start, end):
if components:
caldavfilter = '''
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="{component}">
{timefilter}
</C:comp-filter>
</C:comp-filter>
'''
if start is not None and end is not None:
start = start.strftime(CALDAV_DT_FORMAT)
end = end.strftime(CALDAV_DT_FORMAT)
timefilter = ('<C:time-range start="{start}" end="{end}"/>'
.format(start=start, end=end))
else:
timefilter = ''
for component in components:
yield caldavfilter.format(component=component,
timefilter=timefilter)
else:
if start is not None and end is not None:
for x in CalDAVStorage._get_list_filters(('VTODO', 'VEVENT'),
start, end):
yield x
def list(self):
caldavfilters = list(self._get_list_filters(
self.item_types,
self.start_date,
self.end_date
))
if not caldavfilters:
# If we don't have any filters (which is the default), taking the
# risk of sending a calendar-query is not necessary. There doesn't
# seem to be a widely-usable way to send calendar-queries with the
# same semantics as a PROPFIND request... so why not use PROPFIND
# instead?
#
# See https://github.com/dmfs/tasks/issues/118 for backstory.
for x in DAVStorage.list(self):
yield x
data = '''<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:"
xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<D:getcontenttype/>
<D:getetag/>
</D:prop>
<C:filter>
{caldavfilter}
</C:filter>
</C:calendar-query>'''
headers = self.session.get_default_headers()
# https://github.com/pimutils/vdirsyncer/issues/166
# The default in CalDAV's calendar-queries is 0, but the examples use
# an explicit value of 1 for querying items. it is extremely unclear in
# the spec which values from WebDAV are actually allowed.
headers['Depth'] = '1'
handled_hrefs = set()
for caldavfilter in caldavfilters:
xml = data.format(caldavfilter=caldavfilter).encode('utf-8')
response = self.session.request('REPORT', '', data=xml,
headers=headers)
root = _parse_xml(response.content)
rv = self._parse_prop_responses(root, handled_hrefs)
for href, etag, _prop in rv:
yield href, etag
self._native_storage = native.ffi.gc(
native.lib.vdirsyncer_init_caldav(
kwargs['url'].encode('utf-8'),
kwargs.get('username', '').encode('utf-8'),
kwargs.get('password', '').encode('utf-8'),
kwargs.get('useragent', '').encode('utf-8'),
kwargs.get('verify_cert', '').encode('utf-8'),
kwargs.get('auth_cert', '').encode('utf-8'),
int(self.start_date.timestamp()) if self.start_date else -1,
int(self.end_date.timestamp()) if self.end_date else -1,
'VEVENT' in item_types,
'VJOURNAL' in item_types,
'VTODO' in item_types
),
native.lib.vdirsyncer_storage_free
)
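The `native.ffi.gc(...)` calls above tie the lifetime of the Rust-allocated storage handle to its Python wrapper: the second argument is a destructor that runs when the cdata object is garbage-collected. A pure-Python analogue of that ownership idiom using `weakref.finalize` (all names below are illustrative, not part of vdirsyncer):

```python
import weakref

destroyed = []

def storage_free(handle_id):
    # Stand-in for native.lib.vdirsyncer_storage_free.
    destroyed.append(handle_id)

class NativeBackedStorage:
    def __init__(self, handle_id):
        self._handle = handle_id
        # Like native.ffi.gc: free the native resource once the
        # wrapper is collected, with no explicit close() needed.
        weakref.finalize(self, storage_free, handle_id)

s = NativeBackedStorage(42)
del s  # on CPython the finalizer runs here
```

With cffi itself, `ffi.gc(ptr, free)` returns a new cdata whose collection invokes `free(ptr)`, which is exactly how the storage constructors above hand ownership back to Rust.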
class CardDAVStorage(DAVStorage):
@@ -843,3 +514,18 @@ class CardDAVStorage(DAVStorage):
</C:addressbook-multiget>'''
get_multi_data_query = '{urn:ietf:params:xml:ns:carddav}address-data'
def __init__(self, **kwargs):
self._native_storage = native.ffi.gc(
native.lib.vdirsyncer_init_carddav(
kwargs['url'].encode('utf-8'),
kwargs.get('username', '').encode('utf-8'),
kwargs.get('password', '').encode('utf-8'),
kwargs.get('useragent', '').encode('utf-8'),
kwargs.get('verify_cert', '').encode('utf-8'),
kwargs.get('auth_cert', '').encode('utf-8')
),
native.lib.vdirsyncer_storage_free
)
super(CardDAVStorage, self).__init__(**kwargs)


@@ -1,4 +1,3 @@
import contextlib
import functools
import logging
import os
@@ -30,10 +29,10 @@ logger = logging.getLogger(__name__)
def _writing_op(f):
@functools.wraps(f)
def inner(self, *args, **kwargs):
if not self._at_once:
if not self._buffered:
self._sync_journal()
rv = f(self, *args, **kwargs)
if not self._at_once:
if not self._buffered:
self._sync_journal()
return rv
return inner
@@ -102,7 +101,7 @@ class _Session:
class EtesyncStorage(Storage):
_collection_type = None
_item_type = None
_at_once = False
_buffered = False
def __init__(self, email, secrets_dir, server_url=None, db_path=None,
**kwargs):
@@ -205,15 +204,11 @@ class EtesyncStorage(Storage):
except etesync.exceptions.DoesNotExist as e:
raise exceptions.NotFoundError(e)
@contextlib.contextmanager
def at_once(self):
def buffered(self):
self._buffered = True
def flush(self):
self._sync_journal()
self._at_once = True
try:
yield self
self._sync_journal()
finally:
self._at_once = False
class EtesyncContacts(EtesyncStorage):


@@ -3,19 +3,18 @@
import errno
import logging
import os
import subprocess
from atomicwrites import atomic_write
from .base import Storage, normalize_meta_value
from .. import exceptions
from ..utils import checkdir, expand_path, generate_href, get_etag_from_file
from ..vobject import Item
from ._rust import RustStorageMixin
from .. import native
from ..utils import checkdir, expand_path
logger = logging.getLogger(__name__)
class FilesystemStorage(Storage):
class FilesystemStorage(RustStorageMixin, Storage):
storage_name = 'filesystem'
_repr_attributes = ('path',)
@@ -30,6 +29,15 @@ class FilesystemStorage(Storage):
self.fileext = fileext
self.post_hook = post_hook
self._native_storage = native.ffi.gc(
native.lib.vdirsyncer_init_filesystem(
path.encode('utf-8'),
fileext.encode('utf-8'),
(post_hook or "").encode('utf-8')
),
native.lib.vdirsyncer_storage_free
)
@classmethod
def discover(cls, path, **kwargs):
if kwargs.pop('collection', None) is not None:
@@ -71,102 +79,6 @@ class FilesystemStorage(Storage):
kwargs['collection'] = collection
return kwargs
def _get_filepath(self, href):
return os.path.join(self.path, href)
def _get_href(self, ident):
return generate_href(ident) + self.fileext
def list(self):
for fname in os.listdir(self.path):
fpath = os.path.join(self.path, fname)
if os.path.isfile(fpath) and fname.endswith(self.fileext):
yield fname, get_etag_from_file(fpath)
def get(self, href):
fpath = self._get_filepath(href)
try:
with open(fpath, 'rb') as f:
return (Item(f.read().decode(self.encoding)),
get_etag_from_file(fpath))
except IOError as e:
if e.errno == errno.ENOENT:
raise exceptions.NotFoundError(href)
else:
raise
def upload(self, item):
if not isinstance(item.raw, str):
raise TypeError('item.raw must be a unicode string.')
try:
href = self._get_href(item.ident)
fpath, etag = self._upload_impl(item, href)
except OSError as e:
if e.errno in (
errno.ENAMETOOLONG, # Unix
errno.ENOENT # Windows
):
logger.debug('UID as filename rejected, trying with random '
'one.')
# random href instead of UID-based
href = self._get_href(None)
fpath, etag = self._upload_impl(item, href)
else:
raise
if self.post_hook:
self._run_post_hook(fpath)
return href, etag
def _upload_impl(self, item, href):
fpath = self._get_filepath(href)
try:
with atomic_write(fpath, mode='wb', overwrite=False) as f:
f.write(item.raw.encode(self.encoding))
return fpath, get_etag_from_file(f)
except OSError as e:
if e.errno == errno.EEXIST:
raise exceptions.AlreadyExistingError(existing_href=href)
else:
raise
def update(self, href, item, etag):
fpath = self._get_filepath(href)
if not os.path.exists(fpath):
raise exceptions.NotFoundError(item.uid)
actual_etag = get_etag_from_file(fpath)
if etag != actual_etag:
raise exceptions.WrongEtagError(etag, actual_etag)
if not isinstance(item.raw, str):
raise TypeError('item.raw must be a unicode string.')
with atomic_write(fpath, mode='wb', overwrite=True) as f:
f.write(item.raw.encode(self.encoding))
etag = get_etag_from_file(f)
if self.post_hook:
self._run_post_hook(fpath)
return etag
def delete(self, href, etag):
fpath = self._get_filepath(href)
if not os.path.isfile(fpath):
raise exceptions.NotFoundError(href)
actual_etag = get_etag_from_file(fpath)
if etag != actual_etag:
raise exceptions.WrongEtagError(etag, actual_etag)
os.remove(fpath)
def _run_post_hook(self, fpath):
logger.info('Calling post_hook={} with argument={}'.format(
self.post_hook, fpath))
try:
subprocess.call([self.post_hook, fpath])
except OSError as e:
logger.warning('Error executing external hook: {}'.format(str(e)))
def get_meta(self, key):
fpath = os.path.join(self.path, key)
try:


@@ -11,7 +11,7 @@ import click
from click_threading import get_ui_worker
from . import base, dav
from . import base, olddav as dav
from .. import exceptions
from ..utils import checkdir, expand_path, open_graphical_browser


@@ -1,15 +1,13 @@
# -*- coding: utf-8 -*-
import urllib.parse as urlparse
from .base import Storage
from .. import exceptions
from ..http import USERAGENT, prepare_auth, \
prepare_client_cert, prepare_verify, request
from ..vobject import Item, split_collection
from ._rust import RustStorageMixin
from .. import exceptions, native
from ..http import USERAGENT
class HttpStorage(Storage):
class HttpStorage(RustStorageMixin, Storage):
storage_name = 'http'
read_only = True
_repr_attributes = ('username', 'url')
@@ -18,49 +16,27 @@ class HttpStorage(Storage):
# Required for tests.
_ignore_uids = True
def __init__(self, url, username='', password='', verify=True, auth=None,
useragent=USERAGENT, verify_fingerprint=None, auth_cert=None,
**kwargs):
def __init__(self, url, username='', password='', useragent=USERAGENT,
verify_cert=None, auth_cert=None, **kwargs):
if kwargs.get('collection') is not None:
raise exceptions.UserError('HttpStorage does not support '
'collections.')
assert auth_cert is None, "not yet supported"
super(HttpStorage, self).__init__(**kwargs)
self._settings = {
'auth': prepare_auth(auth, username, password),
'cert': prepare_client_cert(auth_cert),
'latin1_fallback': False,
}
self._settings.update(prepare_verify(verify, verify_fingerprint))
self._native_storage = native.ffi.gc(
native.lib.vdirsyncer_init_http(
url.encode('utf-8'),
(username or "").encode('utf-8'),
(password or "").encode('utf-8'),
(useragent or "").encode('utf-8'),
(verify_cert or "").encode('utf-8'),
(auth_cert or "").encode('utf-8')
),
native.lib.vdirsyncer_storage_free
)
self.username, self.password = username, password
self.useragent = useragent
collection = kwargs.get('collection')
if collection is not None:
url = urlparse.urljoin(url, collection)
self.username = username
self.url = url
self.parsed_url = urlparse.urlparse(self.url)
def _default_headers(self):
return {'User-Agent': self.useragent}
def list(self):
r = request('GET', self.url, headers=self._default_headers(),
**self._settings)
self._items = {}
for item in split_collection(r.text):
item = Item(item)
if self._ignore_uids:
item = item.with_uid(item.hash)
self._items[item.ident] = item, item.hash
return ((href, etag) for href, (item, etag) in self._items.items())
def get(self, href):
if self._items is None:
self.list()
try:
return self._items[href]
except KeyError:
raise exceptions.NotFoundError(href)


@@ -0,0 +1,821 @@
# -*- coding: utf-8 -*-
import datetime
import logging
import urllib.parse as urlparse
import xml.etree.ElementTree as etree
from inspect import getfullargspec
import requests
from requests.exceptions import HTTPError
from .base import Storage, normalize_meta_value
from .. import exceptions, http, utils
from ..http import USERAGENT, prepare_auth, \
prepare_client_cert, prepare_verify
from ..vobject import Item
dav_logger = logging.getLogger(__name__)
CALDAV_DT_FORMAT = '%Y%m%dT%H%M%SZ'
def _generate_path_reserved_chars():
for x in "/?#[]!$&'()*+,;":
x = urlparse.quote(x, '')
yield x.upper()
yield x.lower()
_path_reserved_chars = frozenset(_generate_path_reserved_chars())
del _generate_path_reserved_chars
def _contains_quoted_reserved_chars(x):
for y in _path_reserved_chars:
if y in x:
dav_logger.debug('Unsafe character: {!r}'.format(y))
return True
return False
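The helper above flags hrefs that contain percent-encoded reserved characters, which some servers mishandle in paths. A standalone sketch of the same check (reimplemented here so it can run on its own; names mirror the helpers in this file):

```python
import urllib.parse as urlparse

def _generate_path_reserved_chars():
    # Percent-encoded forms of the characters reserved in URL paths,
    # in both upper- and lowercase hex spellings.
    for x in "/?#[]!$&'()*+,;":
        x = urlparse.quote(x, '')
        yield x.upper()
        yield x.lower()

_path_reserved_chars = frozenset(_generate_path_reserved_chars())

def contains_quoted_reserved_chars(x):
    # "%2F" (a quoted "/") is flagged; a plain "/" is not.
    return any(y in x for y in _path_reserved_chars)
```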
def _assert_multistatus_success(r):
# Xandikos returns a multistatus on PUT.
try:
root = _parse_xml(r.content)
except InvalidXMLResponse:
return
for status in root.findall('.//{DAV:}status'):
parts = status.text.strip().split()
try:
st = int(parts[1])
except (ValueError, IndexError):
continue
if st < 200 or st >= 400:
raise HTTPError('Server error: {}'.format(st))
def _normalize_href(base, href):
'''Normalize the href to be a path only relative to hostname and
schema.'''
if not href:
raise ValueError(href)
x = urlparse.urljoin(base, href)
x = urlparse.urlsplit(x).path
return x
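`_normalize_href` reduces any href spelling a server may send back (relative path, absolute path, or full URL) to one canonical path relative to the host, so hrefs can be compared reliably. A minimal standalone sketch:

```python
import urllib.parse as urlparse

def normalize_href(base, href):
    # Join against the base URL, then keep only the path component.
    if not href:
        raise ValueError(href)
    x = urlparse.urljoin(base, href)
    return urlparse.urlsplit(x).path

base = 'https://example.com/dav/calendar/'
# All three spellings collapse to '/dav/calendar/item.ics'.
```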
class InvalidXMLResponse(exceptions.InvalidResponse):
pass
_BAD_XML_CHARS = (
b'\x00\x01\x02\x03\x04\x05\x06\x07\x08\x0b\x0c\x0e\x0f'
b'\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f'
)
def _clean_body(content, bad_chars=_BAD_XML_CHARS):
new_content = content.translate(None, bad_chars)
if new_content != content:
dav_logger.warning(
'Your server incorrectly returned ASCII control characters in its '
'XML. Vdirsyncer ignores those, but this is a bug in your server.'
)
return new_content
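`_clean_body` works because `bytes.translate` with a `None` table performs no mapping and only deletes the bytes listed in the second argument. A runnable sketch of the same cleanup:

```python
_BAD_XML_CHARS = (
    b'\x00\x01\x02\x03\x04\x05\x06\x07\x08\x0b\x0c\x0e\x0f'
    b'\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f'
)

def clean_body(content, bad_chars=_BAD_XML_CHARS):
    # Delete only the illegal control bytes; tab (\x09), newline
    # (\x0a) and carriage return (\x0d) are legal XML and kept.
    return content.translate(None, bad_chars)
```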
def _parse_xml(content):
try:
return etree.XML(_clean_body(content))
except etree.ParseError as e:
raise InvalidXMLResponse('Invalid XML encountered: {}\n'
'Double-check the URLs in your config.'
.format(e))
def _merge_xml(items):
if not items:
return None
rv = items[0]
for item in items[1:]:
rv.extend(item.getiterator())
return rv
def _fuzzy_matches_mimetype(strict, weak):
# different servers give different getcontenttypes:
# "text/vcard", "text/x-vcard", "text/x-vcard; charset=utf-8",
# "text/directory;profile=vCard", "text/directory",
# "text/vcard; charset=utf-8"
if strict is None or weak is None:
return True
mediatype, subtype = strict.split('/')
if subtype in weak:
return True
return False
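The fuzzy match above accepts when either side is unknown, and otherwise only requires the strict subtype (e.g. `vcard`) to occur somewhere in the server's `getcontenttype` value, which tolerates all the variants listed in the comment. Standalone sketch:

```python
def fuzzy_matches_mimetype(strict, weak):
    # Accept when either side is unknown; otherwise the strict
    # subtype must occur in the server-reported content type.
    if strict is None or weak is None:
        return True
    mediatype, subtype = strict.split('/')
    return subtype in weak
```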
class Discover(object):
_namespace = None
_resourcetype = None
_homeset_xml = None
_homeset_tag = None
_well_known_uri = None
_collection_xml = b"""<?xml version="1.0" encoding="utf-8" ?>
<d:propfind xmlns:d="DAV:">
<d:prop>
<d:resourcetype />
</d:prop>
</d:propfind>
"""
def __init__(self, session, kwargs):
if kwargs.pop('collection', None) is not None:
raise TypeError('collection argument must not be given.')
self.session = session
self.kwargs = kwargs
@staticmethod
def _get_collection_from_url(url):
_, collection = url.rstrip('/').rsplit('/', 1)
return urlparse.unquote(collection)
def find_principal(self):
try:
return self._find_principal_impl('')
except (HTTPError, exceptions.Error):
dav_logger.debug('Trying out well-known URI')
return self._find_principal_impl(self._well_known_uri)
def _find_principal_impl(self, url):
headers = self.session.get_default_headers()
headers['Depth'] = '0'
body = b"""
<d:propfind xmlns:d="DAV:">
<d:prop>
<d:current-user-principal />
</d:prop>
</d:propfind>
"""
response = self.session.request('PROPFIND', url, headers=headers,
data=body)
root = _parse_xml(response.content)
rv = root.find('.//{DAV:}current-user-principal/{DAV:}href')
if rv is None:
# This is for servers that don't support current-user-principal
# E.g. Synology NAS
# See https://github.com/pimutils/vdirsyncer/issues/498
dav_logger.debug(
'No current-user-principal returned, re-using URL {}'
.format(response.url))
return response.url
return urlparse.urljoin(response.url, rv.text).rstrip('/') + '/'
def find_home(self):
url = self.find_principal()
headers = self.session.get_default_headers()
headers['Depth'] = '0'
response = self.session.request('PROPFIND', url,
headers=headers,
data=self._homeset_xml)
root = etree.fromstring(response.content)
# Deliberately avoid string formatting here because of XML namespaces.
rv = root.find('.//' + self._homeset_tag + '/{DAV:}href')
if rv is None:
raise InvalidXMLResponse('Couldn\'t find home-set.')
return urlparse.urljoin(response.url, rv.text).rstrip('/') + '/'
def find_collections(self):
rv = None
try:
rv = list(self._find_collections_impl(''))
except (HTTPError, exceptions.Error):
pass
if rv:
return rv
dav_logger.debug('Given URL is not a homeset URL')
return self._find_collections_impl(self.find_home())
def _check_collection_resource_type(self, response):
if self._resourcetype is None:
return True
props = _merge_xml(response.findall(
'{DAV:}propstat/{DAV:}prop'
))
if props is None or not len(props):
dav_logger.debug('Skipping, missing <prop>: %s', response)
return False
if props.find('{DAV:}resourcetype/' + self._resourcetype) \
is None:
dav_logger.debug('Skipping, not of resource type %s: %s',
self._resourcetype, response)
return False
return True
def _find_collections_impl(self, url):
headers = self.session.get_default_headers()
headers['Depth'] = '1'
r = self.session.request('PROPFIND', url, headers=headers,
data=self._collection_xml)
root = _parse_xml(r.content)
done = set()
for response in root.findall('{DAV:}response'):
if not self._check_collection_resource_type(response):
continue
href = response.find('{DAV:}href')
if href is None:
raise InvalidXMLResponse('Missing href tag for collection '
'props.')
href = urlparse.urljoin(r.url, href.text)
if href not in done:
done.add(href)
yield {'href': href}
def discover(self):
for c in self.find_collections():
url = c['href']
collection = self._get_collection_from_url(url)
storage_args = dict(self.kwargs)
storage_args.update({'url': url, 'collection': collection})
yield storage_args
def create(self, collection):
if collection is None:
collection = self._get_collection_from_url(self.kwargs['url'])
for c in self.discover():
if c['collection'] == collection:
return c
home = self.find_home()
url = urlparse.urljoin(
home,
urlparse.quote(collection, '/@')
)
try:
url = self._create_collection_impl(url)
except HTTPError as e:
raise NotImplementedError(e)
else:
rv = dict(self.kwargs)
rv['collection'] = collection
rv['url'] = url
return rv
def _create_collection_impl(self, url):
data = '''<?xml version="1.0" encoding="utf-8" ?>
<D:mkcol xmlns:D="DAV:">
<D:set>
<D:prop>
<D:resourcetype>
<D:collection/>
{}
</D:resourcetype>
</D:prop>
</D:set>
</D:mkcol>
'''.format(
etree.tostring(etree.Element(self._resourcetype),
encoding='unicode')
).encode('utf-8')
response = self.session.request(
'MKCOL',
url,
data=data,
headers=self.session.get_default_headers(),
)
return response.url
class CalDiscover(Discover):
_namespace = 'urn:ietf:params:xml:ns:caldav'
_resourcetype = '{%s}calendar' % _namespace
_homeset_xml = b"""
<d:propfind xmlns:d="DAV:" xmlns:c="urn:ietf:params:xml:ns:caldav">
<d:prop>
<c:calendar-home-set />
</d:prop>
</d:propfind>
"""
_homeset_tag = '{%s}calendar-home-set' % _namespace
_well_known_uri = '/.well-known/caldav'
class CardDiscover(Discover):
_namespace = 'urn:ietf:params:xml:ns:carddav'
_resourcetype = '{%s}addressbook' % _namespace
_homeset_xml = b"""
<d:propfind xmlns:d="DAV:" xmlns:c="urn:ietf:params:xml:ns:carddav">
<d:prop>
<c:addressbook-home-set />
</d:prop>
</d:propfind>
"""
_homeset_tag = '{%s}addressbook-home-set' % _namespace
_well_known_uri = '/.well-known/carddav'
class DAVSession(object):
'''
A helper class to connect to DAV servers.
'''
@classmethod
def init_and_remaining_args(cls, **kwargs):
argspec = getfullargspec(cls.__init__)
self_args, remainder = \
utils.split_dict(kwargs, argspec.args.__contains__)
return cls(**self_args), remainder
def __init__(self, url, username='', password='', verify=True, auth=None,
useragent=USERAGENT, verify_fingerprint=None,
auth_cert=None):
self._settings = {
'cert': prepare_client_cert(auth_cert),
'auth': prepare_auth(auth, username, password)
}
self._settings.update(prepare_verify(verify, verify_fingerprint))
self.useragent = useragent
self.url = url.rstrip('/') + '/'
self._session = requests.session()
@utils.cached_property
def parsed_url(self):
return urlparse.urlparse(self.url)
def request(self, method, path, **kwargs):
url = self.url
if path:
url = urlparse.urljoin(self.url, path)
more = dict(self._settings)
more.update(kwargs)
return http.request(method, url, session=self._session, **more)
def get_default_headers(self):
return {
'User-Agent': self.useragent,
'Content-Type': 'application/xml; charset=UTF-8'
}
class DAVStorage(Storage):
# the file extension of items. Useful for testing against radicale.
fileext = None
# mimetype of items
item_mimetype = None
# XML to use when fetching multiple hrefs.
get_multi_template = None
# The LXML query for extracting results in get_multi
get_multi_data_query = None
# The Discover subclass to use
discovery_class = None
# The DAVSession class to use
session_class = DAVSession
_repr_attributes = ('username', 'url')
_property_table = {
'displayname': ('displayname', 'DAV:'),
}
def __init__(self, **kwargs):
# defined for _repr_attributes
self.username = kwargs.get('username')
self.url = kwargs.get('url')
self.session, kwargs = \
self.session_class.init_and_remaining_args(**kwargs)
super(DAVStorage, self).__init__(**kwargs)
import inspect
__init__.__signature__ = inspect.signature(session_class.__init__)
@classmethod
def discover(cls, **kwargs):
session, _ = cls.session_class.init_and_remaining_args(**kwargs)
d = cls.discovery_class(session, kwargs)
return d.discover()
@classmethod
def create_collection(cls, collection, **kwargs):
session, _ = cls.session_class.init_and_remaining_args(**kwargs)
d = cls.discovery_class(session, kwargs)
return d.create(collection)
def _normalize_href(self, *args, **kwargs):
return _normalize_href(self.session.url, *args, **kwargs)
def _get_href(self, item):
href = utils.generate_href(item.ident)
return self._normalize_href(href + self.fileext)
def _is_item_mimetype(self, mimetype):
return _fuzzy_matches_mimetype(self.item_mimetype, mimetype)
def get(self, href):
((actual_href, item, etag),) = self.get_multi([href])
assert href == actual_href
return item, etag
def get_multi(self, hrefs):
hrefs = set(hrefs)
href_xml = []
for href in hrefs:
if href != self._normalize_href(href):
raise exceptions.NotFoundError(href)
href_xml.append('<D:href>{}</D:href>'.format(href))
if not href_xml:
return ()
data = self.get_multi_template \
.format(hrefs='\n'.join(href_xml)).encode('utf-8')
response = self.session.request(
'REPORT',
'',
data=data,
headers=self.session.get_default_headers()
)
root = _parse_xml(response.content) # etree only can handle bytes
rv = []
hrefs_left = set(hrefs)
for href, etag, prop in self._parse_prop_responses(root):
raw = prop.find(self.get_multi_data_query)
if raw is None:
dav_logger.warning('Skipping {}, the item content is missing.'
.format(href))
continue
raw = raw.text or u''
if isinstance(raw, bytes):
raw = raw.decode(response.encoding)
if isinstance(etag, bytes):
etag = etag.decode(response.encoding)
try:
hrefs_left.remove(href)
except KeyError:
if href in hrefs:
dav_logger.warning('Server sent item twice: {}'
.format(href))
else:
dav_logger.warning('Server sent unsolicited item: {}'
.format(href))
else:
rv.append((href, Item(raw), etag))
for href in hrefs_left:
raise exceptions.NotFoundError(href)
return rv
def _put(self, href, item, etag):
headers = self.session.get_default_headers()
headers['Content-Type'] = self.item_mimetype
if etag is None:
headers['If-None-Match'] = '*'
else:
headers['If-Match'] = etag
response = self.session.request(
'PUT',
href,
data=item.raw.encode('utf-8'),
headers=headers
)
_assert_multistatus_success(response)
# The server may not return an etag under certain conditions:
#
# An origin server MUST NOT send a validator header field (Section
# 7.2), such as an ETag or Last-Modified field, in a successful
# response to PUT unless the request's representation data was saved
# without any transformation applied to the body (i.e., the
# resource's new representation data is identical to the
# representation data received in the PUT request) and the validator
# field value reflects the new representation.
#
# -- https://tools.ietf.org/html/rfc7231#section-4.3.4
#
# In such cases we return a constant etag. The next synchronization
# will then detect an etag change and will download the new item.
etag = response.headers.get('etag', None)
href = self._normalize_href(response.url)
return href, etag
def update(self, href, item, etag):
if etag is None:
raise ValueError('etag must be given and must not be None.')
href, etag = self._put(self._normalize_href(href), item, etag)
return etag
def upload(self, item):
href = self._get_href(item)
return self._put(href, item, None)
def delete(self, href, etag):
href = self._normalize_href(href)
headers = self.session.get_default_headers()
headers.update({
'If-Match': etag
})
self.session.request(
'DELETE',
href,
headers=headers
)
def _parse_prop_responses(self, root, handled_hrefs=None):
if handled_hrefs is None:
handled_hrefs = set()
for response in root.iter('{DAV:}response'):
href = response.find('{DAV:}href')
if href is None:
dav_logger.error('Skipping response, href is missing.')
continue
href = self._normalize_href(href.text)
if href in handled_hrefs:
# Servers that send duplicate hrefs:
# - Zimbra
# https://github.com/pimutils/vdirsyncer/issues/88
# - Davmail
# https://github.com/pimutils/vdirsyncer/issues/144
dav_logger.warning('Skipping identical href: {!r}'
.format(href))
continue
props = response.findall('{DAV:}propstat/{DAV:}prop')
if props is None or not len(props):
dav_logger.debug('Skipping {!r}, properties are missing.'
.format(href))
continue
else:
props = _merge_xml(props)
if props.find('{DAV:}resourcetype/{DAV:}collection') is not None:
dav_logger.debug('Skipping {!r}, is collection.'.format(href))
continue
etag = getattr(props.find('{DAV:}getetag'), 'text', '')
if not etag:
dav_logger.debug('Skipping {!r}, etag property is missing.'
.format(href))
continue
contenttype = getattr(props.find('{DAV:}getcontenttype'),
'text', None)
if not self._is_item_mimetype(contenttype):
dav_logger.debug('Skipping {!r}, {!r} != {!r}.'
.format(href, contenttype,
self.item_mimetype))
continue
handled_hrefs.add(href)
yield href, etag, props
def list(self):
headers = self.session.get_default_headers()
headers['Depth'] = '1'
data = '''<?xml version="1.0" encoding="utf-8" ?>
<D:propfind xmlns:D="DAV:">
<D:prop>
<D:resourcetype/>
<D:getcontenttype/>
<D:getetag/>
</D:prop>
</D:propfind>
'''.encode('utf-8')
# We use a PROPFIND request instead of addressbook-query due to issues
# with Zimbra. See https://github.com/pimutils/vdirsyncer/issues/83
response = self.session.request('PROPFIND', '', data=data,
headers=headers)
root = _parse_xml(response.content)
rv = self._parse_prop_responses(root)
for href, etag, _prop in rv:
yield href, etag
def get_meta(self, key):
try:
tagname, namespace = self._property_table[key]
except KeyError:
raise exceptions.UnsupportedMetadataError()
xpath = '{%s}%s' % (namespace, tagname)
data = '''<?xml version="1.0" encoding="utf-8" ?>
<D:propfind xmlns:D="DAV:">
<D:prop>
{}
</D:prop>
</D:propfind>
'''.format(
etree.tostring(etree.Element(xpath), encoding='unicode')
).encode('utf-8')
headers = self.session.get_default_headers()
headers['Depth'] = '0'
response = self.session.request(
'PROPFIND', '',
data=data, headers=headers
)
root = _parse_xml(response.content)
for prop in root.findall('.//' + xpath):
text = normalize_meta_value(getattr(prop, 'text', None))
if text:
return text
return u''
def set_meta(self, key, value):
try:
tagname, namespace = self._property_table[key]
except KeyError:
raise exceptions.UnsupportedMetadataError()
lxml_selector = '{%s}%s' % (namespace, tagname)
element = etree.Element(lxml_selector)
element.text = normalize_meta_value(value)
data = '''<?xml version="1.0" encoding="utf-8" ?>
<D:propertyupdate xmlns:D="DAV:">
<D:set>
<D:prop>
{}
</D:prop>
</D:set>
</D:propertyupdate>
'''.format(etree.tostring(element, encoding='unicode')).encode('utf-8')
self.session.request(
'PROPPATCH', '',
data=data, headers=self.session.get_default_headers()
)
# XXX: Response content is currently ignored. Though exceptions are
# raised for HTTP errors, a multistatus with errorcodes inside is not
# parsed yet. Not sure how common those are, or what they look like. It
# might be easier (and safer in case of a stupid server) to just issue
# a PROPFIND to see if the value got actually set.
class CalDAVStorage(DAVStorage):
storage_name = 'caldav'
fileext = '.ics'
item_mimetype = 'text/calendar'
discovery_class = CalDiscover
start_date = None
end_date = None
get_multi_template = '''<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-multiget xmlns:D="DAV:"
xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<D:getetag/>
<C:calendar-data/>
</D:prop>
{hrefs}
</C:calendar-multiget>'''
get_multi_data_query = '{urn:ietf:params:xml:ns:caldav}calendar-data'
_property_table = dict(DAVStorage._property_table)
_property_table.update({
'color': ('calendar-color', 'http://apple.com/ns/ical/'),
})
def __init__(self, start_date=None, end_date=None,
item_types=(), **kwargs):
super(CalDAVStorage, self).__init__(**kwargs)
if not isinstance(item_types, (list, tuple)):
raise exceptions.UserError('item_types must be a list.')
self.item_types = tuple(item_types)
if (start_date is None) != (end_date is None):
raise exceptions.UserError('If start_date is given, '
'end_date has to be given too.')
elif start_date is not None and end_date is not None:
namespace = dict(datetime.__dict__)
namespace['start_date'] = self.start_date = \
(eval(start_date, namespace)
if isinstance(start_date, (bytes, str))
else start_date)
self.end_date = \
(eval(end_date, namespace)
if isinstance(end_date, (bytes, str))
else end_date)
@staticmethod
def _get_list_filters(components, start, end):
caldavfilter = '''
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="{component}">
{timefilter}
</C:comp-filter>
</C:comp-filter>
'''
timefilter = ''
if start is not None and end is not None:
start = start.strftime(CALDAV_DT_FORMAT)
end = end.strftime(CALDAV_DT_FORMAT)
timefilter = ('<C:time-range start="{start}" end="{end}"/>'
.format(start=start, end=end))
if not components:
components = ('VTODO', 'VEVENT')
for component in components:
yield caldavfilter.format(component=component,
timefilter=timefilter)
def list(self):
caldavfilters = list(self._get_list_filters(
self.item_types,
self.start_date,
self.end_date
))
if not caldavfilters:
# If we don't have any filters (which is the default), taking the
# risk of sending a calendar-query is not necessary. There doesn't
# seem to be a widely-usable way to send calendar-queries with the
# same semantics as a PROPFIND request... so why not use PROPFIND
# instead?
#
# See https://github.com/dmfs/tasks/issues/118 for backstory.
yield from DAVStorage.list(self)
return
data = '''<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:"
xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<D:getcontenttype/>
<D:getetag/>
</D:prop>
<C:filter>
{caldavfilter}
</C:filter>
</C:calendar-query>'''
headers = self.session.get_default_headers()
# https://github.com/pimutils/vdirsyncer/issues/166
# The default in CalDAV's calendar-queries is 0, but the examples use
# an explicit value of 1 for querying items. It is extremely unclear in
# the spec which values from WebDAV are actually allowed.
headers['Depth'] = '1'
handled_hrefs = set()
for caldavfilter in caldavfilters:
xml = data.format(caldavfilter=caldavfilter).encode('utf-8')
response = self.session.request('REPORT', '', data=xml,
headers=headers)
root = _parse_xml(response.content)
rv = self._parse_prop_responses(root, handled_hrefs)
for href, etag, _prop in rv:
yield href, etag
class CardDAVStorage(DAVStorage):
storage_name = 'carddav'
fileext = '.vcf'
item_mimetype = 'text/vcard'
discovery_class = CardDiscover
get_multi_template = '''<?xml version="1.0" encoding="utf-8" ?>
<C:addressbook-multiget xmlns:D="DAV:"
xmlns:C="urn:ietf:params:xml:ns:carddav">
<D:prop>
<D:getetag/>
<C:address-data/>
</D:prop>
{hrefs}
</C:addressbook-multiget>'''
get_multi_data_query = '{urn:ietf:params:xml:ns:carddav}address-data'


@@ -1,53 +1,36 @@
# -*- coding: utf-8 -*-
import collections
import contextlib
import functools
import glob
import logging
import os
from atomicwrites import atomic_write
from .base import Storage
from .. import exceptions
from ..utils import checkfile, expand_path, get_etag_from_file
from ..vobject import Item, join_collection, split_collection
from ._rust import RustStorageMixin
from .. import native
from ..utils import checkfile, expand_path
logger = logging.getLogger(__name__)
def _writing_op(f):
@functools.wraps(f)
def inner(self, *args, **kwargs):
if self._items is None or not self._at_once:
self.list()
rv = f(self, *args, **kwargs)
if not self._at_once:
self._write()
return rv
return inner
class SingleFileStorage(RustStorageMixin, Storage):
class SingleFileStorage(Storage):
storage_name = 'singlefile'
_repr_attributes = ('path',)
_write_mode = 'wb'
_append_mode = 'ab'
_read_mode = 'rb'
_items = None
_last_etag = None
def __init__(self, path, encoding='utf-8', **kwargs):
def __init__(self, path, **kwargs):
super(SingleFileStorage, self).__init__(**kwargs)
path = os.path.abspath(expand_path(path))
checkfile(path, create=False)
self.path = path
self.encoding = encoding
self._at_once = False
self._native_storage = native.ffi.gc(
native.lib.vdirsyncer_init_singlefile(path.encode('utf-8')),
native.lib.vdirsyncer_storage_free
)
@classmethod
def discover(cls, path, **kwargs):
@@ -94,94 +77,3 @@ class SingleFileStorage(Storage):
kwargs['path'] = path
kwargs['collection'] = collection
return kwargs
def list(self):
self._items = collections.OrderedDict()
try:
self._last_etag = get_etag_from_file(self.path)
with open(self.path, self._read_mode) as f:
text = f.read().decode(self.encoding)
except OSError as e:
import errno
if e.errno != errno.ENOENT: # file not found
raise IOError(e)
text = None
if not text:
return ()
for item in split_collection(text):
item = Item(item)
etag = item.hash
self._items[item.ident] = item, etag
return ((href, etag) for href, (item, etag) in self._items.items())
def get(self, href):
if self._items is None or not self._at_once:
self.list()
try:
return self._items[href]
except KeyError:
raise exceptions.NotFoundError(href)
@_writing_op
def upload(self, item):
href = item.ident
if href in self._items:
raise exceptions.AlreadyExistingError(existing_href=href)
self._items[href] = item, item.hash
return href, item.hash
@_writing_op
def update(self, href, item, etag):
if href not in self._items:
raise exceptions.NotFoundError(href)
_, actual_etag = self._items[href]
if etag != actual_etag:
raise exceptions.WrongEtagError(etag, actual_etag)
self._items[href] = item, item.hash
return item.hash
@_writing_op
def delete(self, href, etag):
if href not in self._items:
raise exceptions.NotFoundError(href)
_, actual_etag = self._items[href]
if etag != actual_etag:
raise exceptions.WrongEtagError(etag, actual_etag)
del self._items[href]
def _write(self):
if self._last_etag is not None and \
self._last_etag != get_etag_from_file(self.path):
raise exceptions.PreconditionFailed(
'Some other program modified the file {!r}. Re-run the '
'synchronization and make sure absolutely no other program is '
'writing into the same file.'.format(self.path))
text = join_collection(
item.raw for item, etag in self._items.values()
)
try:
with atomic_write(self.path, mode='wb', overwrite=True) as f:
f.write(text.encode(self.encoding))
finally:
self._items = None
self._last_etag = None
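The etag check in `_write` above guards against lost updates: if another program touched the file between our read and our write, the write is refused. A minimal standalone sketch of that pattern (the helper names `file_etag` and `guarded_write` are hypothetical; the real `get_etag_from_file` may be implemented differently, e.g. via mtime/size rather than a content hash):

```python
import hashlib


def file_etag(path):
    # Sketch of get_etag_from_file: derive an etag from file content.
    # (Hypothetical helper -- the real one may use mtime/size instead.)
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()


class PreconditionFailed(Exception):
    pass


def guarded_write(path, etag_at_read, new_bytes):
    # Refuse to overwrite if another program changed the file since we
    # last read it -- the same check SingleFileStorage._write performs.
    if etag_at_read is not None and etag_at_read != file_etag(path):
        raise PreconditionFailed(
            'Some other program modified the file %r.' % path)
    with open(path, 'wb') as f:
        f.write(new_bytes)
```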
@contextlib.contextmanager
def at_once(self):
self.list()
self._at_once = True
try:
yield self
self._write()
finally:
self._at_once = False
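The `at_once` context manager above lets callers batch many mutations into a single physical rewrite of the file. A toy sketch of that batching contract (the `BatchedStore` class is hypothetical and not part of vdirsyncer; it only mimics the buffer-then-flush behaviour):

```python
import contextlib


class BatchedStore:
    # Hypothetical toy storage demonstrating the at_once() contract:
    # mutations are buffered in memory, flushed once on clean exit.
    def __init__(self):
        self.items = {}
        self.flushes = 0

    def upload(self, href, item):
        self.items[href] = item  # buffered in memory only

    def _write(self):
        self.flushes += 1  # SingleFileStorage rewrites the file here

    @contextlib.contextmanager
    def at_once(self):
        yield self
        self._write()  # flush once, only if the block raised no error


store = BatchedStore()
with store.at_once() as s:
    s.upload('a.ics', 'BEGIN:VCALENDAR...')
    s.upload('b.ics', 'BEGIN:VCALENDAR...')
# two uploads, one physical write
```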


@@ -27,6 +27,7 @@ sync_logger = logging.getLogger(__name__)
class _StorageInfo(object):
'''A wrapper class that holds prefetched items, the status and other
things.'''
def __init__(self, storage, status):
self.storage = storage
self.status = status
@@ -57,6 +58,12 @@ class _StorageInfo(object):
# Prefetch items
for href, item, etag in (self.storage.get_multi(prefetch)
if prefetch else ()):
if not item.is_parseable:
sync_logger.warning(
'Storage "{}": item {} is malformed. '
'Please try to repair it.'
.format(self.storage.instance_name, href)
)
_store_props(item.ident, ItemMetadata(
href=href,
hash=item.hash,
@@ -143,20 +150,25 @@ def sync(storage_a, storage_b, status, conflict_resolution=None,
actions = list(_get_actions(a_info, b_info))
with storage_a.at_once(), storage_b.at_once():
for action in actions:
try:
action.run(
a_info,
b_info,
conflict_resolution,
partial_sync
)
except Exception as e:
if error_callback:
error_callback(e)
else:
raise
storage_a.buffered()
storage_b.buffered()
for action in actions:
try:
action.run(
a_info,
b_info,
conflict_resolution,
partial_sync
)
except Exception as e:
if error_callback:
error_callback(e)
else:
raise
storage_a.flush()
storage_b.flush()
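The `error_callback` branch in the sync loop above decides whether one failed action aborts the whole sync or is merely collected so the remaining actions still run. A self-contained sketch of that dispatch (`run_actions` is a hypothetical reduction of the loop, not vdirsyncer's API):

```python
def run_actions(actions, error_callback=None):
    # Hypothetical reduction of the sync loop's error handling: with a
    # callback, failures are collected and the sync continues; without
    # one, the first failure aborts everything.
    for action in actions:
        try:
            action()
        except Exception as e:
            if error_callback is not None:
                error_callback(e)  # collect and keep going
            else:
                raise              # abort the whole sync


failures = []
ran = []


def boom():
    raise ValueError('storage rejected item')


run_actions([boom, lambda: ran.append(True)],
            error_callback=failures.append)
# one failure collected, the second action still executed
```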
class Action:


@@ -1,37 +1,9 @@
# -*- coding: utf-8 -*-
import hashlib
from itertools import chain, tee
from .utils import cached_property, uniq
IGNORE_PROPS = (
# PRODID is changed by radicale for some reason after upload
'PRODID',
# Sometimes METHOD:PUBLISH is added by WebCAL providers; for us it doesn't
# make a difference
'METHOD',
# X-RADICALE-NAME is used by radicale, because hrefs don't really exist in
# their filesystem backend
'X-RADICALE-NAME',
# Apparently this is set by Horde?
# https://github.com/pimutils/vdirsyncer/issues/318
'X-WR-CALNAME',
# These are from the VCARD specification and are supposed to change when
# the item does -- however, we can determine that ourselves
'REV',
'LAST-MODIFIED',
'CREATED',
# Some iCalendar HTTP calendars generate the DTSTAMP at request time, so
# this property always changes when the rest of the item didn't. Some do
# the same with the UID.
#
# - Google's read-only calendar links
# - http://www.feiertage-oesterreich.at/
'DTSTAMP',
'UID',
)
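The point of the `IGNORE_PROPS` list above is that two downloads of the same item may differ only in volatile properties such as `DTSTAMP`, and should still compare equal after normalization. A simplified demonstration (the `strip_ignored` helper is hypothetical; the real normalization also strips timezones and sorts properties):

```python
IGNORED = {'DTSTAMP', 'UID', 'PRODID'}


def strip_ignored(raw):
    # Drop ignored properties line-by-line. The name is everything
    # before the first ':' or ';' (so 'DTSTAMP;TZID=...' matches too).
    kept = []
    for line in raw.splitlines():
        name = line.split(':', 1)[0].split(';', 1)[0]
        if name not in IGNORED:
            kept.append(line)
    return '\r\n'.join(kept)


a = ('BEGIN:VEVENT\r\nDTSTAMP:20180101T000000Z\r\n'
     'SUMMARY:Demo\r\nEND:VEVENT')
b = ('BEGIN:VEVENT\r\nDTSTAMP:20180606T120000Z\r\n'
     'SUMMARY:Demo\r\nEND:VEVENT')
# same event, different server-generated DTSTAMP: equal once normalized
assert strip_ignored(a) == strip_ignored(b)
```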
from . import native
class Item(object):
@@ -39,101 +11,53 @@ class Item(object):
'''Immutable wrapper class for VCALENDAR (VEVENT, VTODO) and
VCARD'''
def __init__(self, raw):
def __init__(self, raw, _native=None):
if raw is None:
assert _native
self._native = _native
return
assert isinstance(raw, str), type(raw)
self._raw = raw
assert _native is None
self._native = native.item_rv(
native.lib.vdirsyncer_item_from_raw(raw.encode('utf-8'))
)
def with_uid(self, new_uid):
parsed = _Component.parse(self.raw)
stack = [parsed]
while stack:
component = stack.pop()
stack.extend(component.subcomponents)
new_uid = new_uid or ''
assert isinstance(new_uid, str), type(new_uid)
if component.name in ('VEVENT', 'VTODO', 'VJOURNAL', 'VCARD'):
del component['UID']
if new_uid:
component['UID'] = new_uid
e = native.get_error_pointer()
rv = native.lib.vdirsyncer_with_uid(self._native,
new_uid.encode('utf-8'),
e)
native.check_error(e)
return Item(None, _native=native.item_rv(rv))
return Item('\r\n'.join(parsed.dump_lines()))
@cached_property
def is_parseable(self):
return native.lib.vdirsyncer_item_is_parseable(self._native)
@cached_property
def raw(self):
'''Raw content of the item, as unicode string.
Vdirsyncer doesn't validate the content in any way.
'''
return self._raw
return native.string_rv(native.lib.vdirsyncer_get_raw(self._native))
@cached_property
def uid(self):
'''Global identifier of the item, across storages, doesn't change after
a modification of the item.'''
# Don't actually parse component, but treat all lines as single
# component, avoiding traversal through all subcomponents.
x = _Component('TEMP', self.raw.splitlines(), [])
try:
return x['UID'].strip() or None
except KeyError:
return None
rv = native.string_rv(native.lib.vdirsyncer_get_uid(self._native))
return rv or None
@cached_property
def hash(self):
'''Hash of self.raw, used for etags.'''
return hash_item(self.raw)
e = native.get_error_pointer()
rv = native.lib.vdirsyncer_get_hash(self._native, e)
native.check_error(e)
return native.string_rv(rv)
@cached_property
def ident(self):
'''Used for generating hrefs and matching up items during
synchronization. This is either the UID or the hash of the item's
content.'''
# We hash the item instead of directly using its raw content, because
#
# 1. The raw content might be really large, e.g. when it's a contact
# with a picture, which bloats the status file.
#
# 2. The status file would contain really sensitive information.
return self.uid or self.hash
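`ident` above falls back to a content hash when an item has no UID, so even UID-less items get a stable key for href generation and matching. A simplified standalone sketch (the `ident` function here is hypothetical and uses naive line parsing instead of the real component parser):

```python
import hashlib


def ident(raw):
    # Prefer the UID; fall back to a sha256 of the raw content so
    # UID-less items still get a stable identifier. (Simplified: the
    # real code parses components and normalizes before hashing.)
    for line in raw.splitlines():
        if line.startswith('UID:'):
            uid = line[len('UID:'):].strip()
            if uid:
                return uid
    return hashlib.sha256(raw.encode('utf-8')).hexdigest()
```

Hashing (rather than storing the raw content) keeps the status file small and avoids persisting sensitive item bodies, per the comment above.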
@property
def parsed(self):
'''Don't cache because the rv is mutable.'''
try:
return _Component.parse(self.raw)
except Exception:
return None
def normalize_item(item, ignore_props=IGNORE_PROPS):
'''Create syntactically invalid mess that is equal for similar items.'''
if not isinstance(item, Item):
item = Item(item)
item = _strip_timezones(item)
x = _Component('TEMP', item.raw.splitlines(), [])
for prop in ignore_props:
del x[prop]
x.props.sort()
return u'\r\n'.join(filter(bool, (line.strip() for line in x.props)))
def _strip_timezones(item):
parsed = item.parsed
if not parsed or parsed.name != 'VCALENDAR':
return item
parsed.subcomponents = [c for c in parsed.subcomponents
if c.name != 'VTIMEZONE']
return Item('\r\n'.join(parsed.dump_lines()))
def hash_item(text):
return hashlib.sha256(normalize_item(text).encode('utf-8')).hexdigest()
def split_collection(text):
assert isinstance(text, str)