git » nmdb » commit f294e72

Add Python 3 bindings

author Alberto Bertogli
2008-12-09 22:16:56 UTC
committer Alberto Bertogli
2008-12-09 22:24:08 UTC
parent b28ae38be6a3cfae3cb5ce27b63b4e8372eaa871

Add Python 3 bindings

They're obviously based on the Python 2 bindings, and have the same API.
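
A quick usage sketch, mirroring the example in the new module's
docstring (it assumes an nmdb server is reachable over TIPC):

    >>> import nmdb
    >>> db = nmdb.DB()
    >>> db.add_tipc_server()
    >>> db[1] = {'english': 'one', 'castellano': 'uno'}
    >>> print(db[1])
    {'english': 'one', 'castellano': 'uno'}
    >>> del db[1]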

Signed-off-by: Alberto Bertogli <albertito@blitiri.com.ar>

.gitignore +1 -0
INSTALL +2 -1
Makefile +12 -1
README +3 -3
bindings/python3/LICENSE +34 -0
bindings/python3/nmdb.py +252 -0
bindings/python3/nmdb_ll.c +452 -0
bindings/python3/setup.py +17 -0
tests/python3/README +4 -0
tests/python3/random1-cache.py +129 -0
tests/python3/random1.py +154 -0

diff --git a/.gitignore b/.gitignore
index 1e72e7b..7cad653 100644
--- a/.gitignore
+++ b/.gitignore
@@ -17,6 +17,7 @@
 /libnmdb/libnmdb.pc
 /tags
 /bindings/python/build
+/bindings/python3/build
 /tests/perf/out
 /tests/perf/ag-data
 /tests/perf/graph
diff --git a/INSTALL b/INSTALL
index 687baa8..64f1615 100644
--- a/INSTALL
+++ b/INSTALL
@@ -60,7 +60,8 @@ Bindings
 
 To compile the Python bindings, you need to have the library already
 installed. Use "make python_install" at the top level directory to build and
-install the modules. The module will be named "nmdb".
+install the modules. The module will be named "nmdb". The same goes for Python
+3: use "make python3_install".
 
 The other bindings do not have a properly defined install procedure, and
 you'll need knowledge of the language to install them.
diff --git a/Makefile b/Makefile
index 5eb5ac9..5ef8582 100644
--- a/Makefile
+++ b/Makefile
@@ -32,7 +32,18 @@ python_install:
 python_clean:
 	cd bindings/python && rm -rf build/
 
+python3:
+	cd bindings/python3 && python3 setup.py build
 
-.PHONY: default all clean nmdb libnmdb utils python python_install python_clean
+python3_install:
+	cd bindings/python3 && python3 setup.py install
+
+python3_clean:
+	cd bindings/python3 && rm -rf build/
+
+
+.PHONY: default all clean nmdb libnmdb utils \
+	python python_install python_clean \
+	python3 python3_install python3_clean
 
 
diff --git a/README b/README
index f1c7b17..36e3411 100644
--- a/README
+++ b/README
@@ -13,9 +13,9 @@ Both work combined, but the use of the persistent backend is optional, so you
 can use the server only for cache queries, pretty much like memcached.
 
 This source distribution is composed of several parts: the server called
-"nmdb", the library and bindings for Python, D, NewLISP, Ruby, Bigloo Scheme
-and Haskell. Each one has a separate directory, and is licensed individually.
-See the LICENSE file for more information.
+"nmdb", the library and bindings for Python (2 and 3), D, NewLISP, Ruby,
+Bigloo Scheme and Haskell. Each one has a separate directory, and is licensed
+individually. See the LICENSE file for more information.
 
 
 Documentation
diff --git a/bindings/python3/LICENSE b/bindings/python3/LICENSE
new file mode 100644
index 0000000..f3a9498
--- /dev/null
+++ b/bindings/python3/LICENSE
@@ -0,0 +1,34 @@
+
+I don't like licenses, because I don't like having to worry about all this
+legal stuff just for a simple piece of software I don't really mind anyone
+using. But I also believe that it's important that people share and give back;
+so I'm placing this library under the following license, so you feel guilty if
+you don't ;)
+
+
+BOLA - Buena Onda License Agreement
+-----------------------------------
+
+This work is provided 'as-is', without any express or implied warranty. In no
+event will the authors be held liable for any damages arising from the use of
+this work.
+
+To all effects and purposes, this work is to be considered Public Domain.
+
+
+However, if you want to be "Buena onda", you should:
+
+1. Not take credit for it, and give proper recognition to the authors.
+2. Share your modifications, so everybody benefits from them.
+3. Do something nice for the authors.
+4. Help someone who needs it: sign up for some volunteer work or help your
+   neighbour paint the house.
+5. Don't waste. Anything, but specially energy that comes from natural
+   non-renewable resources. Extra points if you discover or invent something
+   to replace them.
+6. Be tolerant. Everything that's good in nature comes from cooperation.
+
+The order is important, and the further you go the more "Buena onda" you are.
+Make the world a better place: be "Buena onda".
+
+
diff --git a/bindings/python3/nmdb.py b/bindings/python3/nmdb.py
new file mode 100644
index 0000000..2c0b88c
--- /dev/null
+++ b/bindings/python3/nmdb.py
@@ -0,0 +1,252 @@
+
+"""
+libnmdb python 3 wrapper
+
+This module is a wrapper for the libnmdb, the C library used to implement
+clients to the nmdb server.
+
+It provides three similar classes: DB, SyncDB and Cache. They all present the
+same dictionary-like interface, but differ in how they interact with the
+server.
+
+The DB class allows you to set, get and delete (key, value) pairs from the
+database; the SyncDB class works like DB, but does so in a synchronous way; and
+the Cache class affects only the cache and does not impact the backend database.
+
+Note that mixing cache sets with DB sets can create inconsistencies between
+the database and the cache. You shouldn't do that unless you know what you're
+doing.
+
+The classes use pickle to allow you to store and retrieve python objects in a
+transparent way. To disable it, set .autopickle to False.
+
+Here is an example using the DB class:
+
+>>> import nmdb
+>>> db = nmdb.DB()
+>>> db.add_tipc_server()
+>>> db[1] = { 'english': 'one', 'castellano': 'uno', 'quechua': 'huk' }
+>>> print(db[1])
+{'english': 'one', 'castellano': 'uno', 'quechua': 'huk'}
+>>> db[(1, 2)] = (True, False)
+>>> print(db[(1, 2)])
+(True, False)
+>>> del db[(1, 2)]
+>>> print(db[(1, 2)])
+Traceback (most recent call last):
+  File "<stdin>", line 1, in <module>
+  File "/usr/local/lib/python3.0/dist-packages/nmdb.py", line 206, in __getitem__
+    return self.get(key)
+  File "/usr/local/lib/python3.0/dist-packages/nmdb.py", line 102, in normal_get
+    return self.generic_get(self._db.get, key)
+  File "/usr/local/lib/python3.0/dist-packages/nmdb.py", line 93, in generic_get
+    raise KeyError
+KeyError
+>>>
+"""
+
+import pickle
+import nmdb_ll
+
+
+class NetworkError (Exception):
+	pass
+
+
+class GenericDB:
+	def __init__(self):
+		self._db = nmdb_ll.nmdb()
+		self.autopickle = True
+
+	def add_tipc_server(self, port = -1):
+		"Adds a TIPC server to the server pool."
+		rv = self._db.add_tipc_server(port)
+		if not rv:
+			raise NetworkError
+		return rv
+
+	def add_tcp_server(self, addr, port = -1):
+		"Adds a TCP server to the server pool."
+		rv = self._db.add_tcp_server(addr, port)
+		if not rv:
+			raise NetworkError
+		return rv
+
+	def add_udp_server(self, addr, port = -1):
+		"Adds an UDP server to the server pool."
+		rv = self._db.add_udp_server(addr, port)
+		if not rv:
+			raise NetworkError
+		return rv
+
+
+	def generic_get(self, getf, key):
+		"d[k]   Returns the value associated with the key k."
+		if self.autopickle:
+			key = pickle.dumps(key, protocol = -1)
+		try:
+			r = getf(key)
+		except:
+			raise NetworkError
+		if r == -1:
+			# For key errors, get returns -1 instead of a bytes
+			# object, so we know it's a miss.
+			raise KeyError
+		if self.autopickle:
+			r = pickle.loads(r)
+		return r
+
+	def cache_get(self, key):
+		return self.generic_get(self._db.cache_get, key)
+
+	def normal_get(self, key):
+		return self.generic_get(self._db.get, key)
+
+
+	def generic_set(self, setf, key, val):
+		"d[k] = v   Associates the value v to the key k."
+		if self.autopickle:
+			key = pickle.dumps(key, protocol = -1)
+			val = pickle.dumps(val, protocol = -1)
+		r = setf(key, val)
+		if r <= 0:
+			raise NetworkError
+		return 1
+
+	def cache_set(self, key, val):
+		return self.generic_set(self._db.cache_set, key, val)
+
+	def normal_set(self, key, val):
+		return self.generic_set(self._db.set, key, val)
+
+	def set_sync(self, key, val):
+		return self.generic_set(self._db.set_sync, key, val)
+
+
+	def generic_delete(self, delf, key):
+		"del d[k]   Deletes the key k."
+		if self.autopickle:
+			key = pickle.dumps(key, protocol = -1)
+		r = delf(key)
+		if r < 0:
+			raise NetworkError
+		elif r == 0:
+			raise KeyError
+		return 1
+
+	def cache_delete(self, key):
+		return self.generic_delete(self._db.cache_delete, key)
+
+	def normal_delete(self, key):
+		return self.generic_delete(self._db.delete, key)
+
+	def delete_sync(self, key):
+		return self.generic_delete(self._db.delete_sync, key)
+
+
+	def generic_cas(self, casf, key, oldval, newval):
+		"Perform a compare-and-swap."
+		if self.autopickle:
+			key = pickle.dumps(key, protocol = -1)
+			oldval = pickle.dumps(oldval, protocol = -1)
+			newval = pickle.dumps(newval, protocol = -1)
+		r = casf(key, oldval, newval)
+		if r == 2:
+			# success
+			return 2
+		elif r == 1:
+			# no match
+			return 1
+		elif r == 0:
+			# not in
+			raise KeyError
+		else:
+			raise NetworkError
+
+	def cache_cas(self, key, oldval, newval):
+		return self.generic_cas(self._db.cache_cas, key,
+				oldval, newval)
+
+	def normal_cas(self, key, oldval, newval):
+		return self.generic_cas(self._db.cas, key,
+				oldval, newval)
+
+
+	def generic_incr(self, incrf, key, increment):
+		"""Atomically increment the value associated with the given
+		key by the given increment."""
+		if self.autopickle:
+			key = pickle.dumps(key, protocol = -1)
+		r, v = incrf(key, increment)
+		if r == 2:
+			# success
+			return v
+		elif r == 1:
+			# no match, because the value didn't have the right
+			# format
+			raise TypeError("The value must be a " + \
+					"NULL-terminated string")
+		elif r == 0:
+			# not in
+			raise KeyError
+		else:
+			raise NetworkError
+
+	def cache_incr(self, key, increment = 1):
+		return self.generic_incr(self._db.cache_incr, key, increment)
+
+	def normal_incr(self, key, increment = 1):
+		return self.generic_incr(self._db.incr, key, increment)
+
+
+	# The following functions will assume the existence of self.set,
+	# self.get, and self.delete, which are supposed to be set by our
+	# subclasses.
+
+	def __getitem__(self, key):
+		return self.get(key)
+
+	def __setitem__(self, key, val):
+		return self.set(key, val)
+
+	def __delitem__(self, key):
+		return self.delete(key)
+
+	def __contains__(self, key):
+		"Returns True if the key is in the database, False otherwise."
+		try:
+			r = self.get(key)
+		except KeyError:
+			return False
+		if not r:
+			return False
+		return True
+
+	def has_key(self, key):
+		"Returns True if the key is in the database, False otherwise."
+		return self.__contains__(key)
+
+
+
+class Cache (GenericDB):
+	get = GenericDB.cache_get
+	set = GenericDB.cache_set
+	delete = GenericDB.cache_delete
+	cas = GenericDB.cache_cas
+	incr = GenericDB.cache_incr
+
+class DB (GenericDB):
+	get = GenericDB.normal_get
+	set = GenericDB.normal_set
+	delete = GenericDB.normal_delete
+	cas = GenericDB.normal_cas
+	incr = GenericDB.normal_incr
+
+class SyncDB (GenericDB):
+	get = GenericDB.normal_get
+	set = GenericDB.set_sync
+	delete = GenericDB.delete_sync
+	cas = GenericDB.normal_cas
+	incr = GenericDB.normal_incr
+
+
diff --git a/bindings/python3/nmdb_ll.c b/bindings/python3/nmdb_ll.c
new file mode 100644
index 0000000..a6a4ab8
--- /dev/null
+++ b/bindings/python3/nmdb_ll.c
@@ -0,0 +1,452 @@
+
+/*
+ * Python 3 bindings for libnmdb
+ * Alberto Bertogli (albertito@blitiri.com.ar)
+ *
+ * This is the low-level module, used by the python one to construct
+ * friendlier objects.
+ */
+
+#include <Python.h>
+#include <nmdb.h>
+
+
+/*
+ * Type definitions
+ */
+
+typedef struct {
+	PyObject_HEAD;
+	nmdb_t *db;
+} nmdbobject;
+static PyTypeObject nmdbType;
+
+/*
+ * The nmdb object
+ */
+
+/* delete */
+static void db_dealloc(nmdbobject *db)
+{
+	if (db->db) {
+		nmdb_free(db->db);
+	}
+	PyObject_Del(db);
+}
+
+
+/* add tipc server */
+static PyObject *db_add_tipc_server(nmdbobject *db, PyObject *args)
+{
+	int port;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "i:add_tipc_server", &port)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_add_tipc_server(db->db, port);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+/* add tcp server */
+static PyObject *db_add_tcp_server(nmdbobject *db, PyObject *args)
+{
+	int port;
+	char *addr;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "si:add_tcp_server", &addr, &port)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_add_tcp_server(db->db, addr, port);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+/* add udp server */
+static PyObject *db_add_udp_server(nmdbobject *db, PyObject *args)
+{
+	int port;
+	char *addr;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "si:add_udp_server", &addr, &port)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_add_udp_server(db->db, addr, port);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+/* cache set */
+static PyObject *db_cache_set(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key, *val;
+	int ksize, vsize;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "s#s#:cache_set", &key, &ksize,
+				&val, &vsize)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_cache_set(db->db, key, ksize, val, vsize);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+/* cache get */
+static PyObject *db_cache_get(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key, *val;
+	int ksize, vsize;
+	long rv;
+	PyObject *r;
+
+	if (!PyArg_ParseTuple(args, "s#:cache_get", &key, &ksize)) {
+		return NULL;
+	}
+
+	/* vsize should be enough to hold any value */
+	vsize = 128 * 1024;
+	val = malloc(vsize);
+	if (val == NULL)
+		return PyErr_NoMemory();
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_cache_get(db->db, key, ksize, val, vsize);
+	Py_END_ALLOW_THREADS
+
+	if (rv <= -2) {
+		/* FIXME: define a better exception */
+		r = PyErr_SetFromErrno(PyExc_IOError);
+	} else if (rv == -1) {
+		/* Miss, handled in the high-level module. */
+		r = PyLong_FromLong(-1);
+	} else {
+		r = PyBytes_FromStringAndSize((char *) val, rv);
+	}
+
+	free(val);
+	return r;
+}
+
+/* cache delete */
+static PyObject *db_cache_delete(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key;
+	int ksize;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "s#:cache_delete", &key, &ksize)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_cache_del(db->db, key, ksize);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+/* cache cas */
+static PyObject *db_cache_cas(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key, *oldval, *newval;
+	int ksize, ovsize, nvsize;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "s#s#s#:cache_cas", &key, &ksize,
+				&oldval, &ovsize,
+				&newval, &nvsize)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_cache_cas(db->db, key, ksize, oldval, ovsize,
+			newval, nvsize);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+/* cache increment */
+static PyObject *db_cache_incr(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key;
+	int ksize;
+	int rv;
+	long long int increment;
+	int64_t newval;
+
+	if (!PyArg_ParseTuple(args, "s#L:cache_incr", &key, &ksize,
+				&increment)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_cache_incr(db->db, key, ksize, increment, &newval);
+	Py_END_ALLOW_THREADS
+
+	return Py_BuildValue("LL", rv, newval);
+}
+
+
+/* db set */
+static PyObject *db_set(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key, *val;
+	int ksize, vsize;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "s#s#:set", &key, &ksize,
+				&val, &vsize)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_set(db->db, key, ksize, val, vsize);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+/* db get */
+static PyObject *db_get(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key, *val;
+	int ksize, vsize;
+	long rv;
+	PyObject *r;
+
+	if (!PyArg_ParseTuple(args, "s#:get", &key, &ksize)) {
+		return NULL;
+	}
+
+	/* vsize should be enough to hold any value */
+	vsize = 128 * 1024;
+	val = malloc(vsize);
+	if (val == NULL)
+		return PyErr_NoMemory();
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_get(db->db, key, ksize, val, vsize);
+	Py_END_ALLOW_THREADS
+
+	if (rv <= -2) {
+		/* FIXME: define a better exception */
+		r = PyErr_SetFromErrno(PyExc_IOError);
+	} else if (rv == -1) {
+		/* Miss, handled in the high-level module. */
+		r = PyLong_FromLong(-1);
+	} else {
+		r = PyBytes_FromStringAndSize((char *) val, rv);
+	}
+
+	free(val);
+	return r;
+}
+
+/* db delete */
+static PyObject *db_delete(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key;
+	int ksize;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "s#:delete", &key, &ksize)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_del(db->db, key, ksize);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+/* db cas */
+static PyObject *db_cas(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key, *oldval, *newval;
+	int ksize, ovsize, nvsize;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "s#s#s#:cas", &key, &ksize,
+				&oldval, &ovsize,
+				&newval, &nvsize)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_cas(db->db, key, ksize, oldval, ovsize, newval, nvsize);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+/* db increment */
+static PyObject *db_incr(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key;
+	int ksize;
+	int rv;
+	long long int increment;
+	int64_t newval;
+
+	if (!PyArg_ParseTuple(args, "s#L:incr", &key, &ksize, &increment)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_incr(db->db, key, ksize, increment, &newval);
+	Py_END_ALLOW_THREADS
+
+	return Py_BuildValue("LL", rv, newval);
+}
+
+
+/* db set sync */
+static PyObject *db_set_sync(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key, *val;
+	int ksize, vsize;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "s#s#:set_sync", &key, &ksize,
+				&val, &vsize)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_set_sync(db->db, key, ksize, val, vsize);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+/* db delete sync */
+static PyObject *db_delete_sync(nmdbobject *db, PyObject *args)
+{
+	unsigned char *key;
+	int ksize;
+	int rv;
+
+	if (!PyArg_ParseTuple(args, "s#:delete_sync", &key, &ksize)) {
+		return NULL;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	rv = nmdb_del_sync(db->db, key, ksize);
+	Py_END_ALLOW_THREADS
+
+	return PyLong_FromLong(rv);
+}
+
+
+
+/* nmdb method table */
+
+static PyMethodDef nmdb_methods[] = {
+	{ "add_tipc_server", (PyCFunction) db_add_tipc_server,
+		METH_VARARGS, NULL },
+	{ "add_tcp_server", (PyCFunction) db_add_tcp_server,
+		METH_VARARGS, NULL },
+	{ "add_udp_server", (PyCFunction) db_add_udp_server,
+		METH_VARARGS, NULL },
+	{ "cache_set", (PyCFunction) db_cache_set, METH_VARARGS, NULL },
+	{ "cache_get", (PyCFunction) db_cache_get, METH_VARARGS, NULL },
+	{ "cache_delete", (PyCFunction) db_cache_delete, METH_VARARGS, NULL },
+	{ "cache_cas", (PyCFunction) db_cache_cas, METH_VARARGS, NULL },
+	{ "cache_incr", (PyCFunction) db_cache_incr, METH_VARARGS, NULL },
+	{ "set", (PyCFunction) db_set, METH_VARARGS, NULL },
+	{ "get", (PyCFunction) db_get, METH_VARARGS, NULL },
+	{ "delete", (PyCFunction) db_delete, METH_VARARGS, NULL },
+	{ "cas", (PyCFunction) db_cas, METH_VARARGS, NULL },
+	{ "incr", (PyCFunction) db_incr, METH_VARARGS, NULL },
+	{ "set_sync", (PyCFunction) db_set_sync, METH_VARARGS, NULL },
+	{ "delete_sync", (PyCFunction) db_delete_sync, METH_VARARGS, NULL },
+
+	{ NULL }
+};
+
+/* new, returns an nmdb object */
+static PyObject *db_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+	nmdbobject *db;
+
+	db = (nmdbobject *) type->tp_alloc(type, 0);
+	if (db == NULL)
+		return NULL;
+
+	if (!PyArg_ParseTuple(args, ":new")) {
+		return NULL;
+	}
+
+	db->db = nmdb_init();
+	if (db->db == NULL) {
+		return PyErr_NoMemory();
+	}
+
+	/* XXX: is this necessary? */
+	if (PyErr_Occurred()) {
+		nmdb_free(db->db);
+		return NULL;
+	}
+
+	return (PyObject *) db;
+}
+
+
+static PyTypeObject nmdbType = {
+	PyVarObject_HEAD_INIT(NULL, 0)
+	.tp_name = "nmdb_ll.nmdb",
+	.tp_basicsize = sizeof(nmdbobject),
+	.tp_dealloc = (destructor) db_dealloc,
+	.tp_methods = nmdb_methods,
+	.tp_new = db_new,
+};
+
+
+
+/*
+ * The module
+ */
+
+static PyModuleDef nmdb_module = {
+	PyModuleDef_HEAD_INIT,
+	.m_name = "nmdb_ll",
+	.m_doc = NULL,
+	.m_size = -1,
+};
+
+
+PyMODINIT_FUNC PyInit_nmdb_ll(void)
+{
+	PyObject *m;
+
+	if (PyType_Ready(&nmdbType) < 0)
+		return NULL;
+
+	m = PyModule_Create(&nmdb_module);
+
+	Py_INCREF(&nmdbType);
+	PyModule_AddObject(m, "nmdb", (PyObject *) &nmdbType);
+
+	return m;
+}
+
+
+
diff --git a/bindings/python3/setup.py b/bindings/python3/setup.py
new file mode 100644
index 0000000..54c1bcd
--- /dev/null
+++ b/bindings/python3/setup.py
@@ -0,0 +1,17 @@
+
+from distutils.core import setup, Extension
+
+nmdb_ll = Extension("nmdb_ll",
+		libraries = ['nmdb'],
+		sources = ['nmdb_ll.c'])
+
+setup(
+	name = 'nmdb',
+	description = "libnmdb bindings",
+	author = "Alberto Bertogli",
+	author_email = "albertito@blitiri.com.ar",
+	url = "http://blitiri.com.ar/p/nmdb",
+	py_modules = ['nmdb'],
+	ext_modules = [nmdb_ll]
+)
+
diff --git a/tests/python3/README b/tests/python3/README
new file mode 100644
index 0000000..9f857b7
--- /dev/null
+++ b/tests/python3/README
@@ -0,0 +1,4 @@
+
+These tests are identical to the ones in tests/python, except they've been
+modified to work under Python 3 (and obviously use the Python 3 bindings).
+
diff --git a/tests/python3/random1-cache.py b/tests/python3/random1-cache.py
new file mode 100755
index 0000000..71e8b9f
--- /dev/null
+++ b/tests/python3/random1-cache.py
@@ -0,0 +1,129 @@
+#!/usr/bin/env python3
+
+import sys
+import nmdb
+from random import randint, choice
+
+
+class Mismatch (Exception):
+	pass
+
+
+# network db
+ndb = nmdb.Cache()
+ndb.add_tipc_server()
+ndb.add_tcp_server('localhost')
+ndb.add_udp_server('localhost')
+
+# local db
+ldb = {}
+
+# history of each key
+history = {}
+
+# check decorator
+def checked(f):
+	def newf(k, *args, **kwargs):
+		try:
+			return f(k, *args, **kwargs)
+		except:
+			if k in history:
+				print(history[k])
+			else:
+				print('No history for key', k)
+			raise
+	newf.__name__ = f.__name__
+	return newf
+
+
+# operations
+@checked
+def set(k, v):
+	ndb[k] = v
+	ldb[k] = v
+	if k not in history:
+		history[k] = []
+	history[k].append((set, k, v))
+
+@checked
+def get(k):
+	try:
+		n = ndb[k]
+	except KeyError:
+		del ldb[k]
+		del history[k]
+		return 0
+
+	l = ldb[k]
+	if l != n:
+		raise Mismatch((n, l))
+	history[k].append((get, k))
+	return True
+
+@checked
+def delete(k):
+	del ldb[k]
+	try:
+		del ndb[k]
+	except KeyError:
+		pass
+	history[k].append((delete, k))
+
+def find_missing():
+	misses = 0
+	for k in list(ldb.keys()):
+		if not get(k):
+			misses += 1
+	return misses
+
+# Use integers because the normal random() generates floating point numbers,
+# and they can mess up comparisons because of architecture details.
+def getrand():
+	return randint(0, 1000000000000000000)
+
+
+if __name__ == '__main__':
+	if len(sys.argv) < 2:
+		print('Use: random1-cache.py number_of_keys [key_prefix]')
+		sys.exit(1)
+
+	nkeys = int(sys.argv[1])
+	if len(sys.argv) > 2:
+		key_prefix = sys.argv[2]
+	else:
+		key_prefix = ''
+
+	# fill all the keys
+	print('populate')
+	for i in range(nkeys):
+		set(key_prefix + str(getrand()), getrand())
+
+	print('missing', find_missing())
+
+	lkeys = list(ldb.keys())
+
+	# operate on them a bit
+	print('random operations')
+	operations = ('set', 'get', 'delete')
+	for i in range(nkeys // 2):
+		op = choice(operations)
+		k = choice(lkeys)
+		if op == 'set':
+			set(k, getrand())
+		elif op == 'get':
+			get(k)
+		elif op == 'delete':
+			delete(k)
+			lkeys.remove(k)
+
+	print('missing', find_missing())
+
+	print('delete')
+	for k in lkeys:
+		delete(k)
+
+	print('missing', find_missing())
+
+	sys.exit(0)
+
+
diff --git a/tests/python3/random1.py b/tests/python3/random1.py
new file mode 100755
index 0000000..ae18bc6
--- /dev/null
+++ b/tests/python3/random1.py
@@ -0,0 +1,154 @@
+#!/usr/bin/env python3
+
+import sys
+import nmdb
+from random import randint, choice
+
+
+class Mismatch (Exception):
+	pass
+
+
+# network db
+ndb = nmdb.DB()
+ndb.add_tipc_server()
+ndb.add_tcp_server('localhost')
+ndb.add_udp_server('localhost')
+
+# local db
+ldb = {}
+
+# history of each key
+history = {}
+
+# check decorator
+def checked(f):
+	def newf(k, *args, **kwargs):
+		try:
+			return f(k, *args, **kwargs)
+		except:
+			if k in history:
+				print(history[k])
+			else:
+				print('No history for key', k)
+			raise
+	newf.__name__ = f.__name__
+	return newf
+
+
+# operations
+@checked
+def set(k, v):
+	ndb[k] = v
+	ldb[k] = v
+	if k not in history:
+		history[k] = []
+	history[k].append((set, k, v))
+
+@checked
+def get(k):
+	n = ndb[k]
+	l = ldb[k]
+	if l != n:
+		raise Mismatch((n, l))
+	history[k].append((get, k))
+	return n
+
+@checked
+def delete(k):
+	del ndb[k]
+	del ldb[k]
+	history[k].append((delete, k))
+
+@checked
+def cas(k, ov, nv):
+	prel = ldb[k]
+	pren = ndb[k]
+	n = ndb.cas(k, ov, nv)
+	if k not in ldb:
+		l = 0
+	elif ldb[k] == ov:
+		ldb[k] = nv
+		l = 2
+	else:
+		l = 1
+	if n != l:
+		print(k, ldb[k], ndb[k])
+		print(prel, pren)
+		print(history[k])
+		raise Mismatch((n, l))
+	history[k].append((cas, k, ov, nv))
+	return n
+
+
+def check():
+	for k in ldb.keys():
+		try:
+			n = ndb[k]
+			l = ldb[k]
+		except:
+			print(history[k])
+			raise Mismatch((n, l))
+
+		if n != l:
+			print(history[k])
+			raise Mismatch((n, l))
+
+
+# Use integers because the normal random() generates floating point numbers,
+# and they can mess up comparisons because of architecture details.
+def getrand():
+	return randint(0, 1000000000000000000)
+
+
+if __name__ == '__main__':
+	if len(sys.argv) < 2:
+		print('Use: random1.py number_of_keys [key_prefix]')
+		sys.exit(1)
+
+	nkeys = int(sys.argv[1])
+	if len(sys.argv) > 2:
+		key_prefix = sys.argv[2]
+	else:
+		key_prefix = ''
+
+	# fill all the keys
+	print('populate')
+	for i in range(nkeys):
+		set(key_prefix + str(getrand()), getrand())
+
+	lkeys = list(ldb.keys())
+
+	# operate on them a bit
+	print('random operations')
+	operations = ('set', 'delete', 'cas0', 'cas1')
+	for i in range(nkeys // 2):
+		op = choice(operations)
+		k = choice(lkeys)
+		if op == 'set':
+			set(k, getrand())
+		elif op == 'delete':
+			delete(k)
+			lkeys.remove(k)
+		elif op == 'cas0':
+			# unsuccessful cas
+			cas(k, -1, -1)
+		elif op == 'cas1':
+			# successful cas
+			cas(k, ldb[k], getrand())
+
+	print('check')
+	check()
+
+	print('delete')
+	for k in lkeys:
+		delete(k)
+
+	print('check')
+	check()
+
+	sys.exit(0)
+
+
+
+