Posted 2 days ago
I spent a day or so last week cleaning up @jonasarrow's demo patch for derivatives on vc4.  It had been hanging around on the github issue waiting for a rework due to feedback, and I decided to finally just go and do it.  It unfortunately involved totally rewriting their patches (which I dislike doing, it's always more awesome to have the original submitter get credit), but we now have dFdx()/dFdy() on Mesa master.

I also landed a fix for GPU hangs with 16 vertex attributes (4 or more vec4s, aka glsl-routing in piglit).  I'd been debugging this one for a while, and finally came up with an idea ("what if this FIFO here is a bad idea to use and we should be synchronous with this external unit?"), it worked, and a hardware developer confirmed that the fix was correct.  This one got a huge explanation comment.  I also fixed discards inside of if/loop statements -- generally discards get lowered out of ifs, but if it's in a non-unrolled loop we were doing discards ignoring whether the channel was in the loop.

Thanks to review from Rhys, I landed Mesa's Travis build fixes.  Rhys then used Travis to test out a couple of fixes to i915 and r600.  This is pretty cool, but it just makes me really want to get piglit into Travis so that we can get some actual integration testing in this process.

I got xserver's Travis to the point of running the unit tests, and one of them crashes on CI but not locally.  That's interesting.

The last GPU hang I have in piglit is in glsl-vs-loops.  This week I figured out what's going on, and I hope I'll be able to write about a fix next week.

Finally, I landed Stefan Wahren's Raspberry Pi Zero devicetree for upstream.  If nothing goes wrong, the Zero should be supported in 4.9.
Posted 2 days ago
Lately I have been working on a simple terrain OpenGL renderer demo, mostly to have a playground where I could try some techniques like shadow mapping and water rendering in a scenario with a non-trivial amount of geometry, and I thought it would be interesting to write a bit about it. But first, here is a video of the demo running on my old Intel IvyBridge GPU, and some screenshots too. Note that I did not create any of the textures or 3D models featured in the video.

With that out of the way, let's dig into some of the technical aspects:

The terrain is built as a 251×251 grid of vertices elevated with a heightmap texture, so it contains 63,000 vertices and 125,000 triangles. It uses a single 512×512 texture to color the surface.

The water is rendered in 3 passes: refraction, reflection and the final rendering. Distortion is done via a dudv map and it also uses a normal map for lighting. From a geometry perspective it is also implemented as a grid of vertices with 750 triangles.

I wrote a simple OBJ file parser so I could load some basic 3D models for the trees, the rock and the plant models. The parser is limited, but sufficient to load vertex data and simple materials. This demo features 4 models with these specs:

Tree A: 280 triangles, 2 materials.
Tree B: 380 triangles, 2 materials.
Rock: 192 triangles, 1 material (textured).
Grass: 896 triangles (yes, really!), 1 material.

The scene renders 200 instances of Tree A, another 200 instances of Tree B, 50 instances of Rock and 150 instances of Grass, so 600 objects in total. Object locations in the terrain are randomized at start-up, but the demo prevents trees and grass from being under water (except for maybe their base section only) because it would look very weird otherwise :); rocks can be fully submerged though.

Rendered objects fade in and out smoothly via alpha blending (so there is no pop-in/pop-out effect as they reach clipping planes).
This cannot be observed in the video because it uses a static camera, but the demo supports moving the camera around in real-time using the keyboard.

Lighting is implemented using the traditional Phong reflection model with a single directional light. Shadows are implemented using a 4096×4096 shadow map and Percentage Closer Filtering with a 3×3 kernel, which, I read, is (or was?) a very common technique for shadow rendering, at least in the times of the PS3 and Xbox 360. The demo features dynamic directional lighting (that is, the sun light changes position every frame), which is rather taxing. The demo also supports static lighting, which is significantly less demanding.

There is also a slight haze that builds up progressively with the distance from the camera. This can be seen slightly in the video, but it is more obvious in some of the screenshots above. The demo in the video was also configured to use 4-sample multisampling.

As for the rendering pipeline, it mostly has 4 stages:

Shadow map.
Water refraction.
Water reflection.
Final scene rendering.

A few notes on performance as well: the implementation supports a number of configurable parameters that affect the framerate: resolution, shadow rendering quality, clipping distances, multi-sampling, some aspects of the water rendering, N-buffering of dynamic VBO data, etc. The video I show above runs at a locked 60fps at 800×600, but it uses relatively high quality shadows and dynamic lighting, which are very expensive. Lowering some of these settings (especially turning off dynamic lighting, multisampling and shadow quality) yields framerates around 110fps-200fps. With these settings it can also do fullscreen 1600×900 with an unlocked framerate that varies in the range of 80fps-170fps. That's all on the IvyBridge GPU. I also tested this on an Intel Haswell GPU with significantly better results: 160fps-400fps with the "low" settings at 800×600 and roughly 80fps-200fps with the same settings used in the video.
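As an aside, the terrain mesh numbers quoted earlier (the 251×251 grid, ~63,000 vertices and 125,000 triangles) follow directly from standard triangulated-grid arithmetic, and can be sanity-checked with a few lines of throwaway Python (not part of the demo itself):

```python
# A w×h grid of vertices has (w-1)×(h-1) cells, each split into 2 triangles.
w = h = 251
vertices = w * h
triangles = 2 * (w - 1) * (h - 1)

print(vertices)   # 63001 (the ~63,000 quoted in the post)
print(triangles)  # 125000
```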
So that's it for today. I had a lot of fun coding this, and I hope the post was interesting to some of you. If time permits, I intend to write follow-up posts that go deeper into how I implemented the various elements of the demo, and I'll probably also write some more posts about the optimization process I followed. If you are interested in any of that, stay tuned for more.
Posted 6 days ago
Clickbait titles for the win!

First up, massive thanks to my major co-conspirator on radv, Bas Nieuwenhuizen, for putting in so much effort on getting radv going.

So where are we at? Well, this morning I finally found the last bug that was causing missing rendering on Dota 2. We were missing support for a compressed texture format that Dota 2 used. So currently Dota 2 renders. I've no great performance comparison to post yet because my CPU is 5 years old, and can barely get close to 30fps with GL or Vulkan. I think we know of a couple of places that could be bottlenecking us on the CPU side. The radv driver is currently missing hyper-z (90% done), fast color clears and DCC, which are all GPU side speedups in theory. Also running the phoronix-test-suite dota2 tests works sometimes, hangs in a thread lock sometimes, or crashes sometimes. I think we have some memory corruption somewhere that it collides with.

Other status bits: the Vulkan CTS test suite contains 114598 tests; a piglit run a few hours before I fixed Dota 2 was at:

[114598/114598] skip: 50388, pass: 62932, fail: 1193, timeout: 2, crash: 83

So that isn't too bad a showing; we know some missing features account for some of the fails. A lot of the crashes are an assert in CTS hitting that I don't think is a real problem. We render most of the Sascha Willems demos fine.

I've tested The Talos Principle as well; the texture fix renders a lot more stuff on the screen, but we are still seeing large chunks of blackness where I think there should be trees in-game. The menus etc. all seem to load fine.

All this work is on the semi-interesting branch and has only been tested on VI AMD GPUs. Polaris worked previously but something derailed it; we should fix that once we get the finished bisect. CIK GPUs kinda work with the amdgpu kernel driver loaded. SI GPUs are nowhere yet.

Here's a screenshot:
Posted 9 days ago
Last week I finally plugged in the camera module I got a while ago to go take a look at what vc4 needs for displaying camera output. The surprising answer was "nothing."  vc4 could successfully import RGB dmabufs and display them as planes, even though I had been expecting to need fixes on that front.

However, the bcm2835 v4l camera driver needs a lot of work.  First of all, it doesn't use the proper contiguous memory support in v4l (vb2-dma-contig), and instead asks the firmware to copy from the firmware's contiguous memory into vmalloced kernel memory.  This wastes memory and wastes memory bandwidth, and doesn't give us dma-buf support.

Even more, MMAL (the v4l equivalent that the firmware exposes for driving the hardware) wants to output planar buffers with specific padding.  However, instead of using the multi-plane format support in v4l to expose buffers with that padding, the bcm2835 driver asks the firmware to do another copy from the firmware's planar layout into the old no-padding V4L planar format.

As a user of the V4L api, you're also in trouble because none of these formats have any priority information that I can see: the camera driver says it's equally happy to give you RGB or planar, even though RGB costs an extra copy.  I think properly done today, the camera driver would be exposing multi-plane planar YUV, and giving you a mem2mem adapter that could use MMAL calls to turn the planar YUV into RGB.

For now, I've updated the bug report with links to the demo code and instructions.

I also spent a little bit of time last week finishing off the series to use st/nir in vc4.  I managed to get to no regressions, and landed it today.  It doesn't eliminate TGSI, but it does mean TGSI is gone from the normal GLSL path.

Finally, I got inspired to do some work on testing.  I've been doing some free time work on Servo, Mozilla's Rust-based web browser, and their development environment has been a delight as a new developer.
All patch submissions, from core developers or from newbies, go through github pull requests.  When you generate a PR, Travis builds and runs the unit tests on it.  Then a core developer reviews the code by adding an "r" comment in the PR or provides feedback.  Once it's reviewed, a bot picks up the pull request, tries merging it to master, then runs the full integration test suite on it.  If the test suite passes, the bot merges it to master; otherwise the bot writes a comment with a link to the build/test logs.

Compare this to Mesa's development process.  You make a patch.  You file it in the issue tracker and it gets utterly ignored.  You complain, and someone tells you you got the process wrong, so you join the mailing list and send your patch (and then get a flood of email until you unsubscribe).  It gets mangled by your email client, and you get told to use git-send-email, so you screw around with that for a while before you get an email that will actually show up in people's inboxes.  Then someone reviews it (hopefully) before it scrolls off the end of their inbox, and then it doesn't get committed anyway because your name was familiar enough that the reviewer thought maybe you had commit access.  Or they do land your patch, and it turns out you hadn't run the integration tests, and then people complain at you for not testing.

So, as a first step toward making a process like Mozilla's possible, I put some time into fixing up Travis on Mesa, and building Travis support for the X Server.  If I can get Travis to run piglit and ensure that expected-pass tests don't regress, that at least gives us a documentable path for new developers in these two projects to put their code up on github and get automated testing of the branches they're proposing on the mailing lists.
Posted 15 days ago
Wrapping libudev using LD_PRELOAD

Peter Hutterer and I were chasing down an X server bug which was exposed when running the libinput test suite against the X server with a separate thread for input. This was crashing deep inside libudev, which led us to suspect that libudev was getting run from multiple threads at the same time.

I figured I'd be able to tell by wrapping all of the libudev calls from the server and checking to make sure we weren't ever calling it from both threads at the same time. My first attempt was a simple set of cpp macros, but that failed when I discovered that libwacom was calling libgudev, which was calling libudev. Instead of recompiling the world with my magic macros, I created a new library which exposes all of the (public) symbols in libudev. Each of these functions does a bit of checking and then simply calls down to the 'real' function.

Finding the real symbols

Here's the snippet which finds the real symbols:

```c
static void *udev_symbol(const char *symbol)
{
    static void *libudev;
    static pthread_mutex_t find_lock = PTHREAD_MUTEX_INITIALIZER;
    void *sym;

    pthread_mutex_lock(&find_lock);
    if (!libudev) {
        libudev = dlopen("", RTLD_LOCAL | RTLD_NOW);
    }
    sym = dlsym(libudev, symbol);
    pthread_mutex_unlock(&find_lock);
    return sym;
}
```

Yeah, the libudev version is hard-coded into the source; I didn't want to accidentally load the wrong one. This could probably be improved...

Checking for re-entrancy

As mentioned above, we suspected that the bug was caused when libudev got called from two threads at the same time. So, our checks are pretty simple; we just count the number of calls into any udev function (to handle udev calling itself). If there are other calls in process, we make sure the thread ID for those is the same as the current thread.
```c
static void udev_enter(const char *func)
{
    pthread_mutex_lock(&check_lock);
    assert(udev_running == 0 || udev_thread == pthread_self());
    udev_thread = pthread_self();
    udev_func[udev_running] = func;
    udev_running++;
    pthread_mutex_unlock(&check_lock);
}

static void udev_exit(void)
{
    pthread_mutex_lock(&check_lock);
    udev_running--;
    if (udev_running == 0)
        udev_thread = 0;
    udev_func[udev_running] = 0;
    pthread_mutex_unlock(&check_lock);
}
```

Wrapping functions

Now, the ugly part -- libudev exposes 93 different functions, with a wide variety of parameters and return types. I constructed a hacky macro, calls for which could be constructed pretty easily from the prototypes found in libudev.h, and which would construct our stub function:

```c
#define make_func(type, name, formals, actuals)     \
    type name formals {                             \
        type ret;                                   \
        static void *f;                             \
        if (!f)                                     \
            f = udev_symbol(__func__);              \
        udev_enter(__func__);                       \
        ret = ((typeof (&name)) f) actuals;         \
        udev_exit();                                \
        return ret;                                 \
    }
```

There are 93 invocations of this macro (or a variant for void functions) which look much like:

```c
make_func(struct udev *, udev_ref, (struct udev *udev), (udev))
```

Using udevwrap

To use udevwrap, simply stick the filename of the .so in LD_PRELOAD and run your program normally:

```
# LD_PRELOAD=/usr/local/lib/ Xorg
```

Source code

I stuck udevwrap in my git repository:

;a=summary

You can clone it using:

```
$ git git://
```
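The enter/exit counting scheme above is language-independent, so here is a hedged Python sketch of the same idea for illustration (the ReentrancyChecker name and structure are mine, not from the original C code):

```python
import threading

# Sketch of the udev_enter/udev_exit idea: count nested in-flight calls
# and assert that they all come from a single thread.
class ReentrancyChecker:
    def __init__(self):
        self._lock = threading.Lock()
        self._running = 0
        self._thread = None

    def enter(self):
        with self._lock:
            # Either no call is in flight, or we are re-entering from
            # the same thread (the library calling itself).
            assert self._running == 0 or self._thread == threading.get_ident()
            self._thread = threading.get_ident()
            self._running += 1

    def exit(self):
        with self._lock:
            self._running -= 1
            if self._running == 0:
                self._thread = None

checker = ReentrancyChecker()

def wrapped_call():
    checker.enter()
    try:
        pass  # call into the 'real' library here
    finally:
        checker.exit()

wrapped_call()
```

A call from a second thread while another call is in flight would trip the assertion, which is exactly the condition the original wrapper was built to detect.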
Posted 16 days ago
A Preliminary systemd.conf 2016 Schedule is Now Available!

We have just published a first, preliminary version of the systemd.conf 2016 schedule. There is a small number of white slots in the schedule still, because we're missing confirmation from a small number of presenters. The missing talks will be added in as soon as they are confirmed.

The schedule consists of 5 workshops by high-profile speakers during the workshop day, 22 exciting talks during the main conference days, followed by one full day of hackfests.

Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!
Posted 16 days ago
Last week I mostly worked on getting the upstream work I and others have done into downstream Raspbian (most of that time unfortunately spent setting up another Raspbian development environment, after yet another SD card failed).

However, the most exciting thing for most users is that with the merge of the rpi-4.4.y-dsi-stub-squash branch, the DSI display should now come up by default with the open source driver.  This is unfortunately not a full upstreamable DSI driver, because the closed-source firmware is getting in the way of Linux by stealing our interrupts and then talking to the hardware behind our backs.  To work around the firmware, I never talk to the DSI hardware, and we just replace the HVS display plane configuration on the DSI's output pipe.  This means your display backlight is always on and the DSI link is always running, but better that than no display.

I also transferred the wiki I had made for VC4 over to github.  In doing so, I was pleasantly surprised at how much documentation I wanted to write once I got off of the awful wiki software at freedesktop.  You can find more information on VC4 at my mesa and linux trees.

(Side note, wikis on github are interesting.  When you make your fork, you inherit the wiki of whoever you fork from, and you can do PRs back to their wiki similarly to how you would for the main repo.  So my linux tree has Raspberry Pi's wiki too, and I'm wondering if I want to move all of my wiki over to their tree.  I'm not sure.)

Is there anything that people think should be documented for the vc4 project that isn't there?
Posted 19 days ago
So we have two job openings in the Red Hat desktop team. What we are looking for is people to help us ensure that Fedora and RHEL run great on various desktop hardware, with a focus on laptops. Since these jobs require continuous access to a lot of new and different hardware, we can not accept applications this time from remotees, but require you to work out of our office in Munich, Germany.

We are looking for people who are not afraid to jump into a lot of different code and who like tinkering with new hardware. The hardware enablement here might include some kernel level work, but will more likely involve improving higher level stacks. So for example if we have a new laptop where bluetooth doesn't work, you would need to investigate and figure out if the problem is in the kernel, in the bluez stack or in our Bluetooth desktop parts. This will be quite varied work and we expect you to be part of a team which will be looking at anything from driver bugs, battery life issues, implementing new stacks, biometric login and enabling existing features in the kernel or in low level libraries in the user interface.

You can read more about the jobs at the Red Hat jobs site. That listing is for a Senior Engineer, but we also have a Principal Engineer position open with id 53653; that one is not on the website as I post this, but should hopefully be very soon.

Also, if you happen to be in the Karlsruhe area or at GUADEC this year, I will be here until Sunday, so you could come over for a chat. Feel free to email me if you are interested in meeting up.
Posted 20 days ago
A couple of weeks ago, I hinted at a presentation that I wanted to do during this year's GUADEC, as a Lightning talk. Unfortunately, I didn't get a chance to finish the work that I set out to do, encountering a couple of bugs that set me back. Hopefully this will get resolved post-GUADEC, so you can expect some announcements later on in the year.

At least one of the tasks I set out to do worked out, and was promptly obsoleted by a nicer solution. Let's dive in.

How to compile for a different architecture

There are four possible solutions to compile programs for a different architecture:

Native compilation: get a machine of that architecture, install your development packages, and compile. This is nice when you have fast machines with plenty of RAM to compile on, usually developer boards; not so good when you target low-power devices.

Cross-compilation: install a version of GCC and friends that runs on your machine's architecture, but produces binaries for your target one. This is usually fast, but you won't be able to run the binaries created, so you might end up with some data created from a different set of options, and won't be able to run the generated test suite.

Virtual machine: you'd run a virtual machine for the target architecture, install an OS, and build everything. This is slower than cross-compilation, but avoids the problems you'd see in cross-compilation.

The final option is one that's used more and more, mixing the last 2 solutions: the QEmu user-space emulator.

Using the QEmu user-space emulator

If you want to run just the one command, you'd do something like:

qemu-static-arm myarmbinary

Easy enough, but hardly something you want to try when compiling a whole application with library dependencies. This is where binfmt support in Linux comes into play.
Register the ELF format for your target with that user-space emulator, and you can run myarmbinary without any commands before it.

One thing to note, though, is that this won't work as easily if the QEmu user-space emulator and the target executable are built as dynamic executables: QEmu will need to find the libraries for your architecture, usually x86-64, to launch itself, and the emulated binary will also need to find its libraries.

To solve the first problem, there are QEmu static binaries available in a number of distributions (Fedora support is coming). For the second one, the easiest would be if we didn't have to mix native and target libraries on the filesystem, in a chroot or container for example. Hmm, container you say.

Running the QEmu user-space emulator in a container

We have our statically compiled QEmu, and a filesystem with our target binaries, and we've switched the root filesystem. Well, you try to run anything, and you get a bunch of errors. The problem is that there is a single binfmt configuration for the kernel, whether it's the normal OS, or inside a container or chroot.

The Flatpak hack

This commit for Flatpak works around the problem. The binary for the emulator needs to have the right path, so it can be found within the chroot'ed environment, and it will need to be copied there so it is accessible too, which is what this patch will do for you.

Follow the instructions in the commit, and test it out with this Flatpak script for GNU Hello.

$ TARGET=arm ./
[...]
$ ls org.gnu.hello.arm.xdgapp
918k org.gnu.hello.arm.xdgapp

Ready to install on your device!

The proper way

The above solution was built before it looked like the "proper way" was going to find its way into the upstream kernel.
This should hopefully land in the upcoming 4.8 kernel. Instead of launching a separate binary for each non-native invocation, this patchset allows the kernel to keep the binary opened, so it doesn't need to be copied to the container.

In short

With the work being done on Fedora's static QEmu user-space emulators, and the kernel feature that will land, we should be able to have a nice tickbox in Builder to build for any of the targets supported by QEmu. Get cross-compiling!
Posted 20 days ago
Three years after my definitive guide on Python classic, static, class and abstract methods, it seems to be time for a new one. Here, I would like to dissect and discuss Python exceptions.

Dissecting the base exceptions

In Python, the base exception class is named BaseException. Being rarely used in any program or library, it ought to be considered as an implementation detail. But to discover how it's implemented, you can go and read Objects/exceptions.c in the CPython source code. In that file, what is interesting is to see that the BaseException class defines all the basic methods and attributes of exceptions. The basic well-known Exception class is then simply defined as a subclass of BaseException, nothing more:

```c
/*
 *    Exception extends BaseException
 */
SimpleExtendsException(PyExc_BaseException, Exception,
                       "Common base class for all non-exit exceptions.");
```

The only other exceptions that inherit directly from BaseException are GeneratorExit, SystemExit and KeyboardInterrupt. All the other builtin exceptions inherit from Exception. The whole hierarchy can be seen by running pydoc2 exceptions or pydoc3 builtins. Here are the graphs representing the builtin exceptions inheritance in Python 2 and Python 3 (generated using this script):

Python 2 builtin exceptions inheritance graph
Python 3 builtin exceptions inheritance graph

The BaseException.__init__ signature is actually BaseException.__init__(*args). This initialization method stores any arguments that are passed in the args attribute of the exception. This can be seen in the exceptions.c source code – and is true for both Python 2 and Python 3:

```c
static int
BaseException_init(PyBaseExceptionObject *self, PyObject *args, PyObject *kwds)
{
    if (!_PyArg_NoKeywords(Py_TYPE(self)->tp_name, kwds))
        return -1;

    Py_INCREF(args);
    Py_XSETREF(self->args, args);

    return 0;
}
```

The only place where this args attribute is used is in the BaseException.__str__ method.
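The argument storage just described, and the __str__ behavior it feeds, can be checked directly from a Python prompt (Python 3 shown):

```python
# BaseException.__init__ stores all positional arguments in .args.
e = Exception("boom")
assert e.args == ("boom",)
assert str(e) == "boom"

# With several arguments, str() falls back to the whole args tuple.
e2 = Exception("boom", 42)
assert e2.args == ("boom", 42)
assert str(e2) == "('boom', 42)"

# And the direct BaseException subclasses mentioned above sit outside
# the Exception hierarchy, so "except Exception" does not catch them.
assert issubclass(KeyboardInterrupt, BaseException)
assert not issubclass(KeyboardInterrupt, Exception)
```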
This method uses self.args to convert an exception to a string:

```c
static PyObject *
BaseException_str(PyBaseExceptionObject *self)
{
    switch (PyTuple_GET_SIZE(self->args)) {
    case 0:
        return PyUnicode_FromString("");
    case 1:
        return PyObject_Str(PyTuple_GET_ITEM(self->args, 0));
    default:
        return PyObject_Str(self->args);
    }
}
```

This can be translated in Python to:

```python
def __str__(self):
    if len(self.args) == 0:
        return ""
    if len(self.args) == 1:
        return str(self.args[0])
    return str(self.args)
```

Therefore, the message to display for an exception should be passed as the first and only argument to the BaseException.__init__ method.

Defining your exceptions properly

As you may already know, in Python, exceptions can be raised in any part of the program. The basic exception is called Exception and can be used anywhere in your program. In real life, however, no program nor library should ever raise Exception directly: it's not specific enough to be helpful.

Since all exceptions are expected to be derived from the base class Exception, this base class can easily be used as a catch-all:

```python
try:
    do_something()
except Exception:
    # This will catch any exception!
    print("Something terrible happened")
```

To define your own exceptions correctly, there are a few rules and best practices that you need to follow:

Always inherit from (at least) Exception:

```python
class MyOwnError(Exception):
    pass
```

Leverage what we saw earlier about BaseException.__str__: it uses the first argument passed to BaseException.__init__ to be printed, so always call BaseException.__init__ with only one argument.

When building a library, define a base class inheriting from Exception.
It will make it easier for consumers to catch any exception from the library:

```python
class ShoeError(Exception):
    """Basic exception for errors raised by shoes"""

class UntiedShoelace(ShoeError):
    """You could fall"""

class WrongFoot(ShoeError):
    """When you try to wear your left shoe on your right foot"""
```

It then makes it easy to use except ShoeError when doing anything with that piece of code related to shoes. For example, Django does not do that for some of its exceptions, making it hard to catch "any exception raised by Django".

Provide details about the error. This is extremely valuable to be able to log errors correctly or take further action and try to recover:

```python
class CarError(Exception):
    """Basic exception for errors raised by cars"""
    def __init__(self, car, msg=None):
        if msg is None:
            # Set some default useful error message
            msg = "An error occurred with car %s" % car
        super(CarError, self).__init__(msg) = car

class CarCrashError(CarError):
    """When you drive too fast"""
    def __init__(self, car, other_car, speed):
        super(CarCrashError, self).__init__(
            car, msg="Car crashed into %s at speed %d" % (other_car, speed))
        self.speed = speed
        self.other_car = other_car
```

Then, any code can inspect the exception to take further action:

```python
try:
    drive_car(car)
except CarCrashError as e:
    # If we crash at high speed, we call emergency
    if e.speed >= 30:
        call_911()
```

For example, this is leveraged in Gnocchi to raise specific application exceptions (NoSuchArchivePolicy) on expected foreign key violations raised by SQL constraints:

```python
try:
    with self.facade.writer() as session:
        session.add(m)
except exception.DBReferenceError as e:
    if e.constraint == 'fk_metric_ap_name_ap_name':
        raise indexer.NoSuchArchivePolicy(archive_policy_name)
    raise
```

Inherit from builtin exception types when it makes sense.
This makes it easier for programs to not be specific to your application or library:

```python
class CarError(Exception):
    """Basic exception for errors raised by cars"""

class InvalidColor(CarError, ValueError):
    """Raised when the color for a car is invalid"""
```

That allows many programs to catch errors in a more generic way without knowing about your own defined type. If a program already knows how to handle a ValueError, it won't need any specific code nor modification.

Organization

There is no limitation on where and when you can define exceptions. As they are, after all, normal classes, they can be defined in any module, function or class – even as closures.

Most libraries package their exceptions into a specific exception module: SQLAlchemy has them in sqlalchemy.exc, requests has them in requests.exceptions, Werkzeug has them in werkzeug.exceptions, etc. That makes sense for libraries to export exceptions that way, as it makes it very easy for consumers to import their exception module and know where the exceptions are defined when writing code to handle errors. This is not mandatory, and smaller Python modules might want to retain their exceptions in their sole module. Typically, if your module is small enough to be kept in one file, don't bother splitting your exceptions into a different file/module.

While this wisely applies to libraries, applications tend to be different beasts. Usually, they are composed of different subsystems, where each one might have its own set of exceptions. This is why I generally discourage going with only one exception module in an application, but rather splitting them across the different parts of one's program. There might be no need for a special myapp.exceptions module. For example, if your application is composed of an HTTP REST API defined in the module myapp.http and of a TCP server contained in myapp.tcp, it's likely they can both define different exceptions tied to their own protocol errors and life cycle.
Defining those exceptions in a myapp.exceptions module would just scatter the code for the sake of some useless consistency. If the exceptions are local to a file, just define them somewhere at the top of that file. It will simplify the maintenance of the code.

Wrapping exceptions

Wrapping an exception is the practice by which one exception is encapsulated into another:

```python
class MylibError(Exception):
    """Generic exception for mylib"""
    def __init__(self, msg, original_exception):
        super(MylibError, self).__init__(msg + (": %s" % original_exception))
        self.original_exception = original_exception

try:
    requests.get("")
except requests.exceptions.ConnectionError as e:
    raise MylibError("Unable to connect", e)
```

This makes sense when writing a library which leverages other libraries. If a library uses requests and does not encapsulate requests exceptions into its own defined error classes, it will be a case of layer violation. Any application using your library might receive a requests.exceptions.ConnectionError, which is a problem because:

The application has no clue that the library was using requests and does not need/want to know about it.

The application will have to import requests.exceptions itself and therefore will depend on requests – even if it does not use it directly.

As soon as mylib changes from requests to e.g. httplib2, the application code catching requests exceptions will become irrelevant.

The Tooz library is a good example of wrapping, as it uses a driver-based approach and depends on a lot of different Python modules to talk to different backends (ZooKeeper, PostgreSQL, etcd…). Therefore, it wraps exceptions from other modules on every occasion into its own set of error classes.

Python 3 introduced the raise from form to help with that, and that's what Tooz leverages to raise its own errors. It's also possible to encapsulate the original exception into a custom defined exception, as done above. That makes the original exception easily available for inspection.
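The raise … from form mentioned above chains exceptions through the __cause__ attribute; here is a minimal self-contained sketch of it, using hypothetical names (fetch, api_call, and a MylibError like the one in the snippet above):

```python
class MylibError(Exception):
    """Generic exception for a hypothetical library."""

def fetch():
    # Simulate a lower-level failure coming from a dependency.
    raise ConnectionError("low-level socket failure")

def api_call():
    try:
        fetch()
    except ConnectionError as exc:
        # 'raise ... from' records the original exception in __cause__,
        # so callers can inspect it without importing the dependency.
        raise MylibError("Unable to connect") from exc

try:
    api_call()
except MylibError as e:
    assert isinstance(e.__cause__, ConnectionError)
    assert str(e.__cause__) == "low-level socket failure"
```

When the wrapped exception propagates uncaught, the interpreter prints both tracebacks, joined by "The above exception was the direct cause of the following exception", which is exactly the layering information an application debugging through your library wants.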
Catching and logging

When designing exceptions, it's important to remember that they should be targeted both at humans and computers. That's why they should include an explicit message, and embed as much information as possible. That will help to debug, and to write resilient programs that can pivot their behavior depending on the attributes of the exception, as seen above.

Also, silencing exceptions completely is to be considered bad practice. You should not write code like that:

```python
try:
    do_something()
except Exception:
    # Whatever
    pass
```

Not having any kind of information in a program where an exception occurs is a nightmare to debug.

If you use (and you should) the logging library, you can use the exc_info parameter to log a complete traceback when an exception occurs, which might help debugging on severe and unrecoverable failure:

```python
try:
    do_something()
except Exception:
    logging.getLogger().error("Something bad happened", exc_info=True)
```

Further reading

If you understood everything so far, congratulations, you might be ready to handle exceptions in Python! If you want to have a broader scope on exceptions and what Python misses, I encourage you to read about condition systems and discover the generalization of exceptions – that I hope we'll see in Python one day!

I hope this will help you build better libraries and applications. Feel free to shoot any question in the comment section!
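The exc_info behavior can be observed by attaching a handler that captures the log output; the StringIO scaffolding here is my own, not from the post:

```python
import io
import logging

# Capture log output in memory so we can inspect it.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
logger = logging.getLogger("demo")
logger.addHandler(handler)

try:
    1 / 0
except Exception:
    # exc_info=True appends the full traceback to the log record.
    logger.error("Something bad happened", exc_info=True)

output = stream.getvalue()
assert "Something bad happened" in output
assert "Traceback (most recent call last)" in output
assert "ZeroDivisionError" in output
```

The traceback that would otherwise be lost by the bare except clause ends up in the log, which is usually the difference between a debuggable failure report and a mystery.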