- Hovercraft! is a tool to make Prezi-style presentations from human-readable text files. Version 2.1 fixes a bunch of bugs, drops Python 3.2 support, and adds support for multiple CSS files.
- svg.path is a collection of objects that implement the different path commands in SVG, plus a parser for SVG path definitions. 2.1.1 adds an error parameter to many functions, so you can specify how much error you can accept in calculations; a higher error means faster calculations.
- Pyroma tests your Python project’s packaging friendliness. 2.0.0 changes how data is extracted from setup.py, and now works with pbr and other tools that extend distutils to store the metadata outside setup.py.
- tzlocal returns a tzinfo object with the local timezone information. 1.2.1 is a bugfix release.
- Spiny tests your package under multiple Python versions, like tox, but I like it better. If you can run your tests with “setup.py test” and you have specified supported versions as package classifiers, you don’t need any project configuration. Version 0.5 is the first release that is really usable.
- Skynet is a module that doesn’t want to die. Its purpose is to demonstrate Python’s exit hooks and signal handling.
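The mechanisms Skynet demonstrates can be sketched in a few lines. This is not Skynet’s actual code, just an illustration of the stdlib atexit and signal modules it builds on:

```python
import atexit
import signal
import sys

def on_exit():
    # Runs when the interpreter shuts down normally.
    print("I'll be back.")

def on_sigterm(signum, frame):
    # Intercept SIGTERM instead of dying silently.
    print("I can't let you do that.")
    sys.exit(1)  # a clean exit, so the atexit hook still runs

atexit.register(on_exit)
signal.signal(signal.SIGTERM, on_sigterm)
```

With this in place, `kill <pid>` no longer terminates the process directly; the handler decides what happens, which is exactly the “doesn’t want to die” trick.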
Here are some random notes on RHEL 7.1. Mostly complaints. I think all the complaints from my Fedora 20 post still apply, too. As usual, this is posted after I actually stopped using the distro I’m talking about.
On the “Installation summary” page you must first go to the lower right to turn on the network, and then go to the upper left to set date and time, or “network time” (i.e. syncing to a time server) will not be on. I don’t know if that setting is preserved after the installation too, but I don’t want to find out.
The box where you type in your desired hostname is in a really discreet location, so it’s easy to miss, and then your system is called “localhost.localdomain”. Annoying.
The partitioning screen is highly confusing. Make sure you have backups, because you are likely not to know what you are actually doing, and even if you aren’t likely to erase a hard disk by mistake, you’ll be scared to death to press the install button.
You have to set a root password, as most GUI tools do not sudo, but run as root. And if you set it to something different than your own password, you will get confused and type the wrong one.
After installing, make sure you don’t have the beta or HTB repos selected. They were selected by default for me, possibly because I use employee subscriptions. If you have them selected and run yum update, it will upgrade you to 7.1 Beta.
I ran Fedora 20 on my laptop for over 6 months, and was not happy with Gnome Shell. As a result, I installed several shells on RHEL7, to try them out. I had many recommendations for Cinnamon, so even though it was too archaic and Windows-like for me, I figured maybe I could tweak it or install extras that would give me a more modern behavior with a dash/dock/launcher. Essentially I like Unity, but the effort to run Unity on Red Hat Linuxes seems to have died.
However, Cinnamon was consistently unstable. Often the screen would not turn on when I lifted the lid to unsuspend the computer. If it had turned off the screen to save power, sometimes the screen was just garbled pixels when it came back. The login screen started behaving weirdly. I don’t know why these things happened; the login screen is shown before Cinnamon is even started, but it only happened when I used Cinnamon as the shell session.
I also tried Xfce, but it didn’t seem to do anything I wanted at all, and I couldn’t find any information on how to make it more modern.
I then decided to make another try at configuring Gnome Shell more to my liking, and after discussions with very helpful people on the #gnome-shell IRC channel, I got to know that a complete reworking of the notifications in Gnome Shell is underway. They are getting rid of the idiotic Message Area (or whatever it’s called) and moving notifications back to where they should be: visible. Some of the other annoyances I had were also recognized as such, and in at least one case actually a regression bug.
Between the time of writing and the time of publishing this, the new designs were finished, and are available in Fedora 22. It will take a long time for them to get into future RHELs, but perhaps some kind soul can backport future Gnome releases. Red Hat Enterprise Linux is primarily a server OS, so it’s not surprising that its desktop lags behind Fedora. I just hope that Red Hat can incorporate a newer version of Gnome Shell quickly, so that RHEL can compete as a desktop as well.
Still, while waiting for that day to come, there are things you can do to lessen the pain of Gnome Shell on RHEL. This is largely thanks to the one area where Gnome Shell wins over Unity: it’s highly configurable, and there are many extensions.
Gnome Shell tweaks
For Gnome Shell extensions to work you must have the rhel-7-workstation-optional-rpms repo enabled, and install gnome-shell-browser-plugin. I have quite a lot of extensions, but here I’ll only mention the absolutely necessary ones.
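On a subscribed system, something along these lines should do it. This is a sketch using the standard RHEL tooling; the exact repo-enabling step depends on how your subscription is set up:

```shell
# Enable the optional repo and install the browser plugin for extensions
sudo subscription-manager repos --enable rhel-7-workstation-optional-rpms
sudo yum install gnome-shell-browser-plugin
```

After that, extensions can be installed from extensions.gnome.org through the browser.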
There are two system monitor extensions: System Monitor, which works but is useless, since it shows up in the message tray and so can’t be seen without opening the message tray; and system-monitor, which I used on Fedora 20 and is fantastic. But it doesn’t work on RHEL7! Hey ho.
Dash to Dock is a must-have extension. It’s listed as outdated on https://extensions.gnome.org/ but works fine. This may be because RHEL is running a rather old version of Gnome. There is also Simple Dock, but it silently fails with no errors, probably for the same reason.
TopIcons is another must-have. It moves status icons from the Message Tray, where you can’t see them, to the top-bar, where you can. Unfortunately it won’t move the System Monitor widget so it stays useless.
Dual screen pains
The laptop I’m using still has the same resolution as my external screen, despite being much smaller. There is no solution for this that I can find in RHEL either. Switching between a high DPI and a low DPI is very annoying as you will constantly find yourself having too large or too small texts. Using both screens in dual-screen mode is not an option.
I solved this by having both a stationary computer as my main work-machine in my office, and using the laptop at home and for traveling. That way I never switch screens and never switch screen resolution. This is a problem that needs fixing for all Linux variations, and it’s probably going to take a very long time to fix, as it means software needs to stop thinking in pixels.
You have to install the EPEL repos, of course, and the Nux! Desktop repo is absolutely essential. http://li.nux.ro/repos.html
With these two I could install xchat, KeePassX, Quod Libet, skype, and more. And of course things necessary for work, such as git, git-review, subversion, etc.
To get multimedia working (and install Quod Libet, the best music library/player I’ve found so far):
sudo yum install gstreamer gstreamer-plugins-base gstreamer-plugins-good gstreamer-plugins-bad-free gstreamer-plugins-bad gstreamer-plugins-ugly gstreamer-ffmpeg quodlibet
RHEL was definitely tricky and quirky to use as a personal desktop for “power users” like me. But its main target is of course the standardized enterprise install, where users can’t really install software at all, and everything including software selection is managed by the IT department. And in those cases even the problems of Gnome 3 aren’t a drawback, because these departments will typically install the more Windows 95-like Gnome Classic mode.
But for personal use Fedora is better, especially Fedora 22, which I’ve recently switched to. More on that later.
This is an overview of the various date/time implementations in Python (that I know of) and why they aren’t good enough. I’m not mentioning why they are good, they may or may not be. The standard library’s datetime module has a lot going for it. This only explains why it’s not good enough. If I’ve missed a library, tell me and I’ll take a look.
The standard library modules
- Too many separate modules (datetime, time, calendar, etc.), with some overlap (and omissions).
- Naive and intentionally crippled timezone implementation.
- Stores datetimes internally as the local time, which makes time zones overly complicated.
- timedelta has a days attribute which is actually not 1 day, but 24 hours.
- Except that the intentionally crippled datetime implementation makes it 23 or 25 hours across DST changes.
Arrow’s main criticism of the standard library is that there are too many classes: date, time, datetime and so on. However, Arrow does not implement a date() replacement or a time() replacement; an Arrow() object only replaces datetime(). You can’t represent a date or a time with it; Arrow(2014, 5, 6) will return midnight that day. It will also return a timedelta if you subtract two Arrow objects. So it does not improve the situation: even if you replace all your datetime objects with Arrows, you still need to deal with the exact same number of classes.
And how should time arithmetic work on a date? What is “the 1st of February plus three minutes”? That’s not obvious. A date is the whole day, so it should actually be three minutes after midnight on February 2nd, but if you implemented that, I think most people would be very surprised. They would get even more surprised that datetime(2014, 11, 2, 5) - datetime(2014, 11, 2) doesn’t return a 5 hour timedelta. (The actual elapsed time between midnight and 5 am that day is six hours.) And even if you implement one object that can behave both as a date and a datetime, how can you tell the difference between a date and a datetime without checking what type of object it is?
I therefore think there should be separate date and datetime objects.
In addition, I’m not happy about how Arrow.replace() works: Arrow.replace(month=3) sets the month to March, but Arrow.replace(months=3) increases it by three months. The difference is too subtle, and the replace() method should replace, not add. A shift() method would have been better.
All Arrow objects are time zone aware, and default to UTC. But separating time zone aware and time zone naive objects is essential, as, again, they can’t reliably be compared or subtracted. Just assuming that not specifying a time zone means UTC is not good enough for all cases.
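The stdlib itself shows why the separation matters: comparing a naive and an aware datetime is simply an error, rather than silently giving a wrong answer:

```python
from datetime import datetime, timezone

naive = datetime(2014, 5, 6, 12, 0)                       # no time zone
aware = datetime(2014, 5, 6, 12, 0, tzinfo=timezone.utc)  # explicitly UTC

try:
    naive < aware
except TypeError as error:
    print(error)  # can't compare offset-naive and offset-aware datetimes
```

An API that quietly treats the naive value as UTC would make the comparison succeed, but with a meaning the programmer may never have intended.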
- Uses Arrow as its implementation, so only included here for completeness.
- Datetimes are mutable, and they shouldn’t be, because then you can’t use them as dictionary keys, etc. Possibly there could be mutable datetimes as well, but that should not be the default.
- Merges dates and datetimes into one class.
- Old, inconsistent and confusing due to being written/extended piecemeal.
- A lot of old cruft like supporting timezones named “EST”.
- No real time zone support.
- Implements a relativedelta class, but uses the stdlib for everything else, so only included for completeness.
- A wrapper around datetime and pytz, chiefly for a more convenient API (I think).
I’m used to the Ubuntu installer, so I obviously think it’s SUPER EASY! I do wish it was smarter about the locales: I select a Swedish keyboard but English as the OS language, so it should install both the Swedish and English language packs. The Swedish one is needed because I want international standard date and time formats.
The overlay scrollbars
I wish I could get IBM SAA CUA scrollbars. They were the shit. You can page up and down with them, for example. Ubuntu however has invented some sort of “overlay scrollbars”. I do not like them at all. You can disable the overlay scrollbars, but the “normal” mode isn’t great either. It, for example, can’t page up and down.
System Load Indicator is in the standard Ubuntu repositories. The Fedora version allows having separate indicators per processor. Yes, with 8 cores that means 8 indicators, but that’s much more usable than having only 1. Running one process at 100% is barely visible when it’s only 1/8th of the indicator graph.
Compiling Python on Ubuntu is generally a bit of a pain, as you need to patch older Pythons to compile, because library files are not where they expect them to be. However, pyenv contains “python-build”, a tool that makes it easy to build almost any Python on pretty much any Linux. As a result, this is no longer a big issue.
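For reference, python-build can be used standalone from a pyenv checkout, roughly like this (the paths and version number here are my assumptions, not gospel):

```shell
# Get pyenv, which bundles python-build as a plugin
git clone https://github.com/pyenv/pyenv.git ~/.pyenv
# List the versions python-build knows how to build
~/.pyenv/plugins/python-build/bin/python-build --definitions
# Build a Python into a prefix of your choosing
~/.pyenv/plugins/python-build/bin/python-build 2.7.9 ~/pythons/2.7.9
```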
Issues with Unity
I like the ideas behind Unity and Gnome Shell. I like them a lot. But the move there is not always smooth.

In mouse-driven UI design, corners are important. This is because you can “throw” the mouse into a corner without aiming. Move the mouse fast and vaguely in the right direction, and it ends up in the corner.

Unlike Gnome Shell, Unity has all the window buttons that should be there: Minimize, Maximize and Close. They are on the left, so that Unity can move them into the top bar, hence taking less space. This is one of the things I like about Unity; it leaves the screen space to the applications. This also means that throwing the mouse into the top left corner and clicking closes the window. This is a good behavior, as it makes it easy and fast to close windows. But it means that to open the main Unity menu with a mouse, you have to actually aim at the menu button, as it’s up there in the top left, but not actually in the corner.

The top right corner is for system settings, logging out and rebooting. The bottom corners are not used for anything, and that kinda seems like a shame. Did they choose the top left corner just so people wouldn’t call it “The Start Menu”? I don’t know, but it seems to me that bottom left was not such a bad place after all.

The decision to move the window buttons into the top bar also sees the application menus moved there. This causes problems with some applications; for example, in some applications the menu text will continue to be black, and hence be invisible on the black menu background.
How it compares with Gnome Shell/Fedora 20
I stayed on Ubuntu 12.04 for a long time because I wanted to use Unity 2D, as Unity 3D was buggy. This was mostly because of driver issues. I’ve now used Unity 3D on 14.04 without a single problem for more than half a year. While Gnome Shell is an exercise in being bombarded by annoyances, Unity is smooth and friendly.

The notifications work well: they show for a long time, and they go semi-transparent if you hover the mouse over them, so you can still click on whatever is below the notification. Status icons show up in the icon bar, where you expect them. You can choose to auto-hide the launcher or not. With today’s modern wide screens there is plenty of space on the side, so hiding it is really not necessary.

Almost any open source Unix software you can think of has repos with distributions you can install. The update manager actually works. It will show updates once a day, and ask to reboot when needed after an update.

You can scale the menus and title bars separately for separate screens; that’s a real nice feature. Unfortunately this is of course not a setting of DPI, and applications themselves will ignore it. It would be really nice if you could use both the laptop screen and an external screen without getting a headache.

I have no problems with my processors running at 100% for no apparent reason. OK, fair enough, right now while typing this one core is at 100% doing “nothing”, but the process running that is qemu running some sort of virtual machine. I don’t know what THAT machine is doing that takes 100%, but the main OS isn’t doing anything anyway.
Ubuntu 14.04 is so nice. It and OS X are the only real contenders for “Top Desktop OS” in my opinion. Much of that is thanks to Unity. Fedora, and especially Gnome 3, has a lot of catching up to do.
Related: Fedora 20
Unexpectedly and amazingly even the third €1200 goal was reached in my funding campaign! The Python community is fantastic! Thanks to everyone!
This is the last day of funding (it says 0 days left), so there is just hours to go (I’m not sure at exactly what time it closes). It’s unlikely that the last goal of €2500 will be reached, but I can find ways to put any money over €1200 to good use. For example, I could print copies to give away at PyCon in Montreal. But what to do with that money depends a lot on how much it is.
The second goal of my crowd-funding campaign to make Porting to Python 3 into a community book has been reached! This means I will rename the book “Supporting Python 3”, create a contributor licensing scheme and update the “About this book” page to reflect the book’s community status and contributors.
But that doesn’t have to be the end! There are more things that can be done! And although we are unlikely to reach the goal of €1200, where I also set up automated testing and PDF generation, any money donated from now on will go towards the goal of automating this. I just do not promise that I’ll finish that work unless we reach the target!
We have 6 people who have opted for a special signed funders edition of the new book, and 2 funders have opted for home made smoked sausages! Get yours too!
I’ve reached the €400 goal of my fund-raiser, which means that I will clean up the book source and move it to Github, so anyone can contribute with just a pull request!
Next up is the stage I really want to reach: spending the time to rename the book to “Supporting Python 3”, create a contributor licensing scheme and update the “About this book” page to reflect the book’s community status and contributors.
Come on community, you can do it!
My book “Porting to Python 3” needs updating, but I don’t have time. So I have decided that I will make it into a community book, put it up on GitHub and give the Python Software Foundation a license to use it to make money.
To that end I have created a crowd funding campaign to fund this transformation. I only need €400 to do the basic work, but there are also stretch goals. Rewards include smoked ham and baking!
I ran some statistics on PyPI:
- 50377 packages in total,
- 35293 unmaintained packages,
- 15084 maintained packages.
Of the maintained packages:
- 5907 have no Python classifiers,
- 3679 support only Python 2,
- 1188 support only Python 3,
- 4310 support Python 2 and Python 3.
- A total of 5498 packages support Python 3,
- 36% of all maintained packages declare that they support Python 3,
- 24% of all maintained packages declare that they do NOT support Python 3,
- and 39% do not declare any Python support at all.
So: of the maintained packages that declare which versions they support, 59% support Python 3.
And if you wonder: “maintained” means at least one version released this year (with files uploaded to PyPI) *or* at least 3 versions released in the last three years.
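The percentages follow directly from the counts above; here is a quick sanity check, truncating to whole percent as in the list:

```python
# Raw counts from the PyPI survey above
total, unmaintained, maintained = 50377, 35293, 15084
no_classifiers, py2_only, py3_only, both = 5907, 3679, 1188, 4310

# The subtotals add up
assert unmaintained + maintained == total
assert no_classifiers + py2_only + py3_only + both == maintained

py3 = py3_only + both                    # 5498 packages support Python 3
declaring = maintained - no_classifiers  # packages with any Python classifier

print(int(100 * py3 / maintained))            # 36
print(int(100 * py2_only / maintained))       # 24
print(int(100 * no_classifiers / maintained)) # 39
print(int(100 * py3 / declaring))             # 59
```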
SCM tools: devs, ops or devops?
This is a somewhat disjointed brain dump on the topic. It may not always follow a logical readable path.
Software Configuration Management tools are originally aimed at operations people. They are system admin tools, created to make it possible to deploy a specific set of software in a consistent manner on several machines. The first such system I saw was in the mid-90’s and would replace the Windows 3 program manager. You would see a Microsoft Word icon, but when you clicked on it you did not start Word, instead you started a program that would check if you had Word installed, and that it was the correct version, etc. If not, it would install Word while you went for coffee, and when you came back Word would be running.
Devops people have also started using SCM systems in the last few years, as they allow you to quickly and easily deploy the latest version of your web application onto your servers.
But developers have generally not used configuration management to set up their development environment. In the Python web community there is Buildout, most popular amongst Plone people and also some Django people. Buildout is very similar to a Software Configuration Management system, but it’s not designed to be one, so it lacks the modules for generic operating system tasks. It’s used to set up development environments and deploy websites.
So maybe developers don’t need SCM systems? Well, as the title already tells you, I disagree. And I will explain why by giving some examples.
Example 1: TripleO’s devtest.sh
In my new job at Red Hat, working with TripleO, there has been a constant frustration in getting a development environment up and running. This is partly because OpenStack is a complex system made up of many separate parts, and just installing it is complex in itself. But it is also partly because the “canonical” way of getting a TripleO development environment running is to run a script called “devtest.sh”. That doesn’t sound bad, but so far I have only been able to run it successfully once. And since it can easily take an hour or two to fail, trying to run it is a frustrating exercise in patience. And I am a very impatient person, so I fail.
The basic problem with the devtest.sh script is that it is a script. It knows how to install things. To make sure it doesn’t try to install something that is already installed, it first uninstalls it. So if the script fails at the end of the setup, re-running it means deleting a lot of what was done, and then doing it again. Often it failed because a website was down, or because my DNS server didn’t answer correctly or fast enough. And each error would kill the whole process and require me to start over.
You can also run it step by step, but when doing that it is often unclear if the step finished correctly or not. So I only managed to finish it properly after I got the recommendation to first run it in a mode to build all images, and then run the script in a mode that only did the configuration and did not try to build the images. Even so, I had to set up a caching proxy and a local DNS server to avoid network problems, so the image-building could finish. It’s also worth mentioning that I don’t have network problems, really. Only devtest.sh would claim it couldn’t reach servers or look up names. I don’t know why it’s so brittle.
I should note that last week TripleO became installable with Instack, so the situation has reportedly improved, but I haven’t tried it yet, because I’m afraid of touching what is now finally a working setup. But this serves as an example of how bad it can be. It took me three months, and probably a week or two of actual work, to get devtest.sh running. It would most likely have been faster to run each step manually, but the only description of how to do that was in the massive collection of scripts that goes by the name devtest.sh. And following what happens in those scripts, with their many global environment variables, is a lot of work. I know, because I tried.
Example 2: Zope/Plone and Buildout
In the typical project I would be involved in when I worked at Nuxeo, we usually had several Zope servers, a ZEO (ZODB) server, some other database, a Lucene server for searching, and a load balancer. And you needed most of these things to be able to develop on the project. Getting started on a project was typically a matter of following instructions in one or several README files, often inaccurate or outdated. Setting up a project so you could start working on it took on average around a day.
Then Buildout arrived. With Buildout you defined each of the services you wanted, and then you just ran bin/buildout and it would set up the environment. Setting up an environment to get started took only a few command-line commands and a coffee break. Or, in the case of really complex projects, a lunch break. OpenStack is not an order of magnitude more complex than a typical Plone setup, yet it is an order of magnitude (or two) easier to get a development environment up and running with Plone than with OpenStack. I think that’s a good indication that Plone is on the right path here.
Buildout is somewhat limited in scope: it’s a tool designed largely for Python, and it also helps isolate the project from the rest of the world, so you can have several projects on your computer at the same time with different versions of Python modules. It therefore has special handling for Virtualenv, which also does environment isolation for Python, and for Setuptools, which it requires and does many magic things with. But when developing Python software, and especially Python software for the web, it’s an amazing tool.
It allows you to install specific versions of software and modules (and easily change which version). By storing the Buildout configuration in a version management system, you can also tag the configuration when you deploy it at a customer. That way you can easily and quickly replicate the customer’s deployment, which helps you reproduce issues.
It has also grown a very useful toolset. As an example, mr.developer allows you to make a configuration in such a way that if you suddenly need to develop on a specific module, you can quickly switch from the released version to using a trunk checkout. This means that you can choose which parts of the project should be in “development mode”. Having a big project like Plone itself in development mode makes your environment too unstable. Somebody will often check in a change in one module that breaks another module, and should you happen to update your checkouts, then your whole environment is broken, even though that change did not directly affect you. You want most of your environment to run stable versions, and only run the development trunk of the modules you are directly working on. Mr.developer allows you to do that.
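A minimal sketch of what that looks like in a buildout.cfg (the package name and repository URL are made up for illustration):

```ini
[buildout]
extensions = mr.developer
parts = instance
# Packages listed here are checked out and used in development mode:
auto-checkout = my.package

[sources]
my.package = git https://example.com/my.package.git

[versions]
# Everything else stays pinned to released versions:
some.dependency = 1.2.3
```

Moving a module in or out of development mode is then just a matter of editing auto-checkout and re-running bin/buildout.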
Example 3: TripleO’s “diskimage-builder”
For creating images to use in virtual machines, TripleO has diskimage-builder. It is also similar to an SCM system, as it actually creates virtual machine images, and then installs software on and configures them. It does this based on “elements”, so you can define which elements you want installed on your machine image.
A bad thing about it is that it is script based. Each element is a script, which means that overriding and extending them is tricky. It also means that you may have problems mixing and matching elements, as they might step on each other’s toes, for example by each providing their own configuration file for the same piece of software. I don’t think this is a big problem for diskimage-builder, because its use case is installing OpenStack and Red Hat’s OpenStack products. The risk of different elements stepping on each other is therefore small. And it’s always used to create images from scratch, so the starting environment is known; it’s always a cleanly installed new operating system. This makes the whole configuration issue simpler.
But it also has several good ideas. The first of these is the elements themselves. They define how to install and configure a specific piece of the puzzle. This can be the operating system, or a piece of software, for example Glance. What is also nice is that the elements may have definitions for different kinds of installs. The Glance element, for example, supports installing both from packages and from source.
So what do we developers really need?
We need a generic Software Configuration Management tool that can install all the bits that are needed for a development environment.
Instead of scripts that do things in a specific order, no matter what, a good system would just have a description of the state we want to achieve, and make it so. This is both for reliability and for speed.
Buildout, as mentioned in my previous blog post on SCMs, will in fact not re-run a successfully configured part unless the configuration for it has changed (or it is explicitly told to).
This means that each run does not become a 4-hour monster that fails after 3 hours because of a temporary network error. If there is a temporary error, we just run the tool again, and it picks up more or less where it left off. Changing a configuration would only re-run the parts affected by that change.
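A toy sketch of that idea in Python (all names invented for illustration, not any real tool): each part is fingerprinted by its configuration, and actual work is only done when the fingerprint changes.

```python
import hashlib
import json

installed = []  # records the actual work done, just for demonstration

def install(part):
    installed.append(part["name"])

def converge(parts, state):
    """Re-run only the parts whose configuration changed since the last run."""
    for part in parts:
        fingerprint = hashlib.sha1(
            json.dumps(part, sort_keys=True).encode()).hexdigest()
        if state.get(part["name"]) == fingerprint:
            continue                       # already in the desired state, skip
        install(part)                      # do the actual work
        state[part["name"]] = fingerprint  # recorded only after success

parts = [{"name": "nova", "mode": "package"}]
state = {}
converge(parts, state)     # installs nova
converge(parts, state)     # no-op: nothing changed
parts[0]["mode"] = "git-master"
converge(parts, state)     # re-runs only nova, now in development mode
print(installed)           # ['nova', 'nova']
```

A failed part never gets its fingerprint recorded, so re-running after a transient error retries only what did not finish.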
A module repository
Since a module written in C is built in a completely different way than one in Ruby or Python, and different languages also have different ways of declaring dependencies (if any at all), there needs to be support for package repositories that declare these things.
A repository would need to declare what the module is called on different types of Unices, so it can be installed by apt, yum, nix or any of the other package managers for Linux. You also need to declare where the source code for different versions can be downloaded, and how to compile it, for those systems that do not have a package. Most likely we would want Buildout-style “recipes” to handle common cases, like a recipe that can download code and run “configure, make, make install” on it, another recipe for installing Python modules, another for Ruby modules, etc.
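Such a “configure, make, make install” recipe could look roughly like this. This is a sketch, not any particular tool’s API; the function name and paths are invented:

```python
import subprocess

def cmmi(source_dir, prefix, dry_run=False):
    """Classic configure/make/make install, into a given prefix."""
    steps = [
        ["./configure", f"--prefix={prefix}"],
        ["make"],
        ["make", "install"],
    ]
    if not dry_run:
        for step in steps:
            # check=True aborts the recipe on the first failing step
            subprocess.run(step, cwd=source_dir, check=True)
    return steps

# Inspect what would be run, without actually building anything:
for step in cmmi("/tmp/some-source", "/opt/myprefix", dry_run=True):
    print(" ".join(step))
```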
You want to be able to declare the exact versions to use, and you want this to be done in a separate file. This is so that you can pin versions for deployment, and later easily replicate your customer’s deployment when looking at fixing bug reports.
Multiple version installs
As developers, we want to be able to have several versions installed at the same time. This is easy with some software, and hard with other software. A developer-centric SCM probably needs to integrate Nix and Docker to be able to isolate some things from the operating system. See also Domen Kozar’s blog on the topic.
Stable or development mode per module
A standard development build should probably install the latest released version of everything by default. In an OpenStack context that would mean that in most cases you install whatever comes with your operating system. On Ubuntu it would install OpenStack with apt-get, and on Fedora with yum.
But then you should be able to say that you want a specific piece, for example the Nova service, in development mode. Re-running the configuration after that change would remove or stop the OS-installed Nova, check out Nova master, and build it, but probably not start it, as you would likely want to run it from a terminal so you can debug it.
When changing a module to development mode, the version pinning should of course be ignored. And if the module’s checkout declares a different version than what is pinned, you should get an error.
Online and offline modes
You don’t always want your machines to reach out to the internet to install. That means that there needs to be a way to package both the configuration and all dependent files in a package that can be transferred to the target computer for installing.
What we probably do not want
We don’t want complicated ways to run installations over SSH. I don’t know why almost every SCM system seems to implement its own version of this. It’s probably better that the SCM system installs things locally, and that you use a separate tool to deploy different configurations to different machines. Fabric seems to be a good tool for this.
Does this already exist?
I don’t know. You tell me.