Posts filed under ‘linux’

icecc setup how-to

I wrote this article back last February, just to document how I set up icecc at home, and decided to put it here for easy access. Just ignore the compilation times for both Qt and WebKit (since then, both have changed considerably, and I have also added another node to the cluster, an Intel i7). For reference, the trick of combining ccache and icecc is not very well known, and this is why I think it is still worthwhile to document it here.

Maybe these are techniques that you are already familiar with, so my apologies in advance. If you haven't used icecc/ccache before or just want to know a little more about them, let's proceed.

Ok, so this is the scenario: you are working with a big code base, or maybe you need to recompile a specific version of Qt/WebKit. Or it can be for another reason; maybe you just want to compile your own kernel/desktop like real programmers do, right?

Even with today's multicore, multi-gigabyte systems, it may take longer than you would like to compile it all: http://xkcd.com/303/

So, what to do in these cases? There are some projects that can help you out; let's examine the case of ccache first.

Ccache (http://ccache.samba.org/) is a tool created by the Samba team, the project that offers a CIFS/SMB protocol implementation for *nix systems. It works on a simple principle: generate a cache of compiled files and reuse those object files when no changes in the source code are detected. Installation is quite easy on Linux (sudo apt-get install ccache), and you only need to prepend the ccache directory to your PATH environment variable.

An alternative is to have a file in your home (e.g. ~/.samba.bashrc) with the following function (and source it in .bashrc):
function sambacache {
export PATH=/usr/lib/ccache:$PATH
}

So that you can enable ccache by doing:
adenilson@macmini:~$ which gcc
/usr/bin/gcc
adenilson@macmini:~$ sambacache
adenilson@macmini:~$ which gcc
/usr/lib/ccache/gcc

The first compilation will take the same time you are used to, but the following ones (after the cache is populated, i.e. check the ~/.ccache directory) will give huge speed-ups. As an example, a favorite of mine (http://cellardoor.googlecode.com) takes 4s to compile on an Intel i5 @ 2.3GHz the first time, but only 0.48s to recompile after a make clean. Pretty neat, hum?
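If you want to confirm the cache is actually being hit, ccache ships with a statistics command; a couple of handy invocations (the 4G size below is just an example, pick whatever fits your disk):

ccache -s       # show cache statistics (hits, misses, current size)
ccache -M 4G    # optionally raise the maximum cache size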

Ccache adds almost no overhead to the compilation and is a perfect helper if you have just one computer to do your compilation jobs.

There is one issue with the way ccache works: if your project regenerates many files on each new compilation, the cache will not be reused. So while there is no big benefit when recompiling just Qt (it generates lots of header files at the beginning of the compilation), for WebKit it brings the compilation time down to just 2 minutes.

The next project is icecc (http://en.opensuse.org/Icecream), and it allows you to use several computers on the same network to distribute compilation jobs. It is the 'evolution' of distcc, offering some extra features and a great monitoring tool called 'icemon'. Installation is easy on Linux (sudo apt-get install icecc icecc-monitor); configuration is not that easy, and the documentation could be better.

The speed-ups are almost linear, with network speed being the most probable bottleneck. I used it at my previous job with up to 15 computers connected, and it allowed me to do a cold recompile of Qt in less than 8 minutes (bear in mind that those 15 computers were also compiling other projects at the same time…).

To make it easier to follow, I will explain my home setup. I have 2 machines:
a) macmini: Intel i5 @ 2.3GHz, running 32-bit Linux (dual boot with OS X Lion).

b) blackbloat: Acer notebook (the cheapest one I could find, so portability is not its strongest feature…), Intel i5 @ 2.67GHz, running 64-bit Linux (dual boot with Windows 7). Its IP is 192.168.1.135.

What is neat about icecc is the fact that you can have varied nodes in your cluster and still distribute compilation jobs among them. It does that by allowing you to create a 'rootstrap' of the libraries and compilers to be used, which is sent to the other nodes on your first compilation. To create it, you should run:

adenilson@blackbloat:~$ icecc --build-native
adding file /usr/bin/gcc
adding file /lib/x86_64-linux-gnu/libc.so.6
adding file /lib64/ld-linux-x86-64.so.2
adding file /usr/bin/g++
adding file /usr/bin/as
adding file /usr/lib/libopcodes-2.21.53-system.20110810.so
adding file /usr/lib/libbfd-2.21.53-system.20110810.so
adding file /lib/x86_64-linux-gnu/libz.so.1
adding file /lib/x86_64-linux-gnu/libdl.so.2
adding file /usr/bin/cc1=/usr/lib/gcc/x86_64-linux-gnu/4.6.1/cc1
adding file /usr/lib/libmpc.so.2
adding file /usr/lib/libmpfr.so.4
adding file /usr/lib/libgmp.so.10
adding file /usr/bin/cc1plus=/usr/lib/gcc/x86_64-linux-gnu/4.6.1/cc1plus
adding file /usr/lib/gcc/x86_64-linux-gnu/4.6.1/liblto_plugin.so
adding file /etc/ld.so.conf=/tmp/icecc_ld_so_confpM2h21
creating de07a31507267d47693646853b78125e.tar.gz

adenilson@macmini:~$ icecc --build-native
adding file /usr/bin/gcc
adding file /lib/i386-linux-gnu/libc.so.6
adding file /lib/ld-linux.so.2
adding file /usr/bin/g++
adding file /usr/bin/as
adding file /usr/lib/libopcodes-2.21.53-system.20110810.so
adding file /usr/lib/libbfd-2.21.53-system.20110810.so
adding file /lib/i386-linux-gnu/libz.so.1
adding file /lib/i386-linux-gnu/libdl.so.2
adding file /usr/bin/cc1=/usr/lib/gcc/i686-linux-gnu/4.6.1/cc1
adding file /usr/lib/libmpc.so.2
adding file /usr/lib/libmpfr.so.4
adding file /usr/lib/libgmp.so.10
adding file /usr/bin/cc1plus=/usr/lib/gcc/i686-linux-gnu/4.6.1/cc1plus
adding file /usr/lib/gcc/i686-linux-gnu/4.6.1/liblto_plugin.so
adding file /etc/ld.so.conf=/tmp/icecc_ld_so_confWHgW78
creating 73c127da41faa91bfdbae1faedbb2113.tar.gz

And later point the ICECC_VERSION environment variable to where the rootstrap tarball is. In my case, I renamed the tarball and added a file in my home (i.e. ~/.icecc.bashrc) with the following:

function coolice {
export ICECC_VERSION=/home/adenilson/Desktop/ice_rootstrap.tar.gz
export PATH=/usr/lib/icecc/bin/:$PATH
}

Similar to ccache, icecc requires adding its directory to the PATH before the standard compiler:
adenilson@macmini:~$ which gcc
/usr/bin/gcc
adenilson@macmini:~$ coolice
adenilson@macmini:~$ which gcc
/usr/lib/icecc/bin//gcc

The final step is to choose which machine will run the scheduler, responsible for distributing the compile jobs between the nodes. In my case I picked blackbloat and, to make the change permanent, added the following to /etc/default/icecc:
START_ICECC="true"
START_ICECC_SCHEDULER="true"

And all the nodes (including blackbloat, since it doubles as both scheduler and compile node) should know which network to connect to (I decided to call my network 'tucks'), so edit the file /etc/icecc/icecc.conf on each of them:

ICECC_NETNAME="tucks"
ICECC_ALLOW_REMOTE="yes"
ICECC_SCHEDULER_HOST="192.168.1.135"

There are other parameters you can tweak. For example, by default icecc will assign one compile job per CPU/core/virtual CPU in your system, so it would run at most 4 jobs on each node in the case of an Intel i5 (2 cores + hyperthreading). You can change that by assigning a value to the ICECC_MAX_JOBS field in icecc.conf. Another tweak is the nice level of each running job process; the default is 5 (which is good for not disturbing the normal workflow on a computer, but you can get slightly better performance by changing it to a lower value).
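For reference, this is roughly how those two knobs look in /etc/icecc/icecc.conf (treat the exact field names as an assumption on my part, since they may vary between distributions/versions):

ICECC_MAX_JOBS="2"     # cap this node at 2 parallel jobs instead of one per core/thread
ICECC_NICE_LEVEL="0"   # default is 5; a lower value gives compile jobs more CPU priority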

To monitor the machines in the cluster, just use: icemon -n tucks (see attached image).

It is important to know if the jobs are *really* being distributed. You can check (see the quick commands after this list):
a) if on your machine there are several 'g++' processes (using basically no CPU) and a few 'cc1plus' processes using all the available CPU;
b) if the other node(s) have several 'cc1plus' processes;
c) if there is considerable I/O going through the network interface (in my case, spikes of up to 2.8MB/s);
d) if icemon shows the jobs moving through the nodes.
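A couple of quick shell checks covering items (a) and (b); nothing icecc-specific here, just the standard process tools:

pgrep -c cc1plus             # count the heavy compiler backends running locally
pgrep -l 'cc1plus|g\+\+'     # list both the wrappers and the backends by name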

I once thought that I had configured it all fine and then started 'make -j30' with all the jobs running on my machine… needless to say, the machine locked up.
🙂

Alright, enough talk, let's see some numbers… Compiling Qt 4.8 from the git repository takes 34 minutes on macmini running with 'make -j6'. Using icecc and running 'make -j12', it compiles in 21 minutes (almost half the time). It is important to remember that there are steps that cannot be parallelized (e.g. qmake processing the .pro files, moc running, linking and so on).

For WebKit plus Qt 5, the numbers are even better: 1h10m vs. 19 minutes (almost 4× faster!). Those 50 minutes saved *per compilation* add up a lot at the end of the day.

Finally, it is also possible to combine ccache *with* icecc. To do it, just define your path as: export PATH=/usr/lib/ccache:/usr/lib/icecc/bin/:$PATH. On my desktop, I added the following file in my home:

adenilson@macmini:~/apps/webkit/Webkit$ more ~/.bamf.bashrc
function bamf {
export ICECC_VERSION=/home/adenilson/Desktop/ice_rootstrap.tar.gz
export PATH=/usr/lib/ccache:/usr/lib/icecc/bin/:$PATH
}
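A side note worth knowing: ccache also has a CCACHE_PREFIX environment variable that makes it hand the actual compilation over to another wrapper, so pointing it at icecc is an alternative way of chaining the two (consider this just a pointer, not something I benchmarked against the PATH trick above):

export CCACHE_PREFIX=icecc    # ccache serves the cache hits, icecc distributes the misses
export PATH=/usr/lib/ccache:$PATH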

Attached you can see what icemon looks like while recompiling Qt 4.8, in two different views (star view and Gantt view).

November 23, 2012 at 2:58 pm 1 comment

Amora: 14001 downloads

Around 2 years ago, I released Amora (A mobile remote assistant) for Nokia smartphones. It started as a pet project, mostly driven by my own need for good software to reliably control slides and movies from my cellphone over Bluetooth. It has a server part installed on the desktop (written in ANSI C) and a client part installed on the cellphone (written in Python for S60).

Recently, I haven't been very active developing it, mostly due to a few factors:

  • At the time, I was waiting for PyS60 2.0 to be released (and it took a long time to finally be made public);
  • This is my perception, but I feel there has been a shift from Python to JavaScript as a general scripting language;
  • I have been waiting for Qt on Symbian to start supporting some required features (like Bluetooth, for example). There is a QBluetooth project, but it requires special capabilities even to install on a real device (and AFAICT there is no official build of it for third-party developers to use, say, like Qt Mobility);
  • Finally, I have been quite busy with other projects.

Last week, while checking the project’s webpage, I noticed this:

14001 downloads

Not too bad, hum? Especially if you consider the demographics: it targets only Nokia smartphone users (3rd and 2nd edition), non-touch, who run Linux on the desktop (i.e. a subset of a subset of a subset of…). There is the added complication that the client is also available for download on other websites, so this number is probably not that accurate.

I guess the fact that it followed the Unix philosophy ("do one thing, do it well") and the great artwork (provided by my friend Wilson Prata and by Alexis Younes) helped. And the fact that it is included in several Linux distros too! Special thanks to all the pacmans (package maintainers) who packaged Amora.

Amora background while connected

ps: initially, I thought about naming this post “Amora: 14k downloads”, but that would be incorrect, since 14k == 14336.

September 23, 2010 at 6:33 pm 5 comments

libgcal 0.9.4 released

Sweet! But what the heck is this 'libgcal' thingie? For starters, the name can be (and is) misleading, since it reads as 'library for google calendar', but in reality it implements the Google Data protocol for both Contacts and Calendar.

When I got it started, back in February of 2008, it was supposed to implement just calendar support, but later on I realized that adding contact support only required about 25% more code (thanks to a well-modularized software design). Back then, there was no other good alternative for a C/C++ programmer that would fit the following requirements:
  • easy to use;
  • well documented;
  • few dependencies;

So, I got my library started! After studying the Google Data protocol 1.0 (at the time) for a while, I realized that using XPath would make my life way easier than, say, walking the DOM tree searching for the attributes and tags that I wanted.

At that time, Qt didn't have XPath support (it only arrived with 4.5), so I went with libxml. For networking I used libcurl, which is fast/reliable and has great documentation (and a very welcoming community; from time to time I asked for help and always got answers to my questions).
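Just to illustrate why XPath is so convenient, this is roughly what a lookup with libxml looks like (a generic sketch, not libgcal's actual code; it assumes buffer/length hold the downloaded XML, and it omits the namespace registration that Atom feeds require):

#include <libxml/parser.h>
#include <libxml/xpath.h>

/* Grab every <entry><title> with one expression, no manual tree walking. */
xmlDocPtr doc = xmlReadMemory(buffer, length, "feed.xml", NULL, 0);
xmlXPathContextPtr ctx = xmlXPathNewContext(doc);
xmlXPathObjectPtr result = xmlXPathEvalExpression((const xmlChar *)"//entry/title", ctx);
/* ...iterate over result->nodesetval->nodeTab[i] here... */
xmlXPathFreeObject(result);
xmlXPathFreeContext(ctx);
xmlFreeDoc(doc);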

From the very beginning, I set a high quality standard for the development (after all, parsing XML in C is error-prone enough by itself) and followed a TDD (Test Driven Development) approach, where *first* I write the test and *later* the implementation of the functions. Having an average test coverage of 80% helped a lot when Google released version 2.0 of the protocol (back in December 2008) and, more recently, version 3.0 of the Contacts protocol. I did the port from version 1.0 to version 2.0 of the protocol in a few hours, mostly because I could detect any regression by simply running the test suites.

IIRC, in about 4 months I got the basics (authenticate, retrieve, add/edit/delete, query for updates) done, and the library even got featured on the official Google blog (that was surprising, to say the least)!

So, the library was 'done'; time to put it to good use. I decided to integrate it with OpenSync (it was rather cool: I had Google contacts and calendar sync for my Nokia N95 over Bluetooth working 6 months before Google decided to release a SyncML server for S60 devices). You can see a pre-jurassic video of this here. I still think libsyncml is a pretty good SyncML implementation; it is just a shame that there is no good UI bound to it.

But I needed a good UI, and the alternative seemed to be writing an Akonadi resource. In just 3 weeks I got contacts working, while also implementing missing features in libgcal to make fast-sync possible (i.e. downloading only what has changed on the server side). The contacts resource itself was done in just 3 days (I think this is clearly a good sign that the Akonadi API is well designed).

Developing the Akonadi resources gave me the opportunity to better understand how the KDE community works and also to start running KDE trunk as my default desktop (after all, pre-packaged software is for sissies and developers should eat their own dog food).

So, why did I write this whole story? Well, to help put some numbers in context:
  • 10 months ……. since last release (0.9.3)
  • 6 distros ………. pre-package libgcal (Debian, Ubuntu, OpenSuse, Gentoo, Mandriva, FreeBSD) and counting
  • 2326 downloads .. directly from libgcal project website (hey, this is a source code tarball of a library and not some porn!)
  • 20681 views …… reported by google analytics in a 1 year period
  • 3000 ms……… the lag pinging Google servers on a bad day in Manaus/Amazonas (yeah, it is truly unbelievable how I managed to write a *networking* library in this environment, but what doesn't kill you makes you stronger)
  • 6762 LOC ……. lines of C code (34% are unit tests)
  • 76.5 % ……… current *real* code coverage (here I slipped a bit, it used to be 80%)
  • 10th ……….. place among the most wanted KDE features

So, what does this new release bring? For starters, support for multiple email addresses, a patch by Stefano Avallone (Andre Loureiro helped fix the unit test), and the migration to Google Contacts API 3.0. Next, support for structured names and several other fields (nickname, blog, etc.) by Holger Kral. I think currently only the IM field is missing from the library (but it is quite easy to implement).

You can get both the library and the Akonadi resource on the libgcal website. The only issue is that you are required to purge your Akonadi resource and do a slow-sync again, because the ETags and URLs of contacts have changed with the migration to version 3.0 of the protocol.

So, what is missing from the library for a 1.0 release? The following features:
  • support for multiple calendars (easy to do; it is a matter of using another URL as the base for the network operations)
  • support for recurrent events in the calendar (since Google uses an invalid iCal to represent them, this gets tricky to implement: an iCal parser would fail to read the data). One idea would be to 'convert' the invalid iCal from Google into a valid one and do the opposite when sending data back to the Google server.
  • batch commit (nice to have, but not a hard requirement)
  • port/rewrite it all in Qt (seriously, this has actually been started already: http://code.google.com/p/libgdata-cpp/). Here I'm somewhat unsure whether Qt supports XPath/XQuery on Symbian (it seems that RTTI is not supported on this OS).

Oh well… so, why not give it a try? If you have the skills, go on and download the sources (please check the README and INSTALL files) and feel free to report back to me on how things worked (or not…).

If you are a normal user, I think in a couple of weeks it should get packaged for your beloved distro.

June 11, 2010 at 2:16 am 19 comments

Secure memory (a.k.a. mlock)

Last week, while trying to slim down the software dependencies of an application, I figured out that one library could be dropped if we could provide a secure/safe memory block to store a key.

This is a common requirement in 2 areas (real-time systems and security applications): having a way to ensure that a segment of memory will not be swapped to disk. For real-time, swapping from disk back to memory can defeat the purpose of deterministic performance, and for security, let's just say it is bad to store passwords/keys on a non-encrypted filesystem (even worse if your swap lives on a filesystem that can't cope with user privileges).

After googling a lot to no avail, I decided to ask Arnaldo Carvalho (a.k.a. acme) if there was a syscall for letting the O.S. know that a memory block should not be swapped. The answer: man mlock (it turned out to be a POSIX 2001 function, to my surprise).

The idea is quite simple:

  • malloc some memory;
  • ask the O.S. to lock it (make it un-swappable);
  • use it;
  • unlock the memory and clean it later.

The only tricky part is that, thanks to COW (Copy On Write), you can have access to a segment of memory that is still shared with your parent process (so the way to ensure the memory is actually duplicated for your process is to write to it). A memset would do it, but I decided to be fancier and use another POSIX call, sysconf (getpagesize is marked as deprecated in the man pages), to dirty only one byte in each page. You can get the code here.
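Since the actual code is linked above, here goes just a minimal sketch of how such helpers could be written (the names mirror the usage below; error handling is kept to a minimum, and remember that mlock fails if you go over the lockable limit):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

/* Allocate 'size' bytes and ask the O.S. to keep them out of swap. */
char *alloc_secure_memory(size_t size)
{
	long page_size = sysconf(_SC_PAGESIZE);
	char *ptr = malloc(size);
	if (!ptr)
		return NULL;
	if (mlock(ptr, size) != 0) {
		/* probably over the lockable limit, see 'ulimit -l' */
		free(ptr);
		return NULL;
	}
	/* Dirty one byte per page so COW really gives us private pages. */
	for (size_t i = 0; i < size; i += (size_t)page_size)
		ptr[i] = 0;
	return ptr;
}

/* Wipe the contents before unlocking and freeing. */
void free_secure_memory(char *ptr, size_t size)
{
	memset(ptr, 0, size);
	munlock(ptr, size);
	free(ptr);
}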

The way to use it:

size_t size = 1024; /* e.g. enough room for a key */
char *ptr;
ptr = alloc_secure_memory(size);
if (ptr) {
	/* do something with the memory and later clean it */
	free_secure_memory(ptr, size);
}

I limited the lockable amount to 20K, since that is way more than you usually need to store a key; besides, Linux (in my case, Ubuntu) limits the amount of lockable memory to 64K (of course, if you are the root user, you can set another value with 'ulimit -l xxxx').
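To check the current limit (the value is reported in KB) and, as root, raise it for the current shell:

ulimit -l        # print the maximum amount of lockable memory
ulimit -l 128    # as root: allow locking up to 128 KB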

Be aware that mlockall can lock the whole memory of your application (with the risk of dragging the whole system's performance down if other processes run out of memory), so you should not use it.

March 30, 2010 at 11:00 am 3 comments

Plasma new animation classes

One of KDE 4's objectives was to create an organic and even more pleasant environment (and it is being achieved with both plasma-desktop and plasma-netbook). One of the features that contributes significantly to a natural look and feel (together with really *great artwork*) is animations.

KDE 4 introduced the plasma Animator class with the purpose of applying effects and animations to plasma widgets. With the new Qt 4.6 animation framework (a.k.a. kinetic), plasma effects started being ported to it, introducing some new animations (e.g. pulse, rotation, stacked rotation, kinetic scrolling) which are already used in the upcoming KDE SC 4.4.

Kinetic scrolling made its debut in Plasma::ScrollWidget, used internally in uBlog (the twitter/identica plasmoid client) and plasma-netbook (i.e. to scroll through icons in the application containment). It has a long history and was rewritten at least 4 times:
– Using a single timer and coordinates for scrolling (with a bouncing effect);
– Using percentages for scrolling and properties for the position;
– Using coordinates with properties again and introducing the concept of a scrolling manager;
– Using QPropertyAnimation to do the animation instead of a timer (and getting the bouncing by just changing the easing curve).

The new plasma animation classes also have an interesting history, having gone through at least 4 big refactorings:
– Initial import based on the gSoC project done by Mehmet Ali Akmanalp;
– Caching of animation objects;
– Using QAbstractAnimation as the base class;
– Reimplementing QAbstractAnimation::updateCurrentTime and no longer using an internal QPropertyAnimation/Group object to actually do the animation.

The good news is that the code is getting more flexible and, paradoxically, simpler at each review session. What about an example? Say you want that nice pulse effect when a widget named button (i.e. one that has QGraphicsWidget as a base) is selected; you just need to write something like this:


Animation *pulseAnim = Animator::create(Animator::PulseAnimation);
pulseAnim->setWidgetToAnimate(button);
connect(button, SIGNAL(clicked()), pulseAnim, SLOT(start()));

And the same concept is used for the following animations: rotation (2D), fade, grow, zoom, slide, stacked rotation ('3D'), geometry. Obviously, depending on the animation type, you have to set up more parameters, like movement direction/reference/distance/axis. Selecting easing curves is also possible, but we are working on having good pre-selected curves that make sense for each animation class.

Those plasma animations can easily be integrated with your own animations and used directly in animation groups (i.e. QAnimationGroup), be they parallel or sequential. Finally, all those classes are being bound to JavaScript, making it dirt easy for future js plasmoids to have nice animations.
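And since the classes derive from QAbstractAnimation, combining them with a stock Qt group is straightforward. A small sketch, assuming 'button' is the QGraphicsWidget from the example above (an illustration, not tested code):

// Pulse first, then fade, driven by a plain Qt sequential group.
Plasma::Animation *pulse = Plasma::Animator::create(Plasma::Animator::PulseAnimation);
pulse->setWidgetToAnimate(button);

Plasma::Animation *fade = Plasma::Animator::create(Plasma::Animator::FadeAnimation);
fade->setWidgetToAnimate(button);

QSequentialAnimationGroup *group = new QSequentialAnimationGroup(button);
group->addAnimation(pulse);   // the group takes ownership
group->addAnimation(fade);
group->start();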

So, where to start now? My suggestion is to have a look at kdeexamples, where there is a test C++ plasmoid that exercises pretty much all the currently available animations.

Next post: video showing the animations. 🙂

December 10, 2009 at 7:59 pm 2 comments

Nokia Booklet 3G: KDE and plasma

In the previous post, I wrote that the Intel GMA 500 drivers for Linux are problematic. The result is that trying to enable composite effects using OpenGL will cause rendering artifacts and crashes in both KDE and Gnome (on a side note, qtdemo will fail with "Application calling GLX 1.3 function "glXCreatePixmap" when GLX 1.3 is not supported!" but at least works with -graphicssystem=raster).

Last Sunday, I compiled Qt 4.6 and KDE from trunk on the booklet and started playing with it to make it run well. The only way I found to get compositing working was to use XRender (yeah, it runs way slower, but it is a workaround while we don't have good graphics drivers).

As a result, the KDE effects based on KWin were running slowly (so the workaround was to change the animation speed to 'Instant'). The result of these hacks can be seen in the following videos, which I recommend watching in fullscreen (sorry about my Portuguese-accented English):

Concluding this 3-part saga:
– The Nokia Booklet is great hardware, and Ubuntu Karmic supports most of it out of the box
– The Intel GMA 500 is, at the very best, a problematic GPU on Linux
– KDE and plasma-netbook both run smoothly on the booklet (even with the lacking drivers)

November 12, 2009 at 1:52 pm 7 comments

