Simple Multithreading 2.0

Nearly one year ago I published an article on multithreading to share a library I had created. The point of that library was to simplify the use of the BackgroundWorker by wrapping it in a micro-DSL. So far I have used this library in two projects with very good results, and since the code has evolved I am publishing a new version.
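To give a flavour of the idea, here is a rough sketch of what wrapping BackgroundWorker in a fluent micro-DSL looks like. Every name in this sketch is illustrative only; it is not the actual surface of the library described in this post.

```csharp
using System;
using System.ComponentModel;

// Illustrative sketch only: a minimal fluent wrapper over BackgroundWorker.
// None of these names come from the library in the download.
public class Background
{
    private readonly BackgroundWorker _worker = new BackgroundWorker();

    // Entry point: captures the work to run on the background thread.
    public static Background Work(Action work)
    {
        var background = new Background();
        background._worker.DoWork += (sender, e) => work();
        return background;
    }

    // Chains a completion callback and returns the builder for further chaining.
    public Background WhenDone(Action done)
    {
        _worker.RunWorkerCompleted += (sender, e) => done();
        return this;
    }

    // Kicks off the asynchronous work.
    public void Start()
    {
        _worker.RunWorkerAsync();
    }
}

// Usage:
// Background.Work(() => LoadData())
//           .WhenDone(() => RefreshUI())
//           .Start();
```

The real library adds more than this, of course, but the shape of the calling code is the point: the chained calls read as a sentence instead of a tangle of event subscriptions.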

The download is a VS.NET 2008 solution with a single project that includes two folders: Tests and Worker. The project has two external dependencies: MbUnit 2.4 and log4net 1.2. The library itself uses log4net for debugging purposes. If you don’t use log4net, delete the handful of _log.DebugFormat calls in the WorkerPool class and the _log declaration at the top. To use the library, copy all the files in the Worker folder into your project.

There are a few unit tests in the project and they all pass as long as you run them in isolation. If you run all the tests in one go then some might fail. I am not 100% sure of the cause but it appears to be down to timing problems; when run in isolation they never fail. That is multithreaded unit testing for you.

I hope that the unit tests and the previous article are enough to get you started. I was planning to write a series on this subject but time has been short. If you have any questions or find bugs drop them in the comments.

The download is here.

Using microDSLs to Replace Helper Classes


Helper classes are the dumping ground for more or less closely related methods that do not warrant a top-level type by themselves. They tend to be poorly organized, semantically disjoint and hard to use. One particularly insidious kind of helper class is the third-party control helper. It hides all the knowledge required to link the controls with the application data behind a series of methods created piecemeal, and consequently with poorly defined argument lists. Just today I was coding away at yet another one of these helpers, this time to configure Telerik’s chart for WPF. After some time the code that called the helper was:

RadChartHelper.CollectionToPieChart(chart, myDataObject);

And the helper was composed of a series of methods with names such as AddSeries, AddMapping, and so on. It was obvious that this was not going to end well, especially because I want to use this chart in many different places in the application.

The solution I came up with was the creation of a very small DSL (Domain Specific Language) to take care of the chart configuration. The DSL is composed through method chaining, jQuery style, and the main advantages of this approach are that it can enforce specific sequences of actions and can be extended in a predictable way. The resulting calling code needs to know more about Telerik’s structures, and that is the price to pay for the expandability of the interface. On the other hand the intent is more clearly expressed, for example:

    chart.Build()   // entry point reconstructed; the original first line was lost from this post
        .AddSeriesDefinition(new PieSeriesDefinition())
        .AddMapping("Values", DataPointMember.YValue)
        .AddMapping("ColumnNames", DataPointMember.LegendLabel)
        .CloseMappings()
        .BindTo(myDataObject);

Where “chart” is a RadChart control. I am re-using Telerik’s PieSeriesDefinition and DataPointMember, which requires some insider knowledge of the control, but hopefully it is obvious that the first one defines the type of chart whereas the second determines which property of the data source is bound to which part of the chart.

You may have noticed the CloseMappings() method, which is not required by the chart but is used to transition between “branches” of the DSL. This is an interesting technique that works by varying the returned type depending on the internal state of the configuration. There are two states in the process, which I am calling branches: the main branch, composed of the AddSeriesDefinition and BindTo methods, and the mappings branch, composed of the IAddMapping interface (AddMapping and CloseMappings). The point here is that I want to enforce the addition of mappings after the addition of the series definition. So AddSeriesDefinition returns IAddMapping instead of RadChartBuilder, forcing the developer to add mappings, and CloseMappings returns to the main branch. It is hard to explain the mechanics of the process in prose but the code is very simple and should be readily understandable.
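The branching mechanics are easier to see in a stripped-down sketch. Here Telerik’s types are replaced with trivial stand-ins so the example compiles on its own; the shape of RadChartBuilder and IAddMapping follows the description above, but the bodies are placeholders, not the real implementation.

```csharp
using System;
using System.Collections.Generic;

// Stand-ins for Telerik's types so the sketch is self-contained.
public class SeriesDefinition { }
public enum DataPointMember { YValue, LegendLabel }

// The mappings "branch": while holding this interface,
// only mapping-related calls are reachable.
public interface IAddMapping
{
    IAddMapping AddMapping(string property, DataPointMember member);
    RadChartBuilder CloseMappings();   // returns to the main branch
}

// The main "branch".
public class RadChartBuilder : IAddMapping
{
    private readonly List<string> _steps = new List<string>();

    public IAddMapping AddSeriesDefinition(SeriesDefinition definition)
    {
        _steps.Add("series");
        return this;   // narrowing the return type forces mappings next
    }

    public IAddMapping AddMapping(string property, DataPointMember member)
    {
        _steps.Add("map:" + property);
        return this;
    }

    public RadChartBuilder CloseMappings()
    {
        return this;   // widen the return type back to the main branch
    }

    public RadChartBuilder BindTo(object dataSource)
    {
        _steps.Add("bind");
        return this;
    }

    public IList<string> Steps { get { return _steps; } }
}
```

Because AddSeriesDefinition returns IAddMapping, the compiler simply will not let you call BindTo before closing the mappings branch; the sequencing rule is enforced by the type system instead of by documentation.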

This approach can be used for any type of configuration process and, I suspect, for most helper classes out there. Projects such as NHibernate and Ninject use this approach, which has been gaining support as a replacement for XML configuration files. I think this is a way to turn tedious code into a productive exercise in language definition.

You can download the code from here.

The usual caveats and disclaimers apply. The code is specific to Telerik’s WPF RadChart which is a commercial product.

Big, Fast and (nearly) Silent

Building your own machine used to be a way to save money, but recent price drops on entry and medium level PCs have inverted the situation. It is actually cheaper to walk into PC World and buy a no-frills box than to build something equivalent. That is not true at the high end of the market, though. Granted, you can buy a Dell with the latest Core i7 for around £800, but it is housed in an anaemic box without any extra space for further expansion. It still makes sense to build your own if you are looking for some fun and good performance at a reasonable price.

When I set out to build a new PC I was using a cheap two-core AMD based box that had been “supercharged” with extra memory, two reasonable graphics cards and an extra hard drive. This setup drove four monitors, two 24″ plus two 19″, and did not cope very well with moving windows between monitors connected to different cards. That was a minor annoyance; the big issue was the ten-plus-second pauses whenever I started any WPF application, longer if started from within Visual Studio. For non-developers this may sound irrelevant, but when you start the same application over and over, hundreds of times a day, the pauses become rather painful. With this in mind and after looking at the alternatives I settled on the following components:

  • Intel Core i7 920 CPU
  • ASUS P6T motherboard with three PCI Express x16/x8 slots
  • 6 GB of Corsair Dominator RAM
  • Two Asus GeForce GTS 250 SLI 512 MB graphics cards
  • Two Samsung 64 GB 2.5″ SATA-II MLC solid state drives
  • Hitachi 500 GB hard drive
  • Antec TruePower 550 PSU
  • Antec Twelve Hundred case

I considered a cheaper option built around AMD’s Phenom II but feared that the end result wouldn’t be a big enough departure from what I already had. So I sacrificed another hundred pounds and went for the fastest processor on the market (crazy-priced CPUs aside, of course). Altogether the grand total was close to nine hundred pounds, but since I did not have to buy the PSU, the DVD drive or the Hitachi hard drive, a complete build would have cost somewhere around eleven or twelve hundred pounds. This is not bad considering that Dell’s top of the line workstations start at £1,200 and don’t have SSDs, or in some cases even space to install two graphics cards.

The components I am happiest with are the SSDs and the CPU. The SSDs are configured as RAID 0 and form the boot drive. With them, Windows 7 takes a few (very few) seconds to reach login and there are no pauses after login. It is a huge departure from what I had, where start-up would take up to a minute and, after login, Windows would continue to hit the disk for a while. On the application side, Visual Studio Team System starts up in about three or four seconds despite having ReSharper and several other add-ins enabled. SSDs are the single best upgrade ever and are worth the premium price. This is a game changer for the storage industry and it will not be very long before conventional hard drives are history.

This is the fastest PC I have ever built by a very long shot and it does not run hot at all, so I can keep it rather silent. The CPU is never maxed out and keeps a steady 45 degrees throughout the day, with the odd 50 here and there. The Core i7 has four cores and eight hardware threads, which is still overkill for most everyday computing, including development. The most I have seen in CPU utilisation was 25%, which is one core maxed out. This will change very soon, though, since one of the big new features in the next version of GeneXproTools is multi-processing.

The biggest disappointments were the graphics cards and the case. The graphics cards are quite good in the graphics department but noisy to the point of distraction, and I nearly returned them as I could not find any aftermarket solution. After a lot of searching I found Arctic Cooling’s Accelero Twin Turbo which, despite the name, is nearly silent. Not all bits of this cooler fit the Asus card perfectly so I had to improvise a bit, but the end result was very good. One tip: the cooler has two power cables for the fans and you only need to connect one. Make sure you try both, as one runs the fans faster and noisier than the other. In normal operation one GPU is at 49 degrees and the other at 55. I am not sure why they are so different; maybe because one is closer to the CPU? Also in the cooling area, I bought an Xtreme Freezer for the CPU but, although it is less noisy, the standard cooler that came in the box was perfectly adequate. The coolers added another ninety pounds to the grand total.

The case was another adventure. It looks stunning and has loads of space, but it is a bit noisy so I had to turn off several fans. The main problem, though, was the monster top fan: it did not work. I asked for a replacement and was first asked for pictures of the fan, and then told to break it (!) and send a picture of the broken fan or they would not send a new one. In the end I received a new fan, and although it starts, it has to spin too fast to be silent so it is off at the moment. Antec’s customer service was not nice at all. Nobody likes to be treated as a liar after receiving a partly working one-hundred-pound case. I will avoid buying Antec products in the future despite quite liking them.

A final minor note: setting up the RAID in the motherboard was quite a chore. The P6T BIOS was quite finicky and the instructions were not very clear. After a lot of wasted time I found out that the first two SATA connectors are dedicated to an easy-to-use RAID feature that did not recognise both SSDs. A side effect of using these two connectors was that the RAID start-up prompt was hidden, which caused endless reboots and frustration. The solution was not to use the first two ports, but the poor instructions made me lose a couple of hours.

As for performance, I am quite happy with the result. The PC registers a 6.1 Windows Experience Index, for what it’s worth, with the lowest score being the graphics cards and the highest, at 7.2, the CPU. I am closing with a few pictures of the box and internals. Enjoy.

The case has several holes that allow passing all the wires to the back. A big plus.

An overview of the front with the CPU to the right and the PSU to the left, which is the bottom of the case. The coolers of the graphics cards cover all the PCIe slots reducing the expandability of the system.

The SSDs are still waiting for the rails to arrive…

The original and very noisy cooler on the GTS 250 graphics card.

Removing the original cooler was a bit scary since it was quite well glued on. In the picture you can see the small heat sinks that ship with the Twin Turbo. I had to improvise with the heat sinks on the far right because they did not fit very well between the capacitors that surround the voltage regulators.

The final result: a very fat card that takes more than two slots.

This cooler is huge with a 12 cm fan inside it.

The mythical one hundred lines of code per day

If you want to start a flame war, mention lines of code per day or hour in a developers’ public forum. At least that is what I found when I started investigating how many lines of code are written per day per programmer. Lines of code, or loc for short, are supposedly a terrible metric for measuring programmer productivity, and empirically I agree. There are too many variables involved, starting with the definition of a line of code and going all the way up to the complexity of the requirements. There are single lines that take a long time to get right and there are many lines that are mindless boilerplate code. All the same, this measurement does have information encoded in it; the hard part is extracting that information and drawing the correct conclusions. Unfortunately I don’t have access to enough data about software projects to provide a statistically sound analysis, but I got a very interesting result from measuring two very different projects that I would like to share.

The first project is a traditional client-server data mining tool for a vertical market, mostly built in VB.NET and WinForms. This project started in 2003 and has been through several releases and an upgrade from .NET 1.1 to .NET 2.0. It has server components but most of the half a million lines of code live on the client side. The team has always had around four developers, although not always the same people. The average for this project came in at around ninety lines of code per day per developer. I wasn’t able to measure the SQL in the stored procedures so this number is slightly inflated.
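As a rough sanity check on that figure, the arithmetic works out like this (the number of working days per year is an assumption on my part, not something measured):

```csharp
using System;

class LocPerDay
{
    static void Main()
    {
        double lines = 500000;         // roughly half a million lines of code
        double developers = 4;         // team size over the life of the project
        double years = 6;              // 2003 to the time of writing, give or take
        double workdaysPerYear = 230;  // assumed; not stated in the post

        double locPerDevPerDay = lines / (developers * years * workdaysPerYear);
        Console.WriteLine(locPerDevPerDay);  // roughly 90.6
    }
}
```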

The second project is much smaller, adding up to ten thousand lines of C# plus seven thousand lines of XAML, created by a team of four that also worked on the first project. This project lasted three months and it is a WPF point-of-sale application, thus very different in scope from the first project. It was built around a number of web services in SOA fashion and does not have a database per se. Its average came out at around seventy lines of code per developer per day.

I am very surprised by the closeness of these numbers, especially given the difference in size and scope of the products. The commonalities between them are the .NET framework and the team, and one of the two may be the key. Of these, I am leaning towards the .NET framework being the unifier because, although the developers worked on both projects, three of the members of the second project’s team spent less than a year on the first project and did not belong to the core team that wrote the vast majority of that first product. Or maybe there is something more general at work here?

Visual Studio 2010 installation


The Visual Studio 2010 Beta 1 download is about 1.2 GB for Team System and a little less for the Professional edition. I started the installation on my main development machine, which is a dual-core AMD running Windows 7. It required a restart after the installation of the framework itself, and during the installation of Microsoft Help 3.0 Beta 1 it could not find the file helpviewer_ENU.exe, which is not on the DVD at all (update: not true, it is, but Windows 7’s search did not find it). I tried a few cycles of cancelling the dialog but no luck. Back to MSDN to download the Professional edition and try my luck with that one…

I have the error log if anyone wants to have a look.

Now installing the Professional Edition: it installed Microsoft Help 3.0 Beta 1 without any problems, and the file can be found on the Professional Edition DVD at <drive>:\WCU\Viewer

And it is installed less than 20 minutes later, which is an impressively short amount of time! VS 2008 seems to be working; at least I can open a large solution and rebuild normally.

In VS 2010 the first thing that stands out is that the default font is now Consolas. Not sure about it yet, but worth a try. The UI does not look too different from 2008 as everything appears to be in the same place. Unfortunately it will not open 2008 solutions without converting them to 2010.

Moved to Windows 7. Not Going Back.


I quite like Windows Vista but was unable to move away from XP for a long time because Vista did not support the mix of graphics cards I had at the time. As soon as this was sorted I installed Vista x64 on my development machine and went through a series of hoops to make two graphics cards and four monitors work properly. The two graphics cards are a GeForce 9500 GT on a PCI Express slot and an older GeForce 6200 on a PCI slot. As far as I can remember I went through driver hell and it took a few tries to get everything right. My experience with the Windows 7 RC couldn’t have been more different. It detected the cards automatically, installed the drivers, and all I had to do was move them into the correct positions. It even detected the motherboard’s onboard graphics, but I don’t have a fifth monitor handy to try it. Very impressive given my previous experience!

OS Installation

The installation itself was fast; I started it around 20:00 and was finished before midnight, and that included installing Visual Studio, Office and several other applications. The OS itself was ready in less than an hour and the only questions I remember answering were my location, whether my network was a home or work network, and the name of the PC and the first user.

I tried to put up with UAC for a while but ended up disabling it for the duration of the installations. Mind you, the place where it is managed has moved and all the web pages with instructions on disabling UAC pointed to a non-existent Control Panel link. The easiest way to find it is to cause a UAC prompt and click the link on the prompt that takes you to the correct screen. The new UI for UAC is very slick and easy to operate, so much so that I even went back and re-enabled it. The only complaint I have is that when a UAC prompt pops up, the monitors connected to the PCI card do not redraw properly for a few seconds after the prompt is dismissed. Not serious, but a bit annoying, and it is probably the drivers’ fault, not Windows’.

Another unexpected thing I came across was that my old LaserJet 6 was not listed in the printer list. It is a very old printer but it was listed in Vista. The solution was easy but I nearly missed it. The printer list (can you please make this dialog resizable at some point?) has a button with the caption Windows Update, and there’s some text elsewhere in the dialog which goes something like “Click Windows Update to see more models”. It would be more obvious if the button caption was “More Printers”, for example.


I managed to install most of the software without any problems. The exceptions were SlickRun, which installs but does not start, and Chrome, which needs a parameter (--in-process-plugins) added to the Target in its shortcut properties. Even with this change Chrome still fails regularly, which is something I wasn’t used to in Vista. I have been using Chrome since the very first day of its release and it has been rock solid, but there’s something not quite right with this installation. By the way, I am running Windows 7 Ultimate x64 on an AMD Athlon X2 4000+. The other application I was wary of was Visual Basic 6, but I ignored all the usual warning prompts and it installed and ran just fine. It even feels a little nimbler in the designer redrawing, which was a dog under Vista. I just had Word 2007 SP2 crash for no obvious reason, which is worrying, but we will see how it goes. Our software, GeneXproTools, by the way, installs and runs perfectly on Windows 7 without any UI or performance degradation.

The Shell

Windows 7’s shell is quite pretty and usable. I like that Microsoft is moving away from the dark mood of Vista into a pleasing bluish theme. Whoever created Vista’s unfortunate colour scheme moved to the Expression team, but that’s a rant for another day. The background pictures are quite beautiful and the automated background changer was a pleasant surprise. I ended up turning it off because some of the pictures were too distracting in a multi-monitor setup, but I would use it on a laptop, for example.

The taskbar is way prettier than Vista’s or XP’s but I am still not used to the way it operates. The hardest part for me has been window switching. I am always surprised when all the windows disappear and only the one I am hovering over shows up. I understand the idea but it feels a bit like overkill. Interestingly, if I show off the feature to someone else it makes a lot of sense and is very pleasing, but when I am working and switch windows I get distracted by the sudden change. The good part is that I really like the extra taskbar space and being able to preview IE’s tabs.

The window gestures are interesting but they only work at the edges of the monitors. This is unfortunate in multi-monitor setups because the main monitor is in the middle, and it’s where I do comparisons, for example. It would be useful to have this feature on every monitor edge, although I am not sure if it would interfere with dragging windows between monitors. On the other hand, the maximize gesture is annoying because when I move windows between monitors with different resolutions the window suddenly “grows” and is maximized. This has happened enough times that I have learned to avoid dragging windows along the top of the screen.

Perhaps the single best feature in Windows 7 is the set of changes to Explorer and the settings windows. The layout and headers are cleaner and simpler than in Vista, and the sorting and resizing of folder lists is easier and less error-prone. And a big pet peeve of mine with Vista went away: now all folders seem to default to their proper type instead of randomly being assigned the music folder type like in Vista. The Control Panel is also better focused and some of its applets are very pleasant and expertly designed. I quite like that when I connect my Sony MP3 player the icon matches the device’s appearance. Nothing earth-shattering here, but it highlights the attention to detail that went into Windows 7.

Final Words

I am sticking with Windows 7. I am very impressed with the interface and installation experience, and so far I haven’t been compelled to switch off or change as many default settings as I used to in Vista and XP. So far I have only disabled the restore option on my C drive, as I think this was the cause of a major slowdown in my previous Vista installation. This is not recommended, but since I have daily backups I am fine even if I need to reinstall from scratch. Another minor annoyance is the Sleep function, which does not work reliably. I can always make it go to sleep but sometimes it crashes and reboots. Nevertheless the boot time is quite acceptable and I only start the PC once a day. Finally, the performance on this PC, which is a low-end PC, is quite good and certainly feels faster than my previous Vista installation.

DDD Southwest Session Voting


If you are attending DDD Southwest next May in Taunton, UK, we need you to vote for your favourite sessions before the 25th of April.
Your vote will help us distribute the sessions across the rooms so that the most popular sessions get the larger rooms. You are not committing to attend a session on the day; we are just asking for your opinion!

Go to login and vote.

P.S. If you haven’t registered yet then there are still places available.

WPF Quick Reminder #1


WPF is all about markup, and the composition experience in Visual Studio is so seamless that most of the time I forget that each one of those markup elements is indeed a real .NET type. Also pervasive in XAML are markup extensions, and creating our own custom markup extension is as simple as creating a new class that inherits from MarkupExtension and overriding the ProvideValue method.
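As a reminder of how little is involved, here is a minimal custom markup extension. MarkupExtension and ProvideValue are the real WPF API, but the extension itself (TodayExtension and its Format property) is made up for illustration.

```csharp
using System;
using System.Windows.Markup;

// A made-up extension that returns the current date as a string.
// Used in XAML as, for example: <TextBlock Text="{local:Today Format=D}" />
public class TodayExtension : MarkupExtension
{
    // Standard .NET date format string; "d" is the short date pattern.
    public string Format { get; set; }

    public TodayExtension()
    {
        Format = "d";
    }

    // Called by the XAML parser when the attribute value is needed.
    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        return DateTime.Now.ToString(Format);
    }
}
```

The XAML parser instantiates the class, sets any named properties, and calls ProvideValue; whatever you return becomes the attribute value. That is the whole contract.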

A few days ago I was reminded of all the above when I found a neat way of reducing the amount of plumbing required by a converter in XAML:

Making Value Converters More Accessible in Markup

Worth a complete read so I won’t repeat it here.

And a concise and to the point How To for implementing your own MarkupExtensions can be found here

Have fun.

*This is the beginning of a series of very short entries designed to store links to articles, software, tips and any other matter I want to follow up or use in the near future or that is just plain cool.

.NET Quick Reminder #1



I am a big fan of extension methods and I quite enjoyed the idea of replacing the OnXxxEvent pattern with an extension method on EventHandler<T>.

EventHandler Extension Method

Saves a lot of boilerplate code!
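The linked article’s exact code isn’t reproduced here, but the gist is something like the following (Raise is my name for the method; the article may use another):

```csharp
using System;

public static class EventHandlerExtensions
{
    // Replaces the usual "protected virtual void OnXxx(...)" boilerplate:
    // a null-safe, one-line way to raise an event.
    public static void Raise<T>(this EventHandler<T> handler, object sender, T args)
        where T : EventArgs
    {
        if (handler != null)
        {
            handler(sender, args);
        }
    }
}

// Usage inside a class that declares an event:
//   public event EventHandler<EventArgs> Saved;
//   ...
//   Saved.Raise(this, EventArgs.Empty);
```

The null check is the part that usually gets repeated in every OnXxx method; here it lives in one place, and calling the extension on a null event is safe.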

*This is the beginning of a series of very short entries designed to store links to articles, software, tips and any other matter I want to follow up or use in the near future or that is just plain cool.

Making Simple Things Difficult

It had to happen: if you rant about something, WPF in my case, it will come back to bite you.

The story begins with my evaluation of Xceed’s datagrid. They have an Express version which is free, is nearly feature-complete and can be used in commercial software. There is also a paid-for version that adds some whizz and a couple of nice-to-haves like 3D views and Office 2007 themes, but I am impressed with their courage to release so much for free. I am happy with it so far and will probably buy it to get the themes.

I installed the Express version and replaced the Toolkit DataGrid I had been using in next to no time. This is where the power of WPF and proper separation of concerns shine. I literally deleted the old grid, dropped the new grid in, let ReSharper do its thing with the namespaces and I was ready to roll! Or almost… I was back to square one on the background highlighting I had lost so much time with. After losing another couple of hours trying to highlight the background of cells with certain values I threw in the towel and asked the question in Xceed’s newsgroups. And I got an answer! What is more, I got the correct, WPF-ish answer that solved the problem and demonstrated that I was really going to great lengths to ignore the obvious solution: use a style for the type, the DataCell type in this case.

<Style TargetType="xcdg:DataCell">
    <Style.Triggers>
        <Trigger Property="Content">
            <Trigger.Value>
                <System:Double>1.0</System:Double>
            </Trigger.Value>
            <Setter Property="Background" Value="Red" />
        </Trigger>
    </Style.Triggers>
</Style>

Kudos to the Xceed support team for solving the problem.

I tried this method with the Toolkit DataGrid but could not make it work. Anyone with a solution?