Do you remember the days of Pocket PC 2002 and 2003? Life was pretty simple: no need for certificates, a single screen size, and SQL CE ran only on devices. Then Microsoft released SQL Compact 3.1 and suddenly the small database was also on the desktop. By this time I already had a small set of tools that remotely accessed the engine via a streamed RAPI interface, and I was confronted with the need to port the code to the desktop.
In its classical form, the code runs on two different processors with an asynchronous link. Porting it to the desktop, where both the client and the server run in the same process, meant either a major rewrite or a smart adaptation. I chose the latter and implemented a DMA data stream, provided by the "server" DLL on the desktop. The client merely uses a virtual stream that can be attached to a variety of server endpoints: a RAPI server, a TCP/IP server, or even a local server running in the same process (but on different threads).
This worked quite successfully until last week, when I got a report from a customer complaining about a mysterious error. After exporting a very large (and I mean very large) database from SQL Server to SQL Compact 3.5 on the desktop, the database file would end up over 350 MB in size but with zero tables. After repairing the database, it would shrink down to the 20 KB an empty database has.
I first suspected the transaction mechanism, because the whole process (table creation, data transfer and final script execution) runs under a single transaction. After confirming with Microsoft that there was no known transaction size limitation, I went back to the drawing board in bewilderment. To make matters worse, no error was being reported!
The product that caused this issue (Data Port Wizard) has a very simple usage pattern: you set the source and target databases, set some options and run the data transfer process. At the end the application reports success or error and then quits (after user confirmation). As it turned out, the problem happened when the application exited.
When transferring data to a desktop SDF file, everything runs in the same desktop process. The SQL Compact data provider runs in its own thread (not on the UI thread) for the reason I pointed out at the beginning: I wanted to reuse the remote provider code that runs on a different CPU, so I implemented it to run on a different thread with DMA communication to the client code. When the client application closes, it signals the SQL Compact data provider to shut down and waits a little while for it to reply. If no reply arrives in time, the desktop client proceeds to terminate the application.
Apparently, with very large databases the SQL Compact engine takes a bit longer to shut down. This is in fact to be expected: there are larger buffers to flush to storage, and there is also the auto-shrink feature that runs when the connection closes. When this whole thing is running on a remote device, your desktop client can afford not to wait for the remote provider to shut down, because it is running on a different CPU. When running in the same process, you don't have that luxury. If you don't wait for the SQL Compact engine to shut down in its own thread and you terminate the application, you prevent the database engine from shutting down correctly. You can imagine the mayhem this causes to your database...
Lesson: if you run SQL Compact on a different thread, make sure you wait until the connection closes before you shut down your application. You have been warned!
Vincent Richomme just posted an article on Code Project on how to use the GDI+ libraries from native code. If you think it's hard to add that extra cool look to your native GDI app, do take a look at Vincent's article!
In my previous post I described a very simple solution to improve the performance of IImage::Draw: caching the converted image. This does speed up the drawing of the alpha-blended picture, but has one drawback: you store the image twice. From what I understood from the IImagingFactory::CreateImageFromBuffer online documentation, the supplied buffer is not immediately consumed and must remain available throughout the IImage object's lifetime. So when you cache the image you are essentially storing two representations of the same thing: the original encoding and the decoded version. While this may not be a big issue for small images, it becomes a serious problem when using lots of images and/or larger images. So what can we do about this?
The idea is to keep a single 32 bpp ARGB HBITMAP in memory per image. As I showed before, using the shell API to load the images is a bad idea because it squashes them to 16 bpp. Is there an alternative? We can try to create a 32 bpp bitmap in memory and then use the IImage object to paint onto it. After releasing the IImage object, we should be left with a properly formatted 32 bpp ARGB bitmap. In my next post I will illustrate this technique, which was validated by master guru Alex Feinman.
I finally found a solution for the speed problem of rendering the carousel icons: caching the image. After a big wild goose chase, the solution was under my very nose all the time. To see the dramatic performance improvement, open the CarouselDemo2 project I published in my last post and go to the image.h file. At the top of the file there is a private method named CreateImage, where the IImage object is created; the caching code goes right after the object is created.
Now run the demo again and see how much the performance has improved!
Now that this issue is solved, let me tell you about the wild goose chase. My original idea was to use the AlphaBlend API that was added in Windows Mobile 5.0. This function does support per-pixel alpha blending, so it looked very promising. Unfortunately, there seems to be no easy way to load a 32 bpp bitmap image on Windows Mobile. I tried using SHLoadImageResource (I read somewhere that it would keep the alpha channel intact) but the returned HBITMAP was smashed down to 16 bpp, so the alpha channel was lost. I still have not given up on finding an easy way to load the PNG files into memory, and when I do, you will be the first to know. Oh, by the way, if you do know, please post a comment here!
What's the difference between the two QVGA carousel implementations above? And what about the two VGA implementations below?
The top carousel icons look much better than the ones below, especially the ones with round shapes. What's the difference? While the images in the bottom carousel are regular Windows icons drawn from an image list container, the images in the top carousel are PNG images drawn using the IImage object. These PNG files were created with an "alpha channel", an extra byte of information that specifies how transparent each pixel is. Unlike Windows CE icons, which support only full transparency or full opacity, an alpha value lets each pixel blend with the background, creating very smooth transitions (so much nicer and easier on the eye). Using the right tools (these icons were produced with Axialis Icon Workshop), you can easily create these images and render them on the device screen.
The sample project (CarouselDemo2) illustrates how this can be implemented in code. First, I removed all the icon-related code and added two new classes (both in the image.h file): CImage and CImageArray. The first class encapsulates an IImage object and provides minimal services for loading the image from resources and drawing it on a given DC. The second class merely groups instances of the first in an array for convenience.
The great thing about IImage is that it lets us load a raw byte stream making up a recognizable image (more codecs can be added) and transforms it into a usable bitmap (not an HBITMAP, mind you). The supported image formats include JPEG, GIF and PNG, with its nice alpha channel feature. All you have to do is load the image and draw it. Simple!
Well, not so fast... Really: it's dreadfully slow! If you run the new demo application you will see what I mean: the carousel scrolls like mud.
I previously had a similar experience where using IImage::Draw in a graphics-intensive GDI application proved unusable. The solution was to pre-render the PNG files (no transparencies there) to an HBITMAP and cache them, so I was not really surprised that the painting would be slower (but not that slow!). Is there any way around it? Hopefully yes, and I will write about it (or about my failure to achieve it) in my next blog post.
It's definitely not there. Some people have complained about this when parsing XML files with the .NET CF, but I found out about it last night while trying to custom-parse an iso-8859-1 encoded KML file. These files are used by Google Earth to add layers of geographical information on top of the displayed map. The content is regular XML with a specialized syntax recognized by the Google Earth application, and you can find lots of sources of this type of file. I'm currently developing a native Virtual Earth map browser for Windows Mobile and I want to add the option to read these files and dynamically add their information to the map - that's how I ran into this issue.
When trying to convert a string using the MultiByteToWideChar API, I got an invalid parameter error when using the iso-8859-1 (28591) code page. A very brief search showed me that this is a known issue and that, apparently, it is the device manufacturer's decision whether to include a given code page. Thankfully, there seems to be a workaround that should handle most conversion cases: use the windows-1252 code page.
Now I'm glad that I'm custom-parsing the KML file, because I can cheat on the encoding declaration and replace iso-8859-1 with windows-1252 on the fly. I'm not sure I would be so successful with an automated parser (native or managed). Now the question is: why is iso-8859-1 not there?