Monday, November 24, 2008

Connection Manager Article

The latest issue of MSDN Magazine is out and it features a great article from Marcus Perryman: Going Places: How Connection Manager Connects. The reason I'm also mentioning it here is that its code samples are in C, a rarity these days.

Sunday, November 23, 2008

SQL Compact Schemas

Now that I have looked into the most basic principles of using a SQL Compact database from native code (base table cursors and command execution), I will look at the available tools for enumerating and manipulating the database schema. There are essentially two ways to go about this:
  1. SQL commands
  2. OLE DB interfaces
Using SQL commands
This may be the easiest approach and is also language-independent - you can use these techniques from both native and managed code. To determine what's in your database you can execute SELECT commands against the INFORMATION_SCHEMA system views. These work like regular SELECT commands and return specific rowsets:
  • COLUMNS
  • INDEXES
  • KEY_COLUMN_USAGE
  • PROVIDER_TYPES
  • TABLES
  • TABLE_CONSTRAINTS
  • REFERENTIAL_CONSTRAINTS (not documented for 2.0)
To retrieve information from one of these views, you just execute a simple query like:

SELECT * FROM INFORMATION_SCHEMA.TABLES

This returns information about all the tables in your database. The advantage of using a SELECT command is that you can add a WHERE clause to filter the result. If you need to make changes to your database schema, you can use all the supported DDL commands. Since version 3.0, Microsoft has even provided a stored procedure to rename tables: sp_rename. Unfortunately, you cannot rename columns this way, which is sometimes needed.
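
To make this concrete, here is a minimal native sketch of running such a filtered query through ICommandText; the pIDBCreateCommand pointer and the Customers table name are assumptions for the example, not something from the original sample.

#include <windows.h>
#include <oledb.h>

// Sketch: list the columns of one (hypothetical) table by filtering the
// INFORMATION_SCHEMA.COLUMNS view with a WHERE clause.
HRESULT ListColumnsOfTable(IDBCreateCommand *pIDBCreateCommand, IRowset **ppRowset)
{
    ICommandText *pICommandText = NULL;
    HRESULT hr = pIDBCreateCommand->CreateCommand(NULL, IID_ICommandText,
                                                  (IUnknown**)&pICommandText);
    if (FAILED(hr)) return hr;

    hr = pICommandText->SetCommandText(DBGUID_SQL,
        L"SELECT COLUMN_NAME, DATA_TYPE "
        L"FROM INFORMATION_SCHEMA.COLUMNS "
        L"WHERE TABLE_NAME = N'Customers'");
    if (SUCCEEDED(hr))
    {
        DBROWCOUNT cRowsAffected = 0;
        hr = pICommandText->Execute(NULL, IID_IRowset, NULL, &cRowsAffected,
                                    (IUnknown**)ppRowset);
    }

    pICommandText->Release();
    return hr;
}

The same pattern works for sp_rename: set the command text to something like "sp_rename 'OldTable', 'NewTable'" and execute it without asking for a returned rowset.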

Using OLE DB interfaces
All of this (and a bit more) can also be done through some of the exposed OLE DB interfaces, namely the following Session interfaces:
  • IDBSchemaRowset - Generates rowsets that are similar to the ones returned by the INFORMATION_SCHEMA views.
  • IIndexDefinition - Creates and drops indexes.
  • ITableCreation - Retrieves the table definition (full schema).
  • ITableDefinition - Creates and drops tables, and adds and drops columns.
  • ITableDefinitionWithConstraints - Same as above, but also manages table constraints.
  • IAlterIndex - Only renames indexes in SQL Compact.
  • IAlterTable - Allows changing table properties like the table name, column defaults and identity properties. You can also use this interface to rename columns (not available in managed code).
Note that when using these interfaces, you are accessing the storage engine directly, circumventing the Query Processor.
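
For illustration, here is a minimal sketch of two of these interfaces in action. It assumes pSession is the IUnknown of an open SQL Compact session, and the table and column names passed in are hypothetical.

#include <windows.h>
#include <oledb.h>

// Enumerate the tables in the database through IDBSchemaRowset.
// With no restrictions this is roughly equivalent to the TABLES view.
HRESULT GetTablesSchemaRowset(IUnknown *pSession, IRowset **ppRowset)
{
    IDBSchemaRowset *pSchemaRowset = NULL;
    HRESULT hr = pSession->QueryInterface(IID_IDBSchemaRowset,
                                          (void**)&pSchemaRowset);
    if (FAILED(hr)) return hr;

    hr = pSchemaRowset->GetRowset(NULL, DBSCHEMA_TABLES, 0, NULL,
                                  IID_IRowset, 0, NULL,
                                  (IUnknown**)ppRowset);
    pSchemaRowset->Release();
    return hr;
}

// Rename a column through IAlterTable (something the DDL commands cannot do).
HRESULT RenameColumn(IUnknown *pSession, LPOLESTR pszTable,
                     LPOLESTR pszOldName, LPOLESTR pszNewName)
{
    IAlterTable *pAlterTable = NULL;
    HRESULT hr = pSession->QueryInterface(IID_IAlterTable,
                                          (void**)&pAlterTable);
    if (FAILED(hr)) return hr;

    DBID tableId, columnId;
    tableId.eKind           = DBKIND_NAME;
    tableId.uName.pwszName  = pszTable;
    columnId.eKind          = DBKIND_NAME;
    columnId.uName.pwszName = pszOldName;

    // Only the column ID (its name) is being changed here.
    DBCOLUMNDESC columnDesc = { 0 };
    columnDesc.dbcid.eKind          = DBKIND_NAME;
    columnDesc.dbcid.uName.pwszName = pszNewName;

    hr = pAlterTable->AlterColumn(&tableId, &columnId,
                                  DBCOLUMNDESCFLAGS_DBCID, &columnDesc);
    pAlterTable->Release();
    return hr;
}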

In my next post I plan to publish a sample that illustrates how to use these interfaces.

Thursday, November 20, 2008

Logo Certification Article

Here's a very nice article for native mobile developers that addresses the coding required to meet some of the stricter Windows Mobile Logo Certification requirements. A worthwhile read.

Tuesday, November 18, 2008

Implementing SQL Compact Command Parameter BLOBs

After much rambling, testing, and a few misconceptions, I have finally closed the chapter on SQL Compact command parameters. The sample code (see below) illustrates how to use a BLOB command parameter with the "variable length" binding buffer. Whenever the total size of the parameter values exceeds the allocated binding buffer size, the binding process is repeated with a larger binding buffer. Although this is not an optimal solution (being able to use storage objects for this purpose would be much better), it frees you from having to specify the maximum size for BLOB parameters - the code calculates it for you.

Before you can execute a command with parameters, you must prepare it by calling the Prepare method (failing to do so causes a run-time error). During preparation, the code determines the number of parameters and allocates memory for a "user buffer". This user buffer is an array of CDbValue objects, a VARIANT-like class whose sole purpose is to store parameter values. When the user sets a parameter, it is first written to one of these objects; upon execution, the code determines whether the new set of parameters in the CDbValue array requires rebinding, rebinds if necessary, and finally copies the values to the data buffer. This way you just have to set the parameters and execute the command (see the sample code).
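
For illustration only, here is a rough sketch of the grow-and-rebind idea; the ParamBuffer type and both helper functions are hypothetical and not the actual classes from the sample.

#include <windows.h>
#include <oledb.h>
#include <vector>

// Hypothetical container for the parameter bindings and their data buffer.
struct ParamBuffer
{
    HACCESSOR              hAccessor;   // parameter accessor (0 if none yet)
    std::vector<BYTE>      data;        // bound data buffer
    std::vector<DBBINDING> bindings;    // one binding per parameter
};

// cbNeeded is the total size required by the current parameter values.
// Assumes the bindings (offsets included) have already been rebuilt by the
// caller and that there is at least one parameter.
HRESULT RebindIfNeeded(IAccessor *pIAccessor, ParamBuffer &pb, ULONG cbNeeded)
{
    if (cbNeeded <= pb.data.size())
        return S_FALSE;                 // current buffer is large enough

    if (pb.hAccessor)
        pIAccessor->ReleaseAccessor(pb.hAccessor, NULL);

    pb.data.resize(cbNeeded);
    return pIAccessor->CreateAccessor(DBACCESSOR_PARAMETERDATA,
                                      (DBCOUNTITEM)pb.bindings.size(),
                                      &pb.bindings[0],
                                      (DBLENGTH)pb.data.size(),
                                      &pb.hAccessor,
                                      NULL);
}

// Execution then passes the (possibly re-created) accessor and its buffer.
HRESULT ExecuteWithParams(ICommand *pICommand, ParamBuffer &pb)
{
    DBPARAMS params = { &pb.data[0], 1, pb.hAccessor };
    DBROWCOUNT cRowsAffected = 0;
    return pICommand->Execute(NULL, IID_NULL, &params, &cRowsAffected, NULL);
}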

As a side effect of all these travails, I ended up creating a new class, CDbValueRef, from which CDbValue derives. I use it as the "engine" that sets and gets data of various types, performing the necessary type conversions. Column data access in the CRowset class is now performed through an array of such objects that directly reference the bound data buffer, and they know how to handle storage object BLOBs. You can now access your CRowset data through a type-safe interface.
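
To illustrate the idea only (this is a hypothetical simplification, not the actual CDbValueRef code), such a value reference essentially wraps a pointer into the bound buffer together with the binding's type, length and status, and exposes typed accessors that convert on the fly:

#include <windows.h>
#include <oledb.h>

class CValueRef
{
public:
    CValueRef(DBTYPE wType, void *pvData, DBLENGTH *pcbLength, DBSTATUS *pStatus)
        : m_wType(wType), m_pvData(pvData), m_pcbLength(pcbLength), m_pStatus(pStatus) { }

    // Type-safe getter with a couple of conversions; the real class would
    // cover all the SQL Compact types.
    long GetLong() const
    {
        if (*m_pStatus != DBSTATUS_S_OK) return 0;
        switch (m_wType)
        {
        case DBTYPE_I4: return *(LONG*)m_pvData;
        case DBTYPE_I2: return *(SHORT*)m_pvData;
        default:        return 0;   // conversion not covered in this sketch
        }
    }

    void SetLong(long value)
    {
        if (m_wType == DBTYPE_I4)   // the real class converts to the bound type
        {
            *(LONG*)m_pvData = value;
            *m_pcbLength     = sizeof(LONG);
            *m_pStatus       = DBSTATUS_S_OK;
        }
    }

private:
    DBTYPE    m_wType;      // bound type of the column or parameter
    void     *m_pvData;     // points into the bound data buffer
    DBLENGTH *m_pcbLength;  // length part of the binding
    DBSTATUS *m_pStatus;    // status part of the binding
};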

Sample: InsertBlob.zip (152 KB)

Monday, November 17, 2008

I was right...

... for the wrong reasons. In fact, you cannot use storage objects to set BLOB command parameters. I found that the usual mechanism does not work because:
  1. When retrieving command parameter information through GetParameterInfo, you don't get the DBPARAMFLAGS_ISLONG bit set for BLOBs;
  2. Binding a BLOB parameter through a storage object causes the Execute method to fail, reporting DBSTATUS_E_CANTCONVERTVALUE on the BLOB column.
So I had to revert to the old code that stores the parameter values in an intermediate buffer and forces a rebind when the new set of parameters is larger than the old parameter buffer (the one used by the accessor). Now that I have this straight, it's time to finish the new demo that inserts BLOBs using a parameterized INSERT command. Please stay tuned for the next post.
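
For reference, the first point can be observed with just a few lines of code. This is a hedged sketch that assumes pICommandText holds a prepared INSERT command with a BLOB parameter:

#include <windows.h>
#include <oledb.h>

HRESULT HasLongParameter(ICommandText *pICommandText, bool *pbIsLong)
{
    *pbIsLong = false;

    ICommandWithParameters *pParams = NULL;
    HRESULT hr = pICommandText->QueryInterface(IID_ICommandWithParameters,
                                               (void**)&pParams);
    if (FAILED(hr)) return hr;

    DB_UPARAMS   cParams      = 0;
    DBPARAMINFO *rgParamInfo  = NULL;
    OLECHAR     *pNamesBuffer = NULL;
    hr = pParams->GetParameterInfo(&cParams, &rgParamInfo, &pNamesBuffer);
    if (SUCCEEDED(hr))
    {
        for (DB_UPARAMS i = 0; i < cParams; ++i)
        {
            // With SQL Compact this bit is reported as not set for BLOB
            // parameters, which is the behavior described above.
            if (rgParamInfo[i].dwFlags & DBPARAMFLAGS_ISLONG)
                *pbIsLong = true;
        }
        CoTaskMemFree(rgParamInfo);
        CoTaskMemFree(pNamesBuffer);
    }
    pParams->Release();
    return hr;
}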

Thursday, November 13, 2008

I was wrong

Tomorrow I will deliver (in English) my second presentation at an international Microsoft conference. My first time ever was in 2005 in Nice, at the now defunct MEDC (Mobile and Embedded Developers Conference). I have been splitting my time between attending interesting presentations, preparing my own, and completing the OLE DB Client library code. It was while working out some final details of the presentation that I had an epiphany: I was wrong about BLOB command parameters!

The fact that the OLE DB specification limits us to a single parameter accessor handle for setting all the parameters does not mean that we are unable to use storage objects. Why? The limitation of the SQL Compact engine that I discussed before - it cannot create more than one storage object at a time - applies only when data flows from the provider to the consumer, such as when you are reading data from a base table cursor or from a query cursor. It does not apply when data flows from the consumer to the provider. In fact, we experienced no such limitation when inserting or updating data through a base table cursor: the consumer application can create as many storage objects as it pleases. This is exactly the situation with command parameters, because the consumer application can only write to them - SQL Compact does not support output parameters.

Right now this is still a theory, but it makes a lot of sense to me, and I will now focus on proving it right. Meanwhile, I have developed a new set of classes to manage column values in such a way that they can do double duty, working both on a rowset and on command parameters. For the latter, these classes act as intermediate buffers that provide temporary storage for application-provided parameter data. When necessary, the CCommand code binds them to a real data buffer, recreating the accessor handle as needed. In fact, I have just tested the code and it works... But this can't be right. If I can bind the parameters before setting them (just like in the CRowset class), then I will have the best solution.

Back to the drawing board. (If you want to see the code as it is, please drop me a line.)