Sergio Navarro


Software architecture, development resources, multimodal and multilingual information retrieval techniques, and an improvable English grammar. This is all you will find in my blog.


Portable Libraries for WP7 + WP8 + WinRT with Async/Await and Task

Posted on April 28, 2013 at 11:35 AM. Comments (0)

In this video I show the steps needed to add Windows Phone 7 as a new target framework to a Portable Class Library (PCL) that initially targets only Windows Phone 8 and Windows 8 App Store. If the code of our PCL uses the async/await keywords or the Tasks namespace, the job is not as simple as adding the new target to the PCL project. We will see how, by adding the Microsoft.PCL and Microsoft.PCL.Async libraries and avoiding certain incompatibilities in the code, we can make our PCL compatible with Windows Phone 7.

Multiplatform Development: Two strategies for migrating a synchronous module to WinRT

Posted on February 5, 2013 at 1:20 PM. Comments (0)

I have spent some time working on a multiplatform business app which shares almost all its code between two "platforms": a command-line app (.NET 4.0) and Windows Phone 7.

The last decision we took was to add Windows 8 to the list of supported platforms, and obviously the doubts arose early, as soon as I took a look at the new async I/O methods offered by the WinRT API. So, this post is an attempt to help others in the same situation avoid the mistakes I made.

Let's go ... 

What's the big change in the WinRT async APIs?

Well, one of the attempts made by Microsoft with WinRT is to fix a long-standing problem of Windows applications: the unresponsive UI. It usually happens due to bad programming practices used by developers. So, to avoid this problem, the WinRT I/O APIs have been designed so that any method taking longer than 50 milliseconds is exposed only as an async method. Indeed, if you try to transform those async methods into sync ones and your UI thread spends too much time waiting for the I/O operation to finish, the operation is cancelled by the OS.

With this in mind, let's look at the two ways I see to port to WinRT a synchronous module that has to be called from the UI thread.

The first option, the one I like to call "brute force", is to change the code of your synchronous module to make it asynchronous. Although async/await is a wonderful mechanism that makes asynchronous programming easy, when your module has several layers things can turn complicated and dirty once every method uses the async/await keywords. In fact, synchronous programming will always be easier than asynchronous programming, no matter how easy the asynchronous mechanism you are using is. So the most sensible strategy is to use it only where it is needed, not everywhere.
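The viral effect can be sketched in a few lines. I use Python's asyncio here to keep the sketch self-contained, but exactly the same thing happens with C# async/await; all the function names below are made up for illustration:

```python
import asyncio

# Layer 3: the I/O call that the platform forces to be asynchronous.
async def read_file():
    await asyncio.sleep(0)          # stands in for a real async read
    return "data"

# Layer 2: because read_file is async, this method must become async too...
async def load_record():
    raw = await read_file()
    return raw.upper()

# Layer 1: ...and so must its caller, all the way up to the top.
async def refresh_view():
    record = await load_record()
    return f"showing {record}"

print(asyncio.run(refresh_view()))
```

One async leaf was enough to force the async/await keywords onto every layer above it.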

Looking at the figure you could have guessed this option is the tough one. However, it is usually the first attempt made by newcomers to WinRT. Why? Well, the first problem when you port your code to a WinRT project comes at compilation time: you get errors because your old I/O calls are no longer available in the WinRT API; their replacements are asynchronous. So, in order to use them, you start to add awaits to your code with their related async keywords. Next you add more asyncs to the methods which call the previous ones. At the end of the day you find you have modified your entire module, putting those new keywords everywhere. It is just at that moment when you think to yourself: "Who's the xxxxxx at Microsoft who thought the async/await mechanism was brilliant?". Stand back and breathe: maybe he was right and you are doing something wrong, don't you think?

The second option has two steps. The first is to create a synchronous I/O service to communicate with the WinRT API; it converts asynchronous I/O operations into synchronous ones. At this point your application will compile properly. The problems will come at runtime, when your UI thread attempts to execute the I/O methods you made synchronous with .Wait(): it will throw an exception. So, in order to solve it, we add the second step, and this is THE TRICK ;): we create a kind of Agent Service for the synchronous module which allows the UI to communicate with it asynchronously, removing the need to make almost every method of the module asynchronous. With only these two changes you have ported your module to WinRT. Obviously, you have to be sure you programmed the module to be thread-safe to avoid problems.

Next you can see a little sample of the kind of code you will need for your Agent Service and your I/O Service:
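A minimal sketch of the two pieces, written here in Python's asyncio to keep it runnable on its own (the real services would be C# against the WinRT API; `platform_read_async` and every other name below is an illustrative stand-in, not a real API):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the async-only platform I/O API.
async def platform_read_async(name):
    await asyncio.sleep(0)
    return f"contents of {name}"

# --- I/O Service: gives the synchronous module a blocking read API
# on top of the async-only platform API. ---
class SyncIoService:
    def read(self, name):
        # Blocking wrapper: safe only because the module is never
        # executed on the UI thread (see AgentService below).
        return asyncio.run(platform_read_async(name))

# --- The existing synchronous module, left untouched. ---
class SyncModule:
    def __init__(self, io):
        self.io = io
    def load(self, name):
        return self.io.read(name).upper()

# --- Agent Service: the single async entry point the UI awaits.
# It moves the blocking work off the UI thread onto a worker. ---
class AgentService:
    def __init__(self, module):
        self.module = module
        self.pool = ThreadPoolExecutor(max_workers=1)
    async def load(self, name):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(self.pool, self.module.load, name)

async def ui_handler():
    agent = AgentService(SyncModule(SyncIoService()))
    return await agent.load("settings.txt")   # the UI thread stays free

print(asyncio.run(ui_handler()))
```

Note that only the Agent Service is async; all the layers inside the module keep their synchronous signatures.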


Summarizing, my interpretation of WinRT's intent is that it expects you to leave the UI thread free of I/O responsibilities (not blocked waiting for I/O) and to put this dirty work on a different thread. To my understanding I'm not violating any WinRT rule with this approach; in fact, I think it is more natural than the first one because it uses async only where it is needed, not everywhere. Even so, after discussing it with other colleagues, I cannot state that I'm 100% sure it will pass the Microsoft Store certification, simply because I still have not published any app.

So I will be pleased to hear your comments validating or rejecting what I state, especially if you are someone with deep knowledge of WinRT and its restrictions (MVPs, Microsoft Store publishers ...).

If I'm right, then this is a point to be considered by developers working on database libraries and other I/O-related components for WinRT. So far I have looked at some APIs offered by the WinRT versions of those kinds of components and, surprisingly, I found they were only asynchronous; they didn't offer an alternate synchronous version of their methods. The reason may be that they follow what Microsoft has done with its own APIs. However, if they offered a synchronous version of their methods, we wouldn't need to develop a synchronous I/O service for their APIs when we port or share code with other platforms that were programmed without the async/await mechanism.

"Microsoft feels that when a developer is given the choice of a synchronous and an asynchronous API, developers will choose the simplicity of a synchronous API. The result usually works fine on the developer system, but is terrible when used in the wild." Miguel de Icaza 

The reason why Microsoft doesn't offer a synchronous version of its I/O methods is that it doesn't expect developers to use it properly; in that way it wants to re-educate developers to adopt the new paradigm and avoid the errors of the past. In fact, although it is possible to convert an asynchronous method into a synchronous one, Microsoft doesn't offer the synchronous one by default. It doesn't take the risk of delegating to developers the responsibility of using a different thread for I/O operations.


What I propose to other I/O library developers is that they offer synchronous versions of their methods. Why? Because that way they help their customers. Currently, by offering only asynchronous methods, they only help Microsoft spread the new paradigm, at the price of forcing their customers either to write wrapping code for each synchronous method they use from those APIs or to use async on every method of their app.
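A sketch of what that dual surface could look like, again using Python as a stand-in (`RecordStore` and its methods are hypothetical, not the API of any real library):

```python
import asyncio

class RecordStore:
    """Hypothetical I/O library surface offering both flavors."""

    async def get_async(self, key):
        await asyncio.sleep(0)        # stands in for real async I/O
        return {"key": key, "value": 42}

    def get(self, key):
        # Synchronous convenience wrapper over the async core. The caller
        # is responsible for invoking it from a worker thread, never from
        # the UI thread.
        return asyncio.run(self.get_async(key))

store = RecordStore()
print(store.get("user:7"))            # sync path, no async/await needed
```

The async method stays the primary implementation; the sync one is a thin wrapper, so the library pays almost nothing to offer both.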

A clarification I would like to make is that I'm saying neither that the async/await mechanism is bad nor that the restrictions and rules of WinRT are a bad idea. In fact, I think they are necessary if we want to move the user experience of our apps forward. The point of this post is to stress that when a technology or mechanism is good at solving a problem, it doesn't mean it has to be used everywhere, because in other contexts its use is useless and adds complexity without need.

Well, now it is your turn ;). Please write comments; I will be more than happy to hear your thoughts. Do you think the second option will pass the Windows Store certification? Do you think third-party I/O APIs should ease our life by offering both versions of their methods, synchronous and asynchronous?

See you soon!





The WP7 Databases Cup: Siaqodb vs SQLCE. Part 1: Inserts (English)

Posted on February 23, 2012 at 1:10 PM. Comments (0)

Hello again!

The day of writing this post has arrived at last! ... Today's post is about a little benchmark which compares two of the most popular WP7 databases.

On the one hand, at the end of last year, Josue Yeray started an interesting series of posts attempting to find the most efficient way of doing a massive insert operation on SQLCE for WP7. In fact, that series was the continuation of a fun challenge between two friends who competed using SQL Server and MongoDB to find out which of the two databases was faster at inserting data. The challenge was quite simple: each simply tried to insert a number of records into a table in less time than the other. Josue found out, as he expected, that a desktop databases vs. mobile databases comparison was not a fair competition :D. Even so, it was a fun pretext to run a first public benchmark of SQLCE for WP7.

On the other hand, months before Josue Yeray published his series of posts, I had to decide which database to use in a project: Siaqodb or SQLCE. I selected Siaqodb. So after reading his posts I felt it would be worthwhile for the community to answer his "challenge" in order to have a comparison with another WP7 database.

For me, the main reasons to select this object-oriented database were the following:

  1. The time needed to run the unit tests of a little application using Siaqodb was half the time needed by the same application when it used SQLCE.
  2. Siaqodb is a true cross-platform database (Silverlight, WPF, WindowsForms, WP7, WindowsMobile, Android and iOS).
  3. Siaqodb has an Include method like the one available in Entity Framework. It allows you to set which of the nested objects of a class you want to load for a LINQ query executed by the database.
    SQLCE doesn't have this method. You can only predefine which nested objects will be loaded each time their parent object is loaded, and that choice cannot be changed. So no matter in which circumstances you load an object, you will always load the nested objects predefined in the dbcontext configuration.

After these words showing my clear preference for Siaqodb, I also admit that my reasons lack an objective benchmark showing which of the two databases performs better for the typical operations performed on a database; at least one more reliable than the data-layer unit tests I ran with my little application.

The article has been divided into a series of four posts: Inserts, Inserts with Nested Objects, Queries and Updates. This is the first post of the series.

The tests are based on the Missile 1 example used by Josue Yeray in his tests. In order to make a fair comparison, I have added to this example some of the improvements he uses in his Missile 2 example. Concretely, I removed the IsVersion annotation and I used the InsertAllOnSubmit method in order to obtain the best performance when insert operations are performed using SQLCE.

However, I have not added other improvements proposed in Missile 2, such as the use of threads. The reason is that I am looking for meaningful differences between the two databases; although I consider these other improvements would improve performance for both databases, I hope you will agree they should not produce meaningful differences when comparing the two.

The source code I have used can be downloaded from the link at the bottom of this post. Results have been obtained using the WP7 emulator. The values shown in graphs and tables are measured in minutes.

The tests carried out can be divided along two axes. For the first axis, I focus on how many flushes (or commits) are needed to insert the data into the database:

  • 1 Flush: The flush is performed only once, after inserting all the objects into the database. It is the most efficient strategy, and it can be used in scenarios where all the data to be inserted into a table is available up front.
  • N Flushes: One flush operation per object added. This strategy is less efficient than the previous one; however, it is the one we will use when our scenario forces us to add data to our database gradually along the whole life cycle of the application.
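The cost difference between the two strategies can be reproduced on any embedded database. Here is a sketch using Python's sqlite3 (not SQLCE or Siaqodb, and the absolute figures will differ, but the commit-per-row penalty is of the same nature):

```python
import sqlite3
import time
import uuid

def insert_rows(n, flush_per_row):
    """Insert n rows and return (elapsed_seconds, row_count)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE items (oid INTEGER PRIMARY KEY, guid TEXT)")
    start = time.perf_counter()
    for _ in range(n):
        con.execute("INSERT INTO items (guid) VALUES (?)",
                    (str(uuid.uuid4()),))
        if flush_per_row:
            con.commit()        # N flushes: one commit per object
    if not flush_per_row:
        con.commit()            # 1 flush: a single commit for the whole batch
    elapsed = time.perf_counter() - start
    count = con.execute("SELECT COUNT(*) FROM items").fetchone()[0]
    con.close()
    return elapsed, count

one_flush, _ = insert_rows(10_000, flush_per_row=False)
n_flushes, _ = insert_rows(10_000, flush_per_row=True)
print(f"1 flush: {one_flush:.3f}s / N flushes: {n_flushes:.3f}s")
```

On a real on-disk database the gap widens further, because each commit forces the file to be synced.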

The second axis of my tests is related to the constraints that the class to be saved in the database has to fulfil. For example, while SQLCE forces us to choose a primary key among the properties of our object, Siaqodb does not force us to set a primary key, but it does force us to declare an OID property that it uses as a kind of autonumeric primary key. So, with these constraints in mind, and using as a base the test class used in the "challenge" by Josue Yeray, I defined the next two test groups for this axis:

  • SQLCE Conditions: The test class used for inserting data into the database has to contain a primary key using a Guid property. It doesn't matter whether it also has an autonumeric OID property.
  • Siaqodb Conditions: The test class has to have an autonumeric primary key. The Guid property is a plain field which has neither an index nor a unique constraint.
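As an illustration, the two shapes of the test class under each set of conditions, sketched as Python dataclasses (the originals are C# classes; these names are made up for the sketch):

```python
import uuid
from dataclasses import dataclass, field
from itertools import count

_next_oid = count(1)   # simulates the database-managed autonumeric key

@dataclass
class SqlceConditionsItem:
    # SQLCE conditions: the Guid property is the primary key
    # (indexed, unique); an autonumeric OID may or may not exist.
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class SiaqodbConditionsItem:
    # Siaqodb conditions: an autonumeric key plays the primary-key role;
    # the Guid is a plain field with no index and no unique constraint.
    oid: int = field(default_factory=lambda: next(_next_oid))
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))

a, b = SiaqodbConditionsItem(), SiaqodbConditionsItem()
print(a.oid, b.oid)    # consecutive autonumeric values
```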

Regarding the way to decide the winner, I thought it could be fun to present it as a soccer cup knockout tie. In each knockout, the two teams play two matches: one at home and the other away. :P

First match: SQLCE (local) vs Siaqodb (visitor)

First Half: 1 Flush

The knockout starts at the Microsoft stadium. Both teams start the match ready to win, and the supporters of both teams sing their chants so loudly that we cannot hear anything else. :)

The SQLCE runs are the following:

  • SQLCE (1 Flush) Index + Unique + Guid : It uses the autonumeric OID property as primary key. The Guid property has an index and a unique constraint set.
  • SQLCE (1 Flush) Index + Unique + Guid + Not Autonumeric : It uses the Guid property as primary key. Moreover, it does not use any OID property, as it is not required by SQLCE.

The Siaqodb runs are the following:

  • SIAQODB (1 Flush) Index + Unique + Guid : It uses an indexed Guid field which also has a unique constraint. Furthermore, it contains the autonumeric OID field that Siaqodb requires.
  • SIAQODB (1 Flush Massive Insert) Index + Unique + Guid : It uses the same configuration as the previous one but, instead of the standard method for storing objects, it uses the Siaqodb-specific one for massive inserts.

Looking at the results, it is not easy to decide which database has scored the first goal. In absolute terms, and by a slight difference, Siaqodb obtained the best time for 500,000 objects inserted into the database. However, if we look at the results for 100,000 elements, they are just the opposite.

So, in consequence, I consider the score at half-time a 0-0 tie, since neither of the two contenders has been able to impose itself on the other. The players walk back to the dressing room ...

Second Half: N Flushes

At the Microsoft stadium there is a feeling of tension before the start of the second half. They did not expect Siaqodb to be such a fearsome rival; it is a newcomer, and no one thought it would put so much pressure on one of the leaders of the competition.

It is important to stress that the database which rules this second half will be the one showing the best behaviour for the insertions performed along the life cycle of an application. I am talking about applications that perform a big number of insertions with a low number of objects per insertion.

Let's see the changes on the line-up for each team ...

SQLCE runs:

  • SQLCE (N Flush) Index + Unique + Guid : It uses the autonumeric OID property as primary key. Furthermore, it uses the Guid field with an index and a unique constraint.
  • SQLCE (N Flush) Index + Unique + Guid + No Autonumeric : It uses the Guid property as primary key. However, it does not use the OID property, as it is not required by SQLCE.

Siaqodb run:

  • SIAQODB (N Flush) Index + Unique + Guid : It uses the Guid property with an index and a unique constraint. Moreover, it uses the autonumeric OID property which is mandatory for Siaqodb.

Looking at the results, we can see how both databases have difficulties inserting 100,000 objects or more into the same table when the insertions are performed using one flush per insert. So it is important to take into account how expensive it can be to add new objects to a table that already contains a high number of elements.

Both databases show similar behaviour; even so, Siaqodb is the clear winner because, while it only needed 1 minute to insert 10,000 objects, SQLCE needed half an hour to perform the same operation.

So, although neither of the two teams has shown great style in this second half, Siaqodb has scored the goal that gives it victory in this match as the visitor at the Microsoft stadium.

The parity shown in this first match makes us anticipate a passionate second match of the knockout.

Second Match: Siaqodb (Local) vs SQLCE (Visitor)

First Half: 1 Flush

The second match starts at the Siaqodb stadium. It is a more modest stadium than the Microsoft one; however, the Siaqodb team makes up for it with the enthusiasm of each one of its members. These are the line-ups.

Siaqodb runs:

  • SIAQODB (1 Flush) Index + Autonumeric Key : It uses the mandatory autonumeric OID. Moreover, the Guid property is used with neither an index nor a unique constraint.
  • SIAQODB (1 Flush Massive Insert) Index + Autonumeric Key : It uses the same configuration as the previous run; however, instead of the standard insert method, it uses the specific method for massive inserts provided by Siaqodb.

SQLCE run:

  • SQLCE (1 Flush) Index + Autonumeric Key : It uses an autonumeric OID as primary key. Furthermore, the Guid property is used with neither an index nor a unique constraint.

Looking at the results, we can see that when Siaqodb plays on its home field (with its conditions) it achieves better results than SQLCE. In fact, its advantage grows as the amount of data inserted increases.

So, at half-time, the score is 1-0. The newcomer has taken a small lead in the match and a big lead in the aggregate score of the knockout.

Second Half: N Flushes

The Siaqodb stadium is a party at the beginning of the second half. Will SQLCE at least be able to come from behind and tie the game?

Siaqodb run:

  • SIAQODB (N Flush) Autonumeric Key: It uses the mandatory autonumeric OID. Moreover, the Guid property is used with neither an index nor a unique constraint.

SQLCE run:

  • SQLCE (N Flush) Autonumeric Key : It uses an autonumeric OID as primary key. Furthermore, the Guid property is used with neither an index nor a unique constraint.

Once again, as in the second half of the first match, Siaqodb shows better performance than SQLCE for gradually inserting big amounts of data into the database. For these runs, too, both databases have serious problems inserting data into tables with 100,000 objects.

The match finishes 2-0. Siaqodb is the winner of this match.

Knockout Result: Conclusion

The aggregate score of the knockout is clear: 3-0. There is no doubt about the winner. We might think that this wide margin suggests an exaggerated difference between these two databases when, in fact, the differences in general were not so big. However, if we take into account that Siaqodb is a cross-platform database which can run on Android or iPhone, then we can understand the final result is fair. So Siaqodb is the clear winner of this knockout.

In the next posts we will see what happens when I evaluate other types of operations with these two databases ...

Source Code:

Note: In order to run the project you will need to add a reference to the Siaqodb beta dll for Mango. Moreover, you will need to set a 30-day license code in the source code. Both can be obtained from the Downloads section of the Siaqodb webpage.


Taliban Rules for Scrounger Twitter Users

Posted on June 21, 2011 at 7:25 PM

The success of a user on Twitter is usually measured by their number of followers, which is taken to reflect their ability to influence. Some people state this is not necessarily true, and point out that more precise measures evaluate a user's influence based on the number of mentions or retweets the user obtains. In my opinion that is a sensible idea, but it is still not enough: it doesn't take the scrounger users into account.

You can usually spot among your followers a group that never replies to or comments on your tweets, and never retweets them. Accept it, the most obvious reason is that your tweets are not as interesting as you think! Are you sure?

Then, if that is the reason, why on earth do they keep following you forever and ever? It makes sense, since even today a small number of Internet users produce the content consumed by millions of people. So even though you get no retweets, maybe you are silently influencing almost all of your followers.

Collecting impressions from other Twitter users, I have found three typical groups of users who do not retweet you even when you produce worthy info. Each of these groups follows one of these motivations:

  • Those who follow you just to try to get you to follow them back, because they want as many followers as possible for marketing purposes.
  • Those whose usual subject is different from the subject of your tweets, so obviously they don't want to share your tweets with their followers.
  • The scrounger users: those who don't share meaningful information and are on Twitter only to obtain information, not to share truly interesting info.

Obviously Twitter, forums, blogs, ... stay alive thanks to people who understood that meaningful sharing is the only way of moving forward, individually and collectively. In order to convert scrounger users into collaborative users, it would be useful to add extra features to Twitter that penalize this kind of user.

To find a way of measuring how much of a scrounger a user is, let's review some typical scrounger behaviors:

  • Hiding information: you receive a worthy tweet but you don't want to share it with your followers just because you think it could give you an edge over competitors (a silly idea, I know). So instead of retweeting it, in the best case, you mark it as a favourite to review in the future.
  • Silently following new users: you receive an interesting retweet from someone you follow and, instead of retweeting it, you silently start following the original author without retweeting their tweet.
  • Discrediting a competitor you follow, by retweeting only those of their tweets that can be misinterpreted out of context, with the sole purpose of smearing the author.

The first two behaviors can be detected and measured; the third one is your own responsibility, so simply take care with what you tweet.
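As a rough illustration of how the first two behaviors could be measured (this is my own sketch, not anything Twitter actually exposes; all names and the formula are hypothetical), one could count "hoarding" actions, favouriting without sharing and silently following an author discovered via a retweet, against sharing actions:

```python
def scrounger_score(favourites_no_retweet, silent_follows, retweets_given):
    """Hypothetical score: hoarding actions (favouriting a tweet without
    retweeting it, silently following an author discovered via a retweet)
    divided by the total of hoarding plus sharing actions.
    0.0 = fully collaborative, 1.0 = pure scrounger."""
    hoarding = favourites_no_retweet + silent_follows
    total = hoarding + retweets_given
    return hoarding / total if total else 0.0
```

For example, a user with 8 silent favourites, 2 silent follows and no retweets scores 1.0 (pure scrounger), while one silent favourite against 9 retweets scores 0.1.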

Furthermore, another strategy Twitter could adopt is to award points for sharing. For example, if you are the author of a retweeted tweet, you receive as many points as the number of retweets the tweet gets. In turn, users who retweet a tweet receive 0.5 points per user who retweets their retweet, and so on ... To prevent cheating, if you receive a retweet from someone and, instead of retweeting their retweet, you retweet the original author's tweet, you get no points and your points go to the user who passed it on to you.
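That point scheme can be simulated over a retweet cascade. In this sketch (the data structure and user names are my own illustration) the author earns one point per retweet anywhere in the cascade, and each retweeter earns 0.5 points per retweet made downstream of their copy:

```python
def share_points(cascade, author):
    """cascade maps each user to the list of users who retweeted directly
    from that user's copy; author is the writer of the original tweet.
    Returns the points earned by every user in the cascade."""

    def subtree_size(user):
        # number of retweets hanging below this user's copy
        return sum(1 + subtree_size(child) for child in cascade.get(user, []))

    points = {author: float(subtree_size(author))}  # 1 point per retweet

    def walk(user):
        for child in cascade.get(user, []):
            # 0.5 points per retweet downstream of this retweeter's copy
            points[child] = 0.5 * subtree_size(child)
            walk(child)

    walk(author)
    return points


# ana's tweet is retweeted by bob and eva; joe retweets bob's retweet.
print(share_points({"ana": ["bob", "eva"], "bob": ["joe"]}, "ana"))
# → {'ana': 3.0, 'bob': 0.5, 'eva': 0.0, 'joe': 0.0}
```

Here ana earns 3 points (three retweets in her cascade), bob earns 0.5 for joe's downstream retweet, and the leaf retweeters earn nothing until someone shares their copy.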

Finally, to promote collaborative behavior, there should be a limit on the number of accounts you can follow while you are under a certain points threshold. Moreover, the points per share should be visible next to the user.

To conclude, I would like to thank God for not giving me the responsibility of running Twitter, because applying these taliban rules for scrounger users would likely reduce Twitter's current levels of popularity and success, simply because most Internet users are passive users.

So it will be enough for me if this post helps a bit to make Twitter a more collaborative tool, simply by reminding everybody what it is fair to demand of ourselves as followers of other people.

A related (and more interesting) reference than this thought aloud:

Any Similarity Between Cloud Computing and Nuclear Power?

Posted on June 1, 2011 at 10:55 AM

The comparison between the electric grid and cloud computing is widely used to explain the benefits of adopting the cloud. It tells us that, just as in the industrial age factories went from producing their own electricity to consuming it from third parties like any other commodity, now in the information age computing will be provided as a cheap commodity instead of by local servers that are expensive for a company to maintain. This parallelism is very sensible; however, maybe we have explored the comparison only as far as cloud vendors were interested in taking it.

In this post I propose to extend the comparison and look for similarities between the life cycle of power plants and the life cycle of SaaS (Software as a Service) in the cloud.

We all know the advantages of nuclear power and of SaaS in the cloud, which can be summarized as: very low maintenance cost, "robustness", and high scalability.

A counterpart of SaaS in the cloud is proprietary software running on our local machine. It shares characteristics with home power plants, such as solar installations and, in general, any source of energy you can produce locally where you consume it. Its main advantage is predictability: you do not depend on external decisions to know how long the same software will remain available for you to use.

Looking at the list of advantages, it is easy to conclude that SaaS offers more than traditional software. Just as nuclear power is cheaper and cleaner than any other source of energy, so is SaaS; then why not embrace SaaS without further doubts?

Well, I mentioned that SaaS shares nuclear power's disadvantages, so it may not be as robust and safe as we were told in the past. If we set aside natural catastrophes, what is the other problem nuclear power plants face? It is the cost of disposing of their nuclear waste, and the unknown cost borne by the people living in their surroundings. In fact, a big part of these costs is not accounted for, for the sake of providing "cheap energy". Then, what happens with SaaS? It does not produce any waste as far as I know, or does it? Well, I am going to call "waste" any software that has lost commercial interest for the company that produces it.

If the abandoned software has a traditional life cycle, the company can simply decide to stop supporting it. The good news for the company is that this costs them nothing. The good news for the user is that he can keep using the software as-is for as long as he wants.

However, what happens if the company providing you a SaaS decides the software you are consuming is no longer commercially viable? Keeping the service running for existing customers has a maintenance cost, so most likely you lose the software, because the company doesn't want to assume that cost. I call this the "software disposal cost".

In conclusion, the cost of losing the software you are using is the cost you should evaluate when deciding between a SaaS service and a traditional one. In the end, the forces weighing the convenience of a nuclear plant and those weighing the use of a SaaS service are very similar: practical forces vs. long-term responsibility forces.

Finally, my thanks to all the cloud vendors who used the electric grid to explain the benefits of the cloud: their example turned out better than they could ever have thought, because it also explains the drawbacks of the cloud pretty well.

Related (and more interesting) references than this thought aloud:

When Will We See a Silverlight / XAML Translator to HTML5 / JScript?

Posted on November 24, 2010 at 1:52 PM

Like many other people evaluating Silverlight, this is the question that came to my mind as soon as I learned a bit about it.

There are a lot of similarities between Silverlight / XAML and HTML5 / JScript, with the difference that the former has reached a level of maturity that HTML5 will take a long time to reach. What are the Microsoft guys waiting for to announce a translation tool from XAML to HTML5? Many undecided people are afraid to invest their time and money in designs and lines of code that will only run through a plug-in, which is not a safe bet considering its coverage across platforms in the near future. The announcement of a XAML-to-HTML5 translator would make it much easier for all of us to decide to migrate the UI of our applications to Silverlight technologies.

Indeed, no movement in this direction will mean, once again, that Microsoft seems more comfortable waging guerrilla war than competing in the open field. This is a very reasonable strategy from the economic point of view, as Apple has also shown, but honestly it is not very ambitious regarding the role they should play as one of the industry's engines. In fact, in my opinion this is exactly where Google has taken the advantage: developing products of lower quality than Microsoft's, but with greater strategic ambition.

Related (and more interesting) articles than this thought aloud: