dinsdag 27 december 2011

Memory profiling a Silverlight Application

It’s the nightmare of most developers:  a memory leak.  Not knowing which pesky reference is holding that object in its grip or which event handler refuses to let go of that view model.  Today I’ve had the tremendous joy (notice the sarcasm please) of solving a memory leak.  To make matters even worse, the memory leak was caused by a Silverlight application.  I’ve seen a couple of suggestions for memory profiling a Silverlight application, like Davy Brion’s solution, and even tried out a couple of memory and performance profilers, like the one from JetBrains, but none felt comfortable to work with and they all looked too complicated for such a simple issue.

Then, just as I was about to give up, I came across Red Gate’s ANTS Memory Profiler.  At first I was skeptical because it seemed like a salesperson wrote the introduction page of the tool, but I was pretty desperate so I filled in my personal data and downloaded the trial.  Beware though, because Red Gate actually uses your data to call you later on to get feedback and to try to talk you into buying their products.

The installation was easy and straightforward, and after restarting Visual Studio I saw the ANTS submenu appear in the menu bar.  It didn’t take me much longer after that to find the memory leak.
The actual memory leak wasn’t mine but was introduced by a class I found on the internet, the BindingHelper (http://www.scottlogic.co.uk/blog/colin/2009/02/relativesource-binding-in-silverlight/).  The class helps a Silverlight developer bind relatively to a property outside a DataTemplate.

Let me explain how I found the memory leak:

At first I launched the ANTS memory profiler by clicking ANTS => Profile Memory.  It started profiling my webpage by default, so I stopped profiling and then clicked File => New Profiling Session.  It then launched the following screen:



I switched the Application Type to profile to Silverlight 4 and filled in the Silverlight Application URL.  After that I clicked Start Profiling.  

After some profiling it was very obvious I was dealing with a memory leak here:



So I then took a memory snapshot:



So now, what is causing this damned memory leak?   I then clicked the Class List button, which gave me a list of classes, how many instances there were, and how much memory they consumed.



I then clicked the Instance Categorizer to see the dependency graph that references all these strings.  On the far left of the screen I saw the cause of all my troubles :-).



I immediately opened the BindingHelper class and saw that objects added to its dictionary weren’t removed when they were no longer needed, so it was just a matter of adding the right line and voila! Memory leak solved ;-)
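For illustration, the fix boiled down to something like the sketch below (the class, field and method names here are hypothetical, not the actual BindingHelper API): a static dictionary holds strong references to whatever is added to it, so entries have to be removed explicitly or they stay rooted forever.

```csharp
using System;
using System.Collections.Generic;

public static class RelativeBindingRegistry
{
    // A static dictionary holds strong references: everything added here
    // stays reachable (and thus alive) for the lifetime of the application.
    private static readonly Dictionary<object, object> _bindings =
        new Dictionary<object, object>();

    public static int Count
    {
        get { return _bindings.Count; }
    }

    public static void Register(object element, object binding)
    {
        _bindings[element] = binding;
    }

    // This was, in essence, the missing line: remove the entry
    // once the element no longer needs its binding.
    public static void Unregister(object element)
    {
        _bindings.Remove(element);
    }
}
```

Without the Unregister call, every element ever registered stays in memory, which is exactly the kind of leak the profiler pointed at.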

From this I can draw two conclusions:

The first is never to trust code coming from the internet.  A lot of bloggers submit code that hasn’t been tested in the long run and is merely intended as a reference, not as a solution.

The second is that Red Gate’s ANTS Memory Profiler is one sweet profiler.  You don’t have to be a rocket scientist to use it and it shows some nice graphs, and admit it, who can say no to pretty graphs and diagrams?

Till next time!

donderdag 8 december 2011

Legacy code retreat

This Saturday I had the privilege of attending another code retreat, only it wasn’t just a normal everyday code retreat but a legacy code retreat! This code retreat was hosted by J. B. Rainsberger so big thanks to him!

What’s the difference?

The basic setup is pretty much the same.  You pair with another coder and start an intensive 45-minute session of coding.  After each session there is a retrospective of what kind of stuff you tried and how well it went.  The big difference between a normal code retreat and a legacy code retreat is that you don’t start from scratch but begin with an existing code base.  It is then your task to refactor the code as best as you can in 45 minutes without breaking existing functionality.

Test as documentation and failsafe

The first session wasn’t that special.  The code was handed to us and the purpose was to try and figure out what it does.  I started out writing tests for the class with the most behavior and just worked from top to bottom to figure out each individual behavior.  I enjoyed this a lot, because not only do you get to cover the existing code with tests so you won’t break the functionality when refactoring later, the tests themselves are an excellent way of discovering how the code base works and behaves.

After the first session a retrospective was held and it was surprising to see how many pairs actually didn’t even run the code!  To be honest, I didn’t run it either.  Another surprising fact was that some pairs even managed to find bugs in the code with the first couple of unit tests!

The golden master

So then came the second session.  Here we were asked to write end-to-end tests with a technique called the golden master.  The idea behind this is that you save the output of the program to a separate file.  There should be enough variation in your input parameters to try and cover as much of your program as possible.  As a rule for the code handed to us, we wrote away 10,000 different variations of the parameters. The output you’ve written away should be confirmed as the correct behavior of the program, as it will be used as a reference for the next tests.

When you refactor the code you can then run the golden master to save the output of the refactored code.  The correctly verified output will then be compared to the output of the refactored code and any differences between the two will be displayed.

This technique is very useful for when you have an application with zero tests in it and the code is too coupled to test.  You can then slightly change the code to make unit testing easier and still have a small safe-zone in which you can verify the correctness of your application.
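As a rough sketch of the technique in C# (Run is a stand-in for whatever the legacy code under test does, and the file handling is deliberately minimal): generate the output for a range of parameter variations, bless it once as the golden master, and compare every later run against that file.

```csharp
using System;
using System.IO;
using System.Text;

public static class GoldenMaster
{
    // Stand-in for the legacy code under test.
    public static string Run(int parameter)
    {
        return "output for parameter " + parameter;
    }

    // Collect the output for many input variations into one blob.
    public static string CollectOutput(int variations)
    {
        var builder = new StringBuilder();
        for (int i = 0; i < variations; i++)
            builder.AppendLine(Run(i));
        return builder.ToString();
    }

    // First run: save the output and treat it as the verified reference.
    // Every later run: compare the current output against that reference.
    public static bool Verify(string masterFile, int variations)
    {
        string current = CollectOutput(variations);
        if (!File.Exists(masterFile))
        {
            File.WriteAllText(masterFile, current);
            return true; // nothing to compare against yet
        }
        return File.ReadAllText(masterFile) == current;
    }
}
```

After a refactoring, a failing Verify tells you the observable behavior changed somewhere, even though it won’t tell you where.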

Some pairs (including me again -_-‘ ) tried to save all the possible values the parameters could form.  This turned out to be a mistake, as the number of variations ran up to a couple of million.

I did have one issue though: if you weren’t able to get the end-to-end tests done in this exercise, you didn’t have a backup testing mechanism for the other sessions.  I raised this with JB and he said he planned to upload the golden master so that this wouldn’t be a problem at the next code retreat.

Mocking behavior by sub classing

This technique was already known to me, but I didn’t know it would prove as useful as it did when dealing with legacy code.  Subclass To Test… basically the name says it all.  When the code is strongly coupled and mocking behavior of a class proves difficult, we can make the methods of that class virtual (overridable/@override).  We can then subclass this class and override the methods that we want to mock in order to test our initial method.  I’ll demonstrate this with some code:

Say we have the following legacy code:

public class BootyShaker
{
    private List<string> _booties;

    public int NrOfBooties
    {
        get { return _booties.Count(); }
    }

    public void CreateSomeBooties()
    {
        _booties = new List<string>();
        // Legacy code should contain magic numbers!!!
        for (int i = 0; i < 12; i++)
        {
            _booties.Add(CreateBooty());
        }
    }

    /// <summary>
    /// Disclaimer: I do not take any responsibility for suicides as a consequence of utilizing this method
    /// </summary>
    /// <returns></returns>
    public string CreateBooty()
    {
        string thaBootie = string.Empty;
        // Some complex behavior
        return thaBootie;
    }

}

So we want to test the CreateSomeBooties method, but the developer didn’t make it easy for us.  To make matters even worse, the CreateBooty method is called by a couple of dozen other methods that we haven’t tested yet!  So in order to mock the behavior without interfering with the rest of the untested (!) code, we’ll subclass the BootyShaker.

public class BootyShaker
{
    private List<string> _booties;

    public int NrOfBooties
    {
        get { return _booties.Count(); }
    }

    public void CreateSomeBooties()
    {
        _booties = new List<string>();
        // Legacy code should contain magic numbers!!!
        for (int i = 0; i < 12; i++)
        {
            _booties.Add(CreateBooty());
        }
    }

    /// <summary>
    /// Disclaimer: I do not take any responsibility for suicides as a consequence of utilizing this method
    /// </summary>
    /// <returns></returns>
    public virtual string CreateBooty()
    {
        string thaBootie = string.Empty;
        // Some complex behavior
        return thaBootie;
    }

}

[TestFixture]
public class BootyShakerTest
{
    [Test]
    public void CreateSomeBooties_should_create_12_booties()
    {
        var bootyShaker = new BootyShakerSubclassTest();
        bootyShaker.CreateSomeBooties();
        Assert.AreEqual(bootyShaker.NrOfBooties, 12);
    }

}

public class BootyShakerSubclassTest : BootyShaker
{
    public override string CreateBooty()
    {
        return "Mah booty";
    }
}

We need to make the method that we want to mock virtual (or overridable) so we can change the behavior in the subclass.  Then when we have successfully mocked the behavior we can write a test for the original method.  When the test is written we can refactor CreateSomeBooties.

Delegate the call by using Subclass To Test

In the session after that we went a step further.  Now that we’ve created the base class that mocks the behavior of the CreateBooty method we can refactor this method to the appropriate type and delegate the call.  Phew… I think I’ve lost you there :-P.  I’ll try to speak the universal language then… code:

So simply put the CreateBooty behavior doesn’t belong inside the BootyShaker so we’re going to try and extract the behavior out of the class.  So the first thing we need to do is inject the new type into the original BootyShaker:

public interface IBootyCreator
{
    string CreateBooty();
}

private readonly IBootyCreator _bootyCreator;

public BootyShaker(IBootyCreator bootyCreator)
{
    _bootyCreator = bootyCreator;
}

Don’t forget to adjust your subclass too, so you can still run your test:

public BootyShakerSubclassTest(IBootyCreator creator):base(creator)
{
       
}

Then move the behavior we overrode in our subclass into a mock:

public class MockBootyCreator : IBootyCreator
{
    public string CreateBooty()
    {
        return "Mah booty";
    }
}

Then delegate the call in our BootyShaker:

public virtual string CreateBooty()
{
    return _bootyCreator.CreateBooty();
}

And then remove the subclass and use the original BootyShaker with the injected MockBootyCreator:

[Test]
public void CreateSomeBooties_should_create_12_booties()
{
    var bootyShaker = new BootyShaker(new MockBootyCreator());
    bootyShaker.CreateSomeBooties();
    Assert.AreEqual(bootyShaker.NrOfBooties, 12);
}

So now you’ve moved the responsibility of creating booties to a new type!

Extracting pure functions

The next session was a bit different.  This time we had to refactor by extracting pure functions.  The good thing about doing this is that you do not alter the state of the object in which you’re working, which makes it very easy to test.

This wasn’t really new to me and I try to do this as much as possible, since working without state reduces a lot of the risk and makes testing a lot easier.
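A contrived before/after sketch (the class and names are made up for illustration): the calculation is pulled out of the stateful method into a pure function that depends only on its arguments, so it can be tested without setting up any object state.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Invoice
{
    private readonly List<decimal> _lines = new List<decimal>();
    public decimal Total { get; private set; }

    public void AddLine(decimal amount)
    {
        _lines.Add(amount);
        // Before: the calculation was inlined here, reading and writing
        // state, so testing it meant building up a whole Invoice first.
        Total = CalculateTotal(_lines, 0.21m);
    }

    // After: a pure function — no state read, no state written.
    // It can be tested with nothing but its arguments.
    public static decimal CalculateTotal(IEnumerable<decimal> lines, decimal vatRate)
    {
        decimal net = lines.Sum();
        return net + net * vatRate;
    }
}
```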

Actually do stuff

For the last two sessions JB decided to do something different.  This time we weren’t required to throw away our code, so we could actually get some work done! To be honest, at the end of the day my head felt like it was going to explode, so I needed the two extra sessions to be a bit more productive.

The problem I had with these two sessions is that you still had to switch pairs.  You would then have to adjust to the code of your new partner.  I think it would have been better if it was just one extra-long session, since it takes a couple of minutes to get into each other’s mindset.

So that was it for the legacy code retreat.  Big thanks to Erik for the organization and to JB for taking the time to guide us :-).

Join us next time!  Till then...

maandag 14 november 2011

Silverlight Developers are worried

It kind of feels like another episode of a dramatic television soap.  If you’ve been in a coma for the previous 5 months let me refresh your memory:


Then in episode 2 we saw that HTML and JavaScript will be technologies that you can use in Windows 8, but you can still write your everyday XAML and use the WinRT APIs for your metro apps.  You can even still use .NET for your 'legacy' applications.

In episode 3 Microsoft announced that they’ll be ditching browser plugins in the metro version of IE and will only make them available in legacy desktop mode.  Another sad episode indeed; the part that made a lot of people cry was this quote:

For what these sites do, the power of HTML5 makes more sense, especially in Windows 8 apps.

Now for the cliffhanger of this season’s ‘bye bye silvy’: rumor has it that Silverlight 5 will be the last Silverlight version released.  Will this turn out to be true?



Since the release of this rumor my blog visitors have spiked and one post in particular received a lot of views.  Based on this and the ranting that’s going on in several forums, I think it’s safe to say that a lot of Silverlight developers are worried about their future.

But should you be worried?  After all the technology isn’t likely to go away the next couple of years, Silverlight 5 still has to be released.  But what company is going to invest in a project that makes use of a technology that is already deprived of its future?  Wouldn’t it make more sense to switch to HTML 5 instead?

Then again, Silverlight will still be used in Windows Phone 7 and Xbox…. not really useful if you’re a corporate developer (unless you work at a company that uses Xboxes for their business apps).  Okay, granted that there will certainly be a future for Silverlight in Windows Phone 7, but how long will it take for Microsoft to merge their desktop OS with their phone OS, seeing that we already have quad-core mobile CPUs?

I don’t think Silverlight developers should worry just yet; the transition will go smoothly and there are still a ton of Silverlight applications that need to be maintained (or rewritten).  Though it won’t hurt you to start refreshing that good old HTML and JavaScript, as it looks like you’ll be needing it in the near future…

donderdag 10 november 2011

Remote Control your PC with Windows Phone 7

I just wanted to share a cool app with you.  It’s called Cool Remote and it allows you to remote control the desktop of your PC from an iPhone or a Windows Phone 7 device.  I installed it this weekend and have been trying it out since, and it just works great.  Even on 3G, the connection is stable and good enough to use.

By default the application tries to use port 80 or 81, but I had to switch it to a custom port (in my case 1222) that I had to open on my router.  Once that was done it was just a matter of connecting to the PC.

Another great thing about this app is that it’s free!

Here are the download links:


Have fun!

vrijdag 4 november 2011

Wordprocessing Serialization: move content between wordprocessing files

At the moment I’m working on a couple of side projects (which explains why I forgot to close last month off with a funny post). I’ll be blogging about them a bit more in the weeks or months ahead.  A small one-man project I’m working on has to do with OpenXml.  For those that have been following my blog, it won’t come as much of a surprise that I’m pretty fond of doing document generation with OpenXml.  I wanted to make it a bit easier though.

A recurring requirement I’ve been confronted with is the copying of content from one document to another.  It can even go so far that you must be able to store this content and inject it into multiple documents at a later moment.  One might think that this is an easy thing to do, but think again.  If your document is crammed with custom styles, numbering and tables, you’ll soon notice that a simple copy of the paragraphs won’t do the trick.

So what I’ve done is create a library that allows you to transfer content between two wordprocessing documents and keep the format of the tables and styles.  Not only can you transfer the content between documents, you can also save the content as a blob to your database or any other file.  You mark the content that you want to serialize with bookmarks.  You place two bookmarks in the source document to determine the start and end of the content, and one bookmark in the target document to indicate where to insert the copied content.

So how does it work? There are basically 3 main objects to the library that you need to know about.

  • ContentSerializer
  • ContentDeserializer
  • ContentInserter

ContentSerializer

The content serializer will serialize OpenXmlElements between the two bookmarks that you specify.  Here is a small example:

var memoryStream = new MemoryStream();

IContentSerializer serializer = new ContentSerializer("c:\\temp\\Test.docx");
serializer.SerializeElementsFullBetweenBookmarks(memoryStream,
                                                "profielstart",
                                                "profieleind");

As you can see the ContentSerializer accepts a string that points to the location of where the document resides.  You can then call the SerializeElementsFullBetweenBookmarks.  This method accepts 3 parameters:

  • The stream to which it will serialize to
  • The bookmark indicating the start of the text
  • The bookmark indicating the end of the text

The ContentSerializer will then serialize all OpenXmlElements between the two bookmarks together with their styles and numbering definition.  You can use the data in the stream to save the objects to a database or a binary file.

There is also a SerializeElementsBetweenBookmark function available which will only serialize the paragraphs between two bookmarks.  Yet for this example I want to be able to serialize everything.

ContentDeserializer

The ContentDeserializer will simply deserialize the serialized content into a format usable to inject into a document.  Depending on whether you serialized only the paragraphs or the full content with styles and numbering you can call one of the two following methods:

  • DeserializeContent: This will deserialize only paragraphs
  • DeserializeContentWithNumberingAndStyles: This method will deserialize the paragraphs together with the numbering and styles.

A small demonstration:

IContentDeserializer deserializer = new ContentDeserializer();
var contentWithNumbering = deserializer.DeserializeContentWithNumberingAndStyles(memoryStream);

This method will return an object that contains an IEnumerable of paragraphs, an IEnumerable of styles and a numbering definition. 

ContentInserter

When you have the content deserialized you might want to insert it into another docx file.  The ContentInserter will do just this for you.  All you need to do is provide it with the location of the wordprocessing document and then call the InsertElementsWithNumberingInDocument method to insert the deserialized content at the provided bookmark.

IContentInserter contentInserter = new ContentInserter("c:\\temp\\InsertInDocument.docx");
contentInserter.InsertElementsWithNumberingInDocument(contentWithNumbering, "Paste");

So the InsertElementsWithNumberingInDocument method accepts the following parameters:

  • An ElementsFull element which contains the deserialized content
  • The name of the bookmark after which the content will be inserted.
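Putting the three objects together, a complete round trip is just the snippets from above in sequence (same paths and bookmark names as in the earlier examples):

```csharp
using System.IO;

// Serialize everything between two bookmarks in the source document...
var memoryStream = new MemoryStream();
IContentSerializer serializer = new ContentSerializer("c:\\temp\\Test.docx");
serializer.SerializeElementsFullBetweenBookmarks(memoryStream, "profielstart", "profieleind");

// ...deserialize it, including styles and numbering...
IContentDeserializer deserializer = new ContentDeserializer();
var contentWithNumbering = deserializer.DeserializeContentWithNumberingAndStyles(memoryStream);

// ...and insert it after the "Paste" bookmark in the target document.
IContentInserter contentInserter = new ContentInserter("c:\\temp\\InsertInDocument.docx");
contentInserter.InsertElementsWithNumberingInDocument(contentWithNumbering, "Paste");
```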

There are a few things that you should take note of:

  • When the document is under revision control, all revisions will be accepted prior to serializing or inserting content!
  • The bookmarks are case sensitive
  • This projects uses the OpenXml Power tools, it’s a cool open source project so be sure to take a look at: http://powertools.codeplex.com/
  • The content that will be serialized between the two bookmarks currently only consists of paragraphs and all their children (run, text, runproperties, etc.) and tables and all their children (tablecell, etc.).

Some things that I might improve in the future are:
  • Include the serialization of Images
  • Include the serialization of more elements like drawings and graphs
  • Optimize the serialization of the styles and numbering part, at the moment there is no filter

 
Make any adjustments as you like, until next time ;)

vrijdag 28 oktober 2011

Editable Views in Entity Framework

Last time I explained how you could create an association between a view and a table in Entity Framework.  Now let’s take it a step further.

The inherent ‘problem’ with working with a view as opposed to working with a table is that a view is read only.  Yet with a few tweaks, Entity Framework can work with this view like it would with a table and update and delete rows through it.

I will be explaining two ways of updating your view in this post:

  1. For simple views we can just remove the defining query
  2. For more advanced views we can map stored procedures

A word of caution: only do this with the simplest of views, or even better, write stored procedures that will do this for you.  I’m just explaining the possibility that exists in Entity Framework and not really encouraging the practice of updating views.

So that being said, let’s get down to business.

Removing the defining query

Open your edmx in your XML editor and search for the line that describes your view:

          <EntitySet Name="vw_Person" EntityType="Model.Store.vw_Person" store:Type="Views" store:Schema="dbo" store:Name="vw_Person">
            <DefiningQuery>
              SELECT
              [vw_Person].[UserId] AS [UserId],
              [vw_Person].[PersonId] AS [PersonId],
              [vw_Person].[FirstName] AS [FirstName],
              [vw_Person].[LastName] AS [LastName],
              [vw_Person].[Sex] AS [Sex],
              [vw_Person].[Titel] AS [Titel],
              [vw_Person].[Email] AS [Email],
              [vw_Person].[Street] AS [Street],
              [vw_Person].[Street2] AS [Street2],
              [vw_Person].[Zipcode] AS [Zipcode],
              [vw_Person].[City] AS [City],
              [vw_Person].[Country] AS [Country],
              [vw_Person].[CountryCode] AS [CountryCode],
              [vw_Person].[OrganisatieNaam] AS [OrganisatieNaam],
              [vw_Person].[Rrn] AS [Rrn],
              [vw_Person].[OnlineModified] AS [OnlineModified],
              [vw_Person].[DatePasswordSent] AS [DatePasswordSent],
              [vw_Person].[CreateDate] AS [CreateDate],
              [vw_Person].[CountryId] AS [CountryId],
              [vw_Person].[ModifyDate] AS [ModifyDate],
              [vw_Person].[Image] AS [Image]
              FROM [dbo].[vw_Person] AS [vw_Person]
            </DefiningQuery>
          </EntitySet>

In order to make it updatable you have to remove the defining query.  When that is done also remove the store:Name attribute and  then remove the store prefix from the store:Schema attribute. When this is done the EntitySet element should look something like this:

          <EntitySet Name="vw_Person" EntityType="Model.Store.vw_Person" store:Type="Views" Schema="dbo" />

Now save the edmx and try to update your view like you would with any other entity.
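Updating a row through the view then looks like any other update (the context name MyEntities and the key value here are hypothetical, just for illustration):

```csharp
using System.Linq;

using (var context = new MyEntities())
{
    // vw_Person now behaves like a regular updatable entity set.
    var person = context.vw_Person.First(p => p.PersonId == 1);
    person.FirstName = "John";
    context.SaveChanges(); // issues an UPDATE against the view
}
```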

This method will only work when your view is updatable.  When your view contains a calculated value or something similar then this method will fall short.  In this case we’ll have to manually assign the stored procedures that are responsible for updating the entity.

Using stored procedures

So in my example I have made a simple stored procedure that will update the values of the Person view.  I then import the stored procedure by clicking ‘Update Model from Database’ and then selecting the appropriate stored procedure to add.

Once the stored procedure is added, click on your view and go to the ‘Mapping Details’ tool window.  On the left side of this tool window you can select ‘Map Entity to Functions’.  This is where you can define the Update, Insert and Delete functions that Entity Framework should use when performing these operations.



Voila that was it, see you next time ;-)

maandag 24 oktober 2011

How to map a relation between a View and a Table in Entity Framework

Imagine creating a view in your database with some data you need aggregated from different tables.  This data could be associated to another table in your database.  While you might not want to make any changes in your database you may want the conceptual diagram in your application to link up these entities so it makes it easier for you to develop with them.

The first thing you need to do is create your edmx and add the wanted tables and views.  Make sure that you have a primary key defined for your view; Entity Framework will by default take all non-nullable columns (… go figure).  Then, when you have the desired view and table on your entity diagram, you create a new association.



In the ‘Add Association’ window you can then select the two entities you want to have a relation between.  Deselect ‘Add foreign key properties to the ‘xxxx’ Entity’, as it will add a new property to the associated table that will act as a foreign key.  In this case it won’t be necessary because I already have my foreign keys mapped.



Once you’ve done this you should receive the following error:

No mapping specified for the following EntitySet/AssociationSet – PersonFunctie

So this is where the entity designer falls short.  We defined the association between the two entities, but Entity Framework cannot find the association in the storage model, so it cannot figure out which properties have to be used.  To fix this, open the edmx in an XML editor.  Then browse to the line that the error indicated and you’ll find this:

<Association Name="PersonFunctie">
  <End Type="eOpvolgingModel.Person" Role="Person" Multiplicity="1" />
  <End Type="eOpvolgingModel.Functie" Role="Functie" Multiplicity="*" />
</Association>


Now you just need to say to the conceptual model which properties define this association:

<Association Name="PersonFunctie">
  <End Type="eOpvolgingModel.Person" Role="Person" Multiplicity="1" />
  <End Type="eOpvolgingModel.Functie" Role="Functie" Multiplicity="*" />
  <ReferentialConstraint>
    <Principal Role="Person">
      <PropertyRef Name="PersonId" />
    </Principal>
    <Dependent Role="Functie">
      <PropertyRef Name="Persoon_Id" />
    </Dependent>
  </ReferentialConstraint>
</Association>



Save it, build it, run it, and voila.  Everything should work now.

Until next post!