Tuesday, October 22, 2013

Catel extends Prism modularity options through NuGet

Introduction


Prism provides a lot of modularity options by default. In fact, most of what you need to build a modularized application is already available in Prism, including ways to retrieve modules from a directory, set them up through the configuration file, and even options to create modularized Silverlight applications.

On the other hand, Catel provides support for module catalog composition in order to enable more than one modularity option in a single application.

Nowadays, initiatives around NuGet come from everywhere: Shimmer, which shares the goal of ClickOnce (but works); OctopusDeploy for release promotion and application deployment; or even the recently released extension manager of ReSharper 8.0.

So, the Catel team is introducing a new one, and frankly we wondered why no one thought of this before:

What if you would be able to distribute your application's modules through NuGet?

If you want to know how, take a look.


Enabling the NuGet-based modularity option


With the forthcoming 3.8 version of Catel you will be able to install and load modules from a NuGet repository. The only things you have to do are:
  1. Write a module as you usually do with Prism (or better, in combination with Catel).
  2. Create a NuGet package for the module and publish it to your favorite gallery.
  3. Use and configure the NuGetBasedModuleCatalog in the application’s bootstrapper.
The first two steps are well documented. For details, see “creating and publishing a package” and “declaring modules from NuGet” in the respective NuGet and Catel documentation.


So, now we can focus on the third one.

Use and configure the NuGetBasedModuleCatalog in the application’s bootstrapper


In order to simplify the following explanation, we published a package into the official NuGet Gallery named Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA. This package contains an assembly with a single class like this one:
public class NuGetBasedModuleA : ModuleBase
{
    public NuGetBasedModuleA(IServiceLocator container)
        : base("NuGetBasedModuleA", null, container)
    {
    }

    protected override void OnInitialized()
    {
        var messageService = this.Container.ResolveType<IMessageService>();
        messageService.Show("NuGetBasedModuleA Loaded!!");
    }
}

Let’s start writing an application that will use this publicly packaged module. The main point of interest here is the usage and configuration of the NuGetBasedModuleCatalog, so just write the bootstrapper as follows:

public class Boot : BootstrapperBase<MainWindow, NuGetBasedModuleCatalog>
{
  protected override void ConfigureModuleCatalog()
  {
    base.ConfigureModuleCatalog();
    this.ModuleCatalog.AddModule(new ModuleInfo 
          { 
               ModuleName = "NuGetBasedModuleA", 
               ModuleType = "Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA.NuGetBasedModuleA, Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA", 
               Ref = "Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA, 1.0.0-BETA", 
               InitializationMode = InitializationMode.OnDemand
          });
  }
}

Notice how the Ref property of ModuleInfo is used to specify the package id and, optionally, the version number. If the version number is not specified, the application will always try to download the latest version, keeping the modules up to date.
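For example, to always pull the latest published version of the module, simply omit the version from Ref (a sketch based on the bootstrapper above; only the Ref value changes):

this.ModuleCatalog.AddModule(new ModuleInfo 
      { 
           ModuleName = "NuGetBasedModuleA", 
           ModuleType = "Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA.NuGetBasedModuleA, Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA", 
           // No version after the package id: the catalog will resolve the latest available version
           Ref = "Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA", 
           InitializationMode = InitializationMode.OnDemand
      });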

Finally, to make this demo work, just put the ModuleManagerView control in the XAML of the main window, run the application, and click the check box.
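For reference, the main window markup could look something like this minimal sketch (the class name and the catel xmlns mapping shown here are assumptions; check the Catel documentation for the exact namespace of ModuleManagerView):

<Window x:Class="NuGetModularityDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:catel="http://catel.codeplex.com">
    <Grid>
        <!-- Lists the configured modules and lets you load the on-demand ones via check boxes -->
        <catel:ModuleManagerView />
    </Grid>
</Window>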

Conclusions


Yes, we did it again. Catel keeps working alongside Prism. Now we have extended Prism’s modularity options through NuGet in order to provide a powerful mechanism for application module distribution.

If you want to try it, just get the latest beta package of Catel.Extensions.Prism and discover for yourself all the scenarios you can handle with this Catel-exclusive feature.

Saturday, September 28, 2013

Keeping CatelR# updated with ReSharper 8.0



CatelR# is now compatible with the latest version of ReSharper (R#).

We also keep compatibility with earlier R# versions, including 6.0, 7.0 and 7.1. The full installer of the latest CatelR# beta, which also includes the R# 8.0 assemblies, is available for download here.

R# 8.0 includes a NuGet-based Extension Manager, and the Catel team has started publishing the CatelR# extension in the Extension Gallery.

Yes, CatelR# is alive. We keep working on more features and are also waiting for your ideas. Got one? Let us know!

So, make sure to install the latest version of R# and you will not miss any of CatelR#'s forthcoming cool features.

Friday, September 27, 2013

How to Configure HTTP Access to Analysis Services on Internet Information Services from PowerShell

Introduction


"You can enable HTTP access to Analysis Services by configuring MSMDPUMP.dll, an ISAPI extension that runs in Internet Information Services (IIS) and pumps data to and from client applications and an Analysis Services server. This approach provides an alternative means for connecting to Analysis Services when your BI solution calls for the following capabilities:
  • Client access is over Internet or extranet connections, with restrictions on which ports can be enabled.
  • Client connections are from non-trusted domains in the same network. {…}"
The text above is how the “Configure HTTP Access to Analysis Services on Internet Information Services (IIS)” guide starts. These steps are typical when setting up the development environment or production server of a BI solution that moves data over HTTP directly from an instance of SQL Server Analysis Services.

Repeating these steps over and over again encouraged me to write a script that automates most of this guide with PowerShell.

The Code

So, without more introduction here is the source:

1) Import or load the WebAdministration module or PSSnapin (depending on the IIS version)
$webAdminModule = Get-Module -ListAvailable | Where-Object { $_.Name -eq "WebAdministration" }
if ($webAdminModule -ne $null) 
{
    Write-Host "Importing WebApplication Module..."
    Import-Module -ModuleInfo $webAdminModule
}
else
{
    Write-Host "Loading WebApplication PSSnapin..."
    Add-PSSnapin WebAdministration
}

2) Create and set up the application pool
Write-Host "Creating application pool OLAP..."
New-WebAppPool -Name OLAP -ErrorAction SilentlyContinue
# v2.0.50727 = .NET 2.0 runtime
Set-ItemProperty IIS:\AppPools\OLAP -Name ManagedRuntimeVersion -Value v2.0.50727
# 1 = Classic pipeline mode
Set-ItemProperty IIS:\AppPools\OLAP -Name ManagedPipelineMode -Value 1

3) Set the application pool credentials
Write-Host "Enter application pool credentials"
$userName = Read-Host "UserName" 
$securedPassword = Read-Host "Password" -AsSecureString
# Convert the secure string back to plain text so it can be passed to the process model
$password = [Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($securedPassword));
# IdentityType 3 = SpecificUser
Set-ItemProperty IIS:\AppPools\OLAP -Name ProcessModel -Value @{ userName="$userName"; password="$password"; IdentityType=3 }

4) Create the web application
Write-Host "Creating OLAP Application in Default Web Site..."
New-Item 'IIS:\Sites\Default Web Site\OLAP' -Type Application -PhysicalPath "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi"
Set-ItemProperty "IIS:\Sites\Default Web Site\OLAP" -Name ApplicationPool -Value OLAP
Set-WebConfiguration system.web/authentication 'IIS:\Sites\Default Web Site\OLAP' -value @{ mode = 'None' }

5) Add the handler mapping (it can't be done with the available cmdlets, so Microsoft.Web.Administration is used directly)
Write-Host "Adding the handler mapping..."
[System.Reflection.Assembly]::LoadFrom("C:\windows\system32\inetsrv\Microsoft.Web.Administration.dll")
$isapiPath = "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi\msmdpump.dll"
$isapiConfiguration = Get-WebConfiguration "/system.webServer/security/isapiCgiRestriction/add[@path='$isapiPath']/@allowed"  
if (-not($isapiConfiguration -eq $null -or $isapiConfiguration.value))
{  
    Write-Host "Enabling ISAPI Module - $isapiPath"
    Set-WebConfiguration "/system.webServer/security/isapiCgiRestriction/add[@path='$isapiPath']/@allowed" -value "True" -PSPath:IIS:\  
}

$serverManager = New-Object Microsoft.Web.Administration.ServerManager
if ($isapiConfiguration -eq $null)
{
    Write-Host "Adding and enabling ISAPI Module - $isapiPath"
    $appHostConfig = $serverManager.GetApplicationHostConfiguration();
    $isapiCgiRestrictionSection = $appHostConfig.GetSection("system.webServer/security/isapiCgiRestriction");
    $isapiCgiRestrictionCollection =  $isapiCgiRestrictionSection.GetCollection();
    
    $cgiRestrictionElement = $isapiCgiRestrictionCollection.CreateElement("add");
    $cgiRestrictionElement.SetAttributeValue("path", "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi\msmdpump.dll");
    $cgiRestrictionElement.SetAttributeValue("allowed", "true");
    $cgiRestrictionElement.SetAttributeValue("groupId", "");

    $isapiCgiRestrictionCollection.AddAt(0, $cgiRestrictionElement);
}

$webConfig = $serverManager.GetWebConfiguration("Default Web Site", "OLAP");
$handlersSection = $webConfig.GetSection("system.webServer/handlers");
$handlersCollection = $handlersSection.GetCollection();

$handlerElement = $handlersCollection.CreateElement("add");
$handlerElement.SetAttributeValue("name", "OLAP");
$handlerElement.SetAttributeValue("path", "msmdpump.dll");
$handlerElement.SetAttributeValue("verb", "*");
$handlerElement.SetAttributeValue("modules", "IsapiModule");
$handlerElement.SetAttributeValue("scriptProcessor", "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi\msmdpump.dll");

$handlersCollection.AddAt(0, $handlerElement);

$serverManager.CommitChanges();

The main differences from the original, non-automated guide are: 
  • It doesn’t copy the files from ‘C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi’ to ‘c:\inetpub\wwwroot\olap’; it just publishes the original directory directly as the application. Our deployments (production or workstation) typically involve a single instance of SQL Server Analysis Services.
  • It doesn’t modify the msmdpump.ini file, because the default configuration is enough.
  • It doesn’t grant data access permissions; it assumes that the application pool’s credentials already have the right ones.

Conclusions

I agree with you: this script could be more parameterized. But with this simple “coded” blog post I just want to send you a message.

When you have to face automating tasks of your daily work, follow these tips:
  • Just do one small step at a time.
  • Don’t add more complexity to the solution than you need.
  • Don’t overwhelm yourself; do it for fun. 
  • The code will be better tomorrow. 
  • One day, as a result of your small improvements, the solution will look just like you always wanted. 

But if you have something to automate, just start NOW ;-). 

Wednesday, August 14, 2013

If you can’t defeat them, just “join” them – Part I

Introduction

I spend a lot of time using Subversion as the version control system for my development team. The fact is I love Subversion and strongly think there is no centralized version control system (CVCS) like it.

On the other hand, my organization promotes the usage of Microsoft Team Foundation Server (TFS) for Application Life-Cycle Management (ALM), and I also think TFS is a great integrated environment. However, some of its services are at a disadvantage, mainly in terms of usability, against some third-party software.

But I’m not here to talk about TFS vs. Subversion, even though the user experience of VisualSVN is better than Team Explorer's (even in VS2012).

I just want to share how I complied with the directive to shut down my local team Subversion server and move all my sources to the centralized TFS, while keeping all the cool features of my “non-integrated” development environment.

What features will I miss if I move to the TFS version control system?

I received a lot of criticism because I don’t use the integration features of Visual Studio Team System. Actually, I have a non-fully-integrated ALM environment (TFS and Subversion), but one that always keeps traces between sources and tasks. I know that TFS has built-in support for this and also has a check-in policy system to “force” some team disciplines.

On the other hand, I needed to move my team toward some self-discipline development practices to control the existing “chaos”: uncommented commits, unplanned or poorly reported work, and more. In such a scenario, I was “forced” to ensure the Subversion and TFS integration and also to “route” my team in the right direction. 

I thought I would be able to handle this situation starting from the Subversion side with relative ease, and I was right. I implemented a hook to enforce the usage of these practices and also solve some integration issues. The continuous integration server (TeamCity) also helps me track tasks against sources, listing all related tasks from TFS. The fact is I also modified the existing TFS integration plugin to work as I expected, but that is part of another story.

Therefore, with a simple Subversion hook (sketched below), I got a centralized, server-side “check-in” policy system to:

Avoid empty commit messages: Every single commit must contain a description.

Avoid unplanned work: Every single commit message must contain the id of a work item of type “Sprint Backlog Item” in the “In Progress” state.

Automatically track the task history: As every single commit message contains a work item id, all commit messages are attached to the history of the work item. 

Accept some relevant commit comments or messages: A "relevant" commit comment is one that indicates a feature completion. It should be related to an addition, modification, removal or bug fix, but only when those goals are actually "Done, Done". The usage of this pattern also automatically turns the state of the related work items to “Done”. These messages can also be used to generate the release notes document for a specific version.
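As an illustration only (this is not the author's actual hook), the core of such a pre-commit hook boils down to validating the pending log message. Here is a minimal, hypothetical C# sketch in which the #1234 work item id pattern is an assumption:

using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

internal static class PreCommitChecks
{
    private static readonly Regex WorkItemPattern = new Regex(@"#\d+");

    private static int Main(string[] args)
    {
        // args[0] = repository path, args[1] = transaction name, as passed by Subversion
        string repository = args[0];
        string transaction = args[1];

        // Ask svnlook for the pending log message of the transaction
        var svnlook = Process.Start(new ProcessStartInfo
        {
            FileName = "svnlook",
            Arguments = string.Format("log -t {0} \"{1}\"", transaction, repository),
            RedirectStandardOutput = true,
            UseShellExecute = false
        });
        string message = svnlook.StandardOutput.ReadToEnd().Trim();
        svnlook.WaitForExit();

        if (message.Length == 0)
        {
            Console.Error.WriteLine("Your commit has been blocked because you didn't give any log message.");
            return 1; // a non-zero exit code rejects the commit
        }

        if (!WorkItemPattern.IsMatch(message))
        {
            Console.Error.WriteLine("The commit message must reference a work item id (e.g. #1234).");
            return 1;
        }

        return 0; // commit accepted
    }
}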

Migrating from SVN to TFS version control system

My support team started with some existing migration tools, but the outcome was incomplete (runtime errors included). I had actually thought about moving from SVN to Git, because the next version of TFS comes with built-in support for Git repositories, but apparently “they” can’t wait, and yes, I have to move my sources to TFS.

Therefore, with the Git idea in my mind, I came across a couple of Git extensions and started to type the following commands:

> git svn clone http://svn.mydivision:3690/repository
> git tfs init http://tfs.mycompany:8080/tfs/DefaultCollection/MyDivision $/MyDivision 
> git tfs fetch 
> git rebase remotes/tfs/default master 
> git tfs rcheckin --authors="authors.txt"

Everything worked like a charm; the full migration could be done. Now, after years of discussions, I lose and “they” win, and I have to start over again.

The fact is, I think TFS was built on top of one of the best Business Intelligence (BI) platforms ever made and can be configured to produce a lot of multidimensional metrics that are very useful for ALM. But this migration to the TFS version control system is a step backwards, at least for me.


Hook the TFS commits from the server side

As I wrote before, TFS has its own check-in policy system. Actually, it is a “client-side evaluation” check-in policy system, and I don’t like that: such policies can be overridden by the developers. The TFS check-in policy system looks more like a warning system.

But TFS also comes with a server-side event handling system, which can be plugged into by deploying an assembly into the web service plugins directory of the application tier of the current TFS installation.

So I started to rewrite all my useful check-in policies, referencing the right TFS assemblies.

For instance, the “Avoid empty commit messages” policy looks like this:

public class AvoidEmptyCommitMessagePolicy : ISubscriber
{
    public string Name
    {
        get { return " AvoidEmptyCommitMessagePolicy"; }
    }

    public SubscriberPriority Priority
    {    
        get { return SubscriberPriority.Normal; }
    }
    
    public EventNotificationStatus ProcessEvent(TeamFoundationRequestContext requestContext, NotificationType notificationType, object notificationEventArgs, out int statusCode, out string statusMessage, out ExceptionPropertyCollection properties)
    {
        statusCode = 0;
        properties = null;
        statusMessage = string.Empty;
        var eventNotificationStatus = EventNotificationStatus.ActionPermitted;
        if (notificationType == NotificationType.DecisionPoint)
        {
            var checkinNotification = notificationEventArgs as CheckinNotification;
            if (checkinNotification != null && string.IsNullOrEmpty(checkinNotification.Comment))
            {
                statusMessage = "Your commit has been blocked because you didn't give any log message.\n Please write a log message describing the purpose of your changes and \n then try committing again.";
                eventNotificationStatus = EventNotificationStatus.ActionDenied;
            }
        }

        return eventNotificationStatus;
    }
}

The “Avoid unplanned work” looks like this:

public class AvoidUnplannedWorkPolicy : ISubscriber
{
        /*...*/
        if (checkinNotification != null)
        {
            CheckinNotificationInfo notificationInfo = checkinNotification.NotificationInfo;
            CheckinNotificationWorkItemInfo[] workItemInfos = notificationInfo.WorkItemInfo;
            if (notificationType == NotificationType.DecisionPoint)
            {
                if (workItemInfos == null || workItemInfos.Length == 0)
                {
                    statusMessage = "Your commit has been blocked because you didn't give any work item linked to this changeset.";
                    eventNotificationStatus = EventNotificationStatus.ActionDenied;
                }
                else
                {
                    var teamFoundationLocationService = requestContext.GetService<TeamFoundationLocationService>();
                    var uri = new Uri(teamFoundationLocationService.ServerAccessMapping.AccessPoint + "/" + requestContext.ServiceHost.Name);
                    TfsTeamProjectCollection tfsTeamProjectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(uri);
                    var workItemStore = tfsTeamProjectCollection.GetService<WorkItemStore>();

                    // Look up each linked work item and deny the check-in if any of them
                    // is not a "Sprint Backlog Item" in the "In Progress" state (requires System.Linq)
                    if (workItemInfos.Any(workItemInfo =>
                        {
                            var workItem = workItemStore.GetWorkItem(workItemInfo.Id);
                            return workItem.Type.Name != "Sprint Backlog Item" || workItem.State != "In Progress";
                        }))
                    {
                        statusMessage = "Your commit has been blocked because you didn't link any \"Sprint Backlog Item\" in the \"In Progress\" state to this changeset.";
                        eventNotificationStatus = EventNotificationStatus.ActionDenied;
                    }
                }
            }
        }
        /*...*/
}

and so on.

The AvoidUnplannedWorkPolicy is tightly coupled to the Scrum for Team System process template. It could be made more generic, with cross-process-template support, but it’s just an example. 

The fact is, I’m now thinking about a lot of policies to improve my team's practices, but if I tell you, I'll have to kill you ;-)

Conclusions

It seems like I can live with this transition (to the TFS version control system), but it isn’t all happiness. Now I have to wait for an approval process to deploy my server-side policies on the main TFS server. But if “they” don’t like such an intrusion, then I will probably fall back into the “chaos”.

In the meantime I just wrote this blog post.

PS: Yes, I'm back in the game again, but now without Subversion, with my ankle ligaments torn as the outcome of a basketball game, and with my team Villa Clara as the new Champion of the Cuban National Baseball League ;-)

Thursday, January 17, 2013

Cache storage explained

Introduction 

Caching is about improving application performance. The most expensive performance costs of an application are related to data retrieval, typically when the data must be moved across the network or loaded from disk. But some data changes slowly (a.k.a. nonvolatile data) and doesn't need to be re-read with the same frequency as volatile data.

So, to improve your application's performance and handle this "nonvolatile" data with a pretty clean approach, Catel comes with a CacheStorage<TKey, TValue> class. Notice that the first generic parameter is the type of the key and the second is the type of the value that will be stored, just like a Dictionary<TKey, TValue>, but CacheStorage isn't just a dictionary. 

This class allows you to retrieve data and store it in the cache with a single statement, and it also helps you handle the expiration policy if you need one.

Let’s take a look at these features:

Initializing a cache storage

To initialize a cache storage field in your class, use the following code:

/*...*/
private readonly CacheStorage<string, Person> _personCache = new CacheStorage<string, Person>(allowNullValues: true);

Retrieve data and store it into the cache with a single statement

To retrieve data and store it in the cache with a single statement, use the following code:

/*...*/
var person = _personCache.GetFromCacheOrFetch(Id, () => service.FindPersonById(Id));

When this statement is executed more than once with the same key, the value will be retrieved from the cache storage instead of from the service call (notice the usage of a lambda expression). The service call will be executed just the first time, or again if the item is removed from the cache, either manually or automatically due to the expiration policy.
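To make that behavior concrete, here is a small sketch reusing the service call from the example above:

/*...*/
// First call with this key: cache miss, so the lambda runs and the result is stored
var first = _personCache.GetFromCacheOrFetch(Id, () => service.FindPersonById(Id));

// Second call with the same key: the lambda is not executed; the cached value is returned
var second = _personCache.GetFromCacheOrFetch(Id, () => service.FindPersonById(Id));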

Using cache expiration policies

The cache expiration policies add removal behavior to cache storage items. A policy signals that an item has expired, making the cache storage remove the item automatically.

A default cache expiration policy can be specified in the cache storage constructor:

/*...*/
private readonly CacheStorage<string, Person> _personCache = new CacheStorage<string, Person>(() => ExpirationPolicy.Duration(TimeSpan.FromMinutes(5)), true);

You can also specify an expiration policy for a specific item when it is stored:

/*...*/
var person = _personCache.GetFromCacheOrFetch(id, () => service.GetPersonById(id), ExpirationPolicy.Duration(TimeSpan.FromMinutes(10)));

The default cache policy specified at cache storage initialization will be used if no expiration policy is specified when the item is stored.

Built-in expiration policies

Catel comes with several built-in expiration policies, listed below:

  • AbsoluteExpirationPolicy (time-based): the cache item will expire at the absolute expiration DateTime.
    ExpirationPolicy.Absolute(new DateTime(2012, 12, 21))
  • DurationExpirationPolicy (time-based): the cache item will expire using the duration TimeSpan to calculate the absolute expiration from DateTime.Now.
    ExpirationPolicy.Duration(TimeSpan.FromMinutes(5))
  • SlidingExpirationPolicy (time-based): the cache item will expire using the duration TimeSpan to calculate the absolute expiration from DateTime.Now, but every time the item is requested, the expiration is extended again by the specified TimeSpan.
    ExpirationPolicy.Sliding(TimeSpan.FromMinutes(5))
  • CustomExpirationPolicy (custom): the cache item will expire using the expire function, and will execute the reset action if one is specified. The following sample shows how to rebuild a sliding expiration policy with a custom expiration policy:
    var startDateTime = DateTime.Now;
    var duration = TimeSpan.FromMinutes(5);
    ExpirationPolicy.Custom(() => DateTime.Now > startDateTime.Add(duration), () => startDateTime = DateTime.Now);
  • CompositeExpirationPolicy (custom): combines several expiration policies into a single one. It can be configured to expire when any or all of the policies expire.
    new CompositeExpirationPolicy().Add(ExpirationPolicy.Sliding(TimeSpan.FromMinutes(5))).Add(ExpirationPolicy.Custom(() => ...))

Implementing your own expiration cache policy

If the CustomExpirationPolicy is not enough, you can implement your own expiration policy to make a cache item expire based on, for example, a custom event. You can also add code to reset the expiration policy when the item is read from the cache before it expires (just like SlidingExpirationPolicy does).

To implement an expiration cache policy use the following code template:

public class MyExpirationPolicy : ExpirationCachePolicy
{
   public MyExpirationPolicy()
       : base(true)
   {
   }

   public override bool IsExpired
   {
      get
      {
         // Add your custom expiration code to detect whether the item has expired,
         // and return the result as a bool
      }
   }

   public override void OnReset()
   {
      // Add your custom code to reset the policy when the item is read
   }
}

Notice that the base constructor has a parameter indicating whether the policy can be reset. Therefore, if you call the base constructor with false, the OnReset method will never be called.
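Once implemented, the custom policy can be used like the built-in ones. A hypothetical usage sketch, assuming the cache storage accepts any policy instance where the earlier examples passed ExpirationPolicy.Duration(...):

/*...*/
// Store an item with the custom policy instead of a built-in one
var person = _personCache.GetFromCacheOrFetch(Id, () => service.FindPersonById(Id), new MyExpirationPolicy());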

Conclusion

If you are using caching in your current projects, you are probably using a different caching strategy. Now that you have learned about the caching possibilities in Catel, you should definitely try them out and be amazed by the possibilities :)

Friday, November 2, 2012

More about Catel and Prism in combination

Introduction

Since Catel 3.2, the support for Prism has been enhanced. Several classes were introduced into Catel.Extensions.Prism to simplify module and bootstrapper implementation, and thanks to the dependency injection support of the ServiceLocator, everything can be done with Catel without using third-party containers.

But now, with 3.4, a lot of improvements have been introduced and the Prism extension is working like a charm.

Let's take a look at a couple of scenarios.

Modularity with Catel explained

You have probably noticed the close resemblance of the UI to Prism's QuickStart examples (with MEF or Unity). Actually, this example is an adaptation of the original sources in order to show how to use the Prism features in combination with Catel.

Here are some implementation details:

1) Catel comes with two versions of a generic class, named BootstrapperBase, that help you initialize the shell and the module catalog with ease. There are several methods that you can override, such as ConfigureContainer or ConfigureModuleCatalog, in order to set up the IoC container (a.k.a. the ServiceLocator) and the module catalog respectively, just like this:

    /// <summary>
    /// Initializes Prism to start this quickstart Prism application to use Catel.
    /// </summary>
    public class QuickStartBootstrapper : BootstrapperBase<Shell, CompositeModuleCatalog>
    {
        #region Fields

        /// <summary>
        /// The callback logger.
        /// </summary>
        private readonly CallbackLogger callbackLogger = new CallbackLogger();

        #endregion

        #region Methods

        /// <summary>
        /// Configures the <see cref="IServiceLocator"/>. May be overwritten in a derived class to add specific
        /// type mappings required by the application.
        /// </summary>
        protected override void ConfigureContainer()
        {
            base.ConfigureContainer();
            Container.RegisterTypeIfNotYetRegistered<IModuleTracker, ModuleTracker>();

            LogManager.AddListener(callbackLogger);

            Container.RegisterInstance<CallbackLogger>(callbackLogger);
        }

        /// <summary>
        /// Configures the <see cref="IModuleCatalog"/> used by Prism.
        /// </summary>
        protected override void ConfigureModuleCatalog()
        {
            ModuleCatalog.Add(Catel.Modules.ModuleCatalog.CreateFromXaml(new Uri("/ModularityWithCatel.Silverlight;component/ModulesCatalog.xaml", UriKind.Relative)));

            var moduleCatalog = new ModuleCatalog();
            // Module A is defined in the code.
            Type moduleAType = typeof(ModuleA.ModuleA);
            moduleCatalog.AddModule(new ModuleInfo(moduleAType.Name, moduleAType.AssemblyQualifiedName, "ModuleD"));

            // Module C is defined in the code.
            Type moduleCType = typeof(ModuleC.ModuleC);
            moduleCatalog.AddModule(new ModuleInfo { ModuleName = moduleCType.Name, ModuleType = moduleCType.AssemblyQualifiedName, InitializationMode = InitializationMode.OnDemand });

            ModuleCatalog.Add(moduleCatalog);
        }

        #endregion
    }

2) Catel also comes with the CompositeModuleCatalog class to deal with many catalogs as one. The current implementation actually "allows" cross-catalog module dependencies.

3) For a module implementation, the common approach is to inherit from ModuleBase, just like this:


    /// <summary>
    /// A module for the quickstart.
    /// </summary>
    public class ModuleA : ModuleBase
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="ModuleA"/> class.
        /// </summary>
        /// </param name="moduleTracker">The module tracker.</param>        
        public ModuleA(IModuleTracker moduleTracker) : base("ModuleA", moduleTracker)
        {
        }
    }

4) The common implementation of the application class just instantiates the bootstrapper and calls the Run method:

var bootstrapper = new QuickStartBootstrapper();
bootstrapper.Run();

But what will happen if you call RunWithSplashScreen instead? Try it with the latest beta and discover the feature for yourself, or just take a look at Orchestra.
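In that case, the application class looks the same, with only the final call changed (the splash-screen behavior itself is the part to discover):

var bootstrapper = new QuickStartBootstrapper();
bootstrapper.RunWithSplashScreen();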

Enhanced working with regions

Some time ago, I wrote about a feature where you were able to inject a view into a region from its view model:
 
 
	var employeeViewModel = new EmployeeViewModel();
	var uiVisualizerService = GetService<IUIVisualizerService>();
	uiVisualizerService.Activate(employeeViewModel, "MainRegion");

1) But now you can deal with more than one shell and do this:

	uiVisualizerService.Activate(employeeViewModel, this, "MainRegion");

Here, the second parameter is the parent view model.

2) You are now also able to inject views (through their view models) into any window, just like the previous example but in combination with the experimental extension method Show:

	var uiVisualizerService = GetService<IUIVisualizerService>();
	var windowViewModel = new WindowWithRegionViewModel();
	uiVisualizerService.Show(windowViewModel, () => { uiVisualizerService.Activate(new EmployeeViewModel(), windowViewModel, "WindowMainRegion"); });

Conclusions

1) The usage of third-party Prism extensions like MEF or Unity is no longer required. Catel's ServiceLocator supports dependency injection and is configured to work just as Prism expects; in fact, the ServiceLocatorAdapter does this job.
 
2) Now, you are able to have a nice programming session with Prism & Catel.

Thursday, August 23, 2012

Creating a view model with a model and mappings with CatelR#

There are tons of yarns based on MVVM developers' experiences behind the Catel framework. Some of them are well documented in the Catel docs, and one of my favorites is the one named "Creating a view model with a model and mappings".

When I started to read it, I identified myself as one of those developers that map all the view model properties back to the model.

Let me remind you how it starts:

"During the use of the MVVM pattern, we noticed that lots and lots of developers have a model, and map the values of the model to all properties of the view model. When the UI closes, the developers map all the properties back to the model. All this redundant code is not necessary when using the view models of Catel." More...

This single feature made my interest in Catel grow until I became one of the members of the development team. But that is part of another story.

Let’s go back to the Catel feature again. If you don’t remember how it works, here is a summary.

Basically, if you want to define a model, the only thing you have to do is decorate a view model property with the ModelAttribute. Then, if you want to expose a model property as a view model property without writing the mapping-back code, you must decorate the exposed property with the ViewModelToModelAttribute, just like this:

    /// <summary>
    /// The person view model.
    /// </summary>
    public class PersonViewModel : ViewModelBase
    {
        #region Static Fields

        /// <summary>Register the FirstName property so it is known in the class.</summary>
        public static readonly PropertyData FirstNameProperty = RegisterProperty("FirstName", typeof(string));

        /// <summary>Register the Person property so it is known in the class.</summary>
        public static readonly PropertyData PersonProperty = RegisterProperty("Person", typeof(Person));

        #endregion

        #region Public Properties

        /// <summary>
        /// Gets or sets the first name.
        /// </summary>
        [ViewModelToModel("Person")]
        public string FirstName
        {
            get { return GetValue<string>(FirstNameProperty); }
            set { SetValue(FirstNameProperty, value); }
        }

        /// <summary>
        /// Gets or sets the person.
        /// </summary>
        [Model]
        public Person Person
        {
            get { return GetValue<Person>(PersonProperty); }
            set { SetValue(PersonProperty, value); }
        }

        #endregion
    }
 
This example will automatically create a mapping between Person.FirstName and PersonViewModel.FirstName.
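For completeness, here is a minimal sketch of what the Person model could look like, assuming it follows Catel's usual property registration pattern (the model class itself is not shown in the original example):

public class Person : ModelBase
{
    /// <summary>Register the FirstName property so it is known in the class.</summary>
    public static readonly PropertyData FirstNameProperty = RegisterProperty("FirstName", typeof(string));

    /// <summary>
    /// Gets or sets the first name.
    /// </summary>
    public string FirstName
    {
        get { return GetValue<string>(FirstNameProperty); }
        set { SetValue(FirstNameProperty, value); }
    }
}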

This feature is amazing and Catel distributes some code snippets to accelerate writing such code:
  • vm - declare a view model
  • vmpropmodel - declare a property as model on a view model
  • vmpropviewmodeltomodel - declare a property as a pass-through property on a view model

But could you imagine yourself writing this code (and more) as fast as possible (near the speed of light)? You don't believe me?

Watch the movie, and believe me, it is in slow motion ;)


This is a forthcoming feature of CatelR#, now powered by Catel itself.

You got more ideas? Let us know!
