Tuesday, October 22, 2013

Catel extends Prism modularity options through NuGet

Introduction


Prism provides a lot of modularity options by default. The fact is that most of what you need to build a modularized application is already available in Prism: it includes ways to retrieve modules from a directory, set them up through the configuration file, and even options to create modularized Silverlight applications.

On top of that, Catel provides support for module catalog composition, enabling more than one modularity option in a single application.

Nowadays, initiatives around NuGet come from everywhere: Shimmer, with the same goal as ClickOnce (but it works); OctopusDeploy, for release promotion and application deployment; or even the recently released extension manager of ReSharper 8.0.

So, the Catel team comes with a new one, and frankly we wondered why no one thought of this before:

What if you could distribute your application's modules through NuGet?

If you want to know how, take a look.


Enabling NuGet based modularity option


With the forthcoming 3.8 version of Catel you will be able to install and load modules from a NuGet repository. The only things you have to do are:
  1. Write a module as you already do with Prism (or better, in combination with Catel).
  2. Create a NuGet package for the module and publish it into your favorite gallery.
  3. Use and configure the NuGetBasedModuleCatalog in the application’s bootstrapper.
The first two steps are well documented. For details, see creating and publishing a package, declaring modules from NuGet, and the Catel documentation.


So, now we can focus on the third one.

Use and configure the NuGetBasedModuleCatalog in the application’s bootstrapper


To simplify the following explanation, we published a package named Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA to the official NuGet Gallery. The package contains an assembly with a single class like this one:
 
public class NuGetBasedModuleA : ModuleBase
{
    public NuGetBasedModuleA(IServiceLocator container)
        : base("NuGetBasedModuleA", null, container)
    {
    }

    protected override void OnInitialized()
    {
        // Resolve the message service from the container and show that the module was loaded.
        var messageService = this.Container.ResolveType<IMessageService>();
        messageService.Show("NuGetBasedModuleA Loaded!!");
    }
}

Let’s start writing an application that uses this publicly packaged module. The main point of interest here is the usage and configuration of the NuGetBasedModuleCatalog, so just write the bootstrapper as follows:

public class Boot : BootstrapperBase<MainWindow, NuGetBasedModuleCatalog>
{
  protected override void ConfigureModuleCatalog()
  {
    base.ConfigureModuleCatalog();
    this.ModuleCatalog.AddModule(new ModuleInfo 
          { 
               ModuleName = "NuGetBasedModuleA", 
               ModuleType = "Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA.NuGetBasedModuleA, Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA", 
               Ref = "Catel.Examples.WPF.Prism.Modules.NuGetBasedModuleA, 1.0.0-BETA", 
               InitializationMode = InitializationMode.OnDemand
          });
  }
}

Notice how the Ref property of ModuleInfo is used to specify the package id and, optionally, the version number. If the version number is not specified, the application will always try to download the latest version, keeping the modules up to date.

Finally, to make this demo work, just put the ModuleManagerView control in the XAML of the main window, run the application, and click the check box.

Conclusions


Yes, we did it again. Catel keeps working alongside Prism. Now we have extended Prism’s modularity options through NuGet to provide a powerful mechanism for distributing application modules.

If you want to try it, just get the latest beta package of Catel.Extensions.Prism and discover for yourself all the scenarios you can handle with this Catel-exclusive feature.

Saturday, September 28, 2013

Keep CatelR# updated with ReSharper 8.0



CatelR# is now compatible with the latest version of ReSharper (R#).

We also keep compatibility with earlier R# versions, including 6.0, 7.0 and 7.1. The full installer of the latest CatelR# beta, which also includes the R# 8.0 assemblies, is available for download here.

R# 8.0 includes a NuGet-based extension manager, and the Catel team has started publishing the CatelR# extension in the Extension Gallery.

Yes, CatelR# is alive. We keep working on more features and are also waiting for your ideas. Got one? Let us know!

So, make sure to install the latest version of R# and you won't miss any of the forthcoming CatelR# cool features.

Friday, September 27, 2013

How to Configure HTTP Access to Analysis Services on Internet Information Services from PowerShell

Introduction


"You can enable HTTP access to Analysis Services by configuring MSMDPUMP.dll, an ISAPI extension that runs in Internet Information Services (IIS) and pumps data to and from client applications and an Analysis Services server. This approach provides an alternative means for connecting to Analysis Services when your BI solution calls for the following capabilities:
  • Client access is over Internet or extranet connections, with restrictions on which ports can be enabled.
  • Client connections are from non-trusted domains in the same network. {…}"
The text above is how the “Configure HTTP Access to Analysis Services on Internet Information Services (IIS)” guide starts. Its steps are typical when setting up the development environment or production server of a BI solution that moves data over HTTP directly from an instance of SQL Server Analysis Services.

Repeating these steps over and over again encouraged me to write a script that automates most of the steps of this guide with PowerShell.

The Code

So, without more introduction here is the source:

1) Import or load the WebAdministration Module or PSSnapin (depends on IIS version)
$webAdminModule = Get-Module -ListAvailable | Where-Object { $_.Name -eq "WebAdministration" }
if ($webAdminModule -ne $null) 
{
    Write-Host "Importing WebApplication Module..."
    Import-Module -ModuleInfo $webAdminModule
}
else
{
    Write-Host "Loading WebApplication PSSnapin..."
    Add-PSSnapin WebAdministration
}

2) Create and setup application pool
Write-Host "Creating application pool OLAP..."
New-WebAppPool -Name OLAP -ErrorAction SilentlyContinue
Set-ItemProperty IIS:\AppPools\OLAP -Name ManagedRuntimeVersion -Value v2.0.50727
Set-ItemProperty IIS:\AppPools\OLAP -Name ManagedPipelineMode -Value 1

3) Set application pool credentials
Write-Host "Enter application pool credentials"
$userName = Read-Host "UserName" 
$securedPassword = Read-Host "Password" -AsSecureString
$password = [Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($securedPassword));
Set-ItemProperty IIS:\AppPools\OLAP -Name ProcessModel -Value @{ userName="$userName"; password="$password"; IdentityType=3 }

4) Create the web application
Write-Host "Creating OLAP Application in Default Web Site..."
New-Item 'IIS:\Sites\Default Web Site\OLAP' -Type Application -PhysicalPath "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi"
Set-ItemProperty "IIS:\Sites\Default Web Site\OLAP" -Name ApplicationPool -Value OLAP
Set-WebConfiguration system.web/authentication 'IIS:\Sites\Default Web Site\OLAP' -value @{ mode = 'None' }

5) Add the handler mapping (it can't be done with the available cmdlets, so we use Microsoft.Web.Administration directly)
Write-Host "Adding the handler mapping..."
[System.Reflection.Assembly]::LoadFrom("C:\windows\system32\inetsrv\Microsoft.Web.Administration.dll")
$isapiPath = "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi\msmdpump.dll"
$isapiConfiguration = Get-WebConfiguration "/system.webServer/security/isapiCgiRestriction/add[@path='$isapiPath']/@allowed"  
if (-not($isapiConfiguration -eq $null -or $isapiConfiguration.value))
{  
    Write-Host "Enabling ISAPI Module - $isapiPath"
    Set-WebConfiguration "/system.webServer/security/isapiCgiRestriction/add[@path='$isapiPath']/@allowed" -value "True" -PSPath:IIS:\  
}

$serverManager = New-Object Microsoft.Web.Administration.ServerManager
if ($isapiConfiguration -eq $null)
{
    Write-Host "Adding and enabling ISAPI Module - $isapiPath"
    $appHostConfig = $serverManager.GetApplicationHostConfiguration();
    $isapiCgiRestrictionSection = $appHostConfig.GetSection("system.webServer/security/isapiCgiRestriction");
    $isapiCgiRestrictionCollection =  $isapiCgiRestrictionSection.GetCollection();
    
    $cgiRestrictionElement = $isapiCgiRestrictionCollection.CreateElement("add");
    $cgiRestrictionElement.SetAttributeValue("path", "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi\msmdpump.dll");
    $cgiRestrictionElement.SetAttributeValue("allowed", "true");
    $cgiRestrictionElement.SetAttributeValue("groupId", "");

    $isapiCgiRestrictionCollection.AddAt(0, $cgiRestrictionElement);
}

$webConfig = $serverManager.GetWebConfiguration("Default Web Site", "OLAP");
$handlersSection = $webConfig.GetSection("system.webServer/handlers");
$handlersCollection = $handlersSection.GetCollection();

$handlerElement = $handlersCollection.CreateElement("add");
$handlerElement.SetAttributeValue("name", "OLAP");
$handlerElement.SetAttributeValue("path", "msmdpump.dll");
$handlerElement.SetAttributeValue("verb", "*");
$handlerElement.SetAttributeValue("modules", "IsapiModule");
$handlerElement.SetAttributeValue("scriptProcessor", "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi\msmdpump.dll");

$handlersCollection.AddAt(0, $handlerElement);

$serverManager.CommitChanges();

The main differences with the original non-automated guide are: 
  • It doesn’t copy the files from ‘C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi’ to ‘c:\inetpub\wwwroot\olap’; it just publishes the original directory directly as the application. Our deployments (production or workstation) typically involve a single instance of SQL Server Analysis Services.
  • It doesn’t modify the msmdpump.ini file, because the default configuration is enough.
  • It doesn't grant data access permissions; it assumes that the application pool’s credentials already have the right data access permissions.

Conclusions

I agree with you. This script could be more parameterized. But with this simple “coded” blog post I just want to send you a message.

When you have to face automation tasks in your daily work, follow these tips:
  • Just take one small step at a time.
  • Don’t add more complexity to the solution than you need.
  • Don’t overwhelm yourself; do it for fun.
  • The code will be better tomorrow.
  • One day, as a result of your small improvements, the solution will look just like you always wanted.

But if you have something to automate, just start NOW ;-).

Wednesday, August 14, 2013

If you can’t defeat them, just “join” them – Part I

Introduction

I spend a lot of time using Subversion as the version control system for my development team. The fact is I love Subversion and strongly think there is no centralized version control system (CVCS) like it.

On the other hand, my organization promotes the usage of Microsoft Team Foundation Server (TFS) for Application Life-Cycle Management (ALM), and I also think TFS is a great integrated environment. However, some of its services are at a disadvantage, mainly in terms of usability, compared with some third-party software.

But I’m not here to talk about TFS vs. Subversion, even though the user experience of VisualSVN is better than Team Explorer’s (even in VS2012).

I just want to share how I complied with the directive to shut down my team’s local Subversion server and move all my sources to the centralized TFS, while keeping all the cool features of my “non-integrated” development environment.

What features will I miss if I move to the TFS version control system?

I received a lot of criticism because I don’t use the integration features of Visual Studio Team System. Actually, I have a non-fully integrated environment for ALM (TFS and Subversion), but one that always keeps traces between sources and tasks. I know that TFS has built-in support for this and also has a check-in policy system to “force” some team disciplines.

On the other hand, I needed to move my team toward some self-discipline development practices to control the existing “chaos”: uncommented commits, unplanned or poorly reported work, and more. In such a scenario, I was “forced” to ensure the Subversion and TFS integration and also “route” my team in the right direction.

I thought that I would be able to handle this situation starting from Subversion with relative ease, and I was right. I implemented a hook to enforce the usage of these practices and also solve some integration issues. The continuous integration server (TeamCity) also helps me track tasks against sources, listing all related tasks from TFS. The fact is I also modified the existing TFS integration plugin to work as I expected, but that is part of another story.

Therefore, with a simple Subversion hook (a minimal sketch follows the list below) I get a centralized, server-side “check-in” policy system to:

Avoid empty commit messages: Every single commit must contain a description.

Avoid unplanned work: Every single commit message must contain the id of the work item of type “Sprint Backlog Item” in “In Progress” state.

Automatically track the task history: As every single commit message contains a work item id, all commit messages are attached to the work item’s history.

Accept relevant commit comments or messages: A "relevant" commit comment is one that indicates a feature completion. It should be related to an addition, modification, removal, or bug fix, but only when those goals are actually "Done, Done". The usage of this pattern also automatically turns the state of the related work items to “Done”. These messages can also be used to generate the release notes document for a specific version.
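
To give an idea of what such a hook can look like, here is a minimal sketch written as a small console program (on Windows a Subversion hook can be any executable). It only covers the first two policies; the svnlook call is standard, but the class name, the "#1234" work item id pattern and the messages are illustrative assumptions, not the actual hook my team uses.

using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

internal static class PreCommitHook
{
    private static int Main(string[] args)
    {
        // Subversion invokes the pre-commit hook with the repository path and the transaction name.
        string repository = args[0];
        string transaction = args[1];

        // Read the log message of the pending transaction through svnlook.
        var startInfo = new ProcessStartInfo("svnlook", string.Format("log -t \"{0}\" \"{1}\"", transaction, repository))
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        string message;
        using (var process = Process.Start(startInfo))
        {
            message = process.StandardOutput.ReadToEnd().Trim();
            process.WaitForExit();
        }

        // Policy 1: avoid empty commit messages.
        if (string.IsNullOrEmpty(message))
        {
            Console.Error.WriteLine("Your commit has been blocked because you didn't give any log message.");
            return 1; // a non-zero exit code makes Subversion reject the commit
        }

        // Policy 2: avoid unplanned work - require a work item id (illustrative '#1234' pattern).
        if (!Regex.IsMatch(message, @"#\d+"))
        {
            Console.Error.WriteLine("Your commit has been blocked because the log message doesn't reference a work item id.");
            return 1;
        }

        return 0; // commit accepted
    }
}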

Migrating from SVN to TFS version control system

My support team started with some existing migration tools, but the outcome was incomplete (runtime errors included). On the other hand, I actually thought about moving from SVN to Git, because the next version of TFS comes with built-in support for Git repositories, but apparently “they” couldn’t wait and yes, I had to move my sources to TFS.

Therefore, with the Git idea in mind, I came across a couple of Git extensions and started to type the following commands:

> git svn clone http://svn.mydivsion:3690/repository
> git tfs init http://tfs.mycompany:8080/tfs/DefaultCollection/MyDivision $/MyDivision 
> git tfs fetch 
> git rebase /remotes/default/tfs master 
> git tfs rcheckin --authors="authors.txt"

Everything works like a charm. The full migration can be done. Now, after years of discussions, I lost and “they” won. Now, I have to start over again.

The fact is that I think TFS was built on top of one of the best Business Intelligence (BI) platforms ever made, and it can be configured to produce a lot of multidimensional metrics that are very useful for ALM. But this migration to the TFS version control system is a step backwards, at least for me.


Hook the TFS commits from the server side

As I wrote before, TFS has its own check-in policy system. Actually, it is a “client-side evaluation” check-in policy system, and I don’t like that. Such policies can also be overridden by the developers, so the TFS check-in policy system looks more like a warning system.

But TFS comes with a server-side event handling system that can be plugged into by deploying an assembly into the web service plugins directory of the application tier of the current TFS installation.

I started to rewrite all my useful check-in policies, referencing the right TFS assemblies.

For instance, the “Avoid empty commit messages” policy looks like this:

public class AvoidEmptyCommitMessagePolicy : ISubscriber
{
    public string Name
    {
        get { return "AvoidEmptyCommitMessagePolicy"; }
    }

    public SubscriberPriority Priority
    {
        get { return SubscriberPriority.Normal; }
    }

    public Type[] SubscribedTypes()
    {
        // Only check-in notifications are relevant for this policy.
        return new[] { typeof(CheckinNotification) };
    }

    public EventNotificationStatus ProcessEvent(TeamFoundationRequestContext requestContext, NotificationType notificationType, object notificationEventArgs, out int statusCode, out string statusMessage, out ExceptionPropertyCollection properties)
    {
        statusCode = 0;
        properties = null;
        statusMessage = string.Empty;
        var eventNotificationStatus = EventNotificationStatus.ActionPermitted;
        if (notificationType == NotificationType.DecisionPoint)
        {
            var checkinNotification = notificationEventArgs as CheckinNotification;
            if (checkinNotification != null && string.IsNullOrEmpty(checkinNotification.Comment))
            {
                statusMessage = "Your commit has been blocked because you didn't give any log message.\nPlease write a log message describing the purpose of your changes and\nthen try committing again.";
                eventNotificationStatus = EventNotificationStatus.ActionDenied;
            }
        }

        return eventNotificationStatus;
    }
}

The “Avoid unplanned work” looks like this:

public class AvoidUnplannedWorkPolicy : ISubscriber
{
    /*...*/
    if (checkinNotification != null)
    {
        CheckinNotificationInfo notificationInfo = checkinNotification.NotificationInfo;
        CheckinNotificationWorkItemInfo[] workItemInfos = notificationInfo.WorkItemInfo;
        if (notificationType == NotificationType.DecisionPoint)
        {
            if (workItemInfos == null || workItemInfos.Length == 0)
            {
                statusMessage = "Your commit has been blocked because you didn't link any work item to this changeset.";
                eventNotificationStatus = EventNotificationStatus.ActionDenied;
            }
            else
            {
                var teamFoundationLocationService = requestContext.GetService<TeamFoundationLocationService>();
                var uri = new Uri(teamFoundationLocationService.ServerAccessMapping.AccessPoint + "/" + requestContext.ServiceHost.Name);
                TfsTeamProjectCollection tfsTeamProjectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(uri);
                var workItemStore = tfsTeamProjectCollection.GetService<WorkItemStore>();

                // Look up each linked work item and verify its type and state.
                if (workItemInfos.Select(workItemInfo => workItemStore.GetWorkItem(workItemInfo.Id))
                                 .Any(workItem => workItem.Type.Name != "Sprint Backlog Item" || workItem.State != "In Progress"))
                {
                    statusMessage = "Your commit has been blocked because you didn't link any \"Sprint Backlog Item\" in state \"In Progress\" to this changeset.";
                    eventNotificationStatus = EventNotificationStatus.ActionDenied;
                }
            }
        }
    }
    /*...*/
}

and so on.

The AvoidUnplannedWorkPolicy is tightly tied to the Scrum for Team System process template. It could be made more generic, with cross-process-template support, but it’s just an example.

The fact is I’m now thinking about a lot of policies to improve my team’s practices, but if I tell you, I’ll have to kill you ;-)

Conclusions

It seems like I can live with this transition (to the TFS version control system), but it isn’t all happiness. Now I have to wait for an approval process to deploy my server-side policies on the main TFS server. But if “they” don’t like such an intrusion, then I will probably fall back into the “chaos”.

In the meantime I just wrote this blog post.

PS: Yes, I'm back in the game again, but now without Subversion, with my ankle ligaments broken as the outcome of a basketball game, and with my team Villa Clara as the new champion of the Cuban National Baseball League ;-)

Thursday, January 17, 2013

Cache storage explained

Introduction 

Caching is about improving application performance. The most expensive performance costs of an application are usually related to data retrieval, typically when data has to be moved across the network or loaded from disk. But some data changes slowly (a.k.a. nonvolatile data) and doesn't need to be re-read as frequently as volatile data.

So, to improve your application's performance and handle this "nonvolatile" data with a pretty clean approach, Catel comes with a CacheStorage<TKey, TValue> class. Notice that the first generic parameter is the type of the key and the second is the type of the value that will be stored, just like a Dictionary<TKey, TValue>; but CacheStorage is not just a Dictionary.

This class allows you to retrieve data and store it into the cache with a single statement and also helps you to handle the expiration policy if you need it.

Let’s take a look at these features:

Initializing a cache storage

To initialize a cache storage field in your class, use the following code:

/*...*/
private readonly CacheStorage<string, Person> _personCache = new CacheStorage<string, Person>(allowNullValues: true);

Retrieve data and store it into the cache with a single statement

To retrieve data and store it in the cache with a single statement, use the following code:

/*...*/
var person = _personCache.GetFromCacheOrFetch(Id, () => service.FindPersonById(Id));

When this statement is executed more than once with the same key, the value will be retrieved from the cache storage instead of from the service call (notice the usage of a lambda expression). The service call will be executed only the first time, or again if the item is removed from the cache, either manually or automatically due to the expiration policy.
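
For example, assuming the _personCache field and the service from the snippets above, only the first call below actually invokes the service; the second call with the same key is served from the cache:

// Hypothetical usage sketch based on the snippets above.
var first = _personCache.GetFromCacheOrFetch("1234", () => service.FindPersonById("1234"));  // executes the service call
var second = _personCache.GetFromCacheOrFetch("1234", () => service.FindPersonById("1234")); // returned from the cache, no service call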

Using cache expiration policies

The cache expiration policies add a removal behavior to the cache storage items. A policy signals that an item has expired so that the cache storage removes the item automatically.

A default cache expiration policy can be specified in the cache storage constructor:

/*...*/
private readonly CacheStorage<string, Person> _personCache = new CacheStorage<string, Person>(() => ExpirationPolicy.Duration(TimeSpan.FromMinutes(5)), true);

You can specify a specific expiration policy for an item when it's stored:

/*...*/
var person = _personCache.GetFromCacheOrFetch(id, () => service.GetPersonById(id), ExpirationPolicy.Duration(TimeSpan.FromMinutes(10)));

The default cache policy specified at cache storage initialization will be used if no expiration policy is specified when the item is stored.
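
For example, assuming the _personCache field declared above with the default 5-minute duration policy, a call that omits the expiration policy argument simply falls back to that default:

// No expiration policy argument: the default policy from the constructor applies.
var person = _personCache.GetFromCacheOrFetch(id, () => service.FindPersonById(id));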

Built-in expiration policies

Catel comes with several built-in expiration policies, listed below with their type, description, and an initialization code sample:

  • AbsoluteExpirationPolicy (time-based): the cache item expires at an absolute expiration DateTime.
    ExpirationPolicy.Absolute(new DateTime(2012, 12, 21))

  • DurationExpirationPolicy (time-based): the cache item expires after the given duration TimeSpan, used to calculate the absolute expiration from DateTime.Now.
    ExpirationPolicy.Duration(TimeSpan.FromMinutes(5))

  • SlidingExpirationPolicy (time-based): the cache item expires after the given duration TimeSpan, calculated from DateTime.Now, but every time the item is requested the expiration is extended again by the specified TimeSpan.
    ExpirationPolicy.Sliding(TimeSpan.FromMinutes(5))

  • CustomExpirationPolicy (custom): the cache item expires when the expiration function returns true, and the reset action is executed (if specified) when the item is read. The sample shows how to build a sliding expiration with a custom policy.
    var startDateTime = DateTime.Now;
    var duration = TimeSpan.FromMinutes(5);
    ExpirationPolicy.Custom(() => DateTime.Now > startDateTime.Add(duration), () => startDateTime = DateTime.Now)

  • CompositeExpirationPolicy (custom): combines several expiration policies into a single one; it can be configured to expire when any or all of the policies expire.
    new CompositeExpirationPolicy().Add(ExpirationPolicy.Sliding(TimeSpan.FromMinutes(5))).Add(ExpirationPolicy.Custom(() => ...))

Implementing your own expiration cache policy

If the CustomExpirationPolicy is not enough, you can implement your own expiration policy to make a cache item expire based on a custom event. You can also add some code to reset the expiration policy if the item is read from the cache before it expires (just like SlidingExpirationPolicy does).

To implement an expiration cache policy use the following code template:

public class MyExpirationPolicy : ExpirationPolicy
{
   public MyExpirationPolicy() : base(true)
   {
   }

   public override bool IsExpired
   {
      get
      {
         // Add your custom expiration code to detect if the item has expired
      }
   }

   public override void OnReset()
   {
      // Add your custom code to reset the policy if the item is read.
   }
}

Notice that the base constructor has a parameter that indicates whether the policy can be reset. Therefore, if you call the base constructor with false, the OnReset method will never be called.
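
As a usage sketch (assuming the _personCache and service from the earlier snippets), the custom policy can then be passed when storing an item, just like the built-in policies:

// Hypothetical usage of the custom policy sketched above.
var person = _personCache.GetFromCacheOrFetch(id, () => service.FindPersonById(id), new MyExpirationPolicy());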

Conclusion

If you are using caching in your current projects, you are probably using a different caching strategy. Now that you have learned about the caching possibilities in Catel, you should definitely try them out and be amazed at the possibilities :)
