Friday, November 15, 2013

Mongo Style Capped Collections In MS SQL Server

Suppose you want to store temporary data in MS SQL Server. In general this isn't good practice because of relatively low performance, but when the expected load isn't too high it can be a reasonable choice. The lifetime of each piece of data is short, yet new pieces are created often, so you need a way to keep the size of the storage small.

One possible solution is to remove outdated records at a fixed interval using SQL Server Agent or a dedicated console app running as a service. The downside is that we introduce an extra dependency which has to be configured on every server where you want to use the storage.
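
For illustration, the console-app variant could look roughly like the sketch below. It is only a sketch: the CreatedAt column, the one-hour lifetime and the five-minute interval are assumptions for this example and are not part of the storage table defined later.

using System;
using System.Data.SqlClient;
using System.Threading;

public static class StorageCleaner
{
    // A minimal sketch of the scheduled-cleanup approach. The [CreatedAt]
    // column, the lifetime and the interval are illustrative assumptions.
    public static void Run(string connectionString, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "DELETE FROM [dbo].[Storage] WHERE [CreatedAt] < DATEADD(HOUR, -1, GETUTCDATE())",
                connection))
            {
                connection.Open();
                command.ExecuteNonQuery(); // purge records older than one hour
            }

            // Wait for the next cleanup cycle (or exit early on cancellation).
            token.WaitHandle.WaitOne(TimeSpan.FromMinutes(5));
        }
    }
}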

The second option is to use a data structure which prevents the storage from overflowing. Mongo capped collections (which are basically circular buffers) are a good example of this kind of structure. Unfortunately there is nothing similar in MS SQL Server, but it's rather easy to build one.

Let's start by defining a table:

CREATE TABLE [dbo].[Storage] (
 [Key] int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
 [Data] varbinary(MAX),
 [Version] rowversion NOT NULL
)

Retrieving data from the storage by key should be fast, so a clustered index on the Key column is very handy here. We could also create an index on the rowversion column to find records quickly during updates, but this should be done carefully: the number of reads and writes is almost the same, so whatever performance we gain on reads we lose on writes.

Next we should decide whether to initialize the storage with empty values. I think it is beneficial: if the entire storage is pre-filled, we won't have to choose between an update and an insert every time we want to store some bytes; we simply update the oldest record with a new value.

declare @MaxItems int = 999,
 @ItemIndex int = 0
while @ItemIndex < @MaxItems
begin
 INSERT INTO [dbo].[Storage] ([Data]) VALUES(null)
 SET @ItemIndex = @ItemIndex + 1
end

This will populate our storage with empty data. We use auto-increment for simplicity; in fact keys can be generated however we like.

Last but not least, we need a mechanism for inserting data into the storage. The following stored procedure will do the job:

CREATE PROCEDURE [dbo].[insertToStorage]
 @Data varbinary(MAX)
AS
BEGIN
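 -- UPDLOCK takes an update lock on the row being claimed; READPAST makes
 -- concurrent callers skip rows that are already locked instead of blocking.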
 UPDATE [dbo].[Storage] WITH (UPDLOCK, READPAST)
 SET [Data] = @Data
 OUTPUT inserted.[Key]
 WHERE [Version] = (SELECT MIN([Version]) FROM [dbo].[Storage])
END
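
Calling the procedure from .NET is then straightforward. Here is a minimal sketch; the CappedStorage wrapper class and the connection string parameter are illustrative only:

using System;
using System.Data;
using System.Data.SqlClient;

public static class CappedStorage
{
    // Illustrative wrapper; the class shape and connection handling are assumptions.
    public static int Insert(string connectionString, byte[] data)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("[dbo].[insertToStorage]", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            // Size -1 maps to varbinary(MAX).
            command.Parameters.Add("@Data", SqlDbType.VarBinary, -1).Value =
                (object)data ?? DBNull.Value;

            connection.Open();
            // The OUTPUT clause returns the updated [Key] as a single-row result set.
            return (int)command.ExecuteScalar();
        }
    }
}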

Thursday, October 31, 2013

Fixing Interface Segregation Principle Violations And Applying Dependency Injection Using Unity

Following the SOLID principles in object-oriented design is a good thing: it allows you to produce more maintainable and testable code.

Let's look at a common situation where the "I" of the SOLID acronym, the Interface Segregation principle, is violated. Assume you have an application that reads various settings from different places (app.config, a database, a service etc.). For that purpose one can create a class that encapsulates all settings access logic along with some data type conversions. The class is then injected as a dependency according to the Dependency Inversion principle (the "D" in SOLID). Here is an example written in C# that uses Unity as an IoC container:

public static void Main()
{
 var container = new UnityContainer();
 container.RegisterType<ISiteSettings, SiteSettings>();
 container.RegisterType<EmailSender>();
 container.RegisterType<TitlePrinter>();
 container.RegisterType<ApplicationCache>();

 container.Resolve<EmailSender>().SendEmail();
 container.Resolve<TitlePrinter>().Print();
 container.Resolve<ApplicationCache>().Insert();
 Console.ReadKey();
}

First we register the settings and some helpers which use them: EmailSender, TitlePrinter and ApplicationCache. Right after the registration we resolve the helpers and trigger their methods. Our IoC container injects the settings automatically, so we don't have to resolve them and pass them to the helper classes manually. Here is what we have so far for the settings:

public class SiteSettings : ISiteSettings
{
 public SiteSettings()
 {
  Console.WriteLine("----------New Site settings instance created------------");
 }

 public int CacheTimeoutMinutes
 {
  get { return 1; }
 }

 public string Title
 {
  get { return "My awesome site"; }
 }

 public string EmailSenderName
 {
  get { return "Vasya"; }
 }

 public string EmailSenderAddress
 {
  get { return "vasya@domain.com"; }
 }
}

public interface ISiteSettings
{
 int CacheTimeoutMinutes { get; }

 string Title { get; }

 string EmailSenderName { get; }

 string EmailSenderAddress { get; }
}

The helpers are all basically the same, so I'm only going to show EmailSender here:

public class EmailSender
{
 private readonly ISiteSettings m_settings;

 public EmailSender(ISiteSettings settings)
 {
  m_settings = settings;
 }

 public void SendEmail()
 {
  Console.WriteLine(
   "Email is sent by {0} from {1}",
   m_settings.EmailSenderName,
   m_settings.EmailSenderAddress);
 }
}

The settings are injected via the constructor and stored in the m_settings field. I'm convinced constructors are the best place to perform injection, for three reasons:
1. There is no way to miss a dependency, as you have to explicitly specify all of them during creation.
2. It is clear to the class consumer what the class relies on.
3. If you find yourself writing huge constructors, it's time to refactor the class to follow the Single Responsibility principle (the "S" in SOLID).

Once we call the SendEmail method, we simply print a message based on m_settings. It's just an example; in a real app you would place some valuable code there. Let's look at the output:

----------New Site settings instance created------------
Email is sent by Vasya from vasya@domain.com
----------New Site settings instance created------------
My awesome site
----------New Site settings instance created------------
Item inserted into cache. Duration is set to 1

The good news is that the app works. However, we have a few problems here. First of all, the ISiteSettings interface contains all application settings, and helper classes are forced to depend on all of its members even when they need only a few. This is a typical violation of the Interface Segregation principle, and the code quickly becomes messy as the application grows. Second, we create a separate instance of SiteSettings on every injection, which is redundant.

In order to address the first problem let's group all the settings by their purpose and split the ISiteSettings interface into three smaller ones:

public interface IAppearanceSettings
{
 string Title { get; }
}
 
public interface ICacheSettings
{
 int CacheTimeoutMinutes { get; }
}

public interface IEmailSettings
{
 string EmailSenderName { get; }

 string EmailSenderAddress { get; }
}

public class SiteSettings : IEmailSettings, ICacheSettings, IAppearanceSettings
{
 ...
}

Now each helper class receives only what it really needs.

public class EmailSender
{
 private readonly IEmailSettings m_settings;

 public EmailSender(IEmailSettings settings)
 {
  m_settings = settings;
 }

 public void SendEmail()
 {
  Console.WriteLine(
   "Email is sent by {0} from {1}",
   m_settings.EmailSenderName,
   m_settings.EmailSenderAddress);
 }
}

We've just fixed the Interface Segregation principle violation. However, we still create an instance of the SiteSettings class every time we resolve a dependency. One possible way of solving the problem is to create a single instance of SiteSettings and specify it during the registration phase like this:

var container = new UnityContainer();
SiteSettings settings = new SiteSettings();
container.RegisterType<IEmailSettings>(new InjectionFactory(i => settings));
container.RegisterType<IAppearanceSettings>(new InjectionFactory(i => settings));
container.RegisterType<ICacheSettings>(new InjectionFactory(i => settings));

This will work. However, you have to create the instance at the very beginning of the application lifetime. In some circumstances that might not be desirable (for example, if you want lazy initialization). In this case you could use the following approach:

var container = new UnityContainer();
container.RegisterType<SiteSettings>(new CustomLifetimeManager());
container.RegisterType<EmailSender>(new InjectionConstructor(new ResolvedParameter<SiteSettings>()));
container.RegisterType<TitlePrinter>(new InjectionConstructor(new ResolvedParameter<SiteSettings>()));
container.RegisterType<ApplicationCache>(new InjectionConstructor(new ResolvedParameter<SiteSettings>()));

Pay attention to the SiteSettings registration: we pass an instance of CustomLifetimeManager there. We also have to explicitly tell the IoC container that we want to inject a SiteSettings instance into our helpers. By default, Unity would try to find registrations for IEmailSettings, IAppearanceSettings and ICacheSettings and fail (as we did not register anything for these types). Here is what CustomLifetimeManager looks like:

public class CustomLifetimeManager : LifetimeManager
{
 private object m_value;

 public override object GetValue()
 {
  return m_value;
 }

 public override void SetValue(object newValue)
 {
  m_value = newValue;
 }

 public override void RemoveValue()
 {
  m_value = null;
 }
}

We use the m_value field to save the SiteSettings instance and return it when necessary. In web apps we could store the value somewhere else (in HttpContext, for example). And here is the final console output:

----------New Site settings instance created------------
Email is sent by Vasya from vasya@domain.com
My awesome site
Item inserted into cache. Duration is set to 1
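
For reference, Unity's built-in ContainerControlledLifetimeManager can achieve the same shared-instance behavior without a custom lifetime manager. A minimal sketch:

var container = new UnityContainer();
// One shared SiteSettings instance, created lazily on first resolve.
container.RegisterType<SiteSettings>(new ContainerControlledLifetimeManager());
// Map each small interface onto the same SiteSettings registration, so all
// three interfaces resolve to that single instance.
container.RegisterType<IEmailSettings, SiteSettings>();
container.RegisterType<IAppearanceSettings, SiteSettings>();
container.RegisterType<ICacheSettings, SiteSettings>();

With these mappings the helpers keep their interface-typed constructors and no InjectionConstructor hints are needed; the custom manager is still the way to go when the instance has to live somewhere special, as in the HttpContext scenario mentioned above.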

All the code mentioned above can be found in my github repository. Hope this helps.

Wednesday, July 17, 2013

How to stop, start or restart an IIS site on a remote machine with PowerShell

Sometimes during script execution you need to shut down a site, do some work and start it again. PowerShell remoting is a good way to go. To allow scripts to be executed on a remote machine, log on to it as an administrator and run the following command:

winrm quickconfig

Note that confirmation is required. After that you should be able to run commands on the remote machine. The first approach is straightforward: stop the entire IIS server and then start it again, like this:

Invoke-Command -ComputerName $targetServer -ScriptBlock {iisreset /STOP}
# do your stuff here
Invoke-Command -ComputerName $targetServer -ScriptBlock {iisreset /START}

where $targetServer is a variable that contains the name of the server. To restart IIS, use iisreset without arguments. This approach is fine when there is a single site on the server, but usually you don't want to shut down all sites at once. In that case you can stop just a few like this:

Invoke-Command `
-ComputerName $targetServer `
-ScriptBlock {import-module WebAdministration; Stop-Website $args[0]; Stop-Website $args[1]} `
-ArgumentList @($site1, $site2)

where $site1 and $site2 are the sites you want to stop. To start the sites use a similar command:

Invoke-Command `
-ComputerName $targetServer `
-ScriptBlock {import-module WebAdministration; Start-Website $args[0]; Start-Website $args[1]} `
-ArgumentList @($site1, $site2)

Hope this helps.

Friday, June 21, 2013

Config transformations without msbuild

In my previous post I wrote about automating the build and deployment process. I used SlowCheetah to transform all configuration files (not just web.config) in a solution and some PowerShell scripting to push the result to the target server.

In this post I'm going to show how the transformation can be done without triggering a build. This might be helpful when you want to send the resulting files for review before deployment.

I would like to thank Outcoldman and AlexBar as their discussion led me to this solution. Consider the following lines:

using Microsoft.Web.XmlTransform;

namespace ConfigTransformer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            string sourceFile = args[0];
            string transformFile = args[1];
            string resultFile = args[2];

            var transformation = new XmlTransformation(transformFile);
            var transformableDocument = new XmlTransformableDocument();
            transformableDocument.Load(sourceFile);
            transformation.Apply(transformableDocument);
            transformableDocument.Save(resultFile);
        }
    }
}

I get the input data from command line arguments and use the Microsoft.Web.XmlTransform.dll library to perform the transformation. Now it's time to develop this app into a real-life solution. Here is the list of arguments we need:
1. Source folder: a path to the solution's directory (I want to transform all files in my solution).
2. Destination folder: the app should place transformed files here.
3. Build configuration name: there might be more than one transformation file for each config file (e.g. Release, Debug), so it is reasonable to specify which one to use.
4. A list of configuration files to skip. This should be an optional parameter.

Here is the new code:

namespace ConfigTransformer
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args == null || args.Length < 3)
            {
                return;
            }

            var transformer = new SolutionConfigsTransformer(args[0], args[1], args[2]);
            if (args.Length > 3)
            {
                for (int i = 3; i < args.Length; i++)
                {
                    transformer.FilesToExclude.Add(args[i]);
                }
            }

            transformer.Transform();
        }
    }
}

In Main I validate the input parameters and pass them to SolutionConfigsTransformer. This class encapsulates the transformation logic and exposes a single method, Transform(). Here is its implementation:

public void Transform()
{
    if (!IsInputValid())
    {
        return;
    }

    IList<ConfigurationEntry> configurationEntries = GetConfigurationEntries();
    foreach (ConfigurationEntry entry in configurationEntries)
    {
        var transformation = new XmlTransformation(entry.TransformationFilePath);
        var transformableDocument = new XmlTransformableDocument();
        transformableDocument.Load(entry.FilePath);
        if (transformation.Apply(transformableDocument))
        {
            if (!string.IsNullOrWhiteSpace(entry.FileName))
            {
                var targetDirectory = Path.Combine(TargetDirectory, entry.ParentSubfolder);
                Directory.CreateDirectory(targetDirectory);
                transformableDocument.Save(Path.Combine(targetDirectory, entry.FileName));
            }
        }
    }
}

After some validation I get a list of configuration files along with their corresponding transformations and do almost the same thing as at the beginning (with some extra System.IO calls). As you may guess, the GetConfigurationEntries() method plays a key role in the program flow.

private IList<ConfigurationEntry> GetConfigurationEntries()
{
    string[] configs = Directory.GetFiles(SourceDirectory, "*.config", SearchOption.AllDirectories);
    var result = new List<ConfigurationEntry>();
    if (configs.Length == 0)
    {
        return result;
    }

    int i = 0;
    while (i < configs.Length - 1)
    {
        string config = configs[i];
        string transformation = configs[i + 1];
        var regex = new Regex(BuildSearchPattern(config.Remove(config.Length - 7, 7)), RegexOptions.IgnoreCase);
        bool found = false;
        while (regex.IsMatch(transformation))
        {
            Match match = regex.Match(transformation);
            if (IsTransformationFound(match) && !found)
            {
                found = true;
                if (FilesToExclude.Contains(config))
                {
                    m_logger.InfoFormat("{0} is in a black list. Won't be processed", config);
                }
                else
                {
                    var entry = new ConfigurationEntry
                    {
                        FilePath = config,
                        FileName = Path.GetFileName(config),
                        ParentSubfolder = GetParentSubfolder(config),
                        TransformationFilePath = transformation
                    };
                    result.Add(entry);
                }
            }

            i++;
            if (i < configs.Length - 1)
            {
                transformation = configs[i + 1];
            }
            else
            {
                break;
            }
        }

        i++;
    }

    return result;
}

I get all *.config files in the source directory, then iterate through the list searching for configuration files that have transformations. My main assumption here is that transformations come right after their configuration file in the list; for instance:

connectionStrings.config
connectionStrings.Release.config
connectionStrings.Debug.config
Web.config
Web.Release.config
Web.Debug.config
Web.Test.config

I understand this is not a 100% accurate approach, but it's rather simple and does what I need. I use regular expressions to check candidate transformations against a particular pattern. If a candidate matches, I use the pattern's named group to check it against the build configuration name; if that matches too, I put the configuration with its corresponding transformation into the result list.
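
The actual pattern-building code lives in the repository; a hypothetical sketch of what BuildSearchPattern could look like as a member of the same class (the method shape and the group name "configuration" are my assumptions) is:

// Hypothetical sketch: given a config path with the ".config" suffix stripped
// (see config.Remove(config.Length - 7, 7) above), build a pattern that matches
// "<base>.<configuration>.config" and captures the configuration name.
private string BuildSearchPattern(string configPathWithoutExtension)
{
    return "^" + Regex.Escape(configPathWithoutExtension)
               + @"\.(?<configuration>[^.\\]+)\.config$";
}

// The named group is then compared with the requested build configuration:
// match.Groups["configuration"].Value.Equals(BuildConfiguration, StringComparison.OrdinalIgnoreCase)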

It's just a high-level description of my approach. If you're interested, feel free to download the sources from my github repository.

Monday, April 22, 2013

Config Files Transformation And Web Application Deployment

My quest started with an innocent thought: "what if I used configuration file transformations instead of managing files via xcopy?" I already had some experience with this .net feature from deploying my open source project to appharbor. I didn't expect any difficulties, mainly because the technology is rather straightforward and effective.

First I noticed that some web.config sections were extracted into separate files, so I tried implementing all the transforms in web.release.config (with no luck, of course): each separate file requires its own transformation. That's how I met SlowCheetah. I read a great introduction to the tool and realized that simply installing the VS extension is not enough - a way of propagating the MSBuild tasks to the build server is required. Fortunately, here I found a detailed explanation.

At this point I had a few simple connection string transformations and all the required changes in the project files and nuget packages. I triggered a build and examined the contents of the _PublishedWebsites folder in my build drop location. The transformations weren't applied. After some searching I realized that this is exactly how it is supposed to work: transformations are applied only when publishing the site. I added an extra argument, /p:DeployOnBuild=True, to the MSBuild call in my build definition and got packaged web sites in the _PublishedWebsites folder. All the transformations were in place this time. And here the real troubles started.

The output of MSBuild with /p:DeployOnBuild=True is basically a zip archive with a rather tricky hierarchy. The desired published site lay deep inside the archive, and some of its folder names were build version specific (dynamic). I realized that working with the package using common tools was considered wrong.

The first and most obvious solution was to use the generated sitename.deploy.cmd file to deploy the package. The file uses MSDeploy internally and requires the tool to be installed and configured in all target environments. By that time I had all my builds set up and running with a PowerShell xcopy-style deployment strategy (which is, generally speaking, wrong), so I decided it was too much work to redesign everything just because of config transformations and continued searching.

What if I could extract the contents of the package with msdeploy and put them into a shared location? I wrote this:
"c:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:package=c:\Share\mysite.zip -dest:contentPath=\\my-pc\Share\Test -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension
and got the error
Error: Source (sitemanifest) and destination (iisApp) are not compatible for the given operation.
I didn't find a solution to the problem. Here are some links that might help: one, two. Note the suggestion from the latter:
This occurs because the iisApp provider specified in the destination argument is not expecting a Manifest.xml file in the source. To resolve this issue, use the auto provider instead
But "auto" - was actually the same as my first attempt, so no luck here.

The third option was to trigger SlowCheetah during a build, replacing the configuration files in _PublishedWebsites explicitly. The approach is described here in more detail. Unfortunately, I noticed that all the configuration files were locked during a build. I found one possible workaround here but wasn't excited about it at all.

At this point I decided to work with the mysite.zip package using PowerShell. I already had some PowerShell deployment steps by that time, so I figured it wouldn't be too much overhead to add another one.

Here I'm going to show two auxiliary PowerShell functions I used to achieve the goal. They are rather simple and are assembled from various pieces found on the Internet here and there.

# Copy published site from deployment package to destination folder
function Copy-PublishedSite
{
    param($zipFileName, $destination)
    if (!(Test-Path $zipFileName)) 
    {
        Throw "Deployment package is missing"
    }
    $shell = new-object -com shell.application
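    # NOTE: Get-ZipChildFolders relies on PowerShell dynamic scoping to read
    # $shell and $destination defined here in the caller's scope.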
    Get-ZipChildFolders $shell.Namespace($zipFileName).Items()
}
The function above performs an input check (the destination check is omitted) and calls a recursive search function:

# Search for published site inside a deployment package
function Get-ZipChildFolders
{
    param([object]$items) 
    $containerName = "PackageTmp"
    foreach($item in $items) 
    {
        if (($item.IsFolder -eq $true) -and ($item.Name -eq $containerName))
        {
            $shell.NameSpace($destination).CopyHere(($item.getfolder.Items()), 0x14)
            return
        }
        else 
        {
            if($item.getfolder -ne $null)
            {    
                Get-ZipChildFolders $item.getfolder.items()
            }
        }   
    } 
}
This function traverses the archive hierarchy searching for the "PackageTmp" folder, which is assumed to be the container for the published site. If the folder is found, the function copies its contents to the destination folder.

These functions worked fine in a command prompt window, but they failed during a TFS build: CopyHere wasn't copying anything and didn't throw any errors. I didn't manage to make it work. Instead, I decided to use the command-line version of 7-Zip. Here is the code I got:

# Copy published site from deployment package to destination folder
function Copy-PublishedSite
{
    param($zipFile, $currentSite)
    Print-LogMessage "Copying zip package..."
    Copy-Item $zipFile.FullName $deploymentFolder
    Print-LogMessage "Unzipping package..."
    $tempFolder = join-path $deploymentFolder "tmp"
    $tempZip = join-path $deploymentFolder $zipFile.Name
    & $zipUtilityPath x $tempZip ("-o" + $tempFolder) -aoa -r
    Print-LogMessage "Creating target directory..."
    $targetPath = Join-Path (Join-Path $deploymentFolder "MySitesFolder") $currentSite 
    Create-DirectoryStructure $targetPath
    Print-LogMessage "Moving package contents to target directory..."
    $moveFolder = Get-ChildItem $tempFolder -filter "PackageTmp" -r
    Move-Item (join-path $moveFolder.FullName "*") $targetPath -force
    Print-LogMessage "Deleting temp data..."
    Remove-Item $tempFolder -force -r
    Remove-Item $tempZip -force
}

Although it's not a complete solution, the main idea is quite clear. First I copied the entire package to the deployment folder (transferring an archive as a single file over the network is faster). Then I unzipped the contents of the package into the temporary folder "tmp" (the folder-already-exists check is omitted). Then I moved the contents of the "PackageTmp" subfolder into my sites directory. Finally, I did some cleanup.

With the approach above I keep my existing PowerShell deployment strategy and get all the config transformation features I need. I realize this solution is far from ideal and someday I'll have to move to msdeploy, but right now I don't see a strong reason to do that.

Friday, February 15, 2013

WCF Behaviors. Wildcard name or empty string name.

A few days ago I found a WCF behavior configuration section similar to this:

<behaviors>
    <endpointBehaviors>
        <behavior name="*">
            <webHttp/>
        </behavior>
    </endpointBehaviors>
</behaviors>

Does name="*" make any sense? .net 4 comes with great impovements in default configurations. And if one wants to specify a behavior for all endpoints then behavior with an empty name (or even without the name attribute at all) is used. So what about wildcards? Usually an asterics is used for exactly the same purpose. What is the difference then? This question marked as answered actually doesn't contain the answer.

First of all, we can use an asterisk as an argument during channel factory creation.

var factory = new ChannelFactory<IMyService>("*");

In this case the first endpoint configuration is taken. The feature is described in more detail here. It has nothing to do with behaviors.

In the configuration above, the "*" symbol acts as a regular behaviorConfiguration name. You can reference it in any of your endpoints.

<endpoint 
    address="http://localhost:9001/http/" 
    contract="Shared.IMyService" 
    binding="basicHttpBinding" 
    behaviorConfiguration="*"/>

But if you want to apply a behavior to all of your endpoints, just use an empty name for it.

<behavior name="">
    <webHttp/>
</behavior>

Or even

<behavior>
    <webHttp/>
</behavior>

You can read more about WCF configuration defaults in this article.

Thursday, February 14, 2013

WCF. Fighting your way through a proxy.

Recently I discovered that my desktop tool for memorizing English words doesn't work when the client is behind a proxy.

The problem is pretty common, and I quickly found this awesome answer on stackoverflow. But there were still a few things to deal with:
1. Move all those hardcoded strings to config files.
2. Define proxy usage per binding.
3. Test the application.

The first is easily accomplished using the ConfigurationManager class. Here is the code I got:

using System;
using System.Configuration;
using System.Net;
using log4net;

namespace VX.Desktop.Infrastructure
{
    public class CustomProxy : IWebProxy
    {
        private const string CustomProxyAddressKey = "CustomProxyAddress";
        private const string CustomProxyUserKey = "CustomProxyUser";
        private const string CustomProxyPassword = "CustomProxyPassword";

        private readonly ILog logger = LogManager.GetLogger(typeof (CustomProxy));
        
        public Uri GetProxy(Uri destination)
        {
            var proxyAddress = ConfigurationManager.AppSettings[CustomProxyAddressKey];
            if (string.IsNullOrEmpty(proxyAddress))
            {
                logger.Error("Error retrieving CustomProxyAddress from configuration file. Make sure you have a corresponding key in appSettings section of application config file.");
                return null;
            }
            
            logger.InfoFormat("Proxy address: {0}", proxyAddress);
            return new Uri(proxyAddress);
        }

        public bool IsBypassed(Uri host)
        {
            logger.InfoFormat("IsBypassed for host: {0} is false", host);
            return false;
        }

        public ICredentials Credentials
        {
            get
            {
                logger.InfoFormat("Getting proxy credentials");
                string userName = ConfigurationManager.AppSettings[CustomProxyUserKey];
                string password = ConfigurationManager.AppSettings[CustomProxyPassword];
                logger.InfoFormat("Done. {0}", userName);
                if (string.IsNullOrEmpty(userName) || string.IsNullOrEmpty(password))
                {
                    logger.Error(
                        "Error retrieving proxy credentials from configuration file. Make sure you have corresponding keys in appSettings section of application config file.");
                }

                return new NetworkCredential(userName, password);
            }
            set { }
        }
    }
}
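
As a side note, the same proxy can also be wired up programmatically instead of through configuration. A minimal sketch, assuming it runs once at application start-up (the ProxyBootstrapper class is illustrative):

using System.Net;
using VX.Desktop.Infrastructure;

public static class ProxyBootstrapper
{
    public static void UseCustomProxy()
    {
        // Every WebRequest-based call (including WCF HTTP bindings with
        // useDefaultWebProxy="true") will now go through CustomProxy.
        WebRequest.DefaultWebProxy = new CustomProxy();
    }
}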

Nice ways of addressing the second and third issues are described here. So we should use something like:

<bindings>
    <basicHttpBinding>
        <binding name="myBindingWithProxy" useDefaultWebProxy="true" />
    </basicHttpBinding>
</bindings>

instead of

<defaultProxy enabled="true" useDefaultCredentials="false">
    <module type="SomeNameSpace.MyProxy, SomeAssembly" />
</defaultProxy>

Now we can use other bindings without any proxy. To test all this, we can use a great tool - Fiddler: just check the Rules -> Require Proxy Authentication option and you're done.