Import Machines Keys – Visual Studio Team Services – Unit Tests in Build

In my previous post I talked about how to encrypt an App.config file, export the machine keys needed to deploy the application to different machines, and import them there, all using our old friend aspnet_regiis.exe.

This breaks my build

If you are using a Visual Studio Team Services build definition, and you run unit tests during the build which rely on the encrypted credentials, they will fall over with an error similar to this one:

System.Configuration.ConfigurationErrorsException: Failed to decrypt using provider ‘DataProtectionConfigurationProvider’. Error message from the provider: Key not valid for use in specified state.

This (as explained previously) is because the machine keys won't be present on our Azure VM, for exactly the same reason they would be missing if you ran the application on a desktop that didn't have the keys imported.

The answer is……

VSTS has a handy build step, Batch Script, which allows you to run batch files as part of the build process. Example here:


What I did was create a directory in the repository called encrypt and leave my install_keys.bat file there. The first step I run is this script, which installs the keys from the file (keys.xml) created previously.
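As a sketch, install_keys.bat only needs to invoke aspnet_regiis with the import switch. The container name (MyCustomKeys) and the keys.xml location are assumptions based on the export steps described in my earlier post; amend them to match your setup.

```bat
REM install_keys.bat -- import the RSA key container onto the build agent.
REM Container name and keys.xml path are examples; amend to your setup.
"%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe" -pi MyCustomKeys "%~dp0keys.xml"
```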

My build order then in VSTS looks something like this:











This should mean your unit tests can access and decrypt the sections in the app.config holding the credential data.

Security Hole

The only issue with the multi-machine, one-RSA-key approach is that keys.xml is left on the VSTS server. It lives in a private repository, but it is still somewhere. We cannot delete it, because we may need it for more machines in the future.

Apart from that, the beauty of this approach is that you can deploy your application with encrypted app.config credentials to any machine, as long as that machine has had these RSA keys installed.

Encrypting Credentials in App.config for Multiple Machines

As developers we should all care about security and how we store and use sensitive data, whether to connect to databases, log in to domain accounts, etc.

Today I’m going to talk about how to encrypt usernames and passwords that are stored in an application's app.config. This article will use a custom configuration section called EncryptedUserCredentials. I won't discuss how I created that here, but below is a sample app.config showing it. Please note:

  • service: the key value for the record.
  • userName: the username.
  • password: the password.
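The original sample config did not survive here, so the following is a minimal sketch of what such a section might look like. Apart from the service, userName and password attributes described above, the element names and the section handler type are assumptions:

```xml
<configuration>
  <configSections>
    <!-- Hypothetical section handler type; yours will differ. -->
    <section name="EncryptedUserCredentials"
             type="MyApp.Configuration.EncryptedUserCredentialsSection, MyApp" />
  </configSections>
  <EncryptedUserCredentials>
    <!-- Plain text for now - encrypting this is the point of the post. -->
    <credential service="ReportingDb" userName="svc_reports" password="P@ssw0rd!" />
  </EncryptedUserCredentials>
</configuration>
```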

I will save the implementation details, and how you would access this in code, for another post. Today I will talk about how you can encrypt the EncryptedUserCredentials themselves, because at the moment they are plain text for all to see!

The way you accomplish this is using aspnet_regiis.exe, which all you ASP.NET web developers will know registers your web applications with IIS.

But wait, this fine and dandy binary brings other functions too, and one of those is encrypting sections in web.config files…

But I’m using an App.config, silly.

That's right, and that doesn't matter. They are just config files to .NET, but with different names. So let me explain what you need to do. Before that, here is where aspnet_regiis is located on your Windows box:

Version of .NET Framework → location of the Aspnet_regiis.exe file:

  • .NET Framework 1.0: %windir%\Microsoft.NET\Framework\v1.0.3705
  • .NET Framework 1.1: %windir%\Microsoft.NET\Framework\v1.1.4322
  • .NET Framework 2.0, 3.0 and 3.5 (32-bit systems): %windir%\Microsoft.NET\Framework\v2.0.50727
  • .NET Framework 2.0, 3.0 and 3.5 (64-bit systems): %windir%\Microsoft.NET\Framework64\v2.0.50727
  • .NET Framework 4 (32-bit systems): %windir%\Microsoft.NET\Framework\v4.0.30319
  • .NET Framework 4 (64-bit systems): %windir%\Microsoft.NET\Framework64\v4.0.30319

Before we move on, I must tell you that we are focusing on multi-machine configuration file encryption using RSA. If your application runs on one machine only, you can instead use DPAPI and its provider, DataProtectionConfigurationProvider. DPAPI is handled by Windows itself and uses machine-specific keys and containers, which are not transferable to other machines. If you wanted to use the DPAPI method in a multi-machine scenario, aspnet_regiis would need to be run against the app.config on each machine it is deployed to.

Why is that a bad thing?

Simple: you would need to store a plain text app.config file either as part of the Continuous Integration process, or someone would need to manually keep a copy and run it on each machine, or you could even include the plain copy in the installer if that was your deployment method. This just adds a security weak point. You could include scripts to delete the plain text files, if this is the route you wanted to go down. But just so you know, DPAPI exists and could be a better option for you.

RSA route

So aspnet_regiis allows you to create containers of asymmetric private/public keys and export them to other machines, allowing you one global config file to be used.

Step 0 – Preparation is (RSA) key

Yes, yes, Step 0 exists because I got halfway through and forgot this step; thank the stars it was only meant to be Step 1! Add a configProtectedData section to your config with a provider. Please note:

  • keyContainerName – should be the name of the RSA container you will create later.
  • name – can be anything. I'm naming mine MyEncryptionProvider.
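The provider declaration itself was lost from this post, so here is a minimal sketch of the configProtectedData section. The name and keyContainerName follow the values used in the later steps; the assembly version shown is the .NET 4 one (use 2.0.0.0 for .NET 2.0–3.5):

```xml
<configProtectedData>
  <providers>
    <add name="MyEncryptionProvider"
         type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
         keyContainerName="MyCustomKeys"
         useMachineContainer="true" />
  </providers>
</configProtectedData>
```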

Step 1 – Espionage…

Yes, I said aspnet_regiis won't have a problem with an App.config, and it won't, but first you need to rename/copy said App.config file to web.config.

copy app.config web.config

Step 2 – Rise and Serve

Create a public/private RSA key pair with a specific container name. It should also be marked as exportable (otherwise what is the point!). MyCustomKeys can be any name you desire.

aspnet_regiis.exe  -pc MyCustomKeys -exp

Step 3 – Let me in!

Grant permission for accounts to access the container. The example here is the Network Service account that, say, IIS uses.

aspnet_regiis.exe  -pa MyCustomKeys "NT AUTHORITY\NETWORK SERVICE"

Step 4 – Encrypt and Protect

Now the magic happens. The following line will encrypt your section (my EncryptedUserCredentials are wrapped in a section called CustomConfig). The -pef switch tells the application to look for a web.config file and to use the provider I declared in Step 0 (which uses the type RsaProtectedConfigurationProvider).

aspnet_regiis.exe  -pef CustomConfig . -prov MyEncryptionProvider

Your web.config file should now have transformed. Gone is the CustomConfig section with plain text credentials; in its place is a nice set of CipherValue elements. Please note mine below have been replaced with hard-coded text, but you will see what I mean when you do yours. Also note your CustomConfig section now declares that it uses configProtectionProvider="MyEncryptionProvider".
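The encrypted output from the original post is missing here, so as a rough, abbreviated sketch the transformed section looks like this (the real output also carries key and encryption-method metadata, and the cipher text below is obviously a placeholder):

```xml
<CustomConfig configProtectionProvider="MyEncryptionProvider">
  <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
                 xmlns="http://www.w3.org/2001/04/xmlenc#">
    <CipherData>
      <CipherValue>AQAAANCMnd8BFdERjHoAwE…placeholder…</CipherValue>
    </CipherData>
  </EncryptedData>
</CustomConfig>
```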

Step 5 – Export those Keys

Now we have created our web.config file, you can rename it back to app.config and use it in your application. To use it on different machines, though, you will need to export the keys from the machine on which you created the encrypted web/app.config and import them onto each target machine. First, on your machine, run the following, which will create the key file for your container, including the private keys (-pri).

aspnet_regiis.exe -px MyCustomKeys keys.xml -pri

Step 6 – Import those Keys

Log into the machine(s) you wish your application to work on and run the following:

aspnet_regiis -pi MyCustomKeys keys.xml

I would do this as part of your Release or Installation process, making sure you delete the keys.xml file from the target machines. The only place keys.xml should be kept is in your code repository, somewhere safe and restricted. This is the security issue with the RSA approach.


The full encrypt and export script can be found here. Amend it to include your custom container, section and provider names.



How to….Item Templates in Visual Studio

For some time now I have wanted to find out whether it was possible to create a C# template with the formatting I use when writing code. For example, I always add regions that I break down into:

  • Fields
  • Properties
  • Constructors
  • Methods

After posting a Stack Overflow question which gained no reply, I went to the C-Sharp Group and got inspiration from a lovely chap, Juri. His answer was correct, but there was a better way of doing it using the Export Template function in Visual Studio, which I talk about in the above YouTube video.

I hope this helps you.

[MVC3] ObjectContext.SaveChanges() — How to Use

ADO.NET Entity Framework stack (Photo credit: Wikipedia)

Recently I joined a Facebook group, The Dev Circle, which is a group of like-minded developers who wish to learn, code and grok!


One member is starting out with Microsoft's ORM, Entity Framework (EF), and wanted to know how to update his entities. This is easy within EF, as there is a method called SaveChanges() which will do the following:


Persists all updates to the data source and resets change tracking in the object context.

So how do we use it? Well, as with all data connections, we should wrap the context in a "using" statement, and we also need to make sure we catch the exceptions detailed in the MSDN documentation. One in particular is OptimisticConcurrencyException, which allows us to resolve any concurrency conflicts based on the parameters we pass to another method, Refresh().
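Boiled down, the pattern looks something like this sketch. The AccountsEntities context and LedgerUsers set come from my project (the full controller is further down); everything else is standard ObjectContext usage:

```csharp
//--Sketch: persist a new entity, resolving concurrency conflicts in favour of the client.
public void SaveLedgerUser(LedgerUser ledgerUser)
{
    using (var context = new AccountsEntities())
    {
        try
        {
            context.LedgerUsers.AddObject(ledgerUser);
            context.SaveChanges();
        }
        catch (OptimisticConcurrencyException)
        {
            //--Re-read the store values, keep our changes, then retry the save.
            context.Refresh(RefreshMode.ClientWins, ledgerUser);
            context.SaveChanges();
        }
        catch (UpdateException ex)
        {
            Console.WriteLine(ex.ToString());
        }
    }
}
```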


Take, for example, a controller I have in an MVC3 project which uses EF. Here is the code:


// POST: /LedgerUser/Create
[Bind(Prefix = "LedgerUser")]
public ActionResult Create(bool Thumbnail, LedgerUser LedgerUser, HttpPostedFileBase imageLoad2)
{
    var Avatar = new Accounts.Models.Image();

    //--Handle the image first.
    if (imageLoad2 != null)
    {
        using (System.Drawing.Image img = System.Drawing.Image.FromStream(imageLoad2.InputStream))
        {
            //--Initialise the size of the array.
            byte[] file = new byte[imageLoad2.InputStream.Length];

            //--Create a new BinaryReader and seek the image's InputStream back to the
            //--beginning, as we created img from the same stream.
            BinaryReader reader = new BinaryReader(imageLoad2.InputStream);
            imageLoad2.InputStream.Seek(0, SeekOrigin.Begin);

            //--Load the image binary.
            file = reader.ReadBytes((int)imageLoad2.InputStream.Length);

            //--Create a new image to be added to the database.
            //--(The two property assignments below were lost from the original post;
            //--the names are assumed from the ImageModel used elsewhere on this blog.)
            Avatar.UniqueId = Guid.NewGuid();
            Avatar.TableLink = LedgerUser.id;
            Avatar.RecordStatus = " ";
            Avatar.FileSize = imageLoad2.ContentLength;
            Avatar.FileName = imageLoad2.FileName;
            Avatar.FileSource = file;
            Avatar.FileContentType = imageLoad2.ContentType;
            Avatar.FileHeight = img.Height;
            Avatar.FileWidth = img.Width;
            Avatar.CreatedDate = DateTime.Now;

            //--Now we create the thumbnail and save it.
            if (Thumbnail == true)
            {
                byte[] thumbnail = Images.CreateThumbnailToByte(imageLoad2.InputStream, 100, 100);
                Avatar.ThumbnailSource = thumbnail;
                Avatar.ThumbnailFileSize = thumbnail.Length;
                Avatar.ThumbnailContentType = Files.GetContentType(imageLoad2.FileName);
                Avatar.ThumbnailHeight = Images.FromByteHeight(thumbnail);
                Avatar.ThumbnailWidth = Images.FromByteWidth(thumbnail);
            }
            else
            {
                byte[] thumbnail = new byte[0];
                Avatar.ThumbnailSource = thumbnail;
                Avatar.ThumbnailFileSize = 0;
                Avatar.ThumbnailContentType = " ";
                Avatar.ThumbnailHeight = 0;
                Avatar.ThumbnailWidth = 0;
            }
        }
    }

    if (!ModelState.IsValid)
    {
        ModelState.ModelStateErrors();
    }

    if (ModelState.IsValid)
    {
        using (AccountsEntities context = new AccountsEntities())
        {
            try
            {
                //--Save the LedgerUser.
                context.LedgerUsers.AddObject(LedgerUser);
                context.SaveChanges();
            }
            catch (OptimisticConcurrencyException)
            {
                context.Refresh(RefreshMode.ClientWins, LedgerUser);
                context.SaveChanges();
                Console.WriteLine("OptimisticConcurrencyException handled and changes saved");
            }
            catch (UpdateException ex)
            {
                Console.WriteLine(ex.ToString());
            }

            try
            {
                //--Save the Image.
                context.Images.AddObject(Avatar);
                context.SaveChanges();
                return RedirectToAction("Index", "Home");
            }
            catch (OptimisticConcurrencyException)
            {
                context.Refresh(RefreshMode.ClientWins, Avatar);
                context.SaveChanges();
                Console.WriteLine("OptimisticConcurrencyException Avatar handled and changes saved");
            }
            catch (UpdateException ex)
            {
                Console.WriteLine(ex.ToString() + " 2 ");
            }
        }
    }

    var userTypes = new SelectList(db.UserTypes, "id", "Description");
    var ledgerUser = new LedgerUser()
    {
        //--(The id value was lost from the original post; assumed.)
        id = LedgerUser.id,
        RecordStatus = LedgerUser.RecordStatus,
        CreatedDate = LedgerUser.CreatedDate,
        DateOfBirth = LedgerUser.DateOfBirth
    };
    var viewModel = new LedgerUserViewModel()
    {
        UserTypes = userTypes,
        LedgerUser = ledgerUser
    };
    return View(viewModel);
}


So what is this controller doing:

  • Create new LedgerUsers and their Avatar images.
  • These are persisted using an AccountsEntities ObjectContext.
  • I only save the LedgerUser and Avatar if ModelState.IsValid; I then try to SaveChanges().
  • I catch the required exceptions.
  • Return the viewModel if SaveChanges() fails.

There is a good MSDN article, How to: Manage Data Concurrency in the Object Context, to read here.

This approach should be used for most EF persistence.




Enhanced by Zemanta

[MVC 3] MvcImage Project–How did i do Thumbnail Support?

Before reading the following, please read the tutorial posts listed here to get up to speed on how I have accomplished image handling in MVC so far, as I won't be going into detail on the code I have updated or changed, only the new code:

Also, before i continue, you can download and use the code on my MvcImage Codeplex Project home page.

So, firstly, I updated the AjaxSubmit and ImageLoad controllers to segregate the Image and Thumbnail byte arrays within the Session. The AjaxSubmit controller references my new extension method, Images.CreateThumbnailToByte, to create the thumbnail; I will discuss this in a moment. Remember to read up in the previous tutorials on how I use these within a jQuery image preview plugin.

So the AjaxSubmit Code is:

        public ActionResult AjaxSubmit(int id)
        {
            Session["Image.ContentLength"] = Request.Files[0].ContentLength;
            Session["Image.ContentType"] = Request.Files[0].ContentType;
            byte[] b = new byte[Request.Files[0].ContentLength];
            Request.Files[0].InputStream.Read(b, 0, Request.Files[0].ContentLength);
            Session["Image.ContentStream"] = b;

            if (id > 0)
            {
                byte[] thumbnail = Images.CreateThumbnailToByte(Request.Files[0].InputStream, 100, 100);

                Session["Thumbnail.ContentLength"] = thumbnail.Length;
                Session["Thumbnail.ContentType"] = Request.Files[0].ContentType;
                Session["Thumbnail.ContentStream"] = thumbnail;
            }

            return Content(Request.Files[0].ContentType + ";" + Request.Files[0].ContentLength);
        }

And the ImageLoad controller code; the id parameter determines whether to respond with the Thumbnail binary or the Image binary.

        public ActionResult ImageLoad(int id)
        {
            if (id == 0)
            {
                byte[] b = (byte[])Session["Image.ContentStream"];
                int length = (int)Session["Image.ContentLength"];
                string type = (string)Session["Image.ContentType"];
                Response.Buffer = true;
                Response.Charset = "";
                Response.ContentType = type;
                //--(Assumed: the write to the response was lost from the original post.)
                Response.OutputStream.Write(b, 0, length);
                Session["Image.ContentLength"] = null;
                Session["Image.ContentType"] = null;
                Session["Image.ContentStream"] = null;
            }

            //--The following is the Thumbnail id.
            if (id == 1)
            {
                byte[] b = (byte[])Session["Thumbnail.ContentStream"];
                int length = (int)Session["Thumbnail.ContentLength"];
                string type = (string)Session["Thumbnail.ContentType"];
                Response.Buffer = true;
                Response.Charset = "";
                Response.ContentType = type;
                //--(Assumed, as above.)
                Response.OutputStream.Write(b, 0, length);
                Session["Thumbnail.ContentLength"] = null;
                Session["Thumbnail.ContentType"] = null;
                Session["Thumbnail.ContentStream"] = null;
            }

            return Content("");
        }

It is not the nicest code but it is there to show the logic behind what I’m trying to do. I’m following the same pattern as before but this time handling two byte arrays.

So let's resume talking about the following piece of code:

byte[] thumbnail = Images.CreateThumbnailToByte(Request.Files[0].InputStream, 100, 100);

So parameter 1 is the stream of the image, and the 2nd and 3rd parameters are the maximum height and width of the new, resized thumbnail image. Using inspiration from Nathanael Jones (but not all of it, as I didn't have time), I maintain the aspect ratio of the image and create a new height and width by scaling the image properly. Then, again based on advice from Nathanael's article, we create a new GDI+ drawing surface and create the thumbnail image. Lastly, we convert it back to a binary stream and return the byte array.

Here is the code:

        /// <summary>
        /// This method creates a thumbnail image and scales it.
        /// It returns a byte array to be used.
        /// </summary>
        /// <param name="stream">Image stream.</param>
        /// <param name="maxHeight">Max height (used to scale the image).</param>
        /// <param name="maxWidth">Max width (used to scale the image).</param>
        /// <returns>Scaled thumbnail image byte array.</returns>
        public static byte[] CreateThumbnailToByte(Stream stream, double maxHeight, double maxWidth)
        {
            int newHeight;
            int newWidth;
            double aspectRatio = 0;
            double boxRatio = 0;
            double scale = 0;

            Stream imageStream = new MemoryStream();
            Image originalImage;

            Streams.RewindStream(ref stream);
            using (originalImage = Image.FromStream(stream))
            {
                //--We need to maintain the aspect ratio of the image.
                //--(Cast to double, otherwise integer division would floor the ratio.)
                aspectRatio = (double)originalImage.Width / originalImage.Height;
                boxRatio = maxWidth / maxHeight;

                if (boxRatio > aspectRatio)
                {
                    scale = maxHeight / originalImage.Height;
                }
                else
                {
                    scale = maxWidth / originalImage.Width;
                }

                //--Scale the original image's dimensions.
                newHeight = (int)(originalImage.Height * scale);
                newWidth = (int)(originalImage.Width * scale);

                using (var bitmap = new Bitmap(newWidth, newHeight))
                //--Create a new GDI+ drawing surface based on the original image.
                //--This method allows us to alter it where necessary, based on advice
                //--from Nathanael Jones' article on image resizing pitfalls.
                using (var graphics = Graphics.FromImage(bitmap))
                {
                    var rectangle = new Rectangle(0, 0, newWidth, newHeight);

                    graphics.CompositingQuality = CompositingQuality.HighQuality;
                    graphics.SmoothingMode = SmoothingMode.HighQuality;
                    graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
                    graphics.DrawImage(originalImage, rectangle);

                    //--Save the image to a new stream so we can convert it to a byte array.
                    bitmap.Save(imageStream, originalImage.RawFormat);
                }

                byte[] thumbnail = new byte[imageStream.Length];
                BinaryReader reader = new BinaryReader(imageStream);
                imageStream.Seek(0, SeekOrigin.Begin);

                //--Load the image binary.
                thumbnail = reader.ReadBytes((int)imageStream.Length);
                return thumbnail;
            }
        }



In the future I will look to extend this further to encompass all of Nathanael's advice. So that's it! It took a lot of work getting there, not just coding the solution but understanding it, which is key.

The last thing I did was add more properties to my ImageModel to save the Thumbnail data separately from the main Image, so we don't have to convert it each and every time we want to load the thumbnail instead of the main image binary.

So my new ImageModel class is now:

    /// <summary>
    /// This class represents the table for Images and its necessary columns.
    /// </summary>
    public class ImageModel : IImageModel
    {
        public Guid UniqueId { get; set; }
        public Guid TableLink { get; set; }
        public String RecordStatus { get; set; }
        public DateTime RecordCreatedDate { get; set; }
        public DateTime RecordAmendedDate { get; set; }
        public Byte[] Source { get; set; }
        public Int32 FileSize { get; set; }
        public String FileName { get; set; }
        public String FileExtension { get; set; }
        public String ContentType { get; set; }
        public Int32 Height { get; set; }
        public Int32 Width { get; set; }

        //--New in Alpha Release 0.0.2
        public Byte[] ThumbnailSource { get; set; }
        public Int32 ThumbnailFileSize { get; set; }
        public String ThumbnailContentType { get; set; }
        public Int32 ThumbnailHeight { get; set; }
        public Int32 ThumbnailWidth { get; set; }
    }


[MVC 3] Entity Framework and Serializing JSON–Circular References

ADO.NET Entity Framework stack (Image via Wikipedia)

I'm currently trying to write a lightweight AJAX control for my home page, which displays new properties that have been added, aptly named WhatsNew.

The issue I had was the following error, which I inspected in Fiddler when no JSON was returned to my view:

A circular reference was detected while serializing an object of type “System.Data.Metadata.Edm.Properties”

A bit of googling, and it seems the JavaScriptSerializer gets its knickers in a twist trying to traverse Entity Framework objects which have relationships.

There are many options, but the one I took was to make the JSON as lightweight as possible and pass a ViewModel containing only the fields I need, with no Entity Framework objects at all. I build the view model from the Entity Framework data.

Here is my ViewModel; the commented-out List was my old implementation, which was causing the problem.

This will be returned as a JSON ActionResult to my view. Bingo – I've fixed the issue!
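As a sketch of the projection, the action shapes the EF data into the flat view model before serializing, so the serializer never sees the entity graph. The context name, entity set and property names here are illustrative assumptions, not from the project:

```csharp
//--Hypothetical action: project EF entities into the flat view model.
public ActionResult WhatsNew()
{
    using (var context = new PropertiesEntities()) // assumed context name
    {
        var items = context.Properties
            .OrderByDescending(p => p.CreatedDate)
            .Take(5)
            .Select(p => new WhatsNewViewModel
            {
                UniqiueKey = p.UniqueKey,
                Area = p.Area,
                PropertyType = p.PropertyType
            })
            .ToList();

        //--Serialize only the plain strings; no circular references possible.
        return Json(items, JsonRequestBehavior.AllowGet);
    }
}
```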

public class WhatsNewViewModel
{
    //public List<PropertyViewModel> Properties { get; set; }
    public string UniqiueKey { get; set; }
    public string Area { get; set; }
    public string PropertyType { get; set; }
}


[MVC 3] Images – Downloading Images

UPDATE: I have now combined all my MVC image handling blog posts into an open source project. Feel free to check it out once you have read the post.

Original Blog Post:

In previous posts I have shown you how to upload images and use jQuery to preview them. We have also used Entity Framework 4.0 and SQL to save an image as a varbinary data type.

Now I'm going to show you how to download the image and display it on your view, using an HTML helper.

Before we start, let's think about what we need:

  • HTML Helper Extension to help display the image
  • Controller classes to grab the image
  • Object validation methods

Let's start with the code for the HTML helper extension. You should be familiar with how these work by now. Again, mine accepts a model using lambda expressions:

public static MvcHtmlString DisplayImageFor<TModel, TProperty>(this HtmlHelper<TModel> helper,
    Expression<Func<TModel, TProperty>> expression, string alt = null, string action = null,
    string controller = null, string actionParameterName = null, string height = null, string width = null)
{
    if (String.IsNullOrEmpty(alt))
    {
        string _name = ExpressionHelper.GetExpressionText(expression);
        alt = helper.ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldName(_name);
    }

    if (String.IsNullOrEmpty(height))
    {
        height = "126px";
    }

    if (String.IsNullOrEmpty(width))
    {
        width = "126px";
    }

    if (String.IsNullOrEmpty(actionParameterName))
    {
        actionParameterName = "id";
    }

    //--Set the default src settings if null.
    //--The src element is made up of action, controller and actionParameterName.
    if (String.IsNullOrEmpty(action))
    {
        action = "GetImage";
    }

    if (String.IsNullOrEmpty(controller))
    {
        controller = "ImagePreview";
    }

    ModelMetadata metadata = ModelMetadata.FromLambdaExpression(expression, helper.ViewData);
    Object value = metadata.Model;
    Type valueType = metadata.Model.GetType();
    string src = null;

    //--Only build the src from the model value if it is a string (or nullable string).
    //--(The format string was lost from the original post; this reconstruction matches
    //--the example img tag shown below.)
    if (ObjectValidation.IsStringType(valueType))
    {
        src = String.Format(CultureInfo.InvariantCulture, "/{0}/{1}/{2}", controller, action, value);
    }

    var imgBuilder = new TagBuilder("img");

    imgBuilder.MergeAttribute("alt", alt);
    imgBuilder.MergeAttribute("src", src);
    imgBuilder.MergeAttribute("height", height);
    imgBuilder.MergeAttribute("width", width);

    return MvcHtmlString.Create(imgBuilder.ToString(TagRenderMode.SelfClosing));
}

So in a nutshell:

  1. If the parameters are null, we set up the default values for the HTML elements.
  2. We build the metadata from the ViewData dictionary. We can then load the metadata.Model data into an Object, i.e. the value of the model.
  3. We build the src element to include the link to the controller (built later) that gets the image.
  4. We use TagBuilder again to build the HTML elements.
  5. The alt element will end up being the model's name if no parameter is passed, for e.g.:

     <img width="126" height="126" alt="property.UniqueKey" src="/ImagePreview/GetImage/b8b03b6d-e30c-46d6-9cba-002f3a4699ee"/>
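For completeness, in a Razor view the helper might be called like this; the model property name and alt text are illustrative, not from the project, and the omitted parameters fall back to the defaults set above:

```
@* Render the stored image for this record, using the helper's default action and controller. *@
@Html.DisplayImageFor(m => m.UniqueKey, alt: "Property image", height: "126px", width: "126px")
```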


You will notice we have used a static method, ObjectValidation.IsStringType. This was built to allow me to query the object's type; I need it to make sure the parameter I'm passing to the controller is a string. Here is how I implemented it:

public static bool IsStringType(Type type)
{
    if (type == null)
    {
        return false;
    }

    switch (Type.GetTypeCode(type))
    {
        case TypeCode.String:
            return true;

        case TypeCode.Object:
            if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>))
            {
                return IsStringType(Nullable.GetUnderlyingType(type));
            }
            return false;
    }

    return false;
}

Quite a simple method: we pass in the type of the object (previously we used .GetType() to obtain this), then we test whether the TypeCode equals String. If it does, we return true.

The other check is whether the object is a generic Nullable<>; if so, we pass the underlying type back through the IsStringType method.

We have now built our HTML helper and our object validation method; now on to the GetImage action method we set in the src element.

        #region GetImage
        public ActionResult GetImage(Guid id)
        {
            //--Guid ID = new Guid(id);
            Medium profileimage = new Medium();

            try
            {
                int count = db.Media.Count(c => c.Unique_Key == id);

                if (count > 0)
                {
                    profileimage = db.Media.SingleOrDefault(i => i.Unique_Key == id);
                    //--Convert the image data into memory.
                    byte[] imagedata = profileimage.Source;
                    return File(imagedata, profileimage.Content_Type, profileimage.File_Name);
                }

                count = 0;
                count = db.Media.Count(c => c.Table_Link == id);

                if (count > 0)
                {
                    profileimage = db.Media.SingleOrDefault(i => i.Table_Link == id);
                    //--Convert the image data into memory.
                    byte[] imagedata = profileimage.Source;
                    return File(imagedata, profileimage.Content_Type, profileimage.File_Name);
                }

                //--Nothing found: return an empty file. (A content type is required;
                //--the original post passed null here, which FileResult rejects.)
                return File(new byte[0], "application/octet-stream");
            }
            finally
            {
                if (db != null)
                {
                    db.Dispose(); //--(Assumed: the dispose call was lost from the original post.)
                }
            }
        }
        #endregion




This controller method contains a lot of my own bespoke business logic for obtaining the image, which is why I do two checks.
