Thursday, May 19, 2016

What is .NET Core and ASP.NET Core

.NET developers are familiar with .NET’s own web development framework, ASP.NET. The ASP.NET framework supports a multitude of higher-level web development frameworks such as Web Forms, Web Pages, MVC, Web API and SignalR. The ASP.NET framework was, and still is, a solid framework for developing modern web applications. However, there were a considerable number of reasons for which Microsoft decided to develop and introduce a new framework termed ASP.NET 5. A few of the key reasons for a new framework are,

  • Open-source the code-base to gain community support and feedback
  • Improve performance of the traditional ASP.NET runtime
  • Reach out to developers with frequent updates due to competing technologies
  • Enable cross-platform development/hosting opportunities
  • Support extensive command-line tooling
  • Introduce a simple project structure to quickly and easily create ASP.NET web applications

Introducing ASP.NET 5

After 2+ years of effort, on the 18th of November 2015, Microsoft officially released ASP.NET 5. ASP.NET 5 is a brand-new ground-up implementation influenced by the traditional ASP.NET framework. ASP.NET 5 is platform agnostic depending on the runtime you decide to use. You are free to utilize the full .NET framework, which enables running on Windows only, or the .NET Core framework, which enables cross-platform behaviour; the choice of runtime is completely up to you. The ASP.NET 5 RC1 version was built to run on top of the .NET Execution Environment (DNX) and a couple of other supporting tools, namely the .NET Version Manager (DNVM) and the .NET Development Utilities (DNU). Below are the tasks handled by each of these tools,

DNVM – .NET Version Manager: DNVM acts as the version manager that helps you configure which version of the .NET runtime to use, by downloading the required versions of the .NET runtime and setting one at a machine, process or user level so that your application can pick up the runtime at execution time.

DNX – .NET Execution Environment: DNX provides a consistent development and execution environment across multiple operating systems. It is responsible for hosting the CLR, handling the dependencies and bootstrapping your application based on the settings specified in the configuration file that is defined as part of the application.

DNU – .NET Development Utilities: As the name suggests, DNU is a tool that supports various tasks, such as managing libraries or packaging and publishing your application.
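
To make this concrete, a typical RC1-era command-line workflow looks roughly like the following (a sketch; it assumes a project.json that defines a "web" command),

dnvm upgrade   (download and switch to the latest runtime)
dnu restore    (download the packages declared in project.json)
dnx web        (execute the "web" command defined in project.json)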

ASP.NET 5 Rebranded to ASP.NET Core

ASP.NET 5 being an entirely new ground-up implementation caused somewhat of a misunderstanding that it was a newer version of the current ASP.NET framework and replaced it, which was not the case. Hence Microsoft officially decided to rebrand ASP.NET 5 as ASP.NET Core to clear up the misunderstanding. This was communicated by Scott Hanselman on the 19th of January 2016.

Limitations of ASP.NET Core RC1

ASP.NET 5 was much appreciated by the .NET development community. However, ASP.NET 5 was by design geared primarily towards web application development. An ASP.NET 5 application contained a Startup.cs class within a class library. The DNX tool would run the ASP.NET hosting library, which would dynamically locate the Startup.cs class and bootstrap the application.

During this time, Microsoft determined that it was also important to support native/cross-platform console applications. For this reason Microsoft had to revamp the toolchain and introduce one that would seamlessly support development of both console and web applications.

As stated in the Visual Studio blog, with .NET Core RC2 and ASP.NET Core RC2 the new .NET toolchain is one of the most significant changes that RC2 brings to ASP.NET Core.

Introducing ASP.NET Core RC2

On the 16th of May 2016, Microsoft officially released .NET Core RC2 and ASP.NET Core RC2. The RC2 versions of .NET Core and ASP.NET Core address the limitations encountered in the RC1 version. As of RC2, an ASP.NET application behaves as a console application. The console application is responsible for calling into the ASP.NET hosting libraries, as opposed to the other way around as in RC1. Although the RC1 behaviour is still supported in RC2, the RC2 model provides more visibility and control to the application developer in determining how the application works.
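
To illustrate, an RC2-era entry point looks roughly like the minimal sketch below, where Main explicitly builds and runs the web host (the Startup class name is the conventional default and is assumed to exist in the project),

using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace WebApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // The console application explicitly constructs and starts the
            // web host, instead of a host discovering the Startup class for it.
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}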

Going further, ASP.NET Core RC2 makes things simpler by relying on a new toolchain called the .NET Command Line Interface (.NET CLI) that comes as part of .NET Core RC2. This tool replaces the old DNVM, DNX and DNU which were part of the ASP.NET RC1 build. The .NET CLI performs the tasks that each of the RC1 tools was responsible for, including easy construction, package management and compilation of applications using the new .NET Core SDK.
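
For example, the day-to-day workflow collapses into a handful of dotnet commands (a sketch; exact verbs and options may differ between preview builds),

dotnet new       (scaffold a new application in the current directory)
dotnet restore   (download the dependencies declared for the project)
dotnet run       (compile and execute the application)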

Important Details

As confirmed by the Visual Studio blog and Scott Hunter, the runtime and libraries (CLR, libraries, compilers, etc.) of both .NET Core RC2 and ASP.NET Core RC2 will not change much by the time they RTM, which should be by the end of June. This means we are free to develop go-live applications with the RC2 versions of these frameworks.

However, tooling such as the .NET CLI and Visual Studio is still in preview. Microsoft has officially split the delivery of the Visual Studio tools from the .NET Core and ASP.NET Core runtime and libraries. As mentioned by Scott Hunter, the tooling support for the .NET CLI and ASP.NET Core is still not at RTM level but should be by the end of June.

Summary

The intention of this post is to demystify .NET Core and ASP.NET Core and provide a breakdown of how the two relate in terms of the evolution of the frameworks. The content of this blog post is a compiled set of information that I have gathered from various online blogs. I hope this gives you enough information on where ASP.NET is heading and how the various components interact together. Please do let me know if there is anything I have failed to include or misinterpreted.

Happy Coding!

Thursday, May 12, 2016

Display Loading Indicator with Interceptors in Angular

Most Single Page Applications (SPA) written in Angular utilize a plethora of asynchronous service calls in the background, some completing instantly and some taking a very long duration just because your ISP provides "blazing" speeds. Nevertheless, during such situations, it is essential that you display a loading indicator that suggests something like "Loading..." or "Please wait, your connection sucks!".

If your application contains a multitude of $http service calls that you can hardly keep track of yourself, modifying each of them to show a loading indicator before each request and hide it after will kill your time and ultimately you. Just imagine maintaining it! Is there a better way to enable this feature?

I feel your pain, hence this post details a mechanism for conveniently incorporating a loading indicator using a custom interceptor that plugs into the AngularJS $httpProvider. To demonstrate this, let's create a simple application where, upon the user clicking a button, we query a backend service while displaying a loading indicator throughout the HTTP request round trip.

Folder Structure

I like segregating an application into specific files and modules merely for maintainability. Before we dive into the nitty-gritty details, let's observe the Angular application folder structure shown below that I opted for, which you are by no means restricted or limited to,

The pink square depicts the shared module, where I will have shared controllers, services, etc. indexController.js is a controller that will be used to perform a simple HTTP request, utilityService.js contains a simple utility function to generate a unique ID, and httpInterceptorService.js is the interceptor that is used to show/hide the loading indicator for each HTTP request made via Angular.

The red square depicts the application, where I have a module.js and rootController.js defined. The module.js at this level is responsible for injecting all the other modules that the application depends on (e.g. shared/module.js).

Below is the markup of the index.html page created as part of this solution, which details the loading of each file and the bootstrapping of the application,

<html>
<head>
    <title>Display Loading Indicator with Interceptors in AngularJS</title>
    <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.5/angular.min.js"></script>
    <script type="text/javascript" src="https://code.jquery.com/jquery-2.2.3.min.js"></script>
    <script type="text/javascript" src="app/module.js"></script>
    <script type="text/javascript" src="app/rootController.js"></script>
    <script type="text/javascript" src="app/shared/module.js"></script>
    <script type="text/javascript" src="app/shared/services/utilityService.js"></script>
    <script type="text/javascript" src="app/shared/services/httpInterceptorService.js"></script>
    <script type="text/javascript" src="app/shared/controllers/indexController.js"></script>
    <link rel="stylesheet" href="Styles/styles.css" />
</head>
<body ng-app="app" ng-controller="app.RootController">
    <div ng-controller="shared.IndexController">
        <button ng-click="getData()">Get Data</button>
        <div>
            <ul ng-repeat="contact in contacts">
                <li>{{contact}}</li>
            </ul>
        </div>
    </div>
</body>
</html>

Notice the app.RootController at the top-most level, and the shared.IndexController at a child level. This enables the addition of common functionality to the root controller that can be invoked throughout the application's life cycle. Let's see how we can add the show/hide ability of the loading indicator to the root controller below.

Toggling the Loading Indicator

The rootController.js functions as the top-most controller for the entire application and is the controller the application is bootstrapped with. It is typically the most suitable place to add application-wide functionality, such as the show/hide functionality of the loading indicator we intend to enable,

angular.module('app')
    .controller('app.RootController', ['$rootScope', function ($rootScope) {

        // Collection to maintain load order.
        var _loadList = {};

        // Display the loading message.
        $rootScope.showLoading = function (id, message) {

            if (_loadList != null && _loadList[id] == null) {
                var data = { id: id, message: message };
                _loadList[id] = data;
            }

            var loadElement = $('div[data-load]');
            if (loadElement.length == 0) {
                $('body').append('<div data-load class="preloader"><img src="http://www.downgraf.com/wp-content/uploads/2014/09/01-progress.gif" /><p data-load-message>' + message + '</p></div>');
            } else {
                loadElement.find('p[data-load-message]').text(message);
            }
        };

        // Hide the loading message.
        $rootScope.hideLoading = function (id) {
            if (_loadList != null && _loadList[id] != null) {
                delete _loadList[id];
            }

            if (Object.keys(_loadList).length != 0) {
                var data = _loadList[Object.keys(_loadList)[Object.keys(_loadList).length - 1]];
                if (data.id != null) {
                    $rootScope.showLoading(data.id, data.message);
                    return;
                }
            }

            var loadElement = $('div[data-load]');
            loadElement.remove();
        };
    }]);

The code is fairly simple. There are two functions bound to the $rootScope, namely showLoading(id, message) and hideLoading(id). The showLoading(id, message) function is responsible for queuing the message based on the ID and then displaying an animated GIF image that is dynamically added to the DOM using jQuery. The hideLoading(id) function is responsible for removing the ID from the queue and hiding the loading indicator from the DOM. If there are other loading messages queued in _loadList, the hideLoading(id) function displays the next immediate loading message.

Having code to show/hide the loading indicator is all good, but we need a mechanism that invokes the showLoading(id, message)/hideLoading(id) functions accordingly for each $http service request. Let's see how to enable that using interceptors in Angular next.

Configuring the HTTP Interceptor

We all know what $http in Angular is. $http is a service in Angular that supports communication with a backend server via HTTP. Hence, in order to add a loading indicator, we need the ability to pre/post process each of the requests executed via the $http service. The $httpProvider is how Angular enables this ability; it contains an array named interceptors. An interceptor in the context of $httpProvider is simply an object that contains four important methods, namely request(...), requestError(...), response(...) and responseError(...), which are triggered for each request made via the $http service.

Simple enough! All we need is a mechanism to display a loading message when request(...) is triggered and hide the loading message when requestError(...), response(...) or responseError(...) is triggered. Let's see the code of the custom interceptor below,

angular.module('shared')
    .factory('shared.httpInterceptorService', ['$rootScope', '$q', 'shared.utilityService', function ($rootScope, $q, utilityService) {

        // Shows the loading.
        var _showLoading = function (id, message) {
            $rootScope.showLoading(id, message);
        };

        // Hides the loading.
        var _hideLoading = function (id) {
            $rootScope.hideLoading(id);
        };

        return {
            // On request success
            request: function (config) {

                // Inject a unique ID into the config and show the loading indicator.
                if (config != null) {
                    config.id = utilityService.scriptHelper.getUniqueId();
                    _showLoading(config.id, config.loadMessage != null ? config.loadMessage : 'Loading...');
                }

                // Return the config or wrap it in a promise if blank.
                return $q.when(config);
            },

            // On request failure
            requestError: function (rejection) {

                // Hide loading triggered against the unique ID.
                if (rejection != null && rejection.config != null) {
                    _hideLoading(rejection.config.id);
                }

                // Return the promise rejection.
                return $q.reject(rejection);
            },

            // On response success
            response: function (response) {

                // Get the unique ID from the config and hide the loading indicator.
                var config = response.config;
                if (config != null) {
                    _hideLoading(config.id);
                }

                // Return the response or promise.
                return $q.when(response);
            },

            // On response failure
            responseError: function (rejection) {

                // Hide loading triggered against the unique ID.
                if (rejection != null && rejection.config != null) {
                    _hideLoading(rejection.config.id);
                }

                // Return the promise rejection.
                return $q.reject(rejection);
            }
        };
    }]);

There are a couple of things going on here. First off, lets understand the four important methods of the interceptor,

  • request(...) function: This function is called before the request is sent over to the backend. The function is passed a configuration object which you are free to modify as required, and this config object will be passed to each of the other three functions. Here I am adding a unique ID, generated via utilityService.js, to the config object, which is then passed to the _showLoading(id, message) function and queued. You are further required to return a valid configuration or promise, or the request will be terminated/rejected.

  • response(...) function: This function is called as soon as a response is received from the backend. At this point we retrieve the unique ID from the config object and call the _hideLoading(id) function. You are further required to return a valid response or promise, or the request will be terminated/rejected.

  • requestError(...) function: The current interceptor is not the only interceptor; there can be interceptors chained together. In certain situations a request can fail due to other interceptors throwing errors, or due to other network or backend related issues. In this case as well we retrieve the unique ID from the config object, call the _hideLoading(id) function and return the promise rejection.

  • responseError(...) function: At times interceptors in the chain fail, or the backend fails to provide a successful response. In either case we retrieve the unique ID from the config object, call the _hideLoading(id) function and return the promise rejection.

You may have noticed the term promise quite a few times. A promise is a JS pattern for deferred execution/asynchronous programming. Please refer to the $q documentation to gain more understanding of promises in the context of Angular.
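
Finally, the snippet above leaves two pieces implicit: the interceptor must actually be registered with $httpProvider, and shared.utilityService must expose a unique ID helper. A minimal sketch of both could look like the following (the ID scheme here is purely illustrative),

angular.module('shared')
    .config(['$httpProvider', function ($httpProvider) {
        // Register the interceptor so it participates in every $http round trip.
        $httpProvider.interceptors.push('shared.httpInterceptorService');
    }]);

angular.module('shared')
    .factory('shared.utilityService', [function () {
        var counter = 0;
        return {
            scriptHelper: {
                // Combine a timestamp with a counter to keep IDs unique.
                getUniqueId: function () {
                    return 'load-' + new Date().getTime() + '-' + (counter++);
                }
            }
        };
    }]);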

Summary

Provided all goes well, you should see an output similar to the following for each $http invocation in your application,

Should you be unable to get things working, download the sample application from below and try it out.

Happy Coding!

Saturday, February 27, 2016

Entity Framework Core 1.0 Database-First to Code-First

There is much interest within the .NET development community following the announcement of ASP.NET Core 1.0, formerly known as ASP.NET 5, and Entity Framework Core 1.0, formerly known as Entity Framework 7. The highlight of these technologies is the ability to develop applications that run on the classic .NET framework, or on the all-new .NET Core, which runs on top of the new .NET Execution Environment (DNX) and enables developing cross-platform applications that run on Windows, Linux and Mac.

Both ASP.NET Core and EF Core are ground-up implementations with quite a few changes to the traditional way we used to work with ASP.NET applications. However, there are many more capabilities offered with the new versions, albeit still in the RC state, which is perfectly fine for playing around, experimenting and getting your hands dirty.

It's fairly easy to start off with EF Core using the code-first approach; in fact, there are quite a number of blog posts that explain code first. Hence this post is on how you can start off from an existing database using EF Core, together with some insights on the new ASP.NET.

Before we go any further, one important highlight with the new EF Core and VS tooling is,

No more EDMX support!

Currently you are able to create your model in two ways: using an XML-based EDMX in the designer, or using code first with a set of classes and a DbContext that defines the mappings. The model you choose makes no difference to how the EF framework behaves at run-time. The framework will create an in-memory model by reading the EDMX, or by reflecting upon the DbContext and related classes and their mappings.

Also, as highlighted by Julie Lerman in “Data Points - Looking Ahead to Entity Framework 7”, going forward EF will no longer support the EDMX-based model, although database-first will be supported (using scaffolding), which can thereafter evolve as a code-first model. Updates/changes to the data model can later be migrated and applied to the database as and when necessary.

For those developers who are bonded with the EDMX designer, this post will detail the steps for making use of an existing database (database-first development) to generate a code-first model, and then moving on to updating the data model and migrating and applying the updates to the database (code-first development).

Creating the Data Model

You will need Visual Studio 2015 installed on Windows, which is what I will be using to outline the actions that need to be performed. Upon installation of VS 2015 you will also need to upgrade the .NET Version Manager (DNVM) to use the latest version of the .NET Execution Environment (DNX). You can follow the steps detailed at “Installing ASP.NET 5 On Windows” to get yourself up to speed.

For brevity we will start off by creating a console application project which will contain our entity data model.

Create the project

  1. Open Visual Studio
  2. Select File > New Project
  3. Select Visual C# > Windows > Web
  4. Select Console Application (Package) and give your solution and project a name like so,

    image

New Project Structure

This console application is not the traditional type of console application we are used to. It is based on the new project convention Microsoft introduced for ASP.NET Core. There are quite a few changes in the new project structure. What you will immediately notice is that the app.config file is missing. Instead there is a project.json file. This is one of the overhauls in ASP.NET Core, where the entire project is based off a JSON configuration file called project.json. I will not be talking much about the project.json file here, except for the bare essentials. However, if you are interested in getting to know more about the project.json file, refer to this wiki on GitHub.

Add References to EF Commands

In order to generate the data model based on the database, we need references to a couple of NuGet packages,

  • EntityFramework.Commands
  • EntityFramework.MicrosoftSqlServer
  • EntityFramework.MicrosoftSqlServer.Design

EntityFramework.Commands – This package provides all the necessary commands to work with EF, such as scaffolding the database, creating/applying migrations, etc.

EntityFramework.MicrosoftSqlServer and EntityFramework.MicrosoftSqlServer.Design – These packages provide the Microsoft SQL Server specific capabilities for Entity Framework to work with, since we are using Microsoft SQL Server as our database.

As of now VS 2015 does not have tooling support for EF Core to generate your data model from the database. Hence we will be using the command line to generate it for us. Open the project.json file and add the dependencies as shown below,

image

Save your project.json and the relevant packages will be downloaded to your DNX profile and referenced from within the project.

Note the section where the dependencies are declared.

If a dependency is declared within a target framework, it will only be available to that specific framework, as shown in the yellow box. If you require a dependency to target both the full .NET framework and the .NET Core framework, you can declare it in the global dependencies section where the EntityFramework packages have been declared, shown in the green box.

You also need to make sure you add a command as shown in the purple box, which will be the entry point for DNX to access the EF commands.
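
Since the screenshots may not render well, here is a sketch of what the relevant parts of project.json look like (the package versions shown are the RC1-era ones and may differ for your build),

{
  "dependencies": {
    "EntityFramework.Commands": "7.0.0-rc1-final",
    "EntityFramework.MicrosoftSqlServer": "7.0.0-rc1-final",
    "EntityFramework.MicrosoftSqlServer.Design": "7.0.0-rc1-final"
  },
  "commands": {
    "ef": "EntityFramework.Commands"
  }
}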

Run Entity Framework Commands

  1. Open command-prompt in Administrator mode
  2. Execute dnvm list and validate that you are using the latest runtime, as shown below. If not, use dnvm upgrade to download and use the latest runtime.

    image

 

  3. Navigate to the folder of your console application project.
  4. Execute dnx ef and validate that you are able to access the EF commands, as shown below,

    image

Scaffold database to Code-First DBContext

Execute dnx ef dbcontext scaffold -h. This will list all the parameters required to scaffold the DbContext against the target database, as shown below,

image

At a bare minimum you need to input two arguments: a [connection] to the target database and a [provider] to use for working with the database. I will also pass -c to specify a custom name for the DbContext. You can try this out with any database you have at hand. I happen to have a StudentDatabase with just two tables.

image

In order to generate the DbContext against the database you can execute the following EF command,

dnx ef dbcontext scaffold "data source=.;initial catalog=StudentDatabase;Integrated Security=true" "EntityFramework.MicrosoftSqlServer" -c "StudentDbContext"

This will create the DbContext against the target database, and you should be able to see the classes already included in the project in the VS Solution Explorer, ready for your use, as shown below,

image

Query the database using the DbContext

You should now have a DbContext scaffolded using the EF command. Let's try querying for some data using the code below,

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace App.Console
{
    public static class Program
    {
        public static void Main(string[] args)
        {
            // Create DbContext.
            var context = new StudentDbContext();

            // Display students.
            context.Student
                .ToList()
                .ForEach(s => System.Console.WriteLine("Student ID: {0}, First Name: {1}, Last Name: {2}", s.Id, s.FirstName, s.LastName));

            System.Console.ReadKey();
        }
    }
}

Voila! We are able to query the database, without having to type all the cumbersome code.

image

Wait a minute! How does the application know how to connect to the database? Well, as part of the scaffold process the DbContext is automatically configured to use the connection string we provided to the scaffold command. This is not a nice way to maintain the connection string. The new framework supports much better ways to overcome this issue by enabling configuration options to be passed as dependencies, which we will look at in a future post.

An important aspect of EF is migrations, whereby you are able to maintain versions of your data model as it evolves over time. I will be writing up more on how you can perform migrations over your data model in the coming series of posts.
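
As a quick preview, and assuming the same project setup as above, adding and applying a migration boils down to two EF commands (the migration name is illustrative),

dnx ef migrations add InitialCreate
dnx ef database update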

Happy Coding!

Thursday, June 6, 2013

SignalR – Real-time application development

Introduction

Real-time applications for the web are none other than the usual client-server applications with one distinct feature: they accomplish functionality with very little (near real-time) or zero (real-time) latency. There have been a number of traditional approaches in the past that were employed to achieve such functionality using a variety of methodologies. One such mechanism is Comet, an umbrella term covering a variety of models/solutions for achieving real-time application development over HTTP (Hyper Text Transfer Protocol) (e.g. streaming, hidden iframe, Ajax with long polling, etc.).
 
There are a number of third-party frameworks available today, but until recently Microsoft had no streamlined mechanism that enabled a straightforward approach to implementing real-time applications. Although frameworks such as WCF (Windows Communication Foundation) did support similar functionality using a couple of bindings, wsDualHttpBinding for web services and httpPollingDuplex for Silverlight-based applications, they had limited features in terms of scale and functionality. Apart from that, you were pretty much on your own if you needed to develop an application that required real-time functionality using ASP.NET.
 

Limitations of a typical Request-Response oriented Application

HTTP functions on the request-response principle, where the client makes a request and the server responds. This is the case with any web application that runs on HTTP, as illustrated below,
 
 Request response
 
This mechanism does not provide the means to achieve real-time data transfer, mainly because the server is not able to provide any updates unless the client specifically requests them. One typical way developers overcame this limitation is periodic polling, where the client keeps on requesting until the server has an update to provide, as illustrated below,
 
 
Although the above mechanism tries to eliminate the aforementioned drawback of implementing real-time web applications over HTTP, it still cannot be considered an appropriate solution.
 

ASP.NET SignalR

ASP.NET SignalR is a framework maintained and developed by Microsoft that provides just the right functionality to help you achieve seamless development of real-time applications using ASP.NET. SignalR incorporates a variety of transport mechanisms/modes and handles failover when negotiation of a specific transport mechanism for real-time message exchange fails. The framework also supports a straightforward development approach by exposing an API over the core functionality, enabling you to develop applications in a breeze. SignalR is written to be scalable as your application grows and to perform well even when the application requires handling many concurrent users at a given time.
 
SignalR provides four mechanisms to overcome the limitations associated with the traditional HTTP request-response principle when developing real-time web applications. Two of these mechanisms use new features introduced with HTML5, namely WS (Web-Sockets) and SSE (Server-Sent-Events). As of writing, these two features are still drafts within the HTML5 specification, although most modern browsers already support them and will continue to evolve in future. The other two mechanisms SignalR supports are Forever Frame and Long Polling.
 
Once the SignalR framework is integrated into an application, it will choose the best mechanism based on the browser/server capabilities and negotiate the transport accordingly. From a developer standpoint, all you will do is code against the high-level API that encapsulates the negotiation of which mechanism to use. The key point to understand is that the code you write using the framework is the “SAME” regardless of which transport mode is used. Listed below is more information on each of the transport modes supported by SignalR.
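
As a taste of that high-level API, a minimal hub might look like the sketch below (the hub and method names are illustrative; the programming models themselves are covered in a future post),

using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    // Called by clients; broadcasts the message to every connected client
    // over whichever transport SignalR negotiated for each of them.
    public void Send(string name, string message)
    {
        Clients.All.broadcastMessage(name, message);
    }
}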
 

WS (Web-Sockets)

WS is a new protocol (i.e. ws:// or wss://) introduced with HTML5 and is the most appropriate technology for building real-time applications. That is due to the fact that WS enables the creation of a full-duplex bidirectional channel over HTTP, enabling the client or the server to send messages independently.
 
 
As illustrated above, upon the client creating a WS connection to the server, both server and client utilize a full-duplex channel over HTTP, enabling the server to send event data and the client to send data via the same connection. WS is preferred over the other options below, due to the fact that it is very performant and less resource intensive in its essence.
 
This feature is a new addition to HTML and hence requires alterations on an architectural level: WS must be supported by the web server, the client browser and all intermediate associates (e.g. proxies, firewalls, and the server, client and public network infrastructure). In order to enable WS with ASP.NET, the prerequisites are that the application must be running on ASP.NET 4.5 or MVC 4, on IIS8 (or IIS8 Express within Windows Server 2012), with a WS compatible browser.
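
On the browser side, the raw API is as simple as the following sketch (the endpoint URL is hypothetical; SignalR normally hides this behind its own API),

// Open a full-duplex socket; both sides may now send independently.
var socket = new WebSocket('ws://example.com/echo');
socket.onopen = function () { socket.send('hello'); };
socket.onmessage = function (e) { console.log(e.data); };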
 

SSE (Server-Sent-Events)

SSE is again an HTML5 feature, which enables event-based streaming over HTTP. In contrast to WS, SSE is a mere addition to the JS API (i.e. the EventSource object), hence requiring no major change architecturally. This feature is supported by most browsers available today.
 
 
SSE is not a duplex connection like WS. As illustrated above, it is a one-way connection over which the server can send updates to the client. SSE is achieved by the client creating an EventSource object via JS and the server flushing event data as and when an update is triggered, without terminating the stream. Should there be any client update to be sent, it is sent via a separate request to the server and not via the event source created between the server and the client, which can be considered somewhat of a limitation.
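
For reference, the client side of SSE is a few lines of JS (again with a hypothetical endpoint; SignalR wraps this for you),

// Subscribe to a one-way stream of server events.
var source = new EventSource('/updates');
source.onmessage = function (e) {
    console.log('Server said: ' + e.data);
};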
 

Forever Frame

This mechanism is a way of using existing HTML functionality to achieve real-time behaviour. It is achieved by creating a hidden iframe within the client to connect to the server and using scriptlets sent by the server to trigger updates within the client page. The functionality is similar to SSE, although this technique uses the readily available HTML iframe element to achieve it.
 
 
As illustrated, the scriptlets sent by the server are appended to the iframe body, and the client handles reading and executing each script accordingly.
 

Long Polling

This is a technique that works across all browsers. Long polling is the last resort used by SignalR when determining a transport mechanism. Long polling works by sending Ajax-based requests to the server, where the server holds on to the request for a definite period of time and then terminates it with an empty response. However, when there is a server event that needs to be sent across to the client, the server immediately sends the response for the client to use, and the client initiates another request to the server that will again listen for any server update.
 
 
Long polling is considered more resource intensive compared to the other methods supported by SignalR. This is mainly due to the connections continuously being initiated and terminated between the server and the client.
 

SignalR transport precedence

The above four mechanisms are supported by the SignalR framework, and it will utilize the most effective one based on the capabilities of the client/server, falling back to another mechanism on failure. The order of fallback within the framework is as follows,
1. Web-Sockets: SignalR will try to determine if the server/client or intermediate channels support Web-Sockets and use it.
2. Server-Sent-Events: Falls back from Web-Sockets if the browser supports Server-Sent-Events.
3. Forever Frame: Falls back from Server-Sent-Events if the browser supports this mechanism.
4. Long-Polling: The fail-safe mechanism utilized by SignalR in cases where none of the above technologies are supported.
 

Summary

SignalR is a framework maintained by Microsoft that provides features and means by which real-time messaging can be achieved between the client and the server over HTTP. SignalR supports four main transport mechanisms (i.e. Web-Sockets, Server-Sent-Events, Forever Frame and Long Polling). SignalR also provides an intuitive API and exposes multiple programming models that aid ease of development, which will be looked at in a future post with demos.

Friday, May 3, 2013

Service Reference Generation using svcutil.exe

Duplicate objects being generated.

When generating a single service reference code file for multiple services, I encountered an issue where one of the services being used exposes a System.Data.DataSet as the return data type. The issue I had was that the generated objects seemed to get duplicated, generated in two different ways as shown below,
 
The code below is generated via the DataContractSerializer, as you can clearly tell from some of the attributes used on the class and properties (i.e. DataContractAttribute and DataMemberAttribute).
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "4.0.0.0")]
[System.Runtime.Serialization.DataContractAttribute(Name="Person", Namespace="http://tempuri.org/")]
public partial class Person : object, System.Runtime.Serialization.IExtensibleDataObject
{
 
 private System.Runtime.Serialization.ExtensionDataObject extensionDataField;
 
 private string NameField;
 
 public System.Runtime.Serialization.ExtensionDataObject ExtensionData
 {
  get
  {
   return this.extensionDataField;
  }
  set
  {
   this.extensionDataField = value;
  }
 }
 
 [System.Runtime.Serialization.DataMemberAttribute(EmitDefaultValue=false)]
 public string Name
 {
  get
  {
   return this.NameField;
  }
  set
  {
   this.NameField = value;
  }
 }
}
 
The code below is generated via the XmlSerializer, as you can clearly tell from some of the attributes used on the class and properties (i.e. XmlTypeAttribute and XmlElementAttribute).
[System.CodeDom.Compiler.GeneratedCodeAttribute("svcutil", "4.0.30319.1")]
[System.SerializableAttribute()]
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.ComponentModel.DesignerCategoryAttribute("code")]
[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://tempuri.org/")]
public partial class Person
{
    
    private string nameField;
    
    [System.Xml.Serialization.XmlElementAttribute(Order=0)]
    public string Name
    {
        get
        {
            return this.nameField;
        }
        set
        {
            this.nameField = value;
        }
    }
}
 
The reason for this duplication is that one of the services I was interfacing with had the type System.Data.DataSet being returned from a service method. When the DataContractSerializer schema importer used by svcutil tries to infer System.Data.DataSet, it fails to consume the XML schema associated with it and resolves to the XmlSerializer to generate the objects for this service.

Overcoming the problem

It is clear that the DataContractSerializer cannot infer XML-schema-defined types. Hence, in order to overcome this problem, the alternative is to force svcutil to use the XmlSerializer for all the services being referenced, like so,
 
svcutil /r:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /serializer:XmlSerializer /out:MyService.cs /namespace:http://tempuri.org/,MyService.MyServiceReference https://www.mydomain.com/service1.asmx https://www.mydomain.com/service2.asmx
 
With the above command you should be able to generate the service code, which solves the duplicate objects being generated. However, there is another minor issue: the generated proxy objects do not fall under the provided namespace MyService.MyServiceReference but are instead placed in the global namespace, which will cause issues if you have similar names defined elsewhere in your solution.
 
The fix is just a minor hack. If you analyze the generated MyService.cs file, you will notice that the namespace only wraps the service functions; all you need to do is move the namespace definition to the beginning of the file so that it covers the proxy object definitions. This should give you a complete service reference when referring to multiple services that are required to be XML-serialized due to the aforementioned reason.

Monday, August 20, 2012

Entity Framework 5 with enums

Enum in Model Browser

Entity Framework 4 and earlier version limitations

Enums, or enumeration types, are a very useful feature of the .NET framework that lets you define a collection of logical options as a specific type. Although this feature is a part of the language, it was not supported by the ADO.NET Entity Framework. In Entity Framework 4 and earlier versions, there was no way you could define a scalar property as an enum type in any of your entities. It was, however, possible to explicitly associate integral scalar properties with an enum via an explicit cast to the desired enum type.

The caveat with this explicit cast is that a developer could cast any integral entity scalar property to any enum type, provided the enum being cast to has the integer value defined as its underlying value, meaning that you would need to make sure you perform the cast with the correct enum type. This is not a major issue if you have very few enums defined in your application, but it would be a point of confusion when there are more than a few enums with similar names.

Other approaches include creating additional properties that encapsulate the entity property by returning an enum based on its value, which by its very nature can only be accomplished if the data model is instructed to use custom POCO classes.

Entity Framework 5 and Enum Support

Fortunately, ADO.NET Entity Framework 5 will officially support the ability to define enum types, or use existing enum types, as part of your entities' scalar properties. Listed below are the very basic steps to add a simple enum property to your entity. I have a very basic sample database schema to illustrate this. You can download the sample solution from here, which also has a local database with this schema.

Step 1: Defining the sample database schema

Database Schema

The above schema represents a relational database that holds order information related to a customer and product. An order in the Order table can be in one of two states, either “Delivered” or “Pending Delivery”. This state of the order is represented in the Status column, which is of type int, within the Order table.
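
For reference, the enum type we will map this column to later in the post could be defined as follows (a sketch; the member names and underlying values must match what you configure in the designer),

// Underlying values must line up with the integers stored in the Status column.
public enum OrderStatus
{
    Pending = 0,
    Delivered = 1
}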

Step 2: Generating an Entity Data Model from the Schema

Note that you could also model the entities first and generate a database script that could be deployed as a database. However, for this scenario I will be generating the Entity Data Model over the existing database schema. In order to generate your Entity Data Model using an existing database,

  1. Right-click the project that needs to include the Entity Data Model, create an ADO.NET Entity Data Model with a name that best suits your data model and click Next.
  2. Select the Generate from database option and click Next.
  3. Define your connection, provide a connection string name and click Next.
  4. Select the tables relevant to the model, provide a valid namespace for the entities and click Finish.

Your Entity Data Model should look like the following,

Entity Data Model

Step 3: Setting entity property as an enum

With ADO.NET Entity Framework 5 you are able to define new enum types that best suit your use case, or use any existing enum type definitions that are already part of the project. In order to map the Order table's Status property to an enum,

  1. Right-click the property and select Convert to Enum from the context menu, which will bring up the following dialog, where I have filled in the required information.
     Enum Dialog
  2. If you desire to associate an enum type which is already part of the project, you can do so by checking Reference external type and providing the fully qualified name of the enum type.
  3. You can always modify or add options to your enum type by locating the enum created under Model Browser –> Enum section, as shown below.
    Model browser

Step 4: Using the enum types as part of the entity.

Upon having the property configured to use an enum type, it's just a matter of writing code against the entities the same way you would against regular old objects that contain enum types. Listed below is the code sample for doing just that.

// Delivered orders
Console.WriteLine("Delivered Orders");
WriteOrders(orders.Where(o => o.Status == OrderStatus.Delivered));

// Pending orders
Console.WriteLine("Pending Orders");
WriteOrders(orders.Where(o => o.Status == OrderStatus.Pending));

You can download the sample solution described in this blog post from here and go through the application.

Sunday, July 15, 2012

WCF Client Request / Response Message Inspection

Very recently I encountered a requirement for the inspection of WCF messages passed to and from a service. This feature was required on the client side, as the client application's requirement was to store these messages as log entries in the database. Although WCF does not support this out of the box, it was pretty darn easy to implement just by implementing two (out of many) interfaces in the WCF framework.

Before I go any further, I need to mention that the blog post “Capture XML In WCF Service” helped me out a lot, although it talks about message inspection on the service host itself, whereas this post is about message inspection on the client side. I have further made a few enhancements to the code. So let's get started.

Intercepting WCF messages

In order to inspect messages going out of and coming into the client, we need to implement the IClientMessageInspector interface contract, as seen below,

/// <summary>
/// Class to perform custom message inspection as a behaviour.
/// </summary>
public class MessageInspectorBehavior : IClientMessageInspector
{
    public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {
        // Do nothing.
    }

    public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
    {
        return null;
    }
}

Upon attaching this to the client runtime, WCF ensures the two methods listed above are called when a request is sent to and a response is received from the service. This is where we will write our custom code, which we will see later in this post.

The next important point is that we need to attach this inspector to the client runtime, and this can be done by creating our own custom endpoint behavior by implementing the IEndpointBehavior interface, as seen below (note that I have implemented this interface on the same class that implements the IClientMessageInspector interface),

/// <summary>
/// Class to perform custom message inspection as a behaviour.
/// </summary>
public class MessageInspectorBehavior : IClientMessageInspector, IEndpointBehavior
{
    public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {
        // Do nothing.
    }

    public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
    {
        return null;
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
    {
        // Do nothing.
    }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        // Do nothing.
    }

    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    {
        // Do nothing.
    }

    public void Validate(ServiceEndpoint endpoint)
    {
        // Do nothing.
    }
}

Next, we need to integrate the message inspector into the behavior, and this is done in the ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) method as seen below,

public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
{
    // Add the message inspector as part of the endpoint behaviour.
    clientRuntime.MessageInspectors.Add(this);
}

Now we have a custom inspector attached to a custom behavior, so how do we get the request and response messages out of the inspector for logging? Well, there are a few ways to do this, but my preference was to embed an event in the inspector that users can subscribe to if required and be notified when a request or response message is inspected. Here is the code that does just that,

/// <summary>
/// Class to perform custom message inspection as a behaviour.
/// </summary>
public class MessageInspectorBehavior : IClientMessageInspector, IEndpointBehavior
{
    // Acts as the event to notify subscribers of message inspection.
    public event EventHandler<MessageInspectorArgs> OnMessageInspected;

    public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {
        if (OnMessageInspected != null)
        {
            // Notify the subscribers of the inspected message.
            OnMessageInspected(this, new MessageInspectorArgs { Message = reply.ToString(), MessageInspectionType = eMessageInspectionType.Response });
        }
    }

    public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
    {
        if (OnMessageInspected != null)
        {
            // Notify the subscribers of the inspected message.
            OnMessageInspected(this, new MessageInspectorArgs { Message = request.ToString(), MessageInspectionType = eMessageInspectionType.Request });
        }
        return null;
    }

   // Rest of the class code...

}

The MessageInspectorArgs class and the eMessageInspectionType enum are custom implementations for passing event arguments to subscribers, identifying the event-related information. The code for these definitions is seen below,

/// <summary>
/// Enum representing message inspection types.
/// </summary>
public enum eMessageInspectionType { Request = 0, Response = 1 };

/// <summary>
/// Class to pass inspection event arguments.
/// </summary>
public class MessageInspectorArgs : EventArgs
{
    /// <summary>
    /// Type of the message inspected.
    /// </summary>
    public eMessageInspectionType MessageInspectionType { get; internal set; }

    /// <summary>
    /// Inspected raw message.
    /// </summary>
    public string Message { get; internal set; }
}

Finally, it's time to integrate it into the client application. Listed below is the code for that. It's pretty concise and easy to implement with very little or no effort.

class Program
{
    static void Main(string[] args)
    {
        string request = string.Empty;
        string response = string.Empty;

        // Instantiate the service.
        ServiceClient sc = new ServiceClient();

        // Instantiate the custom inspector behaviour.
        MessageInspectorBehavior cb = new MessageInspectorBehavior();

        // Add the custom behaviour to the list of service behaviours.
        sc.Endpoint.Behaviors.Add(cb);

        // Subscribe to message inspection events and process the event invocation.
        cb.OnMessageInspected += (src, e) =>
        {
            if (e.MessageInspectionType == eMessageInspectionType.Request) request = e.Message;
            else response = e.Message;
        };

        // Call the service.
        var x = sc.GetData(1);

        // Display or log the results.
        Console.WriteLine(string.Format("Request\nMessage: {0}\n\nResponse\nMessage: {1}", request, response));

        Console.ReadKey();
    }
}

You can download the sample code from here. Let me know your feedback, suggestions or even improvements for that matter.

TreeView Checkbox jQuery Selection

Adding jQuery Hierarchical Selection

The ASP.NET TreeView control is a useful control when hierarchical data representation is required in an .aspx page. The TreeView control inherently supports enabling checkboxes for node-level selection. I came across a problem where I needed a solution for the following:

  1. Checking a node should cause all its child nodes to be selected.
  2. Checking a node should cause all its parent nodes to be selected.

In order to do this, it was just a matter of adding a few lines of jQuery to the .aspx page. Thanks to jQuery's wealth of convenient methods, it resulted in only a few lines, sketched below for your reference.
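
Note this is a sketch rather than the exact demo code; it assumes the TreeView renders each node as a table followed by a container div (with an ID ending in "Nodes") holding its children, which is what the control typically emits, and that the rendered tree element has the hypothetical ID treeView,

$(function () {
    $('#treeView :checkbox').click(function () {
        var checked = this.checked;

        // Propagate the state down to every descendant node's checkbox.
        $(this).closest('table').next('div[id$="Nodes"]')
            .find(':checkbox').prop('checked', checked);

        // When checking, walk up and check every ancestor node as well.
        if (checked) {
            $(this).parents('div[id$="Nodes"]').each(function () {
                $(this).prev('table').find(':checkbox').prop('checked', true);
            });
        }
    });
});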

You can download the DEMO solution from here.

Introduction to C# and R Integration using R-(D)COM Server

Introduction

Quantitative analysis plays a major role in the financial industry by providing services with the use of numerical, statistical and quantitative techniques. Such services may include investment management, portfolio optimization, risk management, derivative pricing, etc. In many of the cases mentioned above, statistics is a subject area that provides invaluable concepts/theories to aid the process of analysis. Similar to using a language when developing software, modeling statistical concepts/theories can also be done with the help of a language.

One such language that complements the subject of statistics is the R language. However, R in its very own nature is nothing more than a language to perform statistical analysis. This creates the background for integrating R with C#, thereby combining the power of a .NET language and its related frameworks with a powerful statistical language.

Purpose

In this post I aim to integrate C# with R using the R-(D)COM Server, which functions as the bridge between the two languages. I have come across a few articles, tutorials and quite a number of forum threads detailing the integration process between C# and R. However, it was not very straightforward when I tried it out for myself, hence I thought of sharing a detailed explanation with the intention of helping you save some time if you are ever to try the same.

Prerequisites

1. R Language: The latest version of R for Windows can be downloaded from The Comprehensive R Archive Network, aka the CRAN site.
2. R-(D)COM Server: This can be downloaded from the same CRAN site by clicking on the Other link on the left and selecting the R-(D)COM Server link.
3. Visual Studio: Visual C# Express can be downloaded from here.

Solution

This solution can be performed on both Windows 7 and Windows Server 2008 R2. I'll keep it simple by listing the things you need to do step-wise, with detail wherever needed.

Step 1: Install R

Start the R setup as administrator, go with the default selections and complete the R installation.

Step 2: Install rscproxy.dll

The rscproxy.dll is a required DLL for communicating with the R-(D)COM Server, and by default the native R installer will not include it. To install the DLL, open R as administrator and type in the following command,
> install.packages("rscproxy")
Select a mirror and click OK. This will install rscproxy as part of the R library.
Copy the rscproxy.dll from the installed location %PROGRAMFILES%/R/R-2.14.1/library/rscproxy/libs/i386 to the %PROGRAMFILES%/R/R-2.14.1/bin/i386 directory.

Step 3: Configure R_HOME and path Environment Variables

Add a new system environment variable named R_HOME pointing to the root directory of the R installation. To do this, open the command prompt –> type sysdm.cpl –> go to the Advanced tab –> click Environment Variables… –> click New under the System variables panel, as seen below.

image

Edit the path environment variable and add the location of the i386 directory within the bin directory of R, as seen below.
image

Step 4: Install R-(D)COM Server

Start the R-(D)COM Interface setup as administrator, go with the default selections and complete the installation.

Step 5: Verify R and R-(D)COM Server

By default, the R-(D)COM Server setup installs a set of test files to verify and test connections between the (D)COM Server and R. To perform the basic test, navigate to Start –> All Programs –> R –> (D)COM Server –> Server 01 – Basic Test. When the test dialog appears, click on Start R. You should see the initialization proceed and a basic test performed, as shown below.

image

Step 6: Integrating C# and R using R-(D)COM Server

This tutorial will limit the example to a very basic Console Application which evaluates a very basic R command.

1. Open Visual Studio and create a new Console Application.

2. Add the R-(D)COM Server references as shown below.
image

3. Add the following code to the Program.cs class.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using STATCONNECTORSRVLib;
using System.Runtime.InteropServices;

namespace R_ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            StatConnector connector = new StatConnector();

            try
            {
                connector.Init("R");

                // Create vector x.
                connector.EvaluateNoReturn("x <- c(2,3,5,1,4,4)");

                // Basic calculations.
                Console.WriteLine(String.Format("sum(x): {0:0.00}", (double)connector.Evaluate("sum(x)")));
                Console.WriteLine(String.Format("mean(x): {0:0.00}", (double)connector.Evaluate("mean(x)")));
                Console.WriteLine(String.Format("sd(x): {0:0.00}", (double)connector.Evaluate("sd(x)")));
                Console.WriteLine(String.Format("median(x): {0:0.00}", (double)connector.Evaluate("median(x)")));
            }
            catch (COMException)
            {
                // Write the error text reported by the connector if a COM error occurred.
                Console.WriteLine(string.Format("Unexpected COM Interop Error: {0}", connector.GetErrorText()));
            }
            finally
            {
                if (connector != null) connector.Close();
            }

            Console.ReadKey();
        }
    }
}
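The connector is not limited to scalar results; it also exposes SetSymbol(…., ….) and GetSymbol(….) for moving data between .NET and the R session. Below is a minimal sketch, reusing the initialized connector instance from above and assuming that an R numeric vector marshals back to .NET as a System.Array of doubles:
// A sketch only: push a .NET array into the R session as the symbol "y".
connector.SetSymbol("y", new double[] { 1.5, 2.5, 3.5 });
connector.EvaluateNoReturn("y <- y * 2");

// Assumption: the R numeric vector marshals back as a System.Array.
Array y = (Array)connector.GetSymbol("y");
foreach (object value in y)
{
    Console.WriteLine(value); // 3, 5, 7
}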
In future posts I will include examples of R integrated with ASP.NET, where you can leverage the power of a great web framework to do some really neat stuff. Please leave your feedback on any issues you encounter and I will get back to you as soon as I can.

Monday, July 2, 2012

Asp.Net Multiple Row Edit GridView Control

About the ASP.NET Gridview

The ASP.NET GridView control is a versatile control for web applications that need to present and manipulate tabular data. It enables not just presentation of data but also extended functionality such as selecting, editing, deleting and paging, to name a few.

ASP.NET GridView Limitations

I recently came across a requirement to edit multiple rows of a GridView at once. By default the GridView only allows a single row to be edited at a given instance, which was not sufficient. During my research I came across a few examples on the web that suggested the use of a page-wide variable combined with the Visible property of controls added within the ItemTemplate of the GridView control, similar to the following,
<ItemTemplate>
    <asp:Label ID="lblProductName" Visible='<%# !(bool) IsEditMode %>' runat="server" Text='<%# Eval("ProductName") %>' />
    <asp:TextBox ID="txtProductName" Visible='<%# IsEditMode %>' runat="server" Text='<%# Eval("ProductName") %>' />
</ItemTemplate>
The IsEditMode property in the above code segment is a page-wide variable that is set to true when the user requests to edit rows (a minimal sketch of such a code-behind property appears after the list below). This method has several drawbacks,
  1. Changes all rows to edit mode. Cannot selectively specify which rows to edit. 
  2. All controls are placed within the ItemTemplate tag of the GridView thus making it difficult to clearly differentiate between item and edit controls. 
  3. If the page contains many GridViews, the developer must take responsibility for managing the global properties that enable the edit mode of each grid.
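As referenced above, the page-wide variable in that approach is typically nothing more than a property on the code-behind that is flipped before rebinding. A minimal sketch (class name and control IDs are assumed for illustration):
public partial class ProductsPage : System.Web.UI.Page
{
    // Page-wide flag consumed by the Visible='<%# IsEditMode %>' bindings in the markup.
    protected bool IsEditMode { get; set; }

    protected void btnEdit_Click(object sender, EventArgs e)
    {
        // Flipping the flag switches every row to edit mode on the next bind;
        // there is no per-row granularity, which is drawback 1 above.
        IsEditMode = true;
        GridView1.DataBind();
    }
}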

Enabling ASP.NET GridView Multiple Row Edit

In order to rectify this I spent some time extending the GridView control to mitigate the above-mentioned drawbacks and enable multiple-row edit. I will walk you through the most important code that enables these features. You can download the code from here.

Step 1: Adding the check box column for row selection

In order to provide the user with the ability to select the rows to be edited, a checkbox column was clearly required. This is achieved by adding a TemplateField to the GridView control, setting the TemplateField instance's ItemTemplate and HeaderTemplate properties to an implementation that contains a CheckBox control for header and row selection, and then adding the constructed TemplateField to the GridView's column collection. In essence we are required to create a class that implements the ITemplate interface in the System.Web.UI namespace, which provides the method stubs for our custom implementation of the TemplateField instance's ItemTemplate and HeaderTemplate. Listed in the code block below is the implementation of the custom CheckBoxTemplate,
/// <summary>
/// The selector check box template column class.
/// </summary>
public class CheckBoxTemplate : ITemplate
{
    private ListItemType Type { get; set; }

    /// <summary>
    /// Initializes a new instance of the class.
    /// </summary>
    /// <param name="type">The item type (header or item).</param>
    public CheckBoxTemplate(ListItemType type)
    {
        this.Type = type;
    }

    public void InstantiateIn(Control container)
    {
        CheckBox chkSelector = new CheckBox();
        chkSelector.Checked = false;
        // The selector attribute is used by the JS code in the MultiEditGridView.js file.
        chkSelector.InputAttributes.Add("selector", Type == ListItemType.Header ? "headerCheckBox" : "rowCheckBox");
        // Call the appropriate function in the MultiEditGridView.js file based on the checkbox type.
        chkSelector.InputAttributes.Add("onClick", Type == ListItemType.Header ? "MultiEditHeaderCheckBoxSelect(this)" : "MultiEditRowCheckBoxSelect(this)");
        container.Controls.Add(chkSelector);
    }
}
Note the constructor of the class, which takes a ListItemType (in the System.Web.UI.WebControls namespace) to differentiate between the types of templates being created. The reason for this is that I required different client-side behavior via JavaScript for the header checkbox and the row-level checkboxes, such that when the header checkbox is checked all corresponding row-level checkboxes are checked, and when all row-level checkboxes are selected the header checkbox is checked. This wiring is performed in the InstantiateIn method where the onClick attribute is added. Last but not least, to add the checkbox column to the grid we are required to create a TemplateField instance that has its ItemTemplate and HeaderTemplate set to an implementation of the custom CheckBoxTemplate class. This is performed in the overridden CreateColumns(…., ….) method of the MultiEditGridView class. When adding the customized TemplateField we need to ensure that the checkbox column is always the first column in the grid. The following code block lists the code that achieves this,
protected override ICollection CreateColumns(PagedDataSource dataSource, bool useDataSource)
{
    ArrayList columnCollection = (ArrayList)base.CreateColumns(dataSource, useDataSource);

    // Inserts an additional checkbox column at the beginning of the grid if multi edit is enabled.
    if (EnableMultiEdit)
    {
        SelectorTemplateFiled.HeaderTemplate = new CheckBoxTemplate(ListItemType.Header);
        SelectorTemplateFiled.ItemTemplate = new CheckBoxTemplate(ListItemType.Item);

        columnCollection.Insert(0, SelectorTemplateFiled);
    }

    return columnCollection;
}
Since we need to make sure the customized checkbox TemplateField is added as the first column, we first call the base class's CreateColumns(…., ….) to create the default column fields, and finally insert the customized TemplateField at the first position via columnCollection.Insert(0, ….).

Step 2: Switching the selected rows to edit mode

Upon the user selecting several/all checkboxes we need to toggle the state of each row accordingly. This is accomplished in the overridden CreateRow(…., …., …., ….) method. The following code block details this,
protected override GridViewRow CreateRow(int rowIndex, int dataSourceIndex, DataControlRowType rowType, DataControlRowState rowState)
{
    // Enables the edit template if the row's index is among the selected checkbox indexes.
    if (EditFlag && EditIndexes.Contains(rowIndex))
    {
        return base.CreateRow(rowIndex, dataSourceIndex, rowType, DataControlRowState.Edit);
    }
    else
    {
        return base.CreateRow(rowIndex, dataSourceIndex, rowType, rowState);
    }
}
Note the if condition: it verifies that the grid is currently in edit mode and that the current row index is contained in the EditIndexes collection, effectively toggling between the EditItemTemplate and the ItemTemplate of the grid row.

Step 3: Providing an overloaded DataBind() method

The GridView control's DataBind() method provides no indication of whether the grid should toggle edit mode on or off. Hence we overload it with a DataBind(….) method that lets the developer toggle the edit mode on/off when required.
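The exact implementation ships with the downloadable solution; below is a minimal sketch of the overload, assuming EditFlag is a bool flag and EditIndexes is a List<int> on the MultiEditGridView, both consumed by the overridden CreateRow(…., …., …., ….) method shown earlier,
// A minimal sketch of the DataBind(....) overload.
public void DataBind(bool isEditEnabled)
{
    EditFlag = isEditEnabled;
    EditIndexes.Clear();

    if (isEditEnabled)
    {
        // Record the index of every row whose selector checkbox is checked.
        // The selector checkbox lives in the first cell, since the checkbox
        // TemplateField is always inserted at position 0 in CreateColumns.
        foreach (GridViewRow row in this.Rows)
        {
            foreach (Control control in row.Cells[0].Controls)
            {
                CheckBox chkSelector = control as CheckBox;
                if (chkSelector != null && chkSelector.Checked)
                {
                    EditIndexes.Add(row.RowIndex);
                }
            }
        }
    }

    // Perform the usual binding, which invokes the overridden methods in sequence.
    base.DataBind();
}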
Based on the isEditEnabled parameter of the DataBind(….) method, the MultiEditGridView toggles edit mode as specified in the earlier section. This method initializes the properties that are used by the overridden CreateRow(…., …., …., ….) method. Note the foreach loop where the EditIndexes are populated based on the selection of the custom checkbox TemplateField. Finally we call the GridView control's DataBind() method to perform the usual binding, which will call the overridden methods in sequence.

Step 4: Additional Information

The JavaScript related to the custom checkbox TemplateField column is located in the MultiEditGridView.js file, and in order to render the script when the control is used we are required to register the script on the page. Listed below is the code block that achieves this.
protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    // Register the JS on page.
    Page.ClientScript.RegisterClientScriptResource(typeof(CustomControls.MultiEditGridView), "CustomControls.MultiEditGridView.js");
}
Note the RegisterClientScriptResource(…., ….) method, which specifies the type of control that registers the script (MultiEditGridView in this case) and the name of the resource to be registered. When using this method it is important that MultiEditGridView.js is set to be an Embedded Resource under the Properties window. Additionally, the following code must be added to the AssemblyInfo.cs file for the required functionality,
// Add the MultiEditGridView.js as a web resource.
[assembly: WebResource("CustomControls.MultiEditGridView.js", "text/javascript")]

Step 5: Using the MultiEditGridView control

In order to use the MultiEditGridView control, add the CustomControls project to the web solution you are working on and add a reference to the CustomControls project. Rebuild the whole solution, then drag the MultiEditGridView onto your page and create the required template fields based on your data source.
 
NOTE: The embedded JavaScript of the MultiEditGridView is based on jQuery. Hence you will need a reference to the jQuery API in your web application.

Summary

Listed below is an example of a sample implementation,
 
Products.aspx page
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Products.aspx.cs" Inherits="MultiEditGridViewDemo.Products" %>
<%@ Register Assembly="CustomControls" Namespace="CustomControls" TagPrefix="cc" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Multi Edit Grid View Demo</title>
    <script src="Scripts/jquery-1.4.4.min.js" type="text/javascript"></script>
</head>
<body>
    <form id="form1" runat="server">
    <cc:MultiEditGridView ID="MultiEditGridView1" runat="server" AutoGenerateColumns="False">
        <Columns>
            <asp:TemplateField HeaderText="Product Name">
                <ItemTemplate>
                    <asp:Label ID="lblName" runat="server" Text='<%# Eval("Name") %>'></asp:Label>
                </ItemTemplate>
                <EditItemTemplate>
                    <asp:TextBox ID="txtName" runat="server" Text='<%# Eval("Name") %>'></asp:TextBox>
                </EditItemTemplate>
            </asp:TemplateField>
            <asp:TemplateField HeaderText="Product Category">
                <ItemTemplate>
                    <asp:Label ID="lblCategory" runat="server" Text='<%# Eval("Category") %>'></asp:Label>
                </ItemTemplate>
                <EditItemTemplate>
                    <asp:TextBox ID="txtCategory" runat="server" Text='<%# Eval("Category") %>'></asp:TextBox>
                </EditItemTemplate>
            </asp:TemplateField>
            <asp:TemplateField HeaderText="Product Description">
                <ItemTemplate>
                    <asp:Label ID="lblDescription" runat="server" Text='<%# Eval("Description") %>'></asp:Label>
                </ItemTemplate>
                <EditItemTemplate>
                    <asp:TextBox ID="txtDescription" runat="server" Text='<%# Eval("Description") %>'></asp:TextBox>
                </EditItemTemplate>
            </asp:TemplateField>
            <asp:TemplateField HeaderText="Is Available">
                <ItemTemplate>
                    <asp:CheckBox ID="chkIsAvailable" runat="server" Checked='<%# Eval("IsAvailable") %>' Enabled="false">
                    </asp:CheckBox>
                </ItemTemplate>
                <EditItemTemplate>
                    <asp:CheckBox ID="chkIsAvailable" runat="server" Checked='<%# Eval("IsAvailable") %>'>
                    </asp:CheckBox>
                </EditItemTemplate>
            </asp:TemplateField>
        </Columns>
    </cc:MultiEditGridView>
    <br />
    <asp:Button ID="btnEdit" runat="server" OnClick="btnEdit_Click" Text="Edit" />
    <asp:Button ID="btnUpdate" runat="server" OnClick="btnUpdate_Click" Text="Update" />
    </form>
</body>
</html>
Products.aspx.cs page
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace MultiEditGridViewDemo
{
    public partial class Products : System.Web.UI.Page
    {
        public bool IsEdit { get; set; }

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                BindGridData(false);
            }
        }

        private void BindGridData(bool isEnableEdit)
        {
            List<Product> productList = new List<Product> {
                    new Product{ ID=1, Name="Product 1", Category="Category 1", Description="Product 1 - Category 1", IsAvailable=false },
                    new Product{ ID=2, Name="Product 2", Category="Category 4", Description="Product 2 - Category 4", IsAvailable=true },
                    new Product{ ID=3, Name="Product 3", Category="Category 3", Description="Product 3 - Category 3", IsAvailable=true },
                    new Product{ ID=4, Name="Product 4", Category="Category 1", Description="Product 4 - Category 1", IsAvailable=false },
                    new Product{ ID=5, Name="Product 5", Category="Category 5", Description="Product 5 - Category 5", IsAvailable=true },
                    new Product{ ID=6, Name="Product 6", Category="Category 6", Description="Product 6 - Category 6", IsAvailable=true },
                    new Product{ ID=7, Name="Product 7", Category="Category 2", Description="Product 7 - Category 2", IsAvailable=false },
                    new Product{ ID=8, Name="Product 8", Category="Category 5", Description="Product 8 - Category 5", IsAvailable=true },
                    new Product{ ID=9, Name="Product 9", Category="Category 3", Description="Product 9 - Category 3", IsAvailable=true },
                    new Product{ ID=10, Name="Product 10", Category="Category 3", Description="Product 10 - Category 3", IsAvailable=false }
                };

            MultiEditGridView1.DataSource = productList;
            MultiEditGridView1.DataBind(isEnableEdit);
        }

        protected void btnEdit_Click(object sender, EventArgs e)
        {
            BindGridData(true);
        }

        protected void btnUpdate_Click(object sender, EventArgs e)
        {
            BindGridData(false);
        }
    }

    public class Product
    {
        public int ID { get; set; }
        public string Name { get; set; }
        public string Category { get; set; }
        public string Description { get; set; }
        public bool IsAvailable { get; set; }
    }
}
 
I encourage you to download the attached solution, which consists of the extended GridView control MultiEditGridView and a sample web application demonstrating its features, and to provide me your feedback on further improvements I could incorporate.

About Me

I am a software developer with over 7 years of experience, particularly interested in distributed enterprise application development. My focus is on development with .NET, Java and any other technology that fascinates me.