
Thursday, May 19, 2016

What is .NET Core and ASP.NET Core

.NET developers are familiar with .NET’s own web development framework, ASP.NET. The ASP.NET framework supports a multitude of higher-level web development frameworks such as Web Forms, Web Pages, MVC, Web API and SignalR. The ASP.NET framework was, and still is, a solid framework for developing modern web applications. However, there were a number of reasons that led Microsoft to develop and introduce a new framework, termed ASP.NET 5. A few of the key reasons for a new framework are:

  • Open-source the code-base to gain community support and feedback
  • Improve performance over the traditional ASP.NET runtime
  • Reach developers with more frequent updates to keep pace with competing technologies
  • Enable cross-platform development/hosting opportunities
  • Support extensive command-line tooling
  • Introduce a simpler project structure to quickly and easily create ASP.NET web applications

Introducing ASP.NET 5

After 2+ years of effort, on November 18, 2015, Microsoft officially released ASP.NET 5. ASP.NET 5 is a brand-new, ground-up implementation influenced by the traditional ASP.NET framework. ASP.NET 5 is completely platform agnostic depending on the runtime you decide to use: you are free to utilize the full .NET framework, which will run on Windows only, or the .NET Core framework, which enables cross-platform behaviour, and the choice of runtime is completely up to you. The ASP.NET 5 RC1 version was built to run on top of the .NET Execution Environment (DNX) and a couple of other supporting tools, the .NET Version Manager (DNVM) and the .NET Development Utilities (DNU). Below are the tasks handled by each of these tools,

DNVM – .NET Version Manager: DNVM acts as the version manager that helps you configure which version of the .NET runtime to use, by downloading the required versions of the runtime and by setting the active one at a machine, process or user level so that your application can pick up the correct runtime at execution time.

DNX – .NET Execution Environment: DNX provides a consistent development and execution environment across multiple operating systems. It is responsible for hosting the CLR, resolving dependencies and bootstrapping your application based on the settings specified in the configuration file defined as part of the application.

DNU – .NET Development Utilities: As the term suggests, DNU is a tool that supports various tasks, such as managing libraries or packaging and publishing your application.
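
As a rough illustration, a typical RC1-era workflow with these tools might look like the following (a sketch using the standard DNX toolchain commands; exact options vary by version):

dnvm upgrade      # download and switch to the latest runtime
dnvm list         # show the runtimes installed on the machine
dnu restore       # restore the NuGet packages the project depends on
dnx run           # bootstrap and run the application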

ASP.NET 5 Rebranded to ASP.NET Core

ASP.NET 5, being an entirely new ground-up implementation, caused somewhat of a misunderstanding that it was simply a newer version of the existing ASP.NET framework and would replace it, which was not the case. Hence Microsoft officially decided to rebrand ASP.NET 5 as ASP.NET Core to clear up the misunderstanding. This was communicated by Scott Hanselman on January 19, 2016.

Limitations of ASP.NET Core RC1

ASP.NET 5 was much appreciated by the .NET development community. However, ASP.NET 5 was by design geared towards web application development. An ASP.NET 5 application contained a Startup class within a class library; DNX ran the ASP.NET hosting library, which would dynamically locate the Startup class and bootstrap the application, as sketched below.
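
A minimal RC1-style Startup class might have looked like the following (a sketch; the namespaces are those of the RC1-era packages and the response text is purely illustrative):

using Microsoft.AspNet.Builder;  // RC1-era namespace (later renamed Microsoft.AspNetCore.Builder)
using Microsoft.AspNet.Http;

public class Startup
{
    // Located by the ASP.NET hosting library at startup and invoked to build the pipeline.
    public void Configure(IApplicationBuilder app)
    {
        app.Run(async context => await context.Response.WriteAsync("Hello from RC1!"));
    }
}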

During this time, Microsoft determined that it was also important to support native/cross-platform console applications. For this reason, Microsoft had to revamp the toolchain so that it would seamlessly support developing both console and web applications.

As stated on the Visual Studio blog, the new .NET toolchain is one of the most significant changes that RC2 brings to ASP.NET Core.

Introducing ASP.NET Core RC2

On May 16, 2016, Microsoft officially released .NET Core RC2 and ASP.NET Core RC2. The RC2 versions of .NET Core and ASP.NET Core address the limitations encountered in RC1. As of RC2, an ASP.NET Core application is itself a console application: the console application is responsible for calling into the ASP.NET hosting libraries, as opposed to the other way around in RC1. Although the RC1 model is still supported in RC2, the RC2 approach gives the application developer more visibility and control over how the application works, as sketched below.
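
The RC2 project template illustrates this inversion: the application's entry point builds and runs the web host itself. A minimal sketch using the RC2 hosting API (Startup is assumed to be your application's startup class):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // The console application builds the web host and calls into the
        // ASP.NET hosting libraries, rather than the other way around.
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}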

Going further, ASP.NET Core RC2 makes things simpler by relying on a new toolchain called the .NET Command Line Interface (.NET CLI), which comes as part of .NET Core RC2. This tool replaces the old DNVM, DNX and DNU, which were part of the ASP.NET RC1 build. The .NET CLI performs the tasks that each of the RC1 tools was responsible for, including easy creation, package management and compilation of applications using the new .NET Core SDK.
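
For example, the rough .NET CLI equivalents of the old commands are (a sketch; the set of verbs available depends on the CLI preview installed):

dotnet new        # scaffold a new project (replaces the DNX project templates)
dotnet restore    # restore NuGet packages (replaces dnu restore)
dotnet build      # compile the application (replaces dnu build)
dotnet run        # compile and run the application (replaces dnx run)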

Important Details

As confirmed by the Visual Studio blog and Scott Hunter, for the most part the runtime and libraries (CLR, base libraries, compilers, etc.) of both .NET Core RC2 and ASP.NET Core RC2 will not change much by the time they RTM, which should be by the end of June. This means we are free to develop go-live applications with the RC2 versions of these frameworks.

However, the tooling such as the .NET CLI and Visual Studio is still in preview. Microsoft has officially split the delivery of the Visual Studio tools from the .NET Core and ASP.NET Core runtime and libraries. As mentioned by Scott Hunter, the tooling support for the .NET CLI and ASP.NET Core is not yet at RTM level but should be by the end of June.

Summary

The intention of this post is to demystify .NET Core and ASP.NET Core and provide a breakdown of how the two relate in terms of the evolution of the frameworks. The content of this blog post is a compiled set of information that I have gathered from various online blogs. I hope this gives you enough information on where ASP.NET is heading and how the various components interact together. Please do let me know if there is anything I have failed to include or misinterpreted.

Happy Coding!

Saturday, February 27, 2016

Entity Framework Core 1.0 Database-First to Code-First

There is much interest within the .NET development community following the announcement of ASP.NET Core 1.0, formerly known as ASP.NET 5, and Entity Framework Core 1.0, formerly known as Entity Framework 7. The highlight of these technologies is the ability to develop applications that run on the classic .NET framework, or on the all-new .NET Core, which runs on top of the new .NET Execution Environment (DNX) and enables developing cross-platform applications that run on Windows, Linux and Mac.

Both the ASP.NET Core and EF Core frameworks are ground-up implementations with quite a few changes to the traditional way we used to work with ASP.NET applications. However, there are many more capabilities offered by the new versions, albeit still in the RC state, which is perfectly fine for playing around, experimenting and getting your hands dirty.

It’s fairly easy to start off with EF Core using the code-first approach; in fact, there are quite a number of blog posts that explain code first. Hence this post is about how you can start off using an existing database with EF Core, together with some insights on the new ASP.NET.

Before we go any further, one important highlight with the new EF Core and VS tooling is,

No more EDMX support!

Currently you are able to create your model in two ways: using an XML-based EDMX in the designer, or code-first with a set of classes and a DbContext that defines the mappings. The approach you choose makes no difference to how the EF framework behaves at run-time. The framework will create an in-memory model either by reading the EDMX or by reflecting over the DbContext, its related classes and their mappings.

Also, as highlighted by Julie Lerman in “Data Points - Looking Ahead to Entity Framework 7”, going forward EF will no longer support the EDMX-based model, although database-first will be supported (using scaffolding) and can thereafter evolve as a code-first model. Updates/changes to the data model can later be migrated and applied to the database as and when necessary.

For those developers who are attached to the EDMX designer, this post will detail the steps for using an existing database (database-first development) to generate a code-first model, and then updating the data model and migrating and applying those updates to the database (code-first development).

Creating the Data Model

You will need Visual Studio 2015 installed on Windows, which is what I will be using to outline the actions that need to be performed. Upon installing VS 2015 you will also need to upgrade the .NET Version Manager (DNVM) to use the latest version of the .NET Execution Environment (DNX). You can follow the steps detailed at “Installing ASP.NET 5 On Windows” to get yourself up to speed.

For brevity we will start off by creating a console application project, which will contain our entity data model.

Create the project

  1. Open Visual Studio
  2. Select File > New Project
  3. Select Visual C# > Windows > Web
  4. Select Console Application (Package) and give your solution and project a name like so,

    [Screenshot: the New Project dialog with Console Application (Package) selected]

New Project Structure

This console application is not the traditional type of console application we are used to. It is based on the new project convention Microsoft introduced for ASP.NET Core. There are quite a few changes in the new project structure. What you will immediately notice is that the app.config file is missing; instead there is a project.json file. This is one of the overhauls in ASP.NET Core, where the entire project is based off a JSON configuration file called project.json. I will not be talking much about the project.json file here, except for the bare essentials. However, if you are interested in getting to know more about the project.json file, refer to this wiki on GitHub.

Add References to EF Commands

In order to generate the data model based on the database, we need a reference to a couple of NuGet packages,

  • EntityFramework.Commands
  • EntityFramework.MicrosoftSqlServer
  • EntityFramework.MicrosoftSqlServer.Design

EntityFramework.Commands – This package provides all the necessary commands to work with EF such as scaffolding the database, creating/applying migrations etc.

EntityFramework.MicrosoftSqlServer and EntityFramework.MicrosoftSqlServer.Design – These packages provide the Microsoft SQL Server-specific capabilities for Entity Framework to work with, since we are using Microsoft SQL Server as our database.

As of now, VS 2015 does not have tooling support for EF Core to generate your data model from the database. Hence we will be using the command line to generate it for us. Open the project.json file and add the dependencies as shown below,

[Screenshot: project.json with the EntityFramework dependencies and the ef command highlighted]

Save your project.json and the relevant packages will be downloaded to your DNX profile and referenced from within the project.

Note the section where the dependencies are declared

If a dependency is declared within a target framework, it will be available only to that specific framework, as shown in the yellow box. If you require a dependency to target both the full .NET framework and the .NET Core framework, declare it in the global dependencies section where the EntityFramework packages have been declared, shown in the green box.

You also need to make sure you add a command, as shown in the purple box, which will be the entry point from DNX to access the EF commands. A sketch of the resulting project.json follows.
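
For reference, the relevant portions of the project.json might look like the following (a sketch; the 7.0.0-rc1-final versions are assumptions based on the EF Core RC1 timeframe):

{
  "dependencies": {
    "EntityFramework.Commands": "7.0.0-rc1-final",
    "EntityFramework.MicrosoftSqlServer": "7.0.0-rc1-final",
    "EntityFramework.MicrosoftSqlServer.Design": "7.0.0-rc1-final"
  },
  "commands": {
    "ef": "EntityFramework.Commands"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  }
}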

Run Entity Framework Commands

  1. Open command-prompt in Administrator mode
  2. Execute dnvm list and validate that you are using the latest runtime as shown below. If not, use dnvm upgrade to download and use the latest runtime.

    [Screenshot: dnvm list output showing the installed runtimes]

 

  3. Navigate to the folder of your console application project.
  4. Execute dnx ef and validate that you are able to access the EF commands as shown below,

    [Screenshot: dnx ef output listing the available Entity Framework commands]

Scaffold the database to a Code-First DbContext

Execute dnx ef dbcontext scaffold -h. This will list all the parameters available for scaffolding the DbContext against the target database, as shown below,

[Screenshot: help output for dnx ef dbcontext scaffold]

At a bare minimum you need to input two arguments: a [connection] to the target database and a [provider] to use for working with the database. I will also use -c to specify a custom name for the DbContext. You can try this out with any database you have on your side. I happen to have a StudentDatabase with just two tables.

[Diagram: the StudentDatabase schema with its two tables]

In order to generate the DbContext against the database you can execute the following EF command,

dnx ef dbcontext scaffold "data source=.;initial catalog=StudentDatabase;Integrated Security=true" "EntityFramework.MicrosoftSqlServer" -c "StudentDbContext"

This will create the DbContext against the target database, and you should see the generated classes already included in the project in the VS Solution Explorer, ready for your use, as shown below,

[Screenshot: Solution Explorer showing the scaffolded StudentDbContext and entity classes]

Query the database using the DbContext

You should now have a DbContext scaffolded using the EF command. Hence let’s try to query for some data using the code below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace App.Console
{
    public static class Program
    {
        public static void Main(string[] args)
        {
            // Create the DbContext.
            var context = new StudentDbContext();

            // Display students.
            context.Student
                .ToList()
                .ForEach(s => System.Console.WriteLine(
                    "Student ID: {0}, First Name: {1}, Last Name: {2}",
                    s.Id, s.FirstName, s.LastName));

            System.Console.ReadKey();
        }
    }
}

Voila! We are able to query the database, without having to type all the cumbersome code.

[Screenshot: console output listing the students from the database]

Wait a minute! How does the application know how to connect to the database? Well, as part of the scaffold process the DbContext is automatically configured to use the connection string we provided to the scaffold command, as sketched below. This is not a nice way to maintain the connection string. The new framework supports much better ways to overcome this issue by enabling configuration options to be passed as dependencies, which we will look at in a future post.
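
For illustration, the scaffolded context typically contains an OnConfiguring override along these lines (a sketch based on the EF Core RC1 API, where the namespace was still Microsoft.Data.Entity; your types and connection string will differ):

using Microsoft.Data.Entity;

public partial class StudentDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // The connection string supplied to the scaffold command ends up hard-coded here.
        optionsBuilder.UseSqlServer(@"data source=.;initial catalog=StudentDatabase;Integrated Security=true");
    }

    public virtual DbSet<Student> Student { get; set; }
}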

An important aspect with EF is migrations where you are able to maintain versions of your data model as and when it evolves over time. I will be writing up more on how you can perform migrations over your data model in the coming series of posts.

Happy Coding!

Thursday, June 6, 2013

SignalR – Real-time application development

Introduction

Real-time applications for the web are essentially the usual client-server applications with one distinct feature: they accomplish their functionality with very little (near real-time) or zero (real-time) latency. There have been a number of traditional approaches employed in the past to achieve such functionality, using a variety of methodologies. One such mechanism is Comet, an umbrella term covering a variety of models/solutions for achieving real-time application development over HTTP (Hypertext Transfer Protocol), e.g. streaming, hidden iframe, Ajax with long polling, etc.
 
There are a number of third-party frameworks available today, but until recently Microsoft had no streamlined mechanism that enabled a straightforward approach to implementing real-time applications. Although frameworks such as WCF (Windows Communication Foundation) did support similar functionality using a couple of bindings (wsDualHttpBinding for web services and httpPollingDuplex for Silverlight-based applications), they had limited features in terms of scale and functionality. Apart from that, you were pretty much on your own if you needed to develop an application that required real-time functionality using ASP.NET.
 

Limitations of a typical Request-Response oriented Application

HTTP functions on the request-response principle, where the client makes a request and the server responds. This is the case with any web application that runs on HTTP, as illustrated below,
 
 [Diagram: the HTTP request-response cycle]
 
This mechanism does not provide the means to achieve real-time data transfer, mainly because the server is unable to provide any updates unless the client specifically requests them. One typical way developers overcame this limitation is periodic polling, where the client keeps requesting until the server has an update to provide, as illustrated below,
 
 
Although the above mechanism tries to eliminate the aforementioned drawback of implementing real-time web applications over HTTP, it still cannot be considered an appropriate solution: the repeated requests consume bandwidth and server resources even when no data is available, and updates are still delayed by up to one polling interval.
 

ASP.NET SignalR

ASP.NET SignalR is a framework maintained and developed by Microsoft that provides just the right functionality to help you achieve seamless development of real-time applications using ASP.NET. SignalR incorporates a variety of transport mechanisms/modes and handles failover when negotiation of a specific transport mechanism for real-time message exchange fails. The framework also supports a straightforward development approach by exposing an API over the core functionality, enabling you to develop applications in a breeze. SignalR is written to scale as your application grows and to perform well even when the application handles many concurrent users at a given time.
 
SignalR provides four mechanisms to overcome the limitations associated with the traditional HTTP request-response principle when developing real-time web applications. Two of these mechanisms use new features introduced with HTML5, namely WS (WebSockets) and SSE (Server-Sent Events). As of this writing these two features are still drafts within the HTML5 specification, although most modern browsers already support them and support will continue to evolve. The other two mechanisms SignalR supports are Forever Frame and Long Polling.
 
Once the SignalR framework is integrated into an application, it chooses the best mechanism based on the browser/server capabilities and negotiates the transport accordingly. From a developer’s standpoint, all you do is code against the high-level API, which encapsulates the negotiation of which mechanism to use. The key point to understand is that the code you write using the framework is the “SAME” regardless of which transport mode is used. Listed below is more information on each of the transport modes supported by SignalR.
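
As a minimal sketch of that transport-agnostic API (assuming the SignalR server packages; the hub and method names are purely illustrative):

using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    public void Send(string name, string message)
    {
        // Broadcast to all connected clients over whichever transport
        // was negotiated (WebSockets, SSE, Forever Frame or Long Polling).
        Clients.All.broadcastMessage(name, message);
    }
}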
 

WS (Web-Sockets)

WS is a new protocol (i.e. ws:// or wss://) introduced with HTML5 and is the most appropriate technology for building real-time applications, due to the fact that WS enables the creation of a full-duplex, bidirectional channel over HTTP, allowing the client or the server to send messages independently.
 
 
As illustrated above, once the client creates a WS connection to the server, both the server and the client utilize a full-duplex channel over HTTP, enabling the server to send event data and the client to send data via the same connection. WS is preferred over the other options below because it is very performant and less resource-intensive in essence.
 
This feature is a new addition to HTML and hence requires alterations at an architectural level: WS must be supported by the web server, the client browser and all intermediaries (e.g. proxies, firewalls, and the server, client and public network infrastructures). In order to enable WS with ASP.NET, the prerequisites are that the application must be running on ASP.NET 4.5 or MVC 4, on IIS 8 (or IIS 8 Express on Windows Server 2012), with a WS-compatible browser.
 

SSE (Server-Sent-Events)

SSE is again an HTML5 feature, which enables event-based streaming over HTTP. In contrast to WS, SSE is a mere addition to the JS API (i.e. the EventSource object), hence requiring no major architectural change. This feature is supported by most browsers available today.
 
 
SSE is not a duplex connection like WS. As illustrated above, it is a one-way connection through which the server can send updates to the client. SSE works by the client creating an EventSource object via JS and the server flushing event data as and when an update is triggered, without terminating the stream. Should there be any client update to send, it is sent via a separate request to the server, not via the event source created between the server and the client, which can be considered somewhat of a limitation.
 

Forever Frame

This mechanism uses existing HTML functionality to achieve real-time behaviour. It works by creating a hidden iframe within the client to connect to the server, and using scriptlets sent by the server to trigger updates within the client page. The functionality is similar to SSE, although in this case the technique uses the readily available HTML iframe element to achieve it.
 
 
As illustrated, the scriptlets sent by the server are appended to the iframe body, and the client handles reading and executing the script accordingly.
 

Long Polling

This is a technique that works across all browsers. Long polling is the last resort SignalR uses when determining a transport mechanism. It functions by sending Ajax-based requests to the server, where the server holds on to the request for a definite period of time and then terminates it with an empty response. However, whenever there is a server event that needs to be sent across to the client, the server immediately sends the response for the client to use, and the client initiates another request to the server that again listens for any available server update.
 
 
Long polling is considered more resource-intensive than the other methods supported by SignalR, mainly due to the connections continuously being initiated and terminated between the server and the client.
 

SignalR transport precedence

The above four mechanisms are supported by the SignalR framework, which utilizes the most effective one based on the capabilities of the client/server and falls back to another mechanism on failure. The fallback order within the framework is as follows (a client-side sketch follows the list),
1. Web-Sockets: SignalR will try to determine if the server/client and intermediate channels support Web-Sockets and use it.
2. Server-Sent-Events: Falls back from Web-Sockets if the browser supports Server-Sent-Events.
3. Forever Frame: Falls back from Server-Sent-Events if the browser supports this mechanism.
4. Long-Polling: The fail-safe mechanism utilized by SignalR in cases where none of the above technologies are supported.
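
On the .NET client, this negotiation can also be constrained explicitly. A minimal sketch, assuming the Microsoft.AspNet.SignalR.Client package (the hub and method names are illustrative):

using Microsoft.AspNet.SignalR.Client;
using Microsoft.AspNet.SignalR.Client.Transports;

class TransportDemo
{
    static void Main()
    {
        var connection = new HubConnection("http://localhost:8080/");
        var hub = connection.CreateHubProxy("ChatHub");

        // Force long polling instead of letting SignalR negotiate;
        // call Start() with no arguments to use the automatic fallback.
        connection.Start(new LongPollingTransport()).Wait();

        hub.Invoke("Send", "console", "hello").Wait();
    }
}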
 

Summary

SignalR is a framework maintained by Microsoft that provides the features and means by which real-time messaging can be achieved between the client and the server over HTTP. SignalR supports four main transport mechanisms (i.e. Web-Sockets, Server-Sent-Events, Forever Frame and Long Polling). SignalR also provides an intuitive API and exposes multiple programming models that aid ease of development, which will be looked at in a future post with demos.

Friday, May 3, 2013

Service Reference Generation using svcutil.exe

Duplicate objects being generated.

When generating a single service reference code file for multiple services, I encountered an issue where one of the services being used exposes a System.Data.DataSet as the return data type. The issue I had was that the generated objects seemed to get duplicated, generated in two different ways as shown below,
 
The code below is generated via the DataContractSerializer, as you can clearly tell from some of the attributes used on the class and properties (i.e. lines 3 and 23).
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "4.0.0.0")]
[System.Runtime.Serialization.DataContractAttribute(Name="Person", Namespace="http://tempuri.org/")]
public partial class Person : object, System.Runtime.Serialization.IExtensibleDataObject
{
 
 private System.Runtime.Serialization.ExtensionDataObject extensionDataField;
 
 private string NameField;
 
 public System.Runtime.Serialization.ExtensionDataObject ExtensionData
 {
  get
  {
   return this.extensionDataField;
  }
  set
  {
   this.extensionDataField = value;
  }
 }
 
 [System.Runtime.Serialization.DataMemberAttribute(EmitDefaultValue=false)]
 public string Name
 {
  get
  {
   return this.NameField;
  }
  set
  {
   this.NameField = value;
  }
 }
}
 
The code below is generated via the XmlSerializer, as you can clearly tell from some of the attributes used on the class and properties (i.e. lines 5 and 12).
[System.CodeDom.Compiler.GeneratedCodeAttribute("svcutil", "4.0.30319.1")]
[System.SerializableAttribute()]
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.ComponentModel.DesignerCategoryAttribute("code")]
[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://tempuri.org/")]
public partial class Person
{
    
    private string nameField;
    
    /// <remarks/>
    [System.Xml.Serialization.XmlElementAttribute(Order=0)]
    public string Name
    {
        get
        {
            return this.nameField;
        }
        set
        {
            this.nameField = value;
        }
    }
}
 
The reason for this duplication is that one of the services I was interfacing with had the type System.Data.DataSet being returned from a service method. The DataContractSerializer schema importer used by svcutil tries to infer System.Data.DataSet, fails to use the XML schema associated with it, and resolves to the XmlSerializer to serialize the objects for this service.

Overcoming the problem

It is clear that the DataContractSerializer cannot infer XML-schema-defined types. Hence, in order to overcome this problem, the alternative is to force svcutil to use the XmlSerializer for all the services being referenced, like so,
 
svcutil /r:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /serializer:XmlSerializer /out:MyService.cs /namespace:http://tempuri.org/,MyService.MyServiceReference https://www.mydomain.com/service1.asmx https://www.mydomain.com/service2.asmx
 
With the above command you should be able to generate the service code that solves the duplicate object generation. However, there is another minor issue: the proxy objects generated do not fall under the provided namespace MyService.MyServiceReference; instead they are placed in the global namespace, which will cause issues if you have similar names defined elsewhere in your solution.
 
The fix is just a minor hack: if you analyze the generated MyService.cs file, you will notice that the namespace only wraps the service functions, so all you need to do is move the namespace definition to the beginning of the file so that it covers the proxy object definitions, as sketched below. This should give you a complete service reference when referring to multiple services that need to be XML-serialized for the aforementioned reason.
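
To illustrate the idea (a schematic sketch; the real generated file is far longer, and the Person type is the one from the example above):

// MyService.cs after the fix: the namespace now encloses the
// XmlSerializer-generated proxy types as well as the service members.
namespace MyService.MyServiceReference
{
    [System.Xml.Serialization.XmlTypeAttribute(Namespace = "http://tempuri.org/")]
    public partial class Person
    {
        private string nameField;

        [System.Xml.Serialization.XmlElementAttribute(Order = 0)]
        public string Name
        {
            get { return this.nameField; }
            set { this.nameField = value; }
        }
    }

    // ... service contract and client definitions continue here ...
}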

Monday, August 20, 2012

Entity Framework 5 with enums

[Screenshot: an enum in the Model Browser]

Entity Framework 4 and earlier version limitations

Enums, or enumeration types, are a very useful feature of the .NET framework that lets you define a collection of logical options as a specific type. Although this feature is part of the language, it was not supported by the ADO.NET Entity Framework. In Entity Framework 4 and earlier, there was no way you could define a scalar property as an enum type on any of your entities. It was, however, possible to explicitly associate integral scalar properties with an enum via an explicit cast to the desired enum type.

The caveat with this explicit cast is that a developer could cast an integral entity scalar property to any enum type, provided the enum being cast to has the integer value defined as its underlying value, meaning you would need to make sure you are performing the cast against the correct enum type. This is not a major issue if you have very few enums defined in your application, but it becomes a point of confusion when there are more than a few enums with similar names.

Other approaches included creating additional properties that encapsulate the entity property by returning an enum based on the value of the underlying property within the entity class, which by its very nature can only be accomplished if the data model is instructed to use custom POCO classes, as sketched below.
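
A minimal sketch of that wrapper-property approach, assuming a custom POCO Order class with an int Status column and a hypothetical OrderStatus enum:

public enum OrderStatus { Pending = 0, Delivered = 1 }

public partial class Order
{
    // The mapped column (an int in the database).
    public int Status { get; set; }

    // Unmapped convenience property exposing the column as an enum.
    public OrderStatus StatusEnum
    {
        get { return (OrderStatus)this.Status; }
        set { this.Status = (int)value; }
    }
}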

Entity Framework 5 and Enum Support

Fortunately, ADO.NET Entity Framework 5 officially supports the ability to define enum types, or use existing enum types, as part of your entities’ scalar properties. Listed below are the very basic steps to add a simple enum property to your entity. I have a very basic sample database schema to illustrate this. You can download the sample solution from here, which also has a local database with this schema.

Step 1: Defining the sample database schema

[Diagram: the sample database schema]

The above schema represents a relational database that holds order information related to a customer and product. An order in the Order table can be in one of two states, either “Delivered” or “Pending Delivery”. This state is represented by the Status column, of type int, in the Order table.

Step 2: Generating an Entity Data Model from the Schema

Note that you could also generate a database script to deploy as a database by modeling the entities first. However, for this scenario I will generate the Entity Data Model over the existing database schema. In order to generate your Entity Data Model using an existing database,

  1. Right-click on the project that should include the Entity Data Model, create an ADO.NET Entity Data Model with a name that best suits your data model and click Next.
  2. Select the Generate from database option and click Next.
  3. Define your connection, provide a connection string name and click Next.
  4. Select the tables relevant to the model, provide a valid namespace for the entities and click Finish.

Your Entity Data Model should look like the following,

[Diagram: the generated Entity Data Model]

Step 3: Setting entity property as an enum

With ADO.NET Entity Framework 5 you are able to define new enum types that best suit your use case, or use any existing enum type definitions that are already part of the project. In order to link the Order table’s Status property to an enum,

  1. Select the property, right-click on it and select Convert to Enum from the context menu, which will bring up the following dialog, where I have filled in the required information (a sketch of the resulting enum type follows this list).
     [Screenshot: the Convert to Enum dialog]
  2. If you want to associate an enum type which is already part of the project, you can do so by checking Reference external type and giving the fully qualified name of the enum type.
  3. You can always modify or add options to your enum type by locating the enum created under the Model Browser –> Enum Types section as shown below.
    [Screenshot: the Model Browser Enum Types section]
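
For illustration, the enum type produced by the dialog above would look something like this in code (a sketch; the member names and underlying values are assumptions based on the two order states):

public enum OrderStatus : int
{
    Pending = 0,
    Delivered = 1
}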

Step 4: Using the enum types as part of the entity.

Once the property is configured to use an enum type, it’s just a matter of writing code against the entities the same way you would against regular old objects that contain enum types. Listed below is the code sample for doing just that.

// Delivered orders
Console.WriteLine("Delivered Orders");
WriteOrders(orders.Where(o => o.Status == OrderStatus.Delivered));

// Pending orders
Console.WriteLine("Pending Orders");
WriteOrders(orders.Where(o => o.Status == OrderStatus.Pending));

You can download the sample solution described in this blog post from here and go through the application.

Sunday, July 15, 2012

WCF Client Request / Response Message Inspection

Very recently I encountered a requirement to inspect the WCF messages passed to and from a service. This feature was required on the client side, as the client application’s requirement was to store these messages as log entries in the database. Although WCF does not support this out of the box, it was pretty darn easy to implement just by implementing two (out of many) interfaces in the WCF framework.

Before I go any further I need to mention that the blog post “Capture XML In WCF Service” helped me out a lot, although it talks about message inspection on the service host itself, whereas this post is about message inspection on the client side. I have further made a few enhancements over the code. So let’s get started.

Intercepting WCF messages

In order to inspect client messages going into and coming out of the client, we need to implement the interface contract IClientMessageInspector, as seen below,

using System.ServiceModel.Dispatcher;

/// <summary>
/// Class to perform custom message inspection as a behaviour.
/// </summary>
public class MessageInspectorBehavior : IClientMessageInspector
{
    public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {
        // Do nothing.
    }

    public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
    {
        return null;
    }
}

Upon attaching this to the client runtime, WCF will ensure the two methods listed above are called when a request is sent to and a response is received from the service. This is where we will write our custom code, which we will see later in this post.

The next important point is that we need to attach this inspector to the client runtime, which can be done by creating our own custom endpoint behavior by implementing the IEndpointBehavior interface, as seen below (note that I have implemented this interface on the same class that implements IClientMessageInspector),

using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

/// <summary>
/// Class to perform custom message inspection as a behaviour.
/// </summary>
public class MessageInspectorBehavior : IClientMessageInspector, IEndpointBehavior
{
    public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {
        // Do nothing.
    }

    public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
    {
        return null;
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
    {
        // Do nothing.
    }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        // Do nothing.
    }

    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    {
        // Do nothing.
    }

    public void Validate(ServiceEndpoint endpoint)
    {
        // Do nothing.
    }
}

Next is that we need to integrate the message inspector to the behavior and this is done in the ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) method as seen below,

public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
{
    // Add the message inspector as part of the endpoint behaviour.
    clientRuntime.MessageInspectors.Add(this);
}

Now we have a custom inspector attached to a custom behavior, so how do we get the request and response messages out of the inspector for logging? Well, there are a few ways to do this, but my preference was to embed an event in the inspector that users can subscribe to if required, to be notified when a request or response message is inspected. Here is the code that does just that,

/// <summary>
/// Class to perform custom message inspection as a behaviour.
/// </summary>
public class MessageInspectorBehavior : IClientMessageInspector, IEndpointBehavior
{
    // Acts as the event to notify subscribers of message inspection.
    // The generic EventHandler<MessageInspectorArgs> is used so subscribers
    // receive the strongly typed event arguments defined further below.
    public event EventHandler<MessageInspectorArgs> OnMessageInspected;

    public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {
        if (OnMessageInspected != null)
        {
            // Notify the subscribers of the inspected message.
            OnMessageInspected(this, new MessageInspectorArgs { Message = reply.ToString(), MessageInspectionType = eMessageInspectionType.Response });
        }
    }

    public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
    {
        if (OnMessageInspected != null)
        {
            // Notify the subscribers of the inspected message.
            OnMessageInspected(this, new MessageInspectorArgs { Message = request.ToString(), MessageInspectionType = eMessageInspectionType.Request });
        }
        return null;
    }

   // Rest of the class code...

}

The MessageInspectorArgs class and the eMessageInspectionType enum are custom implementations for passing event arguments to subscribers, identifying the event-related information. The code for these definitions is seen below,

/// <summary>
/// Enum representing message inspection types.
/// </summary>
public enum eMessageInspectionType { Request = 0, Response = 1 };

/// <summary>
/// Class to pass inspection event arguments.
/// </summary>
public class MessageInspectorArgs : EventArgs
{
    /// <summary>
    /// Type of the message inspected.
    /// </summary>
    public eMessageInspectionType MessageInspectionType { get; internal set; }

    /// <summary>
    /// Inspected raw message.
    /// </summary>
    public string Message { get; internal set; }
}

Finally, it’s time to integrate it into the client application. Listed below is the code for that. It’s pretty concise and easy to implement with very little or no effort.

class Program
{
    static void Main(string[] args)
    {
        string request = string.Empty;
        string response = string.Empty;

        // Instantiate the service.
        ServiceClient sc = new ServiceClient();

        // Instantiate the custom inspector behaviour.
        MessageInspectorBehavior cb = new MessageInspectorBehavior();

        // Add the custom behaviour to the list of endpoint behaviours.
        sc.Endpoint.Behaviors.Add(cb);

        // Subscribe to message inspection events and process the event invocation.
        cb.OnMessageInspected += (src, e) =>
        {
            if (e.MessageInspectionType == eMessageInspectionType.Request) request = e.Message;
            else response = e.Message;
        };

        // Call the service.
        var x = sc.GetData(1);

        // Display or log the results.
        Console.WriteLine(string.Format("Request\nMessage: {0}\n\nResponse\nMessage: {1}", request, response));

        Console.ReadKey();
    }
}

You can download the sample code from here. Let me know your feedback, suggestions, or even improvements for that matter.

About Me

I am a software developer with over 7 years of experience, particularly interested in distributed enterprise application development, where my focus is on development using .NET, Java and any other technology that fascinates me.