InternetException

About coding and whatnot.

Safe cast string to enum

February 25, 2014 23:49 by author n.podbielski

In one of my projects I had to make an interface between a database and a web service in C# code.

One of the problems I had to face was the need to cast strings to enumeration types, because of the simple fact that a database has no idea what an 'enum' is. Yes, the simplest mapping between an enumeration and a database type is an integer. It is the simplest, but in my opinion not the best. For example, I really do not remember (or do not WANT to remember) what the integer value 4 means in the case of a package shipment status.

We can have values like this:

0 - RequestReceived
1 - Accepted
2 - Assembling
3 - Sended
4 - CustomerReceived
5 - Completed

Let's assume that is a valid package delivery flow. For someone working on a system that handles monitoring of that kind of mechanism, this might not be an issue, because the flow is easy to recall. Yes. But as the company grows, the software grows. And most likely more enumeration types will be created.

So to have the best of both worlds, in my opinion it is best to store enumerations as strings, with the possibility to cast strings like '0' and 'RequestReceived' to the RequestReceived enum value.

A nice extra feature is to make the casting case insensitive, but this is not strictly necessary.

Aside from interfacing with a database, there are other use cases that come to mind:

1. User interfaces input

2. Type serialization to JSON and from JSON

3. XML serialization

4. Import from various data sources like CSV

OK, that is it for the introduction. Let's get to the coding.

First we have to retrieve values of enum type:

var values = Enum.GetValues(typeof(TEnum));

This is simple. The static GetValues method of the Enum class returns all possible values of an enum. With that knowledge we can just use a foreach loop to iterate through the collection:

public static EnumTest GetEnumValue(string val)
{
    var values = Enum.GetValues(typeof(EnumTest));
    foreach (EnumTest value in values)
    {
        if (val.Equals(value.ToString()))
        {
            return value;
        }
    }
    return 0;
}

There are a few problems with this method. First: we can use it with only one predefined enum type. Sadly, it is impossible to create a generic enum-only method in C#, but we can get pretty close. Second: we cannot assume the enum has a default value with the underlying integer value 0; 0 may not be defined in the enum at all!
 For the first issue we can add a generic type argument constrained to the interfaces implemented by enum types, together with the struct constraint.
 For the second issue we can use the C# default keyword. Our 'better' method will be declared as follows:

public static TEnum GetEnumValue<TEnum>(string val)
    where TEnum : struct, IComparable, IFormattable, IConvertible
{
    var values = Enum.GetValues(typeof(TEnum));
    foreach (TEnum value in values)
    {
        if (val.Equals(value.ToString()))
        {
            return value;
        }
    }
    return default(TEnum);
}

Of course there can be other types that satisfy these constraints but are in fact not enumeration types, but this is the best solution available in C# (that I know of). When the string value is not found, the default keyword will return the value whose underlying integer is 0, which may not even be a defined member of the enum.

The next step is to add support for integer values passed as strings. For that we have to cast enum values to int. We cannot do that with a simple cast in C#, because it is not possible with the generic type we defined in our improved method. But we can use the IConvertible interface and its ToInt32 method. It requires a format provider though. I used the CultureInfo.CurrentCulture property, which was OK in my application, but it could be a problem in others; it depends on where the code will be used. The changed method will look like this:

 

public static TEnum GetEnumValue2<TEnum>(string val)
    where TEnum : struct, IComparable, IFormattable, IConvertible
{
    var values = Enum.GetValues(typeof(TEnum));
    foreach (TEnum value in values)
    {
        if (val.Equals(value.ToString())
            || val == (value.ToInt32(CultureInfo.CurrentCulture)).ToString())
        {
            return value;
        }
    }
    return default(TEnum);
}

This mostly works OK, but there might be a problem when it is used like this:

package.Status = GetEnumValue<PackageStatus>(newStatusString);

Why? Because when the newStatusString value is not a proper value for the enum, the status property will be reset to the default status value. That might be a problem. One solution is to throw an exception when the value is invalid, which would be good for a UI. I decided to use a custom default value instead:

package.Status = GetEnumValue(newStatusString, package.Status);

This way the status will not change if the value in the string is invalid; the old value will be kept.

Finally I added case insensitivity to the string comparison. There are plenty of ways to do that in .NET, so the choice should be considered in the context of the application that will use the code. For example we can do something like this:

public static TEnum GetEnumValue2<TEnum>(string val, TEnum current)
    where TEnum : struct, IComparable, IFormattable, IConvertible
{
    var values = Enum.GetValues(typeof(TEnum));
    foreach (TEnum value in values)
    {
        if (val.Equals(value.ToString(), StringComparison.OrdinalIgnoreCase)
            || val == (value.ToInt32(CultureInfo.CurrentCulture)).ToString())
        {
            return value;
        }
    }
    return current;
}

A nice-to-have feature is to define this method as an extension method for string. This way we can call it right after the name of the variable holding our string value.

package.Status = newStatusString.GetEnumValue(package.Status);

I prefer to do it that way, because it is more expressive for my coding style. While writing a solution for some kind of problem I think: I want this value here, but after mapping it in the following way. Using GetEnumValue as a plain method rather than an extension is, in my opinion, a greater burden for whoever reads the code (which is mostly me, and I always want to make my life easier :) ). But that is a subject for another article.

Anyway, this can be achieved just by adding the this keyword and placing the method in a separate static class.

public static class Extension
{
    public static TEnum GetEnumValue<TEnum>(this string val, TEnum current)
        where TEnum : struct, IComparable, IFormattable, IConvertible
    {
        var values = Enum.GetValues(typeof(TEnum));
        foreach (TEnum value in values)
        {
            if (val.Equals(value.ToString(), StringComparison.OrdinalIgnoreCase)
                || val == (value.ToInt32(CultureInfo.CurrentCulture)).ToString())
            {
                return value;
            }
        }
        return current;
    }
}
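
For illustration, here is a quick usage sketch with a hypothetical PackageStatus enum matching the flow from the introduction (the enum itself is not part of the attached sample):

public enum PackageStatus
{
    RequestReceived = 0,
    Accepted = 1,
    Assembling = 2,
    Sended = 3,
    CustomerReceived = 4,
    Completed = 5
}

// Both the name (case insensitive) and the underlying integer value work;
// an unknown string leaves the current value untouched.
var status = PackageStatus.Accepted;
status = "sended".GetEnumValue(status);         // PackageStatus.Sended
status = "4".GetEnumValue(status);              // PackageStatus.CustomerReceived
status = "no-such-status".GetEnumValue(status); // still PackageStatus.CustomerReceived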

This is a very simple solution for this particular problem. There are more things that can be changed or improved. You can download the code sample and play with it yourself :)

Enjoy!

 Program.cs (3.55 kb)



Smart string builder

February 22, 2014 04:16 by author n.podbielski

In those rare times when I was writing code that was supposed to create a large string from smaller chunks and some non-string parameters, I was of course using the StringBuilder class. Of course, because string concatenation is very costly in terms of memory utilization. It is because every time you do something like this:

var text = "Hello world!"+ "\n"+ "How are you?";

a new string is created in memory for every '+' operation (at least when the operands are not compile-time constants, which the compiler would fold into one literal). Not the best way of building strings. StringBuilder is better because it does not create the final string until you call its .ToString method.

So instead of doing what was shown above, the code should look something like this:

var stringBuilder = new StringBuilder();
stringBuilder.Append("Hello world!");
stringBuilder.Append("\n");
stringBuilder.Append("How are you?");
Console.WriteLine(stringBuilder);

This is better, but while writing this code I disliked how much you have to type just to add another string to the string builder object. I thought: can this be done better? In fact it can. And this is why I created the SmartStringBuilder class.

The requirement was to be able to write something like this:

var smartBuilder = new SmartStringBuilder();
smartBuilder += "Hello World!";
smartBuilder += "\n";
smartBuilder += "How are you?";
Console.WriteLine(smartBuilder);

Luckily C# allows writing custom behaviors for operators like '+'. To do this we can use the special operator keyword. For managing the chunks of strings we will use a private instance of the StringBuilder class:

public class SmartStringBuilder
{
    private StringBuilder internalStringBuilder = new StringBuilder();

    public SmartStringBuilder() { }

    public SmartStringBuilder(string str)
    {
        internalStringBuilder.Append(str);
    }

    public override string ToString()
    {
        return internalStringBuilder.ToString();
    }

    public static SmartStringBuilder operator +(SmartStringBuilder smartBuilder, string addString)
    {
        smartBuilder.internalStringBuilder.Append(addString);
        return smartBuilder;
    }
}

This allows us to execute the first lines of the 'requirements code':

var smartBuilder = new SmartStringBuilder();
smartBuilder += "Hello World!";
smartBuilder += "\n";
smartBuilder += "How are you?";

To allow using our new class as a string in the Console.WriteLine method (or any other method that takes a string parameter), we need to add an implicit cast operator to the string type:

public static implicit operator string(SmartStringBuilder smartBuilder)
{
   return smartBuilder.ToString();
}

With our class defined like this we can execute the following line:

Console.WriteLine(smartBuilder);

Another nice feature is the possibility of adding values of other types to our builder with the + sign. We can do this by adding a + operator overload to our class for each of them. For example, for the int type this method would look like this:

public static SmartStringBuilder operator +(SmartStringBuilder smartBuilder, int number)
{
    smartBuilder.internalStringBuilder.Append(number);
    return smartBuilder;
}

This allows us to execute the following line safely:

smartBuilder += 1;

Ideally we would add the possibility of formatting values of other types with format strings, i.e.:

smartBuilder += ("format {0}", 1);

But this is impossible without changing the C# language itself. The best thing we can do is to add an AppendFormat method that calls the method of the same name on the internal StringBuilder object.

public void AppendFormat(string format, params object[] parameters)
{
     internalStringBuilder.AppendFormat(format, parameters);
}

Our whole class will look like this:

public class SmartStringBuilder
{

    private StringBuilder internalStringBuilder = new StringBuilder();

    public SmartStringBuilder() { }

    public SmartStringBuilder(string str)
    {
        internalStringBuilder.Append(str);
    }

    public override string ToString()
    {
        return internalStringBuilder.ToString();
    }

    public static implicit operator string(SmartStringBuilder smartBuilder)
    {
        return smartBuilder.ToString();
    }

    public static SmartStringBuilder operator +(SmartStringBuilder smartBuilder, string addString)
    {
        smartBuilder.internalStringBuilder.Append(addString);
        return smartBuilder;
    }

    public static SmartStringBuilder operator +(SmartStringBuilder smartBuilder, int number)
    {
        smartBuilder.internalStringBuilder.Append(number);
        return smartBuilder;
    }

    public void AppendFormat(string format, params object[] parameters)
    {
        internalStringBuilder.AppendFormat(format, parameters);
    }
}
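
To wrap up, a short usage sketch putting all the pieces together (purely illustrative, mirroring the 'requirements code' above):

var smartBuilder = new SmartStringBuilder("Hello World!");
smartBuilder += "\n";
smartBuilder += "How are you? Lucky number: ";
smartBuilder += 7;                                     // int overload of operator +
smartBuilder.AppendFormat(" (formatted: {0:D3})", 7);

// The implicit cast to string lets us pass the builder anywhere a string is expected.
Console.WriteLine(smartBuilder);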

That is it. Not much magic, but it simplifies string manipulation a little :)



Expression parsing and nested properties

February 22, 2014 04:00 by author n.podbielski

In one of the projects I was working on, I needed to get a property value from a property path and a property path from an expression.

First, let's cover the second case.

With an expression in the form of a nested property access:

()=>object1.object2.object3.object4

we cannot simply take the value of some property, because we only have the root object, object1.

Instead we have to take the value of every object in the middle, and the last object as the value of our desired property.

If we had an expression like this:

()=>object1.object2

and the value of the root object (object1), we can just cast the expression above to the MemberExpression type and then retrieve the name of the property from MemberExpression.Member.Name.

But if we need a property of object3 or even deeper, we need to retrieve another, nested MemberExpression from the expression above.

Without knowing the depth of our expression, we have to repeat that operation as long as the MemberExpression.Expression property is not null.

To take a MemberExpression from a given Expression (which can be of many types: MemberExpression, LambdaExpression, UnaryExpression) we can use the following method:

public static MemberExpression GetMemberExpression(Expression expression)
{
    if (expression is MemberExpression)
    {
        return (MemberExpression)expression;
    }
    else if (expression is LambdaExpression)
    {
        var lambdaExpression = expression as LambdaExpression;
        if (lambdaExpression.Body is MemberExpression)
        {
            return (MemberExpression)lambdaExpression.Body;
        }
        else if (lambdaExpression.Body is UnaryExpression)
        {
            return ((MemberExpression)((UnaryExpression)lambdaExpression.Body).Operand);
        }
    }
    return null;
}

 

This method will return a MemberExpression from any of the above types.

Armed with a method like this, we can write a loop to retrieve the property name for all levels of the expression. For example we can use the rarely used do...while loop. Of course a while loop would work too, but this way we get an additional learning experience, using less known language constructs :)

 

public static string GetPropertyPath(Expression expr)
{
    var path = new StringBuilder();
    MemberExpression memberExpression = GetMemberExpression(expr);
    do
    {
        if (path.Length > 0)
        {
            path.Insert(0, ".");
        }
        path.Insert(0, memberExpression.Member.Name);
        memberExpression = GetMemberExpression(memberExpression.Expression);
    }
    while (memberExpression != null);
    return path.ToString();
}

In my code I placed those two methods in a class called ExpressionOperator and then used them in an extension method for the object type:

public static string GetPropertyPath<TObj, TRet>(this TObj obj, Expression<Func<TObj, TRet>> expr)
{
    return ExpressionOperator.GetPropertyPath(expr);
}

which can be used like this:

object1.GetPropertyPath(o =>o.object2.object3.object4)

which should return the string "object2.object3.object4". Being able to get a property path from any expression, we can now write a method that returns the value of the destination property (the last one in the expression).

A method like this is even simpler than the one returning the property path. We just need to find the value of every property along the path until we reach our destination property. For that we can use a while loop :)

public static object GetPropertyValue(this object obj, string propertyPath)
{
    object propertyValue = null;
    if (propertyPath.IndexOf(".") < 0)
    {
        var objType = obj.GetType();
        propertyValue = objType.GetProperty(propertyPath).GetValue(obj, null);
        return propertyValue;
    }
    var properties = propertyPath.Split('.').ToList();
    var midPropertyValue = obj;
    while (properties.Count > 0)
    {
        var propertyName = properties.First();
        properties.Remove(propertyName);
        propertyValue = midPropertyValue.GetPropertyValue(propertyName);
        midPropertyValue = propertyValue;
    }
    return propertyValue;
}

The code above returns the property value by reflection. Property names are obtained by splitting the property path into parts separated by '.'. For example we can use this method in the following way:

object1.GetPropertyValue(o=>o.object2.object3.object4);

This is a really simple example. For better usability you should add validation for the root object (whether it has a value in the first place) and for any of the mid-objects. They do not have to have values either; they may be null because the object tree was never initialized. It is also a good idea to add a boolean flag deciding whether the method should throw an error in the above cases, or return false (lack of success) and a null value (the value of the desired property).

Another way to improve things is to add a similar way to retrieve the type of a nested property, or to set a property value by a property path given as a string.
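
As a sketch of that last idea, setting a nested property by a string path could reuse GetPropertyValue for the intermediate objects; the method below is only an illustration and is not part of the attached sample:

public static void SetPropertyValue(this object obj, string propertyPath, object value)
{
    var lastDot = propertyPath.LastIndexOf('.');
    // The object owning the destination property is either the root object itself
    // or the value of the path up to the last '.'.
    var owner = lastDot < 0 ? obj : obj.GetPropertyValue(propertyPath.Substring(0, lastDot));
    var propertyName = lastDot < 0 ? propertyPath : propertyPath.Substring(lastDot + 1);
    owner.GetType().GetProperty(propertyName).SetValue(owner, value, null);
}

// usage: object1.SetPropertyValue("object2.object3.object4", newValue);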

In the project I was working on, I used this mechanism to retrieve a property path in C# code, transport it to the client (it was a web application, so the client was a browser) and set the property value in a JavaScript object which had the same object tree. In the other direction I transported the value of a property changed on the client side, together with its path, to apply the change on the server side. Very useful for synchronizing two different data schemas.

Attached to this article is an example console application for retrieving the property path and property value of a nested property.

 Program.cs (3.78 kb)

I hope this will help :)



WebSocket libraries comparison

February 21, 2014 15:38 by author n.podbielski

 

Web projects often require pushing data to clients as fast as possible, whenever necessary, without waiting for a client request. It is perfect for websites with real-time communication between users, like online messengers for example. Or document collaboration tools. Or maybe status updates on long-running calculations/tasks performed by the server. In every such case a two-way communication mechanism is ideal.

Before, other solutions were used for this kind of problem.

But now we have something better: WebSocket. The standard has been implemented in modern browsers for some time now. It was released in 2011, which is even better because, with all the changes and upgrades since then, we have a more secure and mature protocol.

 

Few remarks:

The comparison was made a few months ago and can be a bit outdated, but I think it is still useful if anyone is looking for a good WebSocket library.

Only libraries published as NuGet packages were taken into account, besides one, SuperWebSocket, which I found through the NuGet repository but had to download from its web page anyway.

Maybe, if I find time, I will update this article with new libraries or new versions of the already tested ones.

 

  1. Fleck

    https://github.com/statianzo/Fleck


    I found it really simple to install and use. I did not have any problems with the library, documentation, examples etc. Just add the package, copy some samples and run the project. Simple.

    But simplicity comes at a price: it is not a very powerful nor configurable solution.

    private static void Main(string[] args)
    {
         var server = new WebSocketServer("ws://localhost:8181");
         server.Start(socket =>
         {
              socket.OnOpen = () => OnOpen(socket);
              socket.OnClose = () => OnClose(socket);
              socket.OnMessage = m => OnMessage(socket, m);
         });
    }

    I would use this library for a quick or simple project. If you do not need complex data structures sent through WebSocket, command-like messages, multiple servers, or a fallback for clients without WebSocket support, this may be the library for you.

    Advantages:

    • Simple
    • No dependencies

    Disadvantages:

    • Not very configurable
    • No fallback in case the browser does not support WebSocket

  2. SignalR

    http://www.asp.net/signalr


    It is a library from Microsoft, which I personally treat as an advantage. It integrates with the existing framework, ASP.NET, and provides good abstraction for both client and server code. It means that you do not have to know much about the protocol, which is good. And it is able to fall back gracefully to other communication mechanisms whenever a client cannot use WebSocket. It is also possible to accomplish something called a Remote Procedure Call, from server to client.

    It can broadcast data to all clients or send a message to only one. And scale to a really great number of simultaneous connections. And it is open source!

    Sounds really great, right? Yeah... except that WebSocket support needs IIS 8 on Windows Server 2012 (or Windows 8 for that matter, but you would not host any really big project on that system, right?). For me it is one of the cool features of 'Microsoft's-new-server-OS-which-you-should-buy'. That is not bad if you are developing an enterprise project, but for small projects this library is too expensive even though it is open source.

    Of course these requirements only apply if you want WebSocket communication. But this article is about WebSocket communication, so I count this as a really big disadvantage.

    public class MyHub1 : Hub
    {
        public void Send(string name, string message)
        {
            // Call the broadcastMessage method to update clients.
            Clients.All.broadcastMessage(name, message);
        }
    } 
    $(function () {
        var chat = $.connection.myHub1;
        chat.client.broadcastMessage = function (name, message) {
            //...
        };
        $.connection.hub.start().done(function () {
            $('#sendmessage').click(function () {
                chat.server.send('message');
            });
        });
    });

     

    Advantages:

    • Good abstraction
    • Good integration with IIS and ASP.NET
    • Many fallbacks
    • Open source
    • Microsoft library
    • Scalable

    Disadvantages:

    • IIS 8 required…
    • … which needs very expensive server OS, Windows Server 2012


  3. AlchemyWebSocket

    http://alchemywebsockets.net/

     

    This one does not really come to mind when I recall WebSocket libraries, but there is nothing wrong with it really. It can be placed right behind Fleck. It is also really simple, easy to use, easy to install (a NuGet package is available) and has documentation with good examples.

    It has both server-side and client-side code built in. It is also scalable.

    static void Main(string[] args)
    {
        // instantiate a new server - acceptable port and IP range,
        // and set up your methods.
    
        var aServer = new WebSocketServer(81, IPAddress.Any)
        {
            OnReceive = OnReceive,
            OnSend = OnSend,
            OnConnect = OnConnect,
            OnConnected = OnConnected,
            OnDisconnect = OnDisconnect,
            TimeOut = new TimeSpan(0, 5, 0)
        };
    
        aServer.Start();
        string consoleReadLine;
        do
        {
            consoleReadLine = Console.ReadLine();
            sockets.ForEach(s => s.Send(consoleReadLine));
        } while (consoleReadLine != "exit");
    }

     

    But it also has some awkwardness that I cannot shake off. For example, there is no simple “OnReceive” event handler that gets just a string with the actual message sent from the client. You have to do it yourself. Yes, you only have to call .ToString() to get the actual message, but the whole point of using a library is to not force yourself to think about how the communication protocol is implemented.


    private static void OnReceive(UserContext context)
    {
        Console.WriteLine("Client " + context.ClientAddress.ToString() + " sended: " + context.DataFrame.ToString());
    }

     

    The WebSocket server initialization method takes the port first and then the IP setting. I always think of an address as IP and THEN port, if a port is necessary at all. Or the timeout setting: why is there a timeout anyway? I can understand that it may sometimes be useful, but as a feature, not as one of the primary settings. But those are details really.

    For me this forces your code to abstract it away with another layer, which should have been done by the library in the first place.

    Anyway you can try it out, compare performance to Fleck, and decide which would be better for your simple project.

     

    Advantages:

    • Simple
    • No dependencies
    • Good documentation

    Disadvantages:

    • A bit awkward and a little more complicated than Fleck
    • No fallback


  4. XSockets

    http://xsockets.net/


    This one seemed really promising. I really tried and spent much more time trying to make it work than on the other libraries (even with performance tests etc.). But unfortunately I had no luck. Really, anything I can think of that can be wrong with a library is wrong with this one. Bad documentation which differs from the code. Which one is outdated: code or documentation? It is not easy to install and get running. In fact this library has examples that I had a hard time building and running. Or examples that, you could say, show more about the MVC framework than about the XSockets library itself. I tried to run it inside an ASP.NET project, MVC and a WinService. Sadly none of them worked.

    I really had hopes for this one, but eventually gave up in favor of a better (read: any other) library. Seriously, why use a library with which it is hard to even start a simple project? You can predict more issues to come while actually using it inside a project. I recommend staying away from it.

     

    public static class XSocketsBootstrap
    {
        private static IXBaseServerContainer wss;
        public static void Start()
        {            
            wss = XSockets.Plugin.Framework.Composable.GetExport<IXBaseServerContainer>();
            wss.StartServers();
        }
    }


    Advantages:

    • Seems powerful
    • Should have good JavaScript integration

    Disadvantages:

    • Complicated and hard
    • Complicated to configure and run inside of WebForms, MVC and WinService
    • Differences between code and documentation
    • Outdated documentation and examples


  5. Microsoft.WebSocket

    http://msdn.microsoft.com/en-us/hh969243.aspx


    Another library from Microsoft. And it requires IIS 8 too, so I did not have the means to test it. The examples are really low level, so it forces you to deal with buffers and streams instead of strings. In some cases this can be good, but mostly there is no point. If you have IIS 8 on the server, why bother with this library when you can use SignalR, which will take care of most of the stuff for you.

    I think this is more of a proof of concept than a usable library.

    int count = receiveResult.Count;
    
    while (receiveResult.EndOfMessage == false)
    {
        if (count >= maxMessageSize)
        {
            string closeMessage = string.Format("Maximum message size: {0} bytes.", maxMessageSize);
            await socket.CloseAsync(WebSocketCloseStatus.MessageTooBig, closeMessage, CancellationToken.None);
            return;
        }
        receiveResult = await socket.ReceiveAsync(new ArraySegment<byte>(receiveBuffer, count, maxMessageSize - count), CancellationToken.None);
        count += receiveResult.Count;
    }
    var receivedString = Encoding.UTF8.GetString(receiveBuffer, 0, count);
    var echoString = "You said " + receivedString;
    ArraySegment<byte> outputBuffer = new ArraySegment<byte>(Encoding.UTF8.GetBytes(echoString));
    await socket.SendAsync(outputBuffer, WebSocketMessageType.Text, true, CancellationToken.None);

     

  6. SuperWebsocket

    http://superwebsocket.codeplex.com/

    Last but not least is SuperWebSocket. I was a bit skeptical about this one (if I remember correctly, this is the only one that I somehow found through the NuGet website but is not actually available as a package). It may seem a little complicated, but in fact it is very easy. The examples, supported by the documentation, take you step by step from the simplest WebSocket servers to more complicated ones, with command requests, JSON, multiple server instances, .config file configuration and more.

    This library maybe does not have all the cool features that the others do, but that does not matter, because it is very configurable and easy to make it do what you want. It can work in ASP.NET, as a console application, and as a Windows service. The documentation, however, recommends running the server as a system service. From my experience I recommend not running it inside a web application because of the slowness of such a solution (very bad performance, about fifty times slower than a console app). On the other hand, a standalone server application requires running an .exe that is not strictly part of the library, but part of the SuperSocket project (on which SuperWebSocket is based). This forces you to do a little 'magic' to start the server with a debug session, or to enable debugging at all. When you run the server as an application that is not strictly part of your solution, there is also the issue of forcing the server to use the latest versions of assemblies from your other projects.

    In return you get a well-known, flexible WebSocket solution.

    It is also open source, so you can change things if you want.

    On the other hand, as a disadvantage you can count the lack of a JavaScript client for this server (but there is a C# client). Also, this one has third-party dependencies.

    After working with this library for a few months I do not know of any major issues.
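
    For comparison with the snippets above, a minimal console-hosted server looks roughly like this (written from memory, so member names may differ slightly between library versions):

    private static void Main(string[] args)
    {
        var server = new WebSocketServer();
        if (!server.Setup(2012))   // port number is just an example
        {
            Console.WriteLine("Failed to set up the server.");
            return;
        }
        // echo every received text message back to the sender
        server.NewMessageReceived += (session, message) => session.Send("You said: " + message);
        server.Start();
        Console.WriteLine("Server started, press any key to stop.");
        Console.ReadKey();
        server.Stop();
    }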

     

    Advantages:

    • Nice features and very configurable
    • Great examples
    • Examples with documentation of the recommended setup
    • Can work as a WinService, inside ASP.NET and as a console app
    • Good performance

    Disadvantages:

    • No fallback communication
    • Dependencies

     


Summary:

For complicated solutions/projects I recommend using SuperWebSocket, which is a stable and very configurable library. For simple and fast projects I would choose Fleck, but I would give up both for SignalR if I had the means to use the latest Windows Server for the test and production machines.

 

Projects:

The test projects I used to compare the libraries can be found here:

WebSocketTest.zip



Ninject and WCF

May 26, 2013 13:01 by author n.podbielski

Ninject is a very simple and at the same time powerful IoC container. I have used it in a few projects and had very few problems, if any.

Most of the time, getting instances from the Ninject kernel requires a simple line with a binding expression, and sometimes even that is unnecessary if the type is self-bindable (i.e. if it is a concrete class with a parameterless constructor).

Getting Ninject to work with WCF is a little harder. You cannot just bind the service interface types, because the proxies which implement them are created by the .NET infrastructure. Luckily the WCF system is very flexible and most of it can be changed/extended with custom functionality.

How can we do that? The best solution is to add a new behavior for our WCF services. A behavior is a class that implements the IServiceBehavior interface. The ApplyDispatchBehavior method accessible through that interface allows our code to change the instance provider of our service. The instance provider, in turn, is an object that implements the IInstanceProvider interface with its GetInstance method. This method is defined in the following way:

object GetInstance(InstanceContext instanceContext);
object GetInstance(InstanceContext instanceContext, Message message);

 

Inside one of them we can create an instance of our service from the Ninject container.

Let us start from the top, with the behavior class. It can be applied to a service as an attribute.

public class NinjectBehaviorAttribute : Attribute, IServiceBehavior
{
	public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase,
				Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
	{
	}

	public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
	{
		Type serviceType = serviceDescription.ServiceType;
		IInstanceProvider instanceProvider = new NinjectInstanceProvider(NinjectServiceLocator.Kernel, serviceType);

		foreach (ChannelDispatcher dispatcher in serviceHostBase.ChannelDispatchers)
		{
			foreach (EndpointDispatcher endpointDispatcher in dispatcher.Endpoints)
			{
				DispatchRuntime dispatchRuntime = endpointDispatcher.DispatchRuntime;
				dispatchRuntime.InstanceProvider = instanceProvider;
			}
		}
	}

	public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
	{
	}
}

All the interesting things happen inside the ApplyDispatchBehavior method. First, an instance of the NinjectInstanceProvider class is created, and the Ninject kernel together with our desired service type is passed to it. The instance provider is defined as follows:

public class NinjectInstanceProvider : IInstanceProvider
{
	private Type serviceType;
	private IKernel kernel;

	public NinjectInstanceProvider(IKernel kernel, Type serviceType)
	{
		this.kernel = kernel;
		this.serviceType = serviceType;
	}

	public object GetInstance(InstanceContext instanceContext)
	{
		return this.GetInstance(instanceContext, null);
	}

	public object GetInstance(InstanceContext instanceContext, Message message)
	{
		return kernel.Get(this.serviceType);
	}

	public void ReleaseInstance(InstanceContext instanceContext, object instance)
	{
	}
}

Inside the second overload of the GetInstance method the actual service instance is created through the Ninject kernel. The kernel is acquired from a simple implementation of a service locator. It's just a static class with a public read-only property holding the Ninject kernel.

public static class NinjectServiceLocator
{
	public static IKernel Kernel { get; private set; }

	public static void SetServiceLocator(IKernel kernel)
	{
		Kernel = kernel;
	}
}

The kernel instance is injected into the property with the SetServiceLocator method after initialization, preferably inside the NinjectWebCommon class, which is created in the App_Start directory after adding Ninject to the project from NuGet.

private static IKernel CreateKernel()
{
	var kernel = new StandardKernel();
	kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel);
        kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>();
	RegisterServices(kernel);
	NinjectServiceLocator.SetServiceLocator(kernel);
	return kernel;
}

I decided to go with this solution instead of an actual implementation of the Microsoft ServiceLocator class to keep it simple; it works in a similar way anyway.

After creating the instance provider object, we apply it to all endpoints inside ApplyDispatchBehavior.

The last thing is to actually register the service types inside Ninject. Typically we put all the data necessary to create a service proxy inside the web.config file. A WCF channel can be created from such configuration with the ChannelFactory class. Let's implement this functionality inside a class that derives from the Ninject.Activation.Provider<T> type available in the Ninject assembly.

public class ConfigServiceProvider<TService> : Provider<TService>
{

	protected override TService CreateInstance(IContext context)
	{
		var @interface = typeof(TService);
		var interfaceTypeName = @interface.FullName;
		var endpointsConfig = (ClientSection)ConfigurationManager.GetSection("system.serviceModel/client");
		string address = null;
		foreach (ChannelEndpointElement endpoint in endpointsConfig.Endpoints)
		{
			if (endpoint.Contract == interfaceTypeName)
			{
				address = endpoint.Address.OriginalString;
				break;
			}
		}
		var factory = new ChannelFactory<TService>(new WSHttpBinding(), address);
		return factory.CreateChannel();
	}
}

First the provider accesses the configuration of all client endpoints, then searches the configuration for a matching interface type name. If it finds one, the address of the WCF service endpoint is passed to the ChannelFactory class, which will create the service proxy. With such a provider we can do the actual type binding inside a Ninject module:

public class WcfModule : NinjectModule
{
	public override void Load() { }

	public IBindingWhenInNamedWithOrOnSyntax<TService> BindServiceFromConfig<TService>()
	{
		return Bind<TService>().ToProvider<ConfigServiceProvider<TService>>();
	}
}

public class ServicesModule : WcfModule
{
	public override void Load()
	{
		BindServiceFromConfig<IMyService>(); // IMyService is a placeholder for your actual service contract interface
	}
}

The WcfModule class can be placed inside some shared library so we can use it in more than one project. I am sure that more than one of them is using WCF services :). ServicesModule, on the other hand, should be placed inside the assembly with the service interfaces and loaded from the NinjectWebCommon class inside the WCF project.
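
For completeness, applying the behavior is just a matter of decorating the service implementation with the attribute; the service and repository types below are hypothetical, only the attribute itself comes from this article:

[NinjectBehavior]
public class CustomerService : ICustomerService
{
	private readonly ICustomerRepository repository;

	// Dependencies are injected by Ninject, because the instance
	// is created by NinjectInstanceProvider through the kernel.
	public CustomerService(ICustomerRepository repository)
	{
		this.repository = repository;
	}
}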

And that is all. The WCF web application registers the service interfaces inside Ninject, creates the kernel and sets its instance inside our custom ServiceLocator class. After that, when a service instance is requested by the .NET framework, NinjectBehaviorAttribute does its magic and acquires an instance of the NinjectInstanceProvider class, which asks the kernel for an instance of the specified service. The kernel, from its binding, creates a ConfigServiceProvider, through which the actual proxy instance is created thanks to our configuration.

 

 



Mapping collection of entities in EF with AutoMapper

May 25, 2013 11:38 by author n.podbielski

In my last post I explained why it is useful to add a base entity class in EF. Today I will write how, with the use of this base class, AutoMapper can map a collection of data objects (i.e. DTOs) to an existing collection of entities.
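
Throughout this post I use a MapTo extension. It is assumed to be a thin wrapper over AutoMapper's static API, roughly like the sketch below (not necessarily the exact helper used in the project):

public static class MappingExtensions
{
    // Map the source onto a new destination instance.
    public static TDest MapTo<TDest>(this object source)
    {
        return (TDest)Mapper.Map(source, source.GetType(), typeof(TDest));
    }

    // Map the source onto an existing destination instance.
    public static TDest MapTo<TSource, TDest>(this TSource source, TDest destination)
    {
        return Mapper.Map(source, destination);
    }
}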

The problem with doing:

dataCollection.MapTo(entityCollection);

is that AutoMapper removes all entities from the entity collection, because a data item mapped to an entity has a different hash code and a different reference than the original entity. Then, when AutoMapper searches the original entity collection for the same item as the mapped entity, it cannot find one. That causes AutoMapper to add another entity with the same Id as the original, after removing the original entity. An entity collection changed in that way cannot be saved to the database, because EF complains that the removed entities have to be removed explicitly from the database on commit.

To fix that problem we will use a custom value resolver. To create one we will write a class that implements IValueResolver, available in the AutoMapper assembly.

public interface IValueResolver
{
    ResolutionResult Resolve(ResolutionResult source);
}

There is also an abstract ValueResolver<TSource, TDestination> class available:

public abstract class ValueResolver<TSource, TDestination> : IValueResolver
{
    protected ValueResolver();

    public ResolutionResult Resolve(ResolutionResult source);
    protected abstract TDestination ResolveCore(TSource source);
}

But this class only allows overriding the ResolveCore method, which will not be sufficient, since it does not give access to information about the destination entity. Without this information we won't be able to create a generic resolver class. So instead of this class we will use the interface.

Our generic mapping class has to take two type parameters: the type of the data object (DTO) and the type of the entity. Also, the ResolutionResult object of the AutoMapper mapping context does not carry information about which source member is being mapped inside the value resolver. This information has to be passed in. It is best to pass it as an expression instead of a string, to make it less error prone. To make it possible we will add a third type parameter, which will be the parent type of the data object collection.

public class EntityCollectionValueResolver<TSourceParent, TSource, TDest> : IValueResolver
    where TSource : DTOBase
    where TDest : BaseEntity, new()
{
    private Expression<Func<TSourceParent, ICollection>> sourceMember;

    public EntityCollectionValueResolver(Expression<Func<TSourceParent, ICollection<TSource>>> sourceMember)
    {
        this.sourceMember = sourceMember;
    }

    public ResolutionResult Resolve(ResolutionResult source)
    {
        //get source collection
        var sourceCollection = ((TSourceParent)source.Value).GetPropertyValue(sourceMember);
        //if we are mapping to existing collection of entities...
        if (source.Context.DestinationValue != null)
        {
            var destinationCollection = (ICollection<TDest>)
                //get entities collection parent
                source.Context.DestinationValue
                //get entities collection by member name defined in mapping profile
                .GetPropertyValue(source.Context.MemberName);
            //delete entities that are not in source collection
            var sourceIds = sourceCollection.Select(i => i.Id).ToList();
            foreach (var item in destinationCollection.ToList())
            {
                if (!sourceIds.Contains(item.Id))
                {
                    destinationCollection.Remove(item);
                }
            }
            //map entities that are in source collection
            foreach (var sourceItem in sourceCollection)
            {
                //if item is in destination collection...
                var originalItem = destinationCollection.Where(o => o.Id == sourceItem.Id).SingleOrDefault();
                if (originalItem != null)
                {
                    //...map to existing item
                    sourceItem.MapTo(originalItem);
                }
                else
                {
                    //...or create new entity in collection
                    destinationCollection.Add(sourceItem.MapTo<TDest>());
                }
            }
            return source.New(destinationCollection, source.Context.DestinationType);
        }
        //we are mapping to new collection of entities...
        else
        {
            //...then just create new collection
            var value = new HashSet<TDest>();
            //...and map every item from source collection
            foreach (var item in sourceCollection)
            {
                //map item
                value.Add(item.MapTo<TDest>());
            }
            //create new result mapping context
            source = source.New(value, source.Context.DestinationType);
        }
        return source;
    }
}

 Expression of type Expression<Func<TSourceParent, ICollection>> help as to make sure that inside Resolve method we will get correct property without necessity of using existing object source or creating new one to pass in inside some lambda.
 GetPropertyValue method is extension of object type. It works by taking MamberExpression from our Expression<Func<TSourceParent, ICollection>>, and then property MamberExpression.Member.Name of source member. After that with source property name we can take its value with reflection:

public static TRet GetPropertyValue<TObj, TRet>(this TObj obj,
	Expression<Func<TObj, TRet>> expression,
	bool silent = false)
{
	var propertyPath = ExpressionOperator.GetPropertyPath(expression);
	var objType = obj.GetType();
	var propertyValue = objType.GetProperty(propertyPath).GetValue(obj, null);
	return propertyValue;
}

public static MemberExpression GetMemberExpression(Expression expression)
{
	if (expression is MemberExpression)
	{
		return (MemberExpression)expression;
	}
	else if (expression is LambdaExpression)
	{
		var lambdaExpression = expression as LambdaExpression;
		if (lambdaExpression.Body is MemberExpression)
		{
			return (MemberExpression)lambdaExpression.Body;
		}
		else if (lambdaExpression.Body is UnaryExpression)
		{
			return ((MemberExpression)((UnaryExpression)lambdaExpression.Body).Operand);
		}
	}
	return null;
}

The whole Resolve method is built around an if statement:

if (source.Context.DestinationValue != null)

This ensures that we cover the two cases: mapping the data collection to an existing collection of entities, and to a new collection of entities. The second case is inside the else branch and is not complicated, since it is a simple mapping of all items in the collection.
The interesting part happens inside the if branch and is composed of three phases:

1. Deleting of entities

All entities from the destination collection that are not present in our data collection are deleted. That prevents EF from throwing the error mentioned above. Both entities and DTOs have Ids, which are used to find which items were deleted. This is where the base entity class is useful, since it has the Id defined inside.

2. Mapping changed items.

If an entity with the same Id as the item in the data collection is found, it is used as the destination of the mapping.

3. Mapping of new (added) entities, as new objects.

This generic class can then be used like this inside an AutoMapper profile:

CreateMap<ParentDTO,ParentEntity>()           
                .ForMember(o => o.DestinationCollection, m =>
                        m.ResolveUsing(new EntityCollectionValueResolver<
                            ParentDTO, SourceDTO, DestEntity>
                            (s => s.SourceCollection))
                           )
            ;

One more thing: this solution will cause a StackOverflowException if the SourceDTO to DestEntity mapping profile tries to map ParentDTO -> ParentEntity again, from a ParentEntity property inside DestEntity. Usually child entities have a reference to their parent entities. If those references are not ignored during mapping, AutoMapper will try to do the mapping ParentDTO -> SourceCollection -> SourceDTO -> DestEntity -> ParentEntity -> ..., which will cause circular mapping.
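
A typical way to break that cycle is to ignore the back-reference in the child mapping profile; the Parent property name below is hypothetical:

CreateMap<SourceDTO, DestEntity>()
    // ignore the navigation property pointing back to the parent entity
    .ForMember(d => d.Parent, m => m.Ignore());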

Also, this resolver will not cover the case when the destination collection contains items of types derived from the parent type. For example, when you have a collection of people with students and teachers inside it, this will try to do the mapping only for people. All data specific to the derived types will be ignored.

Unfortunately this is still not enough to map the collection. That is because even though items are removed from the collection, they are not marked for deletion inside the DbContext class. This will cause a critical error in the application during the SaveChanges method of DbContext. To correct that issue we have to mark them for deletion in the context class.

To do that there are 3 options:

1. Use the context class inside the EntityCollectionValueResolver class and mark deleted items for deletion. This is a less elegant but much quicker solution.

2. Use a custom collection class which will mark deleted items for deletion using the context class.

3. Use a custom collection class with item state tracking. This collection could subscribe to an OnSaveChanges event of the DbContext class, and in the event handler delete from the context the items previously removed from the collection.

The first and second options (maybe the third too, it depends on the implementation) suffer from the necessity to synchronize the DbContext which will be used to save changes to the parent entity, the entity which was mapped from the DTO. Also, the first solution is less elegant because it mixes AutoMapper and DbContext. Those two should live separately.

In this article I will show the first option, since the third, which I think is better, would involve changing the entity classes and repositories. That is too much for one article.

First, we have to acquire an instance of the context class. In my application I have an IoC container which holds a single-per-thread instance of this class. This makes sure that the same context which loaded the parent entity also deletes the child entities and saves the changes to the parent entity.

At the beginning of the Resolve method we will add code that returns the current instance of the context class (an example using the Microsoft Patterns & Practices IServiceLocator implementation):

var context = ServiceLocator.Current.GetInstance<DbContext>();

With this instance we can delete items from the context:

if (!sourceIds.Contains(item.Id))
{
    destinationCollection.Remove(item);
    ((IObjectContextAdapter)context).ObjectContext.DeleteObject(item);
}

After that the ObjectStateManager's private field _entriesWithConceptualNulls will have 0 items, which is good, because any item in this collection will cause EF to throw a critical error.

With a breakpoint set after the line with the DeleteObject call, you can inspect this collection with the expression:

(context.Database._internalContext).ObjectContext.ObjectStateManager._entriesWithConceptualNulls

like in the image:

 

This is the whole body of the Resolve method:

public ResolutionResult Resolve(ResolutionResult source)
{
    var context = ServiceLocator.Current.GetInstance<DbContext>();
    //get source collection
    var sourceCollection = ((TSourceParent)source.Value).GetPropertyValue(sourceMember);
    //if we are mapping to existing collection of entities...
    if (source.Context.DestinationValue != null)
    {
        var destinationCollection = (ICollection<TDest>)
            //get entities collection parent
            source.Context.DestinationValue
            //get entities collection by member name defined in mapping profile
            .GetPropertyValue(source.Context.MemberName);
        //delete entities that are not in source collection
        var sourceIds = sourceCollection.Select(i => i.Id).ToList();
        foreach (var item in destinationCollection.ToList())
        {
            if (!sourceIds.Contains(item.Id))
            {
                destinationCollection.Remove(item);
                ((IObjectContextAdapter)context).ObjectContext.DeleteObject(item);
            }
        }
        //map entities that are in source collection
        foreach (var sourceItem in sourceCollection)
        {
            //if item is in destination collection...
            var originalItem = destinationCollection.Where(o => o.Id == sourceItem.Id).SingleOrDefault();
            if (originalItem != null)
            {
                //...map to existing item
                sourceItem.MapTo(originalItem);
            }
            else
            {
                //...or create new entity in collection
                destinationCollection.Add(sourceItem.MapTo<TDest>());
            }
        }
        return source.New(destinationCollection, source.Context.DestinationType);
    }
    //we are mapping to new collection of entities...
    else
    {
        //...then just create new collection
        var value = new HashSet<TDest>();
        //...and map every item from source collection
        foreach (var item in sourceCollection)
        {
            //map item
            value.Add(item.MapTo<TDest>());
        }
        //create new result mapping context
        source = source.New(value, source.Context.DestinationType);
    }
    return source;
}

From now on, mapping from a DTO to an entity with AutoMapper and saving the mapped entity to the database should work just fine.

That is all! :)



Entity Framework and Base Entity class

May 16, 2013 14:32 by author n.podbielski

Entity Framework does a great job of taking care of entity changes and entity collections internally. And it uses almost plain POCO objects. Almost, because collections of dependent data, like in a 1-to-many table relationship, need the virtual keyword. It's understandable, since EF needs to track what happens to the collection. For instance, let's have 2 tables: Customer and the dependent Order.

To map this relation in the EF model we need to create two classes: Customer with a relationship to an Orders collection, and Order with a relationship to a single Customer. Both ends of the relation need to be virtual. Oh well, actually they don't have to be virtual, but as you can read here this allows for the 'lazy loading' and 'change tracking' capabilities of EF, so it's pretty useful to add the virtual keyword in those places.

public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public partial class Order
{
    public int Id { get; set; }
    public virtual Customer Customer { get; set; }
}

That's pretty much it. There are some rare cases when, for example, it is useful to initialize the collection of child entities on entity creation, but it's not a requirement from the EF point of view. So what could be the reason to add a base class for an entity?

The thing is that whenever you want to transfer an entity to some other subsystem to make changes there, Entity Framework cannot track those changes (you should NEVER EVER transfer the entity object directly, but map it to some other data object first!). Mapping data from an entity to some data transfer object (DTO), making changes and returning them again to EF will most likely fail. With a simple object it is achievable and should work, but it can spawn errors with collection mapping. In the last application I was working on that was the case. I used the EntityFramework.Patterns extension though, so in pure EF this might work fine (but I doubt it).

The root of the problem was the Orders collection in the Customer entity. When I tried to transfer a customer through e.g. a WCF service, make changes at the client side, and return them to EF, it caused EF to think that I had deleted all items from Orders and added some new ones instead. It's cool that it can tell that the collection changed. Too bad that this happens even when it has not changed at all! To be frank, EF wasn't entirely responsible; it was the mapping mechanism in AutoMapper. It wasn't considering two Order items with the same values to be the same, because they weren't referencing the same object. After AutoMapper replaced the old Order with a new Order, EF wasn't considering them equal for exactly the same reason: the references were different.

How to remedy that? There has to be some custom mechanism of entity equality. And it's best to put it on some BaseEntity class, right?

public class BaseEntity<TEntity> : BaseEntity where TEntity : BaseEntity<TEntity>
    {
        private int? oldHashCode;

        public override bool Equals(object obj)
        {
            if (obj == null)
            {
                return false;
            }
            if (obj == this)
            {
                return true;
            }
            if (obj is TEntity)
            {
                var objAsBaseEntity = obj as TEntity;
                return objAsBaseEntity.Id == this.Id;
            }
            return false;
        }
    }

This implementation is partly based on the MSDN Equals guidelines and on a base entity class I found for NHibernate. But it is not enough. Along with the Equals implementation we also have to implement GetHashCode. The problem with this is that an entity should have the same hash code for the whole time it exists. More importantly, the hash code should depend only on the Id property, which is the only thing that allows two separate objects to be equal. So we need to make the entity hash code dependent on the Id. But what about two new objects? They will have an empty Id and, because of that, the same hash code? One solution is to generate the hash code from the base class when the Id is empty.

public override int GetHashCode()
{
    // once we have a hashcode we'll never change it
    if (oldHashCode.HasValue)
    return oldHashCode.Value;
    // when this instance is new we use the base hash code
    // and remember it, so an instance can NEVER change its
    // hash code.
    var thisIsNew = Id == Guid.Empty;
    if (thisIsNew)
    {
        oldHashCode = base.GetHashCode();
        return oldHashCode.Value;
    }
    return Id.GetHashCode();
}

When the Id is set, the hash code is generated from it. When it is not set, the hash code is generated from the base class, which is object in our case. And the hash code generated the first time is final; it cannot be changed. That is because changing it would break the functionality of the HashSet and Dictionary classes. EF uses both of them, so it would be a big issue. It is the best implementation I could find, and still it has some drawbacks. For example, an entity loaded from the DB and an entity with the same data and the same Id mapped outside of EF will have different hash codes. That is because in the first situation the hash code is generated from the database Id, while in the second situation the hash code is generated for a new object which has an empty Id. It does not matter that after hash code generation the Id is set to the real value; once generated, it cannot change.

To remedy that, we need to handle mapping between mapped entities and entities loaded from the DB by hand.
In the next post I will explain how to make a class that will resolve the mapping of entities to correct this issue.
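
For reference, the non-generic BaseEntity assumed by the snippets above (and by the TDest : BaseEntity constraint used in the AutoMapper post) can be as small as this; a sketch only, the real class may carry more members:

public abstract class BaseEntity
{
    // Guid key, matching the Id == Guid.Empty check in GetHashCode above.
    public virtual Guid Id { get; set; }
}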

 

 



Ninject and Entity Framework

May 3, 2013 03:50 by author n.podbielski

Last time I wrote about a problem with testing Entity Framework in Visual Studio 2012. While working on that project I also encountered problems with configuring Ninject to allow injecting Entity Framework classes from its Patterns extension. To be frank, this was not so much a problem as a lack of documentation. I was just forced to experiment with Ninject and the new-to-me Entity Framework Patterns extension.

So to make the world of .NET a little easier I decided to write about my experience.

I started with this documentation for the EF extension. It is an instruction for someone who decided to use Unity in their project.

There is also a link to a similar instruction for the Ninject IoC container, but it's empty. Why the authors of this extension decided to include, in the documentation of their project, a link to something that is empty eludes me. Oh well, you have to make do with what you have. It should not be hard to translate the Unity configuration syntax to its equivalent in Ninject.

 

public static class CompositionRoot
{
    private static readonly IUnityContainer UnityContainer = new UnityContainer();

    public static IUnityContainer Container { get { return UnityContainer; } }

    public static void RegisterServices()
    {
        // Registering interfaces of Unit Of Work & Generic Repository
        UnityContainer.RegisterType(typeof(IRepository<>), typeof(Repository<>));
        UnityContainer.RegisterType(typeof(IUnitOfWork), typeof(UnitOfWork));

        // Every time we ask for an EF context, we'll pass our own Context.
        UnityContainer.RegisterType(typeof(DbContext), typeof(Context));

        // Tricky part.
        // Your repositories and unit of work must share the same DbContextAdapter,
        // so we register an instance that will always be used on subsequent resolves.
        // Note: you should not use ContainerControlledLifetimeManager when using ASP.NET or MVC;
        // use a per request lifetime manager instead.
        UnityContainer.RegisterInstance(
            new DbContextAdapter(UnityContainer.Resolve<DbContext>()),
            new ContainerControlledLifetimeManager());

        UnityContainer.RegisterType<IObjectSetFactory>(
            new InjectionFactory(con => con.Resolve<DbContextAdapter>())
            );

        UnityContainer.RegisterType<IObjectContext>(
            new InjectionFactory(con => con.Resolve<DbContextAdapter>())
            );
    }
}

As you can see, it is a little convoluted. That is one of the reasons why I like Ninject more: it is cleaner.

Oh right. First let's create a Ninject module:

public class InjectModule : NinjectModule
{
    public override void Load()
    {
    }
}

Modules like this one are loaded into Ninject in the following way:

var kernel = new StandardKernel();
kernel.Load<InjectModule>();

Depending on where you start the Ninject container it can look a little different, but the idea is the same: you use modules to register packs of types. Using that, I created a module for EF in the EF assembly, so it can be shared between my tests, services, console applications, etc.
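For example, instead of naming the module type you can let Ninject scan an assembly and load every module it contains (a quick sketch; it works the same from a test project, a service, or a console application):

// load all NinjectModule classes defined in the assembly that contains our module
var kernel = new StandardKernel();
kernel.Load(typeof(InjectModule).Assembly);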

Now let's fill this new module with meaningful code. First we will register the repository and unit of work interfaces:

Bind(typeof(IRepository<>)).To(typeof(Repository<>));
Bind<IUnitOfWork>().To<UnitOfWork>();

Isn't that cleaner than the Unity syntax? Next we need to register the EF context class. It is as simple as:

Bind<DbContext>().To<DBEntities>();

Of course, DBEntities is the type of your EF context class.

Now the harder part: the Unity 'tricky' registration of the DbContextAdapter class. What we are trying to do here is make every instance of Repository and UnitOfWork (which Ninject creates as transient objects, built whenever necessary and then disposed) share the same instance of DbContextAdapter. I decided to make this class a singleton per thread scope. It is not ideal, but it is built-in behavior of Ninject. Ideally it would be shared across the whole application, including spawned threads; for web apps it could be better to use InRequestScope, as shown after the binding below.

Bind<DbContextAdapter>().To<DbContextAdapter>().InThreadScope();
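For a web application, assuming the Ninject.Web.Common package is referenced, a per-request binding might look like this instead (a sketch, not something I used in this project):

// one shared adapter per HTTP request in ASP.NET / MVC applications
Bind<DbContextAdapter>().ToSelf().InRequestScope();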

The UnitOfWork and Repository classes use interfaces that let them reach the context class indirectly through the shared DbContextAdapter, so we have to bind those interfaces to the shared instance of this class. To do that, we register the interfaces IObjectSetFactory (used by the repository) and IObjectContext (used by the UnitOfWork) with a custom method that returns the DbContextAdapter:

Bind<IObjectSetFactory, IObjectContext>()
    .ToMethod(con => con.Kernel.Get<DbContextAdapter>());

The whole module for EF configuration will look like this:

 

public class EFModule : NinjectModule
{
    public override void Load()
    {
        Bind(typeof(IRepository<>)).To(typeof(Repository<>));
        Bind<IUnitOfWork>().To<UnitOfWork>();
        Bind<UnitOfWork>().ToSelf();
        Bind<DbContext>().To<DBEntities>();
        Bind<DbContextAdapter>().To<DbContextAdapter>().InThreadScope();
        Bind<IObjectSetFactory, IObjectContext>()
            .ToMethod(con => con.Kernel.Get<DbContextAdapter>());
    }
}


This is sufficient to make EF with the Patterns extension work in our application.

this.kernel = new StandardKernel();
kernel.Load<EFModule>();
repository = kernel.Get<IRepository<Entity>>();

var uow = kernel.Get<IUnitOfWork>();
repository.Insert(new Entity());
uow.Commit();

That is it. Operations with the UoW and repository patterns should now work with EF.

Personally, I am unhappy that this particular implementation of UnitOfWork does not implement the IDisposable interface. That would greatly improve the clarity of the code and would be helpful when maintaining it. If it did, code like this would be possible:

this.kernel = new StandardKernel();
kernel.Load<EFModule>();
repository = kernel.Get<IRepository<Entity>>();

// hypothetical: this compiles only if IUnitOfWork implemented IDisposable
using (var uow = kernel.Get<IUnitOfWork>())
{
    repository.Insert(new Entity());
    uow.Commit();
}

But you can't have everything :)



Using Matlab from a C# application.

clock January 20, 2013 11:25 by author n.podbielski

For my Master's degree diploma I wrote a simple application that used the Matlab COM server. I found it hard to use, mainly due to the lack of documentation, which is really basic and has only a few code examples for C#. I guess writing programs that use Matlab for calculations is not encouraged by MathWorks; you would become competition that way :). Nonetheless, I accomplished what I was required to do, so I decided to share it with the rest of the world.

Important: I was using the R2010a version of Matlab. I realize that there is a newer version, but I had only this one at my disposal. Since the interface for communicating with the Matlab server depends on the installed Matlab version and registry entries, it may differ from yours, though I suspect not by much. I also tried with 7.1 and (if I remember correctly) it only required swapping the reference in Visual Studio. But again... it was only a test, so there might be other problems that I am not aware of.

Let's start with a simple console application. Let's call it MatlabTest. First, we will add a DLL reference with the COM interface. Right-click the project and choose the [Add Reference] option. In the new window, click the COM tab. In the search text box, type 'Matlab'. Then choose "Matlab Application (Version 7.10) Type Library".

You should get a new reference like below:

Great. Now we should test if it is working. In order to use it, we should create our Matlab server from the C# application. To do that, we can add code to our main program:

var activationContext = Type.GetTypeFromProgID("matlab.application.single");
var matlab = (MLApp.MLApp)Activator.CreateInstance(activationContext);
Console.WriteLine(matlab.Execute("1+2"));
Console.ReadKey();

This code creates a single Matlab COM server through the Activator class. The ProgID "matlab.application.single" means a single Matlab COM server for our application: when it tries to create another Matlab, it just returns a reference to the same object. In contrast, we could use "matlab.application", which creates another instance of Matlab every time the Activator.CreateInstance method is executed. In more complex applications, web applications, or other long-running programs, that can create big memory leaks, since one instance costs around 220 MB (on 64-bit Windows 7 with Matlab R2010a).
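For comparison, here is a sketch of the non-shared variant; each call below would start a separate Matlab process, so use it with care:

// "matlab.application" (without ".single") creates a new Matlab server per CreateInstance call
var multiInstanceType = Type.GetTypeFromProgID("matlab.application");
var matlab1 = (MLApp.MLApp)Activator.CreateInstance(multiInstanceType);
var matlab2 = (MLApp.MLApp)Activator.CreateInstance(multiInstanceType); // second, separate process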

After creating the Matlab program, we execute a simple sum of two integers just to test communication; we don't need anything more sophisticated. It should return a simple string in the console:

It's really simple and, more importantly, it works! :)

This way we can only exchange string messages with Matlab, which is not very useful. There is also just one way to find out whether our statement had errors: the response will contain an 'error' string.

Let's try to run something like this: '1*', which will result in an error:

So to check whether our command had errors, we have to check if the output string contains "??? Error".
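A small helper along these lines can wrap Execute and turn such responses into exceptions (a sketch; the "??? Error" prefix matches what R2010a returned for me, newer versions may format errors differently):

public static string ExecuteChecked(MLApp.MLApp matlab, string command)
{
    var output = matlab.Execute(command);
    if (output.Contains("??? Error"))
    {
        throw new InvalidOperationException(
            string.Format("Matlab command '{0}' failed: {1}", command, output));
    }
    return output;
}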

To send some parameters along with our command, we have to use one of the 'Put*' methods. The best one is called PutWorkspaceData. It takes three parameters. The first is the desired name of our new Matlab variable. The other two are much trickier. To set the variable correctly (so you can reference it in a command), you must use the global workspace. But what is it called? This took much more time than I would have liked; it is not mentioned in the documentation of this method. If I remember right, I found it in some code example, and it should simply be "base". In the end, I created another method in my application that encapsulated PutWorkspaceData and forgot about it :). The third parameter is the value of our variable, which is simple. Let's change our code to:

matlab.PutWorkspaceData("a", "base", 2);
Console.WriteLine(matlab.Execute("a*a"));

The result will be as shown below:
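As mentioned above, in my project I hid PutWorkspaceData and the magic "base" string behind my own extension method. A sketch of that wrapper (the names here are mine, not part of the COM API):

public static class MatlabExtensions
{
    private const string BaseWorkspace = "base";

    // store a value in Matlab's global ("base") workspace under the given name
    public static void PutBaseWorkspaceVariable(this MLApp.MLApp matlab, string name, object value)
    {
        matlab.PutWorkspaceData(name, BaseWorkspace, value);
    }
}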

But that was just an int. What about more complicated structures? How about multiplication of two vectors? Matlab uses the .NET type double to send and receive information with our application. Again, I did not find this anywhere in the documentation, but rather reverse engineered it from data returned by Matlab. So let us try to send two arrays of doubles and multiply them in Matlab. The first will be named 'a' and the second 'b'. The Matlab command will be "a'*b"; the transposition will give us a nice matrix instead of a single number.

matlab.PutWorkspaceData("a", "base", new[]{2d,3d,1d});
matlab.PutWorkspaceData("b", "base", new[]{4d,3d,2d});
Console.WriteLine(matlab.Execute("a'*b"));

And in return we will get:

The next step is to get this output back into our console app. To do that, we can use GetWorkspaceData, which works similarly to PutWorkspaceData, or... we can use the GetVariable method. This one returns dynamic, so our application needs to run on .NET 4. It takes two parameters: the name of the variable we want to return from Matlab and, again, the name of the workspace. You really should save this string as a const somewhere :). Change our code to:

matlab.PutWorkspaceData("a", "base", new[]{2d,3d,1d});
matlab.PutWorkspaceData("b", "base", new[]{4d,3d,2d});
Console.WriteLine(matlab.Execute("c=a'*b"));
var c = matlab.GetVariable("c", "base");

After that, our console app will have the variable c, and it will be a two-dimensional array of doubles. To show the values of this array in the console, we could just iterate it with a simple foreach loop, but instead we will iterate over its two dimensions. This gives us information about both the values and the dimensions of the matrix.

for (var i = 0; i < c.GetLength(0); i++)
{
   for (var j = 0; j < c.GetLength(1); j++)
        Console.Write(c.GetValue(i, j) + " ");
   Console.WriteLine();
}

OK. That was a matrix. How about vectors? Luckily, for Matlab vectors are just a special case of a matrix, so it will spit out a two-dimensional array as well. More complex is the case of an empty matrix. No, it is not null; that would be too easy. Instead, it is the special type Missing. I guess there is some logic behind it: null would indicate that the variable has no value at all or is not defined, but here we have an empty matrix, so it is defined and it is not a lack of value either. So why not just an array with zero elements? No idea.

Let's try to run code like below to test it:

Console.WriteLine(matlab.Execute("c=a'*b"));
Console.WriteLine(matlab.Execute("d=[]"));
var c = matlab.GetVariable("c", "base");
var d = matlab.GetVariable("d", "base");
for (var i = 0; i < d.GetLength(0); i++)
{
   for (var j = 0; j < d.GetLength(1); j++)
      Console.Write(d.GetValue(i, j) + " ");
   Console.WriteLine();
}

Running those commands in Matlab itself works just fine. Here, however, an error will be thrown on the first for loop:

Not very nice. To prevent this error, the application has to check whether the dynamic type returned from Matlab is in fact empty (Missing). That is not very clean, so it is better to wrap this in another method that performs the check for us whenever we want to get data from Matlab. In my project, I ended up writing a few methods for returning vectors, matrices, numbers, strings, etc. The vector one looked like this:

public static Vector GetVector(this MLApp.MLApp matlabComObject, string variable_name)
{
    var dynamicVariable = matlabComObject.GetBaseWorkSpaceVariable(variable_name);
    var dataList = new Vector();

    if (TypeChecker.IsVector(dynamicVariable))
    {
        dataList = Vector.DynamicToVector(dynamicVariable);
    }
    else if (TypeChecker.IsNumber(dynamicVariable))
    {
        dataList.Add(dynamicVariable);
    }
    else if (TypeChecker.IsEmptyMatrix(dynamicVariable))
    {
        //do nothing empty vector or matrix ([0x0] or [1x0])
    }
    else throw new Exception(string.Format(
      "Type of dynamic variable ({0}) is not supported!", dynamicVariable.GetType()));
    return dataList;
}

TypeChecker checks the type of the dynamic variable. It's pretty straightforward; for our Missing type, it's just one line:

public static bool IsEmptyMatrix(object dynamicVariable)
{ return dynamicVariable.GetType() == typeof(Missing); }
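The other checks used in GetVector can be sketched in a similar way (these are my guesses at the implementation, based on Matlab handing numbers back as double and matrices as two-dimensional double arrays):

public static bool IsNumber(object dynamicVariable)
{ return dynamicVariable is double; }

public static bool IsVector(object dynamicVariable)
{
    var array = dynamicVariable as double[,];
    // a vector is just a matrix with a single row or a single column
    return array != null && (array.GetLength(0) == 1 || array.GetLength(1) == 1);
}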

After checking the type of the dynamic variable, we can just cast it to another type to take advantage of the static typing of the language. Why use dynamic at all? First, I think that ref and out method parameters are messy, and second, I prefer:

var d = matlab.GetVariable("d", "base");

over this:

object e;
matlab.GetWorkspaceData("d", "base", out e);

It is only one line. And since the object has to be created first and then cast anyway, don't bother and just use dynamic. But I guess it is simply a matter of what you like better.

Strings are much friendlier: they come back as just a string. Or null. :)
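Along the same lines, a string getter only has to guard against null (again an illustrative sketch; GetBaseWorkSpaceVariable is the same wrapper used in GetVector above):

public static string GetString(this MLApp.MLApp matlabComObject, string variable_name)
{
    var dynamicVariable = matlabComObject.GetBaseWorkSpaceVariable(variable_name);
    // Matlab returns strings as plain .NET strings, or null when the variable is not set
    return dynamicVariable == null ? string.Empty : (string)dynamicVariable;
}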

Now we know how to put data into and get data from Matlab, and how to execute commands and check them for errors.

If you executed any of these examples, you probably noticed a very simple Matlab window that opened for the life of your console app. If you terminate the app by stopping debugging, or it closes due to an error, the Matlab window will remain open; otherwise it closes nicely together with the console window. Still, it is a huge memory-leak risk if not managed properly. For that, I recommend creating a special class that creates the Matlab instance or instances. It should track the created instances and, in case anything bad happens (a bad dynamic variable cast is a real possibility :)), close all instances before the application exits. Tracking should be done through the WeakReference class, so that whenever the Garbage Collector wants to destroy a Matlab instance, it is not stopped by our tracking class. To destroy the COM instance, we can use the Marshal.FinalReleaseComObject method. With a weak reference, the code looks like this:

public static void Release(WeakReference instance)
{
    if (instance.IsAlive)
    {
        Marshal.FinalReleaseComObject(instance.Target);
    }
}

The Quit method of the Matlab server instance does not close it immediately, but the code above will.
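A rough sketch of the tracking class described earlier could then build on this Release method (the class and member names below are mine, purely illustrative):

public static class MatlabFactory
{
    private static readonly List<WeakReference> Instances = new List<WeakReference>();

    public static MLApp.MLApp Create()
    {
        var activationContext = Type.GetTypeFromProgID("matlab.application.single");
        var matlab = (MLApp.MLApp)Activator.CreateInstance(activationContext);
        // weak reference: we never stop the GC from collecting the instance
        Instances.Add(new WeakReference(matlab));
        return matlab;
    }

    // call this before the application exits (e.g. in a finally block or a ProcessExit handler)
    public static void ReleaseAll()
    {
        foreach (var instance in Instances)
        {
            Release(instance);
        }
        Instances.Clear();
    }
}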

If you want to hide that popup Matlab window, you can set this in its instance:

matlab.Visible = 0;

This way, it will fade away and run like a service.

This is the most basic information about using Matlab from C#, but it will get you started. So happy coding! :)



Server controls in separate assembly part 2

clock December 24, 2012 05:14 by author n.podbielski

Yesterday I wrote about creating server controls in a separate assembly. Today I will cover a more complicated example than the simple "Hello World!" control. My goal was to create a text box that takes a handler method to run when the validation event is triggered, and shows an appropriate message when validation fails. A text box with built-in validation would be nice, since you should always validate user input, right? So let's start creating our control.

First, create a new control named TextBoxWithValidation.ascx in yesterday's WebSite1 web site project.

 

Let's now fill it with some HTML code.

<span>
    <asp:TextBox runat="server" ID="tbText" OnTextChanged="tbText_TextChanged"></asp:TextBox>
    <asp:CustomValidator runat="server" ID="cvValidator" ControlToValidate="tbText" 
        OnServerValidate="cvValidator_ServerValidation"
        EnableClientScript="false"
        ErrorMessage="Dummy text error message" 
        Display="None" ValidateEmptyText="true">
        <%--validator must have ErrorMessage string--%>
    </asp:CustomValidator>
    <div class="ui-widget validation-error" id="divValidationContainer" runat="server"
        visible="false">
        <div class="ui-state-error ui-corner-all">
            <span style="float: left; margin-right: .3em;" class="ui-icon ui-icon-alert"></span>
            <asp:Label ID="lValidationMessage" CssClass="validation-message" runat="server"></asp:Label>
        </div>
    </div>
</span>


This HTML uses some CSS classes and markup from jQuery UI, so it would be good to grab the newest version from http://jqueryui.com/.

You can of course use your own HTML for the validation error message.

The ASCX file contains two ASP.NET controls: a text box and a custom validator. The validator will validate the value of our text box using a custom handler assigned from an external page or control. We set the ErrorMessage property to "Dummy text error message" because the validator will not raise the validation event without it; it's some kind of internal logic in the .NET Framework. That's why we set it to a 'dummy' value: in case of some, um... error, it would display a hint that something is wrong with the validation itself and that this is not the real validation message. Let's jump to the .cs file.

 

public partial class TextBoxWithValidation : System.Web.UI.UserControl
{
    public string Text
    {
        get
        {
            return tbText.Text;
        }
        set
        {
            tbText.Text = value;
        }
    }

    public bool IsValid
    {
        get
        {
            return cvValidator.IsValid;
        }
    }
    
    public string ErrorMessage
    {
        get
        {
            return cvValidator.ErrorMessage;
        }
        set
        {
            cvValidator.ErrorMessage = value;
        }
    }

    public event ServerValidateWithMessageEventHandler ServerValidate;

    protected void cvValidator_ServerValidation(object source, ServerValidateEventArgs args)
    {
        if (ServerValidate != null && Visible)
        {
            var argsWithMessage = new ServerValidateWithMessageEventArgs(args);
            ServerValidate(source, argsWithMessage);
            argsWithMessage.ToServerValidateEventArgs(args);
            divValidationContainer.Visible = !argsWithMessage.IsValid;
            if (!argsWithMessage.IsValid)
            {
                if (string.IsNullOrEmpty(argsWithMessage.ErrorMessage))
                {
                    lValidationMessage.Text = cvValidator.ErrorMessage;
                }
                else
                {
                    lValidationMessage.Text = argsWithMessage.ErrorMessage;
                }
            }
            else
            {
                lValidationMessage.Text = "";
            }
        }
    }
}


First we create proxies for the IsValid and ErrorMessage properties, and also for the Text property of the TextBox. They will come in handy later :)

Next is the public event for text box value validation, ServerValidate, and the protected method that raises this event, cvValidator_ServerValidation. The args argument of this method is defined as below:

public delegate void ServerValidateWithMessageEventHandler
            (object source, ServerValidateWithMessageEventArgs args);

public class ServerValidateWithMessageEventArgs : ServerValidateEventArgs
{

    public ServerValidateWithMessageEventArgs(ServerValidateEventArgs args)
        : base(args.Value, args.IsValid)
    { }

    public string ErrorMessage { get; set; }

    internal void ToServerValidateEventArgs(ServerValidateEventArgs args)
    {
        args.IsValid = IsValid;
    }
}

Through this argument, the parent control or page can set the ErrorMessage property, and the IsValid value is read back after the event handler completes. In the control's cvValidator_ServerValidation method we also show or hide our custom validation message, depending on whether the value of the text box is valid or not.

Let's try it out. We will publish the WebControl project and add a reference to this new control by adding the App_Web_textboxwithvalidation.ascx.cdcab7d2.dll file. Next, add another line to web.config, like yesterday:

<add tagPrefix="WebControls" namespace="ASP" assembly="App_Web_textboxwithvalidation.ascx.cdcab7d2" />


Now we can add our new control to the page:

 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
    <link href="JqueryUI/css/redmond/jquery-ui.custom.min.css" rel="stylesheet" type="text/css" />
    <link href="jQueryUI.Validation.css" rel="stylesheet" type="text/css" />
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <WebControls:helloworld_ascx runat="server" />
        <WebControls:textboxwithvalidation_ascx ID="test" runat="server" ErrorMessage="testMessage" OnServerValidate="test_ServerValidate" />
        <br />
        <asp:Button runat="server" Text="Validate" />
    </div>
    </form>
</body>
</html>

with a button to trigger the postback and a method to validate:

protected void test_ServerValidate(object source, ServerValidateWithMessageEventArgs args)
{
    if (args.Value.Length > 5)
    {
        args.ErrorMessage = "Value is too long!";
        args.IsValid = false;
    }
}

 

As for the styles, the first is the standard jQuery UI Redmond theme CSS file, and the second is custom CSS for the jQuery UI theme which places the validation message to the right of the text box as a popup with nice eye-candy effects :):

/*validation*/
.validation-error
{
    display: inline-block;
    position: absolute;
    z-index: 1000;
}
.validation-error .ui-icon-alert, .errorResultContainer .ui-icon-alert
{
    float: left;
    margin-right: .3em;
}
.validation-error .ui-state-error
{
    padding: 5px;
    box-shadow: 5px 5px 5px #888888;
}
.validation-error .validation-message
{
    display: none;
    position: relative;
    text-shadow: 2px 2px 2px activeborder;
}
.validation-error:hover .validation-message
{
    display: block;
}
.validation-error:hover
{
    z-index: 1001;
}
.ui-icon-alert
{
    float: none !important;
}

After running our project and clicking the button when the value of the text box is longer than 5 characters, we will see the validation popup:

 

 

When you hover over the popup, you will see the error description:

 

 

You can validate the value against more than one constraint; just remember to set the ErrorMessage property and set the IsValid property of the args parameter to false for each failed check, as in the sketch below. That is all.
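For example, a handler that checks two constraints might look like this (a sketch; the digits-only rule needs System.Linq and is only there to show the pattern):

protected void test_ServerValidate(object source, ServerValidateWithMessageEventArgs args)
{
    // first constraint: length
    if (args.Value.Length > 5)
    {
        args.ErrorMessage = "Value is too long!";
        args.IsValid = false;
        return;
    }
    // second constraint: digits only
    if (!args.Value.All(char.IsDigit))
    {
        args.ErrorMessage = "Value must contain only digits!";
        args.IsValid = false;
    }
}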