Friday, October 23, 2009

Best practices for developing a jQuery plugin

  1. Create a private scope for $
  2. Attach plugin to $.fn alias
  3. Add implicit iteration
  4. Enable chaining
  5. Add default options
  6. Add custom options
  7. Add global custom options
<!DOCTYPE html>
<html lang="en">
<body>
  <div id="counter1"></div>
  <div id="counter2"></div>
  <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
  <script>
  // 1. Create a private scope for $
  (function($) {
    // 2. Attach the plugin to the $.fn alias
    $.fn.count = function(customOptions) {
      // 5/6. Merge custom options into a copy of the default options
      var options = $.extend({}, $.fn.count.defaultOptions, customOptions);
      // 3/4. Implicit iteration and chaining
      return this.each(function() {
        var $this = $(this);
        $this.text(options.startCount);
        var myInterval = window.setInterval(function() {
          var currentCount = parseFloat($this.text());
          var newCount = currentCount + 1;
          $this.text(newCount + '');
        }, 1000);
      });
    };
    // 5. Default options
    $.fn.count.defaultOptions = {
      startCount: '100'
    };
  })(jQuery);

  // 7. Global custom options
  jQuery.fn.count.defaultOptions.startCount = '300';
  jQuery('#counter1').count();
  jQuery('#counter2').count({startCount: '500'});
  </script>
</body>
</html>

Wednesday, October 21, 2009

type check in jQuery

String: typeof object === "string"
Number: typeof object === "number"
Boolean: typeof object === "boolean"
Object: typeof object === "object"
Function: jQuery.isFunction(object)
Array: jQuery.isArray(object)
Element: object.nodeType
null: object === null
undefined: typeof variable === "undefined" or object.prop === undefined
null or undefined: object == null
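The plain-JavaScript checks in this list can be exercised directly. A minimal sketch (the jQuery helpers jQuery.isFunction and jQuery.isArray are omitted since they require jQuery; the variable names are for illustration only):

```javascript
// Exercising the plain typeof/identity checks from the list above.
var s = "hello", n = 42, b = true, o = {}, u;

console.log(typeof s === "string");    // true: String check
console.log(typeof n === "number");    // true: Number check
console.log(typeof b === "boolean");   // true: Boolean check
console.log(typeof o === "object");    // true: Object check
console.log(typeof u === "undefined"); // true: undefined check
console.log(o !== null);               // true: o is not null
console.log(null == undefined);        // true: the "null or undefined" check matches both
```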

Saturday, October 17, 2009

it must be "new"ed.

The following is a constructor that guards against being called without the "new" keyword.

function User(first, last) {
  if (!(this instanceof arguments.callee)) {
    return new User(first, last);
  }
  this.name = first + " " + last;
}

Friday, October 16, 2009

javascript scope

In JavaScript, {blocks} do not have scope. Only functions have scope. Vars defined in a function are not visible outside of the function.
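A minimal sketch of this rule (scopeDemo is a hypothetical name): a var declared inside a {block} is still visible in the rest of the enclosing function, but not outside the function.

```javascript
function scopeDemo() {
  if (true) {
    var x = 1; // declared inside a {block}
  }
  // blocks do not create scope, so x is still visible here
  return typeof x;
}

var inside = scopeDemo();
var outside = typeof x; // x is not visible outside the function

console.log(inside);  // "number"
console.log(outside); // "undefined"
```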

function overload

function.length tells you the number of formal parameters a function declares; using this, we can create overloaded functions.

function addMethod(object, name, fn) {
  // Save a reference to the old method
  var old = object[name];
  // Overwrite the method with our new one
  object[name] = function() {
    // Check the number of incoming arguments,
    // compared to our overloaded function
    if (fn.length == arguments.length)
      // If there was a match, run the function
      return fn.apply(this, arguments);
    // Otherwise, fall back to the old method
    else if (typeof old === "function")
      return old.apply(this, arguments);
  };
}

function Ninjas() {
  var ninjas = ["Dean Edwards", "Sam Stephenson", "Alex Russell"];
  addMethod(this, "find", function() {
    return ninjas;
  });
  addMethod(this, "find", function(name) {
    var ret = [];
    for (var i = 0; i < ninjas.length; i++)
      if (ninjas[i].indexOf(name) == 0)
        ret.push(ninjas[i]);
    return ret;
  });
  addMethod(this, "find", function(first, last) {
    var ret = [];
    for (var i = 0; i < ninjas.length; i++)
      if (ninjas[i] == (first + " " + last))
        ret.push(ninjas[i]);
    return ret;
  });
}

var ninjas = new Ninjas();
assert(ninjas.find().length == 3, "Finds all ninjas");
assert(ninjas.find("Sam").length == 1, "Finds ninjas by first name");
assert(ninjas.find("Dean", "Edwards").length == 1, "Finds ninjas by first and last name");
assert(ninjas.find("Alex", "X", "Russell") == null, "Does nothing");

later method

Object.prototype.later = function (msec, method) {
  var that = this,
      args = Array.prototype.slice.apply(arguments, [2]);
  if (typeof method === 'string') {
    method = that[method];
  }
  setTimeout(function () {
    method.apply(that, args);
  }, msec);
  return that;
};
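A usage sketch, repeating the definition so it is self-contained (logger and its log method are hypothetical names). Note that later returns this immediately, so calls can be chained, while the method itself runs asynchronously:

```javascript
Object.prototype.later = function (msec, method) {
  var that = this,
      args = Array.prototype.slice.apply(arguments, [2]);
  if (typeof method === 'string') {
    method = that[method];
  }
  setTimeout(function () {
    method.apply(that, args);
  }, msec);
  return that;
};

var logger = {
  lines: [],
  log: function (msg) { this.lines.push(msg); }
};

// schedule logger.log('hi') to run 10 ms from now
var ret = logger.later(10, 'log', 'hi');

console.log(ret === logger);      // true: later returns "that" synchronously
console.log(logger.lines.length); // 0: the scheduled call has not run yet
```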

fixed a closure bug

A closure refers to a function's ability to access and manipulate variables external to it. Sometimes this is undesirable: if a function depends on external state and that state changes, the function may produce unexpected results.

// The function inside setTimeout reads i only when it is executed;
// by then i == 4. The problem is that i is an external variable.
for (var i = 0; i < 4; i++) {
  setTimeout(function() {
    alert(i); // it is always 4
  }, i * 1000);
}

// The solution is to make i local: pass it as a parameter to a
// self-executing function.
for (var i = 0; i < 4; i++) {
  (function(j) {
    setTimeout(function() {
      alert(j); // it will show 0, 1, 2, 3
    }, j * 1000);
  })(i);
}

So sometimes it is good to remove the external dependencies. The following code is another example.

var ninja = {
  yell: function(n) {
    return n > 0 ? arguments.callee(n - 1) + "a" : "hiy";
  }
};
/* this is also ok:
var ninja = {
  yell: function x(n) {
    return n > 0 ? x(n - 1) + "a" : "hiy";
  }
};
*/
/* this has a bug, because the function refers to ninja.yell,
   which is external state:
var ninja = {
  yell: function(n) {
    return n > 0 ? ninja.yell(n - 1) + "a" : "hiy";
  }
};
*/
var samurai = { yell: ninja.yell };
ninja = null;
ok(samurai.yell(4) == "hiyaaaa", "arguments.callee is the function itself");

efficient string operations

Because strings are immutable in JavaScript, concatenation via array.join('') is much more efficient than repeated string concatenation. The following code prints all the Chinese characters (CJK Unified Ideographs).

// inside a click handler, hence the "return false" at the end
var sb = [];
sb[sb.length] = "<p>";
for (var i = 0x4e00; i <= 0x9fcf; i++) {
  sb[sb.length] = String.fromCharCode(i);
}
sb[sb.length] = "</p>";
$("#chinese").html(sb.join(""));
return false;
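As a quick illustration of the claim, both approaches build the same string; only the cost differs (variable names are for illustration):

```javascript
// Build the same string with += and with an array join. Because strings
// are immutable, += creates a new intermediate string on every iteration,
// while the array collects the pieces and joins them once at the end.
var concat = "";
var parts = [];
for (var i = 0; i < 1000; i++) {
  concat += i.toString();
  parts[parts.length] = i.toString();
}
var joined = parts.join("");

console.log(concat === joined); // true: identical results
```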

the bind function

John Resig has a page, Learning Advanced JavaScript, that explains how the following script works.

// The .bind method from Prototype.js
Function.prototype.bind = function() {
  var fn = this,
      args = Array.prototype.slice.call(arguments),
      object = args.shift();
  return function() {
    return fn.apply(object,
      args.concat(Array.prototype.slice.call(arguments)));
  };
};

His explanation is wonderful, and this piece of code is simple and powerful. But it may still be hard to understand without any explanation, so I refactored it as follows and added a use case.

Function.prototype.bind = function() {
  var function_to_be_bound = this;
  var args = Array.prototype.slice.call(arguments);
  var context_object = args.shift();
  var binding_parameters = args;
  return function() {
    var invoking_parameters = Array.prototype.slice.call(arguments);
    var combined_parameters = binding_parameters.concat(invoking_parameters);
    var result_from_function_run_in_new_context =
      function_to_be_bound.apply(context_object, combined_parameters);
    return result_from_function_run_in_new_context;
  };
};

function reply_greeting(your_name) {
  // "this" is the context object
  alert("my name is " + this.name + ", Nice to meet you, " + your_name);
}

var fred = { name: "fred" };
var reply_greeting_of_fred_to_you = reply_greeting.bind(fred);
var reply_greeting_of_fred_to_john = reply_greeting.bind(fred, "john");

reply_greeting_of_fred_to_you("jeff"); // expect: "my name is fred, Nice to meet you, jeff"
reply_greeting_of_fred_to_john();      // expect: "my name is fred, Nice to meet you, john"

Another article that may help you understand it is Functional Javascript.

Thursday, October 15, 2009

memoized function

Function.prototype.memoized = function(key) {
  this._values = this._values || {};
  if (this._values[key] !== undefined) {
    return this._values[key];
  } else {
    // "this" is the function object itself; passing it as the
    // context object of apply is optional here
    this._values[key] = this.apply(this, arguments);
    return this._values[key];
  }
};

function isPrime(num) {
  var prime = num != 1;
  for (var i = 2; i < num; i++) {
    if (num % i == 0) {
      prime = false;
      break;
    }
  }
  return prime;
}

var a = isPrime.memoized(5);
alert(a); // true, computed
var b = isPrime.memoized(5);
alert(b); // true, from the cache

curry function

// helper from Crockford's "JavaScript: The Good Parts",
// required for Function.method below
Function.prototype.method = function(name, func) {
  this.prototype[name] = func;
  return this;
};

Function.method('curry', function() {
  // the outer arguments object cannot be referenced from the closure
  // below (it would see its own arguments), so copy it to a local array
  var args = Array.prototype.slice.apply(arguments);
  var original_function = this;
  return function() {
    // note: args is shared between calls, so invoking the curried
    // function repeatedly keeps accumulating arguments
    args = args.concat(Array.prototype.slice.call(arguments));
    return original_function.apply(null, args);
  };
});

function add() {
  var sum = 0;
  for (var i = 0; i < arguments.length; i++) {
    sum += arguments[i];
  }
  return sum;
}

var add1 = add.curry(1);
var s = add1(2);
alert(s); // 3

Tuesday, October 13, 2009

an issue caused by prototype inheritance in javascript

This issue is not obvious; in some cases it does not cause any error at all. But fixing it brings some benefit. Here is the use case.

var user_constructor = function() {
  alert(this.constructor); // function() { alert(this .. }
  this.constructor.count++;
  this.Id = this.constructor.count;
};
user_constructor.count = 0;

var c = new user_constructor();
alert(c.Id); // 1
alert(c.constructor == user_constructor); // true
alert(user_constructor.count); // 1

We know that a constructor is also an object (though an object is not necessarily a constructor). In our case the constructor has a property "count" that counts the instances created by the constructor, and Id is the latest count. Very straightforward. Now if we want to add inheritance, we can implement it as follows.

var user_constructor = function() {
  alert(this.constructor); // function Object() { [native code] }
  this.constructor.count++;
  this.Id = this.constructor.count;
};
user_constructor.count = 0;
var user_prototype = { Name: "Unknown" };
user_constructor.prototype = user_prototype;

var c = new user_constructor();
alert(c.Id); // NaN
alert(c.constructor == user_constructor); // false
alert(user_constructor.count); // 0

Suddenly, the code breaks down. Is it the fault of the prototype object? No. We need a deeper understanding of how the constructor property works. When this.constructor is evaluated inside "new user_constructor()", the JavaScript runtime has to resolve the constructor property on the newly created object "this". But how? According to the ECMAScript Language Specification, Edition 3:

ECMAScript supports prototype-based inheritance. Every constructor has an associated prototype, and every object created by that constructor has an implicit reference to the prototype (called the object's prototype) associated with its constructor.

The key is that constructor is not an own property of "this"; it is resolved through the prototype chain. In the first case, user_constructor.prototype is the default prototype object that every function gets, and that default object carries a constructor property pointing back at the function itself, so this.constructor is function() { alert(this .. }. In the second case, user_constructor.prototype has been replaced with user_prototype, an object literal created by "function Object() { [native code] }". The literal has no own constructor property, so the lookup falls through to Object.prototype, and this.constructor resolves to the built-in Object function. That is why count is incremented on Object (where it is undefined, hence NaN) instead of on user_constructor. Once we rationalize this, we can easily fix the behavior by adding one line: "user_prototype.constructor = user_constructor;"

var user_constructor = function() {
  alert(this.constructor); // function() { alert(this .. }
  this.constructor.count++;
  this.Id = this.constructor.count;
};
user_constructor.count = 0;
var user_prototype = { Name: "Unknown" };
user_constructor.prototype = user_prototype;
user_prototype.constructor = user_constructor;

var c = new user_constructor();
alert(c.Id); // 1
alert(c.constructor == user_constructor); // true
alert(user_constructor.count); // 1
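The core of the issue is that constructor is not an own property of the instance; it is found by walking the prototype chain. A minimal sketch, independent of the counter example (Ctor and proto are hypothetical names):

```javascript
function Ctor() {}
var proto = { Name: "Unknown" };
Ctor.prototype = proto;

var obj = new Ctor();

// "constructor" is not stored on the instance itself
var ownBefore = obj.hasOwnProperty('constructor');
// the object literal has no own "constructor" either, so the lookup
// falls through to Object.prototype, whose constructor is Object
var ctorBefore = obj.constructor;

// the one-line fix: give the replacement prototype its own constructor
proto.constructor = Ctor;
var ctorAfter = obj.constructor;

console.log(ownBefore);             // false
console.log(ctorBefore === Object); // true
console.log(ctorAfter === Ctor);    // true
```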

Sunday, September 27, 2009

4 Equals, Reference Type, Value Type

A very fundamental design decision in the .NET CLR is that the type system is divided into two kinds of types: reference types and value types. This decision has profound implications for .NET. One example is testing equality between objects.

Basically we have two kinds of comparison: identity comparison (whether two objects have the same identity) and semantic comparison (whether two objects mean the same thing; most people say "value equality", but I use "semantic" because the value of a reference type is a reference, and even when the values of two reference-typed variables differ, they may still mean the same thing semantically). Since we have two kinds of types, things get complicated. For example, can we compare the "value" of a reference type, or the reference of a value type? If there were only reference types and no value types, the .NET world would be simpler. Why do we need both? That is a deep question, and much of the topic is covered in the book "CLR via C#". Basically, it is a trade-off made for memory efficiency and performance. What we need to know is that the value of a reference type is a reference, while the value of a value type is the value itself.

Reference type identity comparison

To do an identity comparison for reference types, call Object.ReferenceEquals(objA, objB), or use the shortcut operator "==" as in "objA == objB". The following source code shows that ReferenceEquals and the == operator are the same.

public class Object
{
    [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
    public static bool ReferenceEquals(Object objA, Object objB)
    {
        return objA == objB;
    }
}

If they are the same, why do we still need ReferenceEquals? It turns out "==" means different things for different types. What exactly does "==" do? For all reference types and all primitive value types, like int, double, and enums, it becomes the "ceq" instruction after compilation to MSIL. What "ceq" does is a CLR implementation question; I assume it compares identity for reference types and value for primitive value types. But "==" has no default implementation for custom value types such as structs.

Reference type semantic comparison

The default semantic comparison for reference types is an identity comparison, because the value of a reference-typed variable is a reference. The default implementation is as follows.

// Returns a boolean indicating if the passed in object obj is
// Equal to this. Equality is defined as object equality for reference
// types and bitwise equality for value types (using a loader trick to
// replace Equals with EqualsValue for value types).
public virtual bool Equals(Object obj)
{
    return InternalEquals(this, obj);
}

[MethodImplAttribute(MethodImplOptions.InternalCall)]
internal static extern bool InternalEquals(Object objA, Object objB);

According to the comments, for reference type objects InternalEquals just compares the references; it does not compare the referenced content. The following code shows this behavior.

static void Main(string[] args)
{
    Customer c1 = new Customer { Name = "fred" };
    Customer c2 = new Customer { Name = "fred" };
    Customer c3 = c1;
    Console.WriteLine(object.ReferenceEquals(c1, c2)); // False
    Console.WriteLine(object.ReferenceEquals(c1, c3)); // True
    Console.WriteLine(c1 == c2); // False
    Console.WriteLine(c1 == c3); // True
    Console.WriteLine(c1.Equals(c2)); // False, even though the referenced content is the same
    Console.WriteLine(c1.Equals(c3)); // True
}

public class Customer
{
    public string Name { get; set; }
}

But sometimes we want to change these semantics. In our case, we can say two customers are equal if their names are the same, regardless of identity. So we can override the instance Equals method as follows.

public class Customer
{
    public string Name { get; set; }

    public override bool Equals(object obj)
    {
        var c = obj as Customer;
        if (c == null)
        {
            return false;
        }
        return this.Name == c.Name;
    }
}

Value type identity comparison

Can you compare the identity of value-typed variables? Yes. Should you? No. The result will always be False, because the objects are put into different boxes before the comparison.

Console.WriteLine(object.ReferenceEquals(1, 1)); // False

Value type semantic comparison

Although you can use the "==" operator with primitive value types like System.Int32, you cannot use it with a custom value type such as a struct until you implement the operator yourself. But you can use the object type's instance Equals to do a semantic comparison; it uses reflection to check content equality, as shown below.

public abstract class ValueType
{
    public override bool Equals(Object obj)
    {
        BCLDebug.Perf(false, "ValueType::Equals is not fast. " + this.GetType().FullName + " should override Equals(Object)");
        if (null == obj)
        {
            return false;
        }
        RuntimeType thisType = (RuntimeType)this.GetType();
        RuntimeType thatType = (RuntimeType)obj.GetType();
        if (thatType != thisType)
        {
            return false;
        }
        Object thisObj = (Object)this;
        Object thisResult, thatResult;

        // if there are no GC references in this object we can avoid reflection
        // and do a fast memcmp
        if (CanCompareBits(this))
            return FastEqualsCheck(thisObj, obj);

        FieldInfo[] thisFields = thisType.GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        for (int i = 0; i < thisFields.Length; i++)
        {
            thisResult = ((RtFieldInfo)thisFields[i]).InternalGetValue(thisObj, false);
            thatResult = ((RtFieldInfo)thisFields[i]).InternalGetValue(obj, false);
            if (thisResult == null)
            {
                if (thatResult != null)
                    return false;
            }
            else if (!thisResult.Equals(thatResult))
            {
                return false;
            }
        }
        return true;
    }
}

So you should always override the instance Equals() of your custom struct to improve performance.

Comparing objects of unknown type

If we don't know the types of the two objects, the best bet is the static method object.Equals(objA, objB). It checks identity equality first, then semantic equality. The method is as follows.

public static bool Equals(Object objA, Object objB)
{
    if (objA == objB)
    {
        return true;
    }
    if (objA == null || objB == null)
    {
        return false;
    }
    return objA.Equals(objB);
}

To wrap up, what does all this mean in practice? We can follow this pseudocode.

if (we compare two objects of the same type)
{
    if (type is reference type)
    {
        if (we want semantic comparison && we have overridden objA.Equals)
        {
            objA.Equals(objB);
        }
        else // we just want identity comparison
        {
            always use objA == objB;
            // object.ReferenceEquals(objA, objB) and objA.Equals(objB)
            // do the same thing in this case
        }
    }
    else // type is value type
    {
        if (we want identity comparison)
        {
            // forget about it; object.ReferenceEquals(objA, objB)
            // always returns false because of boxing
        }
        else // we always want semantic comparison
        {
            if (type is a primitive value type like int)
            {
                objA == objB; // compiled to the ceq IL instruction
            }
            else // custom struct
            {
                if (the == operator is implemented for this type)
                {
                    use objA == objB;
                }
                else
                {
                    use objA.Equals(objB); // override the instance Equals
                                           // method for a more efficient comparison
                }
            }
        }
    }
}
else // we compare two objects of unknown type
{
    Object.Equals(objA, objB);
}

For reference types, "==" is enough for most situations, unless you want to change the default comparison semantics. For primitive value types, "==" is enough for most situations. For structs, you are encouraged (though not required) to override the default semantic comparison obj.Equals() for performance, and to use obj.Equals for comparison.

Saturday, September 26, 2009

IEnumerable, IQueryable, Lambda expression - part 2

I have seen the following piece of code, written by a developer from a client.

interface IContactRepository
{
    IEnumerable<Contact> GetSomeContacts();
}

class ContactRepository : IContactRepository
{
    public IEnumerable<Contact> GetSomeContacts()
    {
        // query is a LINQ to SQL query object
        IQueryable<Contact> query = ...
        return query;
    }
}

Is it a better choice to use IEnumerable<T> instead of IQueryable<T>? I guess his concern is that if the interface is too specific, it may first give the client more functionality than is required, and second limit the server's choice of implementation. In many cases this concern is right: we should give the client only the functionality it needs, nothing less and nothing more, and the server should have more freedom to implement.

interface IPerson
{
    void Eat();
    void Sleep();
}

interface ISales : IPerson
{
    void Sell();
}

interface ITeacher : IPerson
{
    void Teach();
}

class Service
{
    // inappropriate
    // public ISales GetPerson()
    // {
    //     return ...
    // }

    // better
    public IPerson GetPerson()
    {
        return ...
    }
}

Firstly, if the method returned an ISales, the client would get one extra, unnecessary method, Sell. Secondly, if the client only needs an IPerson but the contract says it gets an ISales, this limits the server's ability to serve the client; for example, the server cannot return an ITeacher.

Is this design guideline also applicable to the case of IContactRepository?

public interface IQueryable<T> : IEnumerable<T>, IQueryable, IEnumerable {}

public interface IQueryable : IEnumerable
{
    Type ElementType { get; }
    Expression Expression { get; }
    IQueryProvider Provider { get; }
}

First, the IQueryable<T> interface does give the user more members than IEnumerable<T>, but those members are read-only, and the client cannot use them directly for querying. Because the query functionality comes from the static methods on Enumerable and Queryable, not from IQueryable<T> or IEnumerable<T>, the two interfaces work identically from the client's perspective. Second, does the interface limit the server's implementation choices, since the server cannot return a plain IEnumerable<T>? Initially I thought I would have to implement an empty IQueryable<T> wrapper around an IEnumerable<T>. It turns out to be even easier: the framework already provides the AsQueryable() extension method (on the Queryable class); the LINQ team at Microsoft anticipated that this is a common use case. So all you need to do is call that method and your IEnumerable<T> becomes an IQueryable<T>, like the following.

int[] intEnumerable = { 1, 2, 3, 5 };
IQueryable intQuery = intEnumerable.AsQueryable().Where(number => number > 2);
foreach (var item in intQuery)
{
    Console.WriteLine(item);
}
Console.WriteLine(intQuery.GetType().ToString());
// System.Linq.EnumerableQuery`1[System.Int32]

// code decompiled by Reflector
ParameterExpression CS$0$0000;
IQueryable intQuery = new int[] { 1, 2, 3, 5 }.AsQueryable<int>().Where<int>(
    Expression.Lambda<Func<int, bool>>(
        Expression.GreaterThan(
            CS$0$0000 = Expression.Parameter(typeof(int), "number"),
            Expression.Constant(2, typeof(int))),
        new ParameterExpression[] { CS$0$0000 }));
foreach (object item in intQuery)
{
    Console.WriteLine(item);
}
Console.WriteLine(intQuery.GetType().ToString());

So it seems better to replace IEnumerable<T> with IQueryable<T>. As far as the interface is concerned, the replacement gives the client exactly the same query experience, and it is not more difficult to implement. A great benefit of the replacement is performance: using IEnumerable<T> can be much slower than IQueryable<T>. The Where method on IQueryable<T> treats the lambda expression as an expression tree, and the query is executed on the server side, which is much faster, while IEnumerable<T> treats the lambda expression as a delegate and the query is executed on the client side, which is slower. Consider the following code.

var thisContact = contactRepository.GetSomeContacts()
    .Where(ctc => ctc.Id == 1).First();

LINQ provides us a new way to design our domain model. In the post Extending the World, the author says:

Typically for a given problem, a programmer is accustomed to building up a solution until it finally meets the requirements. Now, it is possible to extend the world to meet the solution instead of solely just building up until we get to it. That library doesn't provide what you need, just extend the library to meet your needs.

It is very important to build an extensible domain model by taking advantage of the IQueryable<T> interface; using IEnumerable<T> alone can hurt performance seriously. The only pitfall of exposing IQueryable<T> is that the user may send unnecessarily complex queries to the server, but this can be mitigated by designing methods so that only an appropriate IQueryable<T> is returned, for example GetSome instead of GetAll. Another solution is adding a view model that returns an IEnumerable<T>.

IEnumerable, IQueryable, Lambda expression - part 1

When we type the following code

IEnumerable<int> intEnumerable = null;
var q1 = intEnumerable.Where(x => x > 10);

we know that the Where method is not part of the IEnumerable<T> or IEnumerable interface; it comes from an extension method on Enumerable, which is a static class with no inheritance relationship with IEnumerable or IEnumerable<T>. The power of LINQ to Objects does not come from IEnumerable or IEnumerable<T> or their implementations; it comes from the extension methods. Let's take a look at what the extension method does. Using Reflector, we get the following source code.

public static class Enumerable
{
    public static IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)
    {
        if (source == null)
        {
            throw Error.ArgumentNull("source");
        }
        if (predicate == null)
        {
            throw Error.ArgumentNull("predicate");
        }
        if (source is Iterator<TSource>)
        {
            return ((Iterator<TSource>) source).Where(predicate);
        }
        if (source is TSource[])
        {
            return new WhereArrayIterator<TSource>((TSource[]) source, predicate);
        }
        if (source is List<TSource>)
        {
            return new WhereListIterator<TSource>((List<TSource>) source, predicate);
        }
        return new WhereEnumerableIterator<TSource>(source, predicate);
    }
}

We can see that the delegate passed into the method is the code that does the filtering.

IQueryable inherits from IEnumerable. But what extra value does IQueryable bring? Let's take a look at the following code, and the code generated for it by the C# compiler.

public interface IQueryable<T> : IEnumerable<T>, IQueryable, IEnumerable {}

public interface IQueryable : IEnumerable
{
    Type ElementType { get; }
    Expression Expression { get; }
    IQueryProvider Provider { get; }
}

That does not tell us much. Let's move on to an IQueryable example and decompile it to see what it does.

IQueryable<int> intQuerable = null;
var q2 = intQuerable.Where(x => x > 10);

// decompiled by Reflector
ParameterExpression CS$0$0000;
IQueryable<int> q2 = intQuerable.Where<int>(
    Expression.Lambda<Func<int, bool>>(
        Expression.GreaterThan(
            CS$0$0000 = Expression.Parameter(typeof(int), "x"),
            Expression.Constant(10, typeof(int))),
        new ParameterExpression[] { CS$0$0000 }));

From this example we can see that the lambda expression is not converted to a delegate but to an expression tree. But why is the extension method Enumerable.Where(this IEnumerable<TSource> source, Func<TSource, bool> predicate) not used? It turns out the C# compiler picks a more suitable extension method from Queryable. Here is its code from Reflector.

public static IQueryable<TSource> Where<TSource>(this IQueryable<TSource> source, Expression<Func<TSource, bool>> predicate)
{
    if (source == null)
    {
        throw Error.ArgumentNull("source");
    }
    if (predicate == null)
    {
        throw Error.ArgumentNull("predicate");
    }
    return source.Provider.CreateQuery<TSource>(
        Expression.Call(
            null,
            ((MethodInfo) MethodBase.GetCurrentMethod()).MakeGenericMethod(new Type[] { typeof(TSource) }),
            new Expression[] { source.Expression, Expression.Quote(predicate) }));
}

Unlike the Enumerable.Where method, this method has no delegate to do the filtering, and the expression cannot do the filtering either; it is the IQueryable's Provider that does the filtering. The provider takes the expression tree and does the filtering later, converting the expression tree into a provider-specific algorithm such as T-SQL.

IEnumerable<T> is very easy to implement; in fact, all the collection classes are IEnumerable<T>, and iterators make it even easier. There is no such thing as an "IEnumerableProvider", because the delegate does the querying. Implementing IQueryable is more difficult, because the expression does not query; it is the IQueryProvider that does the job. You need to implement IQueryProvider.

public interface IQueryProvider
{
    IQueryable CreateQuery(Expression expression);
    IQueryable<TElement> CreateQuery<TElement>(Expression expression);
    object Execute(Expression expression);
    TResult Execute<TResult>(Expression expression);
}

Sunday, July 19, 2009

Disconnected Update with Entity Framework

In the first demo, we get a disconnected entity, make a change to it, fetch the original copy from the database, apply the changed entity to the original with the context.ApplyPropertyChanges method, and save it back to the database.

public void DemoDisconnectedUpdate1()
{
    // use NoTracking to simulate the disconnected environment,
    // or use context.Detach() to simulate it
    context.Contacts.MergeOption = MergeOption.NoTracking;
    var pendingContact = context.Contacts.Where(c => c.ContactID == 709).First();
    // change
    pendingContact.FirstName = "somebody";
    ApplyChange1(pendingContact);
}

public void ApplyChange1(EntityObject pendingEntity)
{
    context = new PEF();
    context.GetObjectByKey(pendingEntity.EntityKey);
    context.ApplyPropertyChanges(pendingEntity.EntityKey.EntitySetName, pendingEntity);
    context.SaveChanges();
}

Unlike the first demo, the second demo uses an anonymously typed object to represent the change to the entity and applies the change to the original version directly, using reflection.

public void DemoDisconnectUpdate2()
{
    EntityKey key = new EntityKey("PEF.Contacts", "ContactID", 709);
    var changes = new { FirstName = "xyz" };
    UpdateEntity(key, changes);
}

public void UpdateEntity(EntityKey key, object changes)
{
    var original = context.GetObjectByKey(key);
    ApplyChange(changes, original);
    context.SaveChanges();
}

public void ApplyChange(object changes, object original)
{
    Type newType = changes.GetType();
    Type oldType = original.GetType();
    var newProperties = newType.GetProperties();
    foreach (var newProperty in newProperties)
    {
        var oldProperty = oldType.GetProperty(newProperty.Name);
        if (oldProperty != null)
        {
            oldProperty.SetValue(original, newProperty.GetValue(changes, null), null);
        }
    }
}

Thursday, July 16, 2009

Reference and EntityKey

When adding an entity to your ObjectContext, if the entity references another existing entity that is not in memory, you need to create an EntityKey like the following.

var address = new Address();
address.City = "SomeCity";
address.AddressType = "Home";
address.ModifiedDate = DateTime.Now;
address.ContactReference.EntityKey = new EntityKey("PEF.Contacts", "ContactID", 709);
context.AddToAddresses(address);
context.SaveChanges();

Sunday, July 12, 2009

Naming in Entity Framework

When using the Entity Framework designer, you create your entity model with a naming convention. For example, a table "Customer" maps to an entity type "Customer" and an entity set "CustomerSet". It is very tempting to rename "CustomerSet" to "Customers". But what about Criterion, whose plural is Criteria, or Equipment, whose plural is also Equipment? I feel the default naming convention is good enough: it tells you it is a set, and it keeps my configuration to a minimum. Isn't that the spirit of convention over configuration?

How ObjectContext manage entities

Those objects were created by an internal process called object materialization, which takes the returned data and builds the relevant objects for you. Depending on the query, these could be EntityObjects, anonymous types, or DbDataRecords. By default, for any EntityObjects that are materialized, the ObjectContext creates an extra object behind the scenes, called an ObjectStateEntry. It will use these ObjectStateEntry objects to keep track of any changes to their related entities. If you execute an additional query using the same context, more ObjectStateEntry objects will be created for any newly returned entities and the context will manage all of these as well. The context will keep track of its entries as long as it remains in memory. The ObjectContext can track only entities. It cannot keep track of anonymous types or nonentity data that is returned in a DbDataRecord.

ObjectStateEntry takes a snapshot of an entity's values as it is first created, and then stores the original values and the current values as two separate sets. ObjectStateEntry also has an EntityState property whose value reflects the state of the entity (Unchanged, Modified, Added, Deleted). As the user modifies the objects, the ObjectContext updates the current values of the related ObjectStateEntry as well as its EntityState.

The object itself also has an EntityState property. As long as the object is being managed by the context, its EntityState will always match the EntityState of the ObjectStateEntry. If the object is not being managed by the context, its state is Detached.

ObjectContext has a single method, SaveChanges, which persists back to the database all of the changes made to the entities. A call to SaveChanges will check for any ObjectStateEntry objects being managed by that context whose EntityState is not Unchanged, and then will use their details to build separate Insert, Update, and Delete commands to send to the database. The ObjectContext can track changes to both entities and entity references.
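A hedged sketch of the bookkeeping described above: you can ask the ObjectStateManager for the entry it keeps for a tracked entity. The `context` variable and the Contact entity follow the other examples in these posts; everything else is illustrative.

```csharp
// Materialize an entity, change it, then inspect its ObjectStateEntry.
Contact contact = context.Contacts.First();
contact.LastName = "Smith";

ObjectStateEntry entry = context.ObjectStateManager.GetObjectStateEntry(contact);
Console.WriteLine(entry.State);                      // Modified
Console.WriteLine(entry.OriginalValues["LastName"]); // the value as first materialized
Console.WriteLine(entry.CurrentValues["LastName"]);  // the value after our edit
```

Note that both the original and the current value sets are available on the same entry, which is exactly what SaveChanges uses to build its Update command.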

Pros and Cons of Load and Include

You have some things to consider when choosing between the Load and Include methods. Although the Load method may require additional round trips to the server, the Include method may result in a large amount of data being streamed back to the client application and then processed as the data is materialized into objects. This would be especially problematic if you are doing all of this work to retrieve related data that may never even be used. As is true with many choices in programming, this is a balancing act that you need to work out based on your particular scenario. The documentation also warns that using query paths with Include could result in very complex queries at the data store because of the possible need to use numerous joins. The more complex the model, the more potential there is for trouble.

You could certainly balance the pros and cons by combining the two methods. For example, you can load the customers and orders with Include and then pull in the order details on an as-needed basis with Load. The correct choice will most likely change on a case-by-case basis.

public static void DeferredLoadingEntityReference() { var addresses = from a in context.Addresses select a; foreach (var address in addresses) { if (address.CountryRegion == "UK") address.ContactReference.Load(); } } public static void EagerLoadWithInclude() { var test = from c in context.Contacts.Include("Addresses") where c.LastName == "Smith" select c; test.OuputTrace(); }

Debugging ObjectQuery

When you write a query with ObjectQuery, it uses the IQueryable interface. The following extension methods help you debug an ObjectQuery more easily.

public static class IQueryableExtenstion { public static ObjectQuery ToObjectQuery(this IQueryable query) { return query as ObjectQuery; } public static ObjectQuery<T> ToObjectQuery<T>(this IQueryable<T> query) { return query as ObjectQuery<T>; } public static string ToDatabaseSql(this IQueryable query) { try { return query.ToObjectQuery().ToTraceString(); } catch { return null; } } public static string ToEntitySql(this IQueryable query) { try { return query.ToObjectQuery().CommandText; } catch { return null; } } public static void OuputTrace(this IQueryable query) { Console.WriteLine(query.ToDatabaseSql()); Console.WriteLine(query.ToEntitySql()); } } //to use the extension function you can write the following code var test = from a in context.Addresses let c = new { a.Contact.FirstName, a.Contact.LastName, a.CountryRegion } group c by c.CountryRegion into mygroup where (mygroup.Count() > 150) select mygroup; test.OuputTrace();

Saturday, July 11, 2009

Separate your EDM into 3 files

When you generate your EDMX, your metadata is embedded in the output assembly as resource files, so your connection string looks like "metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl". But you have the option to save your metadata in loose files, so that your connection string becomes something like "metadata=.\Model1.csdl|.\Model1.ssdl|.\Model1.msl".

Object Services (ObjectContext, ObjectQuery, and EntityObject)

The core objects of Object Services are ObjectContext, ObjectQuery, and EntityObject. You can think of the ObjectContext as an entity repository. The repository is responsible for Insert/Update/Delete/Select operations on entities.

To select entities, the ObjectContext actually creates an ObjectQuery, which implements IQueryable. The objects returned from an ObjectQuery are not plain objects but EntityObjects. An entity has an EntityKey and an EntityState.
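As a sketch of this, both LINQ to Entities and Entity SQL end up producing an ObjectQuery. The Contact entity and the "PEF" container are borrowed from the EntityKey example above; treat the rest as an assumption.

```csharp
// Entity SQL: CreateQuery hands back an ObjectQuery<T> directly.
ObjectQuery<Contact> byEntitySql = context.CreateQuery<Contact>(
    "SELECT VALUE c FROM PEF.Contacts AS c WHERE c.LastName = 'Smith'");

// LINQ to Entities: the IQueryable is an ObjectQuery under the covers.
IQueryable<Contact> byLinq = from c in context.Contacts
                             where c.LastName == "Smith"
                             select c;
```

This is why the ToObjectQuery cast in the debugging extensions below works: the IQueryable returned by a LINQ to Entities query can be cast back to ObjectQuery to call ToTraceString.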

Thursday, July 9, 2009

Domain-Driven Development with EF v1

Entity Framework version 1 is data-centric in the features it implements. Domain-driven development begins with the model, not the database. Many developers who embrace the tenets of domain-driven design will find the Entity Framework to be too restrictive. However, some of the advocates of this point of view are working with the Entity Framework team to enable version 2 to expand its capabilities so that you can use it with this approach.

Challenges with Change Tracking Distributed Applications

To put it mildly, using the Entity Framework in distributed applications can be challenging when it comes to the change tracking performed by Object Services, because the change-tracking information is not stored in the entities and instead is maintained by a separate set of Object Services objects. When an entity is transferred across a process boundary, it is disconnected from the object that contains its change-tracking information. The objects that own the tracking data are not serializable, so they can't easily be shipped to the new process along with the entities. Therefore, when the entities arrive at the new process, they have no idea whether they are new or preexisting, or whether they have been edited or marked for deletion. There's no way to simply use the ObjectContext's default method for saving changes to the database without doing additional work.
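One common workaround is sketched below (it is not the only approach, and the PEFEntities context type is an assumption): attach the deserialized entity to a fresh context and explicitly mark it as modified, since its tracking data was lost in transit.

```csharp
// Receiving process: the entity arrives Detached, so we attach it and
// then flag it as modified by hand before saving.
public void UpdateContact(Contact modified)
{
    using (PEFEntities context = new PEFEntities())
    {
        context.Attach(modified); // attached in the Unchanged state
        ObjectStateEntry entry =
            context.ObjectStateManager.GetObjectStateEntry(modified);
        entry.SetModified();      // now SaveChanges will issue an Update
        context.SaveChanges();
    }
}
```

The cost of this pattern is that the context no longer knows the original values, so the Update statement covers every property rather than just the changed ones.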

Wednesday, July 8, 2009

Limitation of Entity Data Model Designer

The designer does not support all the features of the EDM. With some of the less frequently used EDM features, you will have to work with the XML after all.

  • Stored procedures

    The Designer supports a narrow use of stored procedures. Using the Designer, you can override the Entity Framework's automatic generation of Insert, Update, and Delete commands by mapping an entity to a set of stored procedures with two important rules. The first is that the stored procedure must line up with the entity. For inserts and updates, that means the values for the stored procedure parameters must come from an entity's property. The second rule is that you have to override the Insert, Update, and Delete commands, or no commands at all, so you'll need to map all three functions.

    In addition, the Designer supports read queries as long as the query results map directly to an entity. If you have a query that returns random data, you will need to manually create an entity for it to map to. That's not too hard in the Designer, but there's another requirement that will necessitate doing some work in the XML.

  • Unsupported EDM types

    The EDM has a very rich set of modeling capabilities. But the Designer does not support all of these advanced modeling techniques, requiring you to handcode some of them in the XML. In most cases, you can continue to work with the model in the Designer even though you won't see these particular model types, though you can leverage them in your code. However, there are a few model types, such as the very useful complex type, that, when included in the XML, will make it impossible to open the model in the Designer. The Designer is well aware of these limitations, and at least provides an alternative view that displays a message explaining why the model can't be opened.

  • Generating a database from the model

    The EDM is based on a data-driven design with the assumption that there is an existing database for the model to map back to. This makes a lot of sense if you are building an application for an existing database. Domain-driven developers prefer to create their object model first and have a database generated from that. The current designer does not support this capability. However, model first development will be possible in the next version of the Entity Framework tools, which will ship in Visual Studio 2010. In the meantime, developers in the community and at Microsoft are playing with a code generator called T4 Templates (Text Template Transformation Toolkit) to read the model and generate SQL script files to generate database objects for you.

Entity in Entity Framework

Entities are not the same as objects. Entities define the schema of an object, but not its behavior. So, an entity is something like the schema of a table in your database, except that it describes the schema of your business objects. The point of the Entity Framework is to build a conceptual model, the entity data model, from the database schema, but it is not limited to mirroring that schema.

An EDM is a client-side data model, and it is the core of the Entity Framework. It is not the same as the database model; that belongs to the database. The EDM describes the structure of your business objects. It's as though you were given permission to restructure the tables and views in your enterprise's database so that the tables and relationships look more like your business domain rather than the normalized schema designed by database administrators. Below, a database model and an entity data model are compared.

The entity data model doesn't have any knowledge of the data store, what type of database it is, much less what the schema is. And it doesn't need to. The database you choose as your backend will have no impact on your model or your code.

The Entity Framework communicates with the same ADO.NET data providers that ADO.NET already uses, but with a caveat. The provider must be updated to support the Entity Framework. The provider takes care of reshaping the Entity Framework's queries and commands into native queries and commands. All you need to do is identify the provider and a database connection string so that the Entity Framework can get to the database.

This means that if you need to write applications against a number of different databases, you won't have to learn the ins and outs of each database. You can write queries with the Entity Framework's syntax (either LINQ to Entities or Entity SQL) and never have to worry about the differences between the databases. If you need to take advantage of functions or operators that are particular to a database, Entity SQL allows you to do that as well.

Although the Entity Framework is designed to let you work directly with the classes from the EDM, it still needs to interact with the database. The conceptual data model that the EDM describes is stored in an XML file whose schema identifies the entities and their properties. Behind the conceptual schema described in the EDM is another pair of schema files that map your data model back to the database. One is an XML file that describes your database and the other is a file that provides the mapping between your conceptual model and the database.

During query execution and command execution (for updates), the Entity Framework figures out how to turn a query or command that is expressed in terms of the data model into one that is expressed in terms of your database.

When data is returned from the database, it does the job of shaping the database results into the entities and further materializing objects from those results.

Tuesday, July 7, 2009

WF runtime

public class WorkflowRuntime { public WorkflowRuntime(); public void AddService(object service); public void RemoveService(object service); public void StartRuntime(); public void StopRuntime(); public WorkflowInstance CreateWorkflow(XmlReader reader); public WorkflowInstance GetWorkflow(Guid instanceId); /* *** other members *** */ } public sealed class WorkflowInstance { public Guid InstanceId { get; } public void Start(); public void Load(); public void Unload(); public void EnqueueItem(IComparable queueName, object item, IPendingWork pendingWork, object workItem); /* *** other members *** */ } class Program { static void Main() { using(WorkflowRuntime runtime = new WorkflowRuntime()) { TypeProvider typeProvider = new TypeProvider(runtime); typeProvider.AddAssemblyReference("EssentialWF.dll"); runtime.AddService(typeProvider); runtime.StartRuntime(); WorkflowInstance instance = null; using (XmlTextReader reader = new XmlTextReader("OpenSesame.xoml")) { instance = runtime.CreateWorkflow(reader); instance.Start(); } string s = Console.ReadLine(); instance.EnqueueItem("r1", s, null, null); // Prevent Main from exiting before // the WF program instance completes Console.ReadLine(); runtime.StopRuntime(); } } }

When the Start method is called on the WorkflowInstance, the WF runtime runs the WF program asynchronously. But other threading models are supported by the WF runtime. When a ReadLine activity executes, it creates a WF program queue. When our console application (which is playing the role of a listener) reads a string from the console, it resumes the execution of the bookmark established by the ReadLine by enqueuing the string. The name of the WF program queue is the same name, "r1", that we gave to the ReadLine activity (per the execution logic of ReadLine).

In order to illustrate the mechanics of passivation, we can write two different console applications. The first one begins the execution of an instance of the Open, Sesame program.

class FirstProgram { static string ConnectionString = "Initial Catalog=SqlPersistenceService;Data Source=localhost;Integrated Security=SSPI;"; static void Main() { using (WorkflowRuntime runtime = new WorkflowRuntime()) { SqlWorkflowPersistenceService persistenceService = new SqlWorkflowPersistenceService(ConnectionString); runtime.AddService(persistenceService); TypeProvider typeProvider = new TypeProvider(runtime); typeProvider.AddAssemblyReference("EssentialWF.dll"); runtime.AddService(typeProvider); runtime.StartRuntime(); WorkflowInstance instance = null; using (XmlTextReader reader = new XmlTextReader("OpenSesame.xoml")) { instance = runtime.CreateWorkflow(reader); instance.Start(); } Guid durableHandle = instance.InstanceId; // save the Guid... instance.Unload(); runtime.StopRuntime(); } } }

The WF program instance never completes because it is expecting to receive a string after it prints the key, and we do not provide it with any input. When the WorkflowInstance.Unload method is called, the instance is passivated. Inspection of the SQL Server database table that holds passivated WF program instances will show us a row representing the idle Open, Sesame program instance.

In order to resume the passivated instance in another CLR application domain, we need to have some way of identifying the instance. That is precisely the purpose of the InstanceId property of WorkflowInstance. This globally unique identifier can be saved and then later passed as a parameter to the WorkflowRuntime.GetWorkflow method in order to obtain a fresh WorkflowInstance for the WF program instance carrying that identifier.

class SecondProgram { static string ConnectionString = "Initial Catalog=SqlPersistenceService;Data Source=localhost;Integrated Security=SSPI;"; static void Main() { using (WorkflowRuntime runtime = new WorkflowRuntime()) { SqlWorkflowPersistenceService persistenceService = new SqlWorkflowPersistenceService(ConnectionString); runtime.AddService(persistenceService); TypeProvider typeProvider = new TypeProvider(runtime); typeProvider.AddAssemblyReference("EssentialWF.dll"); runtime.AddService(typeProvider); runtime.StartRuntime(); // get the identifier we had saved Guid id = new Guid("<saved from first program>"); WorkflowInstance instance = runtime.GetWorkflow(id); // user must enter the key that was printed // during the execution of the first part of // the Open, Sesame program string s = Console.ReadLine(); instance.EnqueueItem("r1", s, null, null); // Prevent Main from exiting before // the WF program instance completes Console.ReadLine(); runtime.StopRuntime(); } } }

The passivated (bookmarked) WF program instance picks up where it left off, and writes its result to the console after we provide the second string.

WF Programming Model

Workflow is queues plus a scheduler.

public class ReadLine : Activity { private string text; public string Text { get { return text; } } protected override ActivityExecutionStatus Execute( ActivityExecutionContext context) { WorkflowQueuingService qService = context.GetService<WorkflowQueuingService>(); WorkflowQueue queue = qService.CreateWorkflowQueue(this.Name, true); queue.QueueItemAvailable += this.ContinueAt; return ActivityExecutionStatus.Executing; } void ContinueAt(object sender, QueueEventArgs e) { ActivityExecutionContext context = sender as ActivityExecutionContext; WorkflowQueuingService qService = context.GetService<WorkflowQueuingService>(); WorkflowQueue queue = qService.GetWorkflowQueue(this.Name); text = (string) queue.Dequeue(); qService.DeleteWorkflowQueue(this.Name); context.CloseActivity(); } }

In WF, the data structure chosen to represent a bookmark's capacity to hold data is a queue. This queue, which we shall call a WF program queue, is created by ReadLine using the WorkflowQueuingService.

namespace System.Workflow.Runtime { public class WorkflowQueuingService { // queueName is the bookmark name public WorkflowQueue CreateWorkflowQueue( IComparable queueName, bool transactional); public bool Exists(IComparable queueName); public WorkflowQueue GetWorkflowQueue(IComparable queueName); public void DeleteWorkflowQueue(IComparable queueName); /* *** other members *** */ } }

The WorkflowQueue object that is returned by the CreateWorkflowQueue method offers an event, QueueItemAvailable. Despite the syntactic sugar of the C# event, this event represents the asynchronous delivery of stimulus from an external entity to an activity, and is exactly the same pattern of bookmark resumption. The more refined WF version of the programming model for bookmarks allows a bookmark's payload (a WF program queue) to hold an ordered list of inputs that await processing (instead of a single object as did the bookmark in Chapter 1). The physical resumption point of the bookmark is still just a delegate (ContinueAt) even though in the WF programming model the delegate is indicated using the += event subscription syntax of C#.

namespace System.Workflow.Runtime { public class WorkflowQueue { public event EventHandler<QueueEventArgs> QueueItemAvailable; public object Dequeue(); public int Count { get; } public IComparable QueueName { get; } /* *** other members *** */ } }

The return value of the ReadLine activity's Execute method indicates that, at that point in time, the ReadLine has pending bookmarks; its execution is not complete. When an item is enqueued in its WF program queue, perhaps days after the ReadLine began its execution, the bookmark is resumed and, as a result, the ContinueAt method is invoked. After obtaining the item from its queue and setting the value of its text field, the ReadLine activity reports its completion.

public class Sequence : CompositeActivity { protected override ActivityExecutionStatus Execute( ActivityExecutionContext context) { if (this.EnabledActivities.Count == 0) return ActivityExecutionStatus.Closed; Activity child = this.EnabledActivities[0]; child.Closed += this.ContinueAt; context.ExecuteActivity(child); return ActivityExecutionStatus.Executing; } void ContinueAt(object sender, ActivityExecutionStatusChangedEventArgs e) { ActivityExecutionContext context = sender as ActivityExecutionContext; e.Activity.Closed -= this.ContinueAt; int index = this.EnabledActivities.IndexOf(e.Activity); if ((index + 1) == this.EnabledActivities.Count) context.CloseActivity(); else { Activity child = this.EnabledActivities[index + 1]; child.Closed += this.ContinueAt; context.ExecuteActivity(child); } } }

Sequence cannot directly execute its child activities since the Activity.Execute method has accessibility of protected internal. Instead, Sequence requests the execution of a child activity via ActivityExecutionContext.

Sequence subscribes to the Activity.Closed event before it requests the execution of a child activity. When the child activity completes its execution, the execution of the Sequence is resumed at the ContinueAt method. The Sequence activity's subscription to the Closed event of a child activity is syntactic sugar for the creation of a bookmark that is managed internally, on behalf of Sequence, by the WF runtime.

The ActivityExecutionContext type is effectively an activity-facing abstraction on top of the WF runtime.

namespace System.Workflow.ComponentModel { public class ActivityExecutionContext : System.IServiceProvider { public void ExecuteActivity(Activity activity); public void CloseActivity(); public T GetService<T>(); public object GetService(Type serviceType); /* *** other members *** */ } }

Sunday, July 5, 2009

When workflow is persisted.

If a persistence service is loaded, the state of the workflow is persisted in the following situations:

  1. When a workflow becomes idle. For example, when a workflow is waiting for an external event or executes a DelayActivity. To persist and unload a workflow when it becomes idle, the service must return true from the UnloadOnIdle method. With the standard SqlWorkflowPersistenceService, you can control this behavior by setting the unloadOnIdle parameter during construction of the service.
  2. When a workflow completes or terminates.
  3. When a TransactionScopeActivity (or CompensatableTransactionScopeActivity) completes. A TransactionScopeActivity identifies a logical unit of work that ends when the activity completes.
  4. When a CompensatableSequenceActivity completes. A CompensatableSequenceActivity identifies a set of child activities that are compensatable. Compensation is the ability to undo the actions of a completed activity.
  5. When a custom activity that is decorated with the PersistOnCloseAttribute completes.
  6. When you manually invoke one of the methods on a WorkflowInstance that cause a persistence operation. Examples are Unload and TryUnload. The Load method results in a previously unloaded and persisted workflow being retrieved and loaded back into memory.

It is important to make a distinction between saving a workflow and saving the state of a workflow. Not all persistence operations result in a new serialized copy of a workflow being saved. For instance, when a workflow completes or terminates, the standard SQL Server persistence service (SqlWorkflowPersistenceService) actually removes the persisted copy of the workflow. It persisted the workflow in the sense that it updated the durable store with the state of the workflow. If you implement your own persistence service, you may choose to do something else when a workflow completes.
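To tie this back to situation 1 in the list above, here is a sketch of constructing the standard persistence service with unloadOnIdle enabled; the connection string variable and the TimeSpan values are assumptions.

```csharp
// unloadOnIdle = true makes the runtime persist and unload a workflow
// whenever it becomes idle, instead of keeping it in memory.
SqlWorkflowPersistenceService persistence = new SqlWorkflowPersistenceService(
    connectionString,
    true,                       // unloadOnIdle
    TimeSpan.FromMinutes(5),    // instanceOwnershipDuration
    TimeSpan.FromSeconds(10));  // loadingInterval
runtime.AddService(persistence);
```

With this in place, the explicit Unload call shown in the passivation example earlier becomes unnecessary for idle workflows.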

Why ManualWorkflowSchedulerService should be used in an ASP.NET environment

Monday, June 29, 2009

What is a fake

A fake is a generic term that can describe either a stub or a mock object (handwritten or otherwise), because they both look like the real object. Whether a fake is a stub or a mock depends on how it's used in the current test. If it's used to check an interaction (asserted against), it's a mock object. Otherwise, it's a stub.

What is a mock

A mock object is a fake object in the system that decides whether the unit test has passed or failed. It does so by verifying whether the object under test interacted as expected with the fake object.
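A small illustration of the distinction between a stub and a mock (all the types here are invented for the example): the same handwritten fake serves as a stub in one test and as a mock in another; only where the final assertion points differs.

```csharp
public interface ILogger { void Log(string message); }

public class FakeLogger : ILogger
{
    public string LastMessage; // recorded so a test can assert against it
    public void Log(string message) { LastMessage = message; }
}

// As a stub: the fake merely keeps the code under test running; the test
// asserts on the value returned by the class under test.
//     var processor = new OrderProcessor(new FakeLogger());
//     Assert.IsTrue(processor.Process(order));
//
// As a mock: the test's verdict comes from the fake itself.
//     var logger = new FakeLogger();
//     new OrderProcessor(logger).Process(order);
//     Assert.AreEqual("order processed", logger.LastMessage);
```

Same fake object, different role: the last assertion is what turns it into a mock.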

Refactoring to break dependencies

  • Extract an interface to allow replacing the underlying implementation
  • Inject a stub implementation into the class under test
  • Receive an interface at the constructor level.

    If your code under test requires more than one stub to work correctly without dependencies, adding more and more constructors (or more and more constructor parameters) becomes a hassle, and it can even make the code less readable and less maintainable. One possible solution is using inversion of control (IoC) containers. You can think of IoC containers as "smart factories" for your objects. A container provides special factory methods that take in the type of object you'd like to create and any dependencies that it needs, and then initializes the object using special configurable rules such as which constructor to call, which properties to set in what order, and so on. They are powerful when put to use on a complicated composite object hierarchy where creating an object requires creating and initializing objects several levels down the line. Using constructor arguments to initialize objects can make your testing code more cumbersome unless you're using helper frameworks such as IoC containers for object creation. Every time you add another dependency to the class under test, you have to create a new constructor that takes all the other arguments plus the new one, make sure it calls the other constructors correctly, and make sure other users of this class initialize it with the new constructor.

    On the other hand, using parameters in constructors is a great way to signify to the user of your API that these parameters are non-optional. They have to be sent in when creating the object.

    If you want these dependencies to be optional, use properties, which are a much more relaxed way to define optional dependencies than adding a different constructor to the class for each dependency. If you choose to use constructor injection, you'll probably also want to use IoC containers. This would be a great solution if all code in the world were using IoC containers. The future of unit testing will use more and more of this IoC pattern.

  • Receive an interface as a property get or set.

    Use this technique when you want to signify that a dependency of the class under test is optional, or if the dependency has a default instance created that doesn't create any problems during the test.

  • Get a stub just before a method call
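Summing up the constructor and property options above in code (every type here is invented for illustration): a required dependency goes through the constructor, and an optional one goes through a property with a safe default.

```csharp
public interface IPaymentGateway { void Charge(decimal amount); }
public interface ILogger { void Log(string message); }
public class NullLogger : ILogger { public void Log(string message) { } }

public class OrderProcessor
{
    private readonly IPaymentGateway gateway;  // non-optional dependency
    private ILogger logger = new NullLogger(); // optional, with a safe default

    // Constructor injection signals that the gateway must be supplied.
    public OrderProcessor(IPaymentGateway gateway)
    {
        this.gateway = gateway;
    }

    // Property injection lets a test swap in a stub or mock logger.
    public ILogger Logger
    {
        get { return logger; }
        set { logger = value; }
    }
}
```

A test can construct the class with a fake gateway and, only when it cares about logging, also set the Logger property; production code that never touches Logger still works thanks to the NullLogger default.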

Thursday, June 25, 2009

Cancel Handler

The cancel handler view provides a way to define activity cancellation logic. A cancel handler has some similarities to the fault handlers just discussed. Like fault handlers, they are attached only to a composite activity. However, cancel handlers don't catch and handle exceptions. Instead, they specify the cleanup actions that take place when an executing activity is canceled.

The need for cancel handlers is best illustrated with an example: a ListenActivity is a composite activity that allows you to define multiple child activities under it. Assume that each child of the ListenActivity is a HandleExternalEventActivity that listens for a different external event. With this scenario, each HandleExternalEventActivity is executing at the same time waiting for an event. Only one of these children will eventually receive its event and complete execution. The other sibling activities will be canceled by the ListenActivity parent. By entering a cancel handler for the parent ListenActivity, you define the steps to execute when the incomplete children are canceled. You won't always need a cancel handler. But if your activities require any cleanup when they are canceled, a cancel handler is an appropriate place to define that logic.
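The designer's cancel handler also has a code-level cousin: a custom activity can override Activity.Cancel to define its own cleanup. Below is a sketch under that assumption; the WaitForEvent activity is invented, and the queuing calls mirror the ReadLine example found elsewhere in these posts.

```csharp
public class WaitForEvent : Activity
{
    protected override ActivityExecutionStatus Cancel(
        ActivityExecutionContext context)
    {
        // Cleanup: remove the WF program queue this activity created in
        // Execute, so no orphaned queue outlives the cancellation.
        WorkflowQueuingService qService =
            context.GetService<WorkflowQueuingService>();
        if (qService.Exists(this.Name))
            qService.DeleteWorkflowQueue(this.Name);
        return ActivityExecutionStatus.Closed;
    }
}
```

Returning Closed tells the runtime the cancellation finished synchronously; an activity with longer-running cleanup would return Canceling and close later.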

Tuesday, June 23, 2009

ActivityExecutionContext

  1. It is a container of services that is available to activities during their execution. This set of services is the same for all activities in all WF program instances. Some services are provided by the WF runtime and are always obtainable from AEC. Custom services can be offered by the application that hosts the WF runtime; such services are made available to activities by using the AddService method of WorkflowRuntime. class ReadLine:Activity { private string text; public string Text { get { return text; } } protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext) { WorkflowQueuingService qService = executionContext.GetService<WorkflowQueuingService>(); WorkflowQueue queue = qService.CreateWorkflowQueue(this.Name, true); queue.QueueItemAvailable += ContinueAt; return ActivityExecutionStatus.Executing; } void ContinueAt(object sender, QueueEventArgs e) { ActivityExecutionContext context = sender as ActivityExecutionContext; WorkflowQueuingService qService = context.GetService<WorkflowQueuingService>(); WorkflowQueue queue = qService.GetWorkflowQueue(this.Name); text = (string)queue.Dequeue(); qService.DeleteWorkflowQueue(this.Name); context.CloseActivity(); } }
  2. ActivityExecutionContext is an API surface through which activities can interact with the (internal) scheduler component of the WF runtime. For example, the ExecuteActivity method requests that the execution of an activity be scheduled (an item is added to the scheduler's work queue). The CloseActivity method requests that the WF runtime finalize the current activity's transition to the Closed state, and resume the internal bookmark that notifies the parent composite activity of the activity's completion. AEC therefore abstracts the internal machinery of the WF runtime; even though we have explained the execution model of the WF runtime in terms of a scheduler and a work queue, these entities are not represented directly in the public API of the WF programming model. public class Sequence : CompositeActivity { protected override ActivityExecutionStatus Execute(ActivityExecutionContext context) { if (this.EnabledActivities.Count == 0) return ActivityExecutionStatus.Closed; Activity child = this.EnabledActivities[0]; child.Closed += this.ContinueAt; context.ExecuteActivity(child); return ActivityExecutionStatus.Executing; } void ContinueAt(object sender, ActivityExecutionStatusChangedEventArgs e) { ActivityExecutionContext context = sender as ActivityExecutionContext; e.Activity.Closed -= this.ContinueAt; int index = this.EnabledActivities.IndexOf(e.Activity); if ((index + 1) == this.EnabledActivities.Count) { context.CloseActivity(); } else { Activity child = this.EnabledActivities[index + 1]; child.Closed += this.ContinueAt; context.ExecuteActivity(child); } } }
  3. The execution of a WF program instance is episodic, and at the end of each episode when the WF program instance becomes idle, the instance can be persisted in durable storage as a continuation. This continuation, because it represents the entirety of the program instance's state that is necessary for resuming its execution, holds the relevant (internal) WF runtime execution state plus user-defined state, sometimes called the application state. The application state is nothing but the WF program instance's tree of activities (the actual CLR objects), which are usually stateful entities. The runtime state includes the state of the scheduler work queue, WF program queues, and bookkeeping information about internally managed bookmarks (such as subscriptions to the Activity.Closed event).

    The resumption point of a bookmark is called an execution handler, so we can refer to the (heap-allocated) execution state required by an execution handler as its execution context. Because an execution handler is typically a method on an activity, we will often refer to this execution context as activity execution context.

    ActivityExecutionContext is a programmatic abstraction for precisely this execution context. ActivityExecutionContext is passed to every execution handler either as an explicit argument (as for Activity.Execute) or as the sender parameter in the case of execution handlers that conform to a standard .NET Framework event handler delegate type.

Test asp.net mvc route

//arrange
RouteCollection routes = new RouteCollection();
MvcApplication.RegisterRoutes(routes);
var httpContextMock = new Mock<HttpContextBase>();
httpContextMock.Expect(c => c.Request.AppRelativeCurrentExecutionFilePath).Returns("~/product/list");
//act
RouteData routeData = routes.GetRouteData(httpContextMock.Object);
//assert
Assert.IsNotNull(routeData, "Should have found the route");
Assert.AreEqual("product", routeData.Values["controller"]);
Assert.AreEqual("list", routeData.Values["action"]);
Assert.AreEqual("", routeData.Values["id"]);

Thursday, June 11, 2009

mvc routing

Routing is actually not part of the ASP.NET MVC component, but MVC depends on this ASP.NET component. To add a route to the route table, you can use the following code.

Route r = new Route("url", new SomeRouteHandler());
r.Constraints.Add("key", "value");
r.Defaults.Add("key", "value");
r.DataTokens.Add("key", "value");
RouteTable.Routes.Add("route_name", r);
// Or you can write the same thing with C# 3.0 object-initializer syntax;
// it does the same job but looks cleaner.
RouteTable.Routes.Add("route_name", new Route("url", new SomeRouteHandler())
{
    Constraints = new RouteValueDictionary(new { key = "value" }),
    Defaults = new RouteValueDictionary(new { key = "value" }),
    DataTokens = new RouteValueDictionary(new { key = "value" }),
});

ASP.NET MVC adds some extension methods to RouteCollection, like the following.

public static Route MapRoute(this RouteCollection routes, string name, string url, object defaults, object constraints, string[] namespaces) { if (routes == null) { throw new ArgumentNullException("routes"); } if (url == null) { throw new ArgumentNullException("url"); } Route route = new Route(url, new MvcRouteHandler()) { Defaults = new RouteValueDictionary(defaults), Constraints = new RouteValueDictionary(constraints) }; if ((namespaces != null) && (namespaces.Length > 0)) { route.DataTokens = new RouteValueDictionary(); route.DataTokens["Namespaces"] = namespaces; } routes.Add(name, route); return route; }

This method creates a route that uses MvcRouteHandler as its IRouteHandler, so when you use it, the MVC component comes into play. You can use these extension methods to simplify MVC routing:

routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = "" } // Parameter defaults );

The default values only work when every URL parameter after the one with a default also has a default value assigned. So the following code doesn't work.

routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home" } // Parameter defaults );

The following code demonstrates a catch-all parameter.

routes.MapRoute("r2", "catch/{*all}", new { controller = "test", action = "catch", all="empty" }); public string Catch(string all) { return string.Format("<h1>all:{0}</h1>", all); }

Wednesday, June 10, 2009

EF JOIN

Zlatko Michailov, the Entity SQL program manager at Microsoft, writes in his blog: "A well defined query against a well defined entity data model does not need JOIN. Navigation properties in combination with nesting sub-queries should be used instead. These latter constructs represent task requirements much more closely than JOIN does."—http://blogs.msdn.com/esql/ (November 1, 2007). To show what this means, here is an example.

In this model a Contact is referenced by multiple Addresses. Here their relationship is expressed explicitly by a navigation property. Because we have this navigation property, we can write the following query using it together with a sub-query:

var test = from c in context.Contacts select new { c.FirstName, c.LastName, StreetsCities = from a in c.Addresses select new { a.Street1, a.City } };

If the navigation property is missing (as shown in the following chart), then you have to use a join.

var test = from c in context.Contacts join address in context.Addresses on c.ContactID equals address.ContactID select new { c.FirstName, c.LastName, StreetsCities = new { address.Street1, address.City } };

Monday, June 8, 2009

Entity Framework stack

The Entity Framework's most prominent feature set and that which you are likely to work with most often is referred to as Object Services. Object Services sits on top of the Entity Framework stack, and provides all the functionality needed to work with objects that are based on your entities. Object Services provides a class called EntityObject and can manage any class that inherits from EntityObject. This includes materializing objects from the results of queries against the EDM, keeping track of changes to those objects, managing relationships between objects, and saving changes back to the database.

EntityClient is the other major API in the Entity Framework. It provides the functionality necessary for working with store queries and commands (in conjunction with the database provider): connecting to the database, executing the commands, retrieving the results from the store, and reshaping the results to match the EDM.

You can work with EntityClient directly or work with Object Services, which sits on top of EntityClient. EntityClient is only able to perform queries, and it does this on behalf of Object Services. The difference is that when you work directly with EntityClient, you will get tabular results (though the results can be shaped). If you are working with Object Services, it will transform the tabular data created by EntityClient into objects.

The tabular data returned by EntityClient is read-only. Only Object Services provides change tracking and the ability to save changes back to the data store.

Object Services' core types are ObjectContext and ObjectQuery. ObjectContext acts like a strongly typed entity store. ObjectQuery is used to retrieve entities from the ObjectContext. ObjectQuery supports LINQ because it implements the IQueryable and IEnumerable interfaces.

using (AdventureWorksEntities advWorksContext = new AdventureWorksEntities()) { try { // Define an ObjectQuery to use with the LINQ query. ObjectQuery<Product> products = advWorksContext.Product; // Define a LINQ query that returns a selected product. var result = from product in products where product.ProductID == 900 select product; // Cast the inferred type var to an ObjectQuery // and then write the store commands for the query. Console.WriteLine(((ObjectQuery<Product>)result).ToTraceString()); } catch (EntitySqlException ex) { Console.WriteLine(ex.ToString()); } }

The ObjectQuery is the core of LINQ to Entities: it converts the query into Entity SQL and sends it to EntityClient. You can also ask the ObjectContext to create an ObjectQuery by writing Entity SQL directly, but eventually that Entity SQL is still executed by EntityClient, which converts it into native SQL for the data provider below. Here is some sample code.

var qStr = @"SELECT VALUE c FROM PEF.Contacts AS c WHERE c.FirstName='Robert'"; var contacts = context.CreateQuery<Contact>(qStr);

Wednesday, May 27, 2009

What is a good unit test

A unit test should have the following properties:

  1. It should be automated and repeatable.
  2. It should be easy to implement.
  3. Once it’s written, it should remain for future use.
  4. Anyone should be able to run it.
  5. It should run at the push of a button.
  6. It should run quickly.

Sunday, May 3, 2009

IRequiresSessionState

By default, your page does not implement IRequiresSessionState, so your page should not be able to access HttpContext.Current.Session. But why can you always use it anyway? It is because the ASP.NET compiler and code generator modify your page class to support the interface, so you don't need to implement this interface explicitly in your page.

Thursday, April 30, 2009

Reflection performance

int x = 10; Assembly a = Assembly.Load("mscorlib"); Type t1 = typeof(int); Type t2 = x.GetType(); Type t3 = a.GetType("System.Int32"); Type t4 = Type.GetType("System.Int32"); Console.WriteLine(t1 == t2); Console.WriteLine(t2 == t3); Console.WriteLine(t3 == t4);

All of the equality comparisons will evaluate to true and, assuming that Type's Equals method is transitive (as all Equals implementations are supposed to be), then we can infer that t1 == t2 == t3 == t4. Loading t1 is the most efficient, t2 is a close second, and t3 and t4 are horribly inefficient. t3 is slightly more efficient because it only searches mscorlib.dll, whereas t4 searches all loaded assemblies. In fact, in a little test program I wrote, t2 is twice as slow as t1, t3 is 100 times slower than t2, and t4 is twice as slow as t3.

Tuesday, April 28, 2009

How many Chinese characters can a computer display?

Recently there was a news story, "Name Not on Our List? Change It, China Says." Some people think about it from a perspective of culture, philosophy, or politics. But I am a software engineer, so let's talk about it from the perspective of computer science.

When we say we put your name in a computer, what does that mean? Say your name is "a". The computer converts it to the number 97, and that number is represented as a series of bits (0s and 1s). This process is called encoding. But the text you see on screen is not a number. Why? Because the computer converts the number back into a picture; we call that decoding. The most popular encoding system today is Unicode. Its original design used 16 bits to store all the characters (no matter whether it is an English letter, a Chinese character, or a Greek one; let's just call them all characters for the moment). 16 bits means 65,536 possible combinations, so such a system can only accommodate 65,536 characters. The Unicode system is still evolving, and it can now actually accommodate more than 65,536 characters, but the fact remains that it handles a finite number of characters. In Unicode, the English letter 'a' is converted to the number 97, and the Chinese character '一' (one) is converted to 19968.

Now suppose I have a new baby and I want to give it a Chinese name. I look it up in an ancient Chinese dictionary, created long before the modern computer was invented, and I find a character that is not in our encoding system (for example, Unicode). There is then no way to input the name into an existing computer system. What options do we have? We could update the encoding system to encode your name: we need to create a number, create a picture, associate the number with the picture, and then distribute the updated encoding system to all the computers. Obviously, that is not an easy job. The easy solution is to pick your name from the characters in the existing encoding system, for example Unicode, rather than from an ancient Chinese dictionary, let alone invent a Chinese character that never existed. That is what the Chinese government is trying to do. Actually, Unicode can already display more than 20,944 Chinese characters, more than even the most knowledgeable Sinologist can use.
There are surely some people who will be affected, but out of 1.3 billion Chinese, the number is not that significant. Click here to see a list of Chinese characters in Unicode, ranging from 19968 to 40911.
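The encoding and decoding described above can be observed directly in JavaScript, whose strings are Unicode (the specific characters are just examples):

```javascript
// Encoding: 'a' maps to the number 97; the Chinese character '一' (one) maps to 19968.
var a = 'a'.charCodeAt(0);           // 97
var one = '一'.charCodeAt(0);        // 19968

// Decoding goes the other way: from a number back to a character.
var ch = String.fromCharCode(19968); // '一'
```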

Monday, April 27, 2009

divitis and classitis

In the book CSS Mastery: Advanced Web Standards Solutions, there is a section about divitis and classitis.

Using too many divs is often described as divitis and is usually a sign that your code is poorly structured and overly complicated. Some people new to CSS will try to replicate their old table structure using divs. But this is just swapping one set of extraneous tags for another. Instead, divs should be used to group related items based on their meaning or function rather than their presentation or layout.

The root cause of this phenomenon is that designers tend to rush to see the result first and forget the semantics that the content (the markup) expresses. My experience is that clean markup with strong semantics makes a good document. A good document is the starting point of good design, and a good design is easy to restyle and extend. Most visual styles can be implemented with the current version of CSS. If it cannot satisfy your requirements, we can wait for the next generation (CSS 3 or CSS 4?). Until then, we can use JavaScript to implement advanced design needs. The bottom line is to forget about style when you author your markup, and always remember to make your markup semantic, which means your document is still understandable even when opened in Notepad or Lynx.

Semantics and style are different concerns. While some developers and visual designers understand the rule of separation of concerns, web technologies (like ASP.NET server controls and Web Forms) tend to make people deviate from it. It is not the fault of these technologies (in fact, you can use them to produce very semantic markup), but they tend to lure people into mixing the two together. ASP.NET MVC is trying to fix this.

Below is an example of divitis and classitis.

<div class="login_page"> <div class="login_header"> Registration</div> <div class="clear"> &nbsp;</div> <div class="clear"> &nbsp;</div> <!--FORM FIELD--> <div class="reg_form"> <div class="reg_title"> <div class="reg_txt"> First name:</div> </div> <div class="reg_box"> <input type="text" size="50" /> </div> </div> <!--FORM FIELD--> <div class="reg_form"> <div class="reg_title"> <div class="reg_txt"> Last name:</div> </div> <div class="reg_box"> <input type="text" size="50" /> </div> </div> <!--FORM FIELD--> <div class="reg_form"> <div class="reg_title"> <div class="reg_txt"> Email address:</div> </div> <div class="reg_box"> <input type="text" size="50" /> </div> </div> <!--FORM FIELD--> <div class="reg_form"> <div class="reg_title"> <div class="reg_txt"> Password:</div> </div> <div class="reg_box"> <input type="text" size="50" /> </div> </div> <!--FORM FIELD--> <div class="reg_form"> <div class="reg_title"> <div class="reg_txt"> Password Confirmation:</div> </div> <div class="reg_box"> <input type="text" size="50" /> </div> </div> <!--FORM FIELD--> <div class="reg_form"> <div class="reg_title"> <div class="reg_txt"> Company / Access code:</div> </div> <div class="reg_box"> <input type="text" size="50" /> </div> </div> <!--FORM FIELD--> <div class="reg_form"> <div class="reg_title"> <div class="reg_txt"> &nbsp;</div> </div> <div class="reg_box"> <a href="#"> <img src="images/btn_submit.jpg" border="0" /></a> </div> </div> </div>

Here is the restructured markup

<div id="membership"> <fieldset> <legend>Login</legend> <p> <label for="txtUserName"> Email Address </label> <input type="text" id="txtUserName" name="txtUserName" /> </p> <p> <label for=""> Password</label> <input type="password" id="txtPassword" name="txtPassword" /> </p> <p> <label for=""> Remember me</label> <input type="checkbox" id="chkRememberMe" name="chkRememberMe" /></p> <p> <input type="submit" id="btnSubmit" name="btnSubmit" /> <a href="registration.aspx">First time here?</a> <a href="ForgetPassword.aspx">Forgot your password?</a> </p> </fieldset> </div>

Wednesday, April 15, 2009

Object vs Function

Object.prototype.sayhi = function() { alert("hi"); }
var o = {};
alert(o.constructor); // function Object(){[native code]}, created by function Object()
alert(Object.prototype); // [object Object], the ultimate prototype
o.sayhi(); // shows hi

Function.prototype.saybye = function() { alert("bye"); }
alert(f.constructor); // function Function(){[native code]}, created by function Function()
alert(Function.prototype); // function prototype(){[native code]}
// I think Function.prototype still links to an object literal {},
// so that it can be routed to [object Object], the ultimate prototype
alert(Function.prototype.constructor); // function Function(){[native code]}, created by function Function()
alert(f.prototype); // [object Object], the ultimate prototype

function f() { }
f.sayhi(); // shows hi
f.saybye(); // shows bye

alert(document.constructor); // [object HTMLDocument]
alert(document.constructor.prototype); // [Interface prototype object]
alert(document.constructor.prototype.constructor); // null in IE, function Object() in Firefox

Tuesday, April 14, 2009

delete

The delete operator can be used to remove a property from an object. It will remove a property from the object if it has one. It will not touch any of the objects in the prototype linkage. Removing a property from an object may allow a property from the prototype linkage to shine through:

var stooge = {};
stooge.name = "fred";
if (typeof Object.beget !== 'function') {
    Object.beget = function(o) {
        var F = function() { };
        F.prototype = o;
        return new F();
    };
}
var another_stooge = Object.beget(stooge);
another_stooge.name = "jeff";
alert(another_stooge.name); // jeff
delete another_stooge.name;
alert(another_stooge.name); // fred

do not use hasOwnProperty

The hasOwnProperty method does not look at the prototype chain:

flight.hasOwnProperty('number') // true
flight.hasOwnProperty('constructor') // false
// use instead:
if (typeof flight["number"] == "undefined") { }
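A small sketch contrasting the two checks (the `flight` object here is hypothetical, borrowed from the example above):

```javascript
var flight = { number: 815 };

// hasOwnProperty ignores the prototype chain:
var ownNumber = flight.hasOwnProperty('number');           // true
var ownCtor = flight.hasOwnProperty('constructor');        // false, inherited

// A typeof check sees inherited properties too:
var seesCtor = typeof flight.constructor !== 'undefined';  // true
```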

|| and &&

&& is the guard operator, aka "logical and", and || is the default operator, aka "logical or". We normally see code like the following

if ( condtion1 && condition2) { } if (condition1 || condition2) { }

&& means: if the first operand is truthy, the result is the second operand; otherwise the result is the first operand. It can be used to avoid a null reference.

if (a) {
    return a.member;
} else {
    return a;
}
// this is the same as
return a && a.member;

|| means: if the first operand is truthy, the result is the first operand; otherwise the result is the second operand. It can be used to fill in a default value, like the following

var last = input || {}; //{} is default value
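Putting the default operator to work, here is a minimal sketch (greet is a hypothetical function, not from the original post):

```javascript
function greet(name) {
  // || supplies the default when the first operand is falsy
  var who = name || 'world';
  return 'hello ' + who;
}

greet();       // 'hello world'
greet('fred'); // 'hello fred'
// Caveat: any falsy value triggers the default, so greet('') is also 'hello world'.
```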

Monday, April 13, 2009

Sunday, April 12, 2009

context of setTimeout and eval

setTimeout is a method of the window object, and inside its callback "this" refers to window. setTimeout does not support the call method, but you can change the callback's context by wrapping it in another function:

setTimeout(function() { alert(this); }, 0); // window
//setTimeout.call({}, function() { alert(this); }, 0); // not supported
setTimeout(function() { (function() { alert(this); }).call({}); }, 0); // object

eval is also a method of the Global object (in the case of a browser, the window object), but its context is defined by its containing context. eval also does not support the call method:

function Foo() {
    this.TestContext = function() {
        eval("alert(this==window);"); // shows false
        setTimeout(function() { alert(this == window); }, 0); // shows true
    }
}
var f = new Foo();
f.TestContext();
eval("alert(this);");
eval.call({}, "alert(this);"); // Firefox does not support this; IE does, but it does not change the context
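A common workaround is to pre-bind the context before handing the function to setTimeout. A minimal sketch (bindContext is a hypothetical helper, since Function.prototype.bind was not available in all browsers of that era):

```javascript
// Return a wrapper that always invokes fn with ctx as "this".
function bindContext(fn, ctx) {
  return function() {
    return fn.apply(ctx, arguments);
  };
}

var obj = { name: 'local' };
var getName = function() { return this.name; };
var bound = bindContext(getName, obj);

var result = bound(); // 'local', no matter how or where the wrapper is called
// setTimeout(bound, 0); // the callback would also see obj as "this"
```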

Anonymous function's context

// Note: a custom property (nickname) is used here because a function's
// built-in name property is read-only in some browsers.
var nickname = "window";
var f = function() { alert(this.nickname); };
f(); // window
f.nickname = "local";
f(); // still window; the function still runs in the context of window
f.call(f); // local
var p = { nickname: "local" };
p.f = f;
p.f(); // local
f.call(p); // local
(function() { alert(this); })(); // the defining context is window; it will output window
(function() { alert(this); }).call({}); // the defining context is window, but call switches the context

== vs ===

Here is the MSDN documentation

Equality (==, !=)

  • If the types of the two expressions are different, attempt to convert them to string, number, or Boolean.
  • NaN is not equal to anything including itself.
  • Negative zero equals positive zero.
  • null equals both null and undefined.
  • Values are considered equal if they are identical strings, numerically equivalent numbers, the same object, identical Boolean values, or (if different types) they can be coerced into one of these situations.
  • Every other comparison is considered unequal.

Identity (===, !==)

These operators behave identically to the equality operators except no type conversion is done, and the types must be the same to be considered equal.

Here are some test cases written in QUnit.

test("Equality test", function() {
    ok(1 == 1 && 'a' == 'a' && 1 == '1' && 0 == false && '' == false);
    ok(null == null, "null equals null");
    ok(null == undefined, "null equals undefined");
    ok(undefined == undefined, "undefined equals undefined");
    ok({} != {}, "different objects are unequal");
});
test("Identity test", function() {
    ok(1 !== "1" && null !== undefined, "must be the same type, no conversion");
});
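One more consequence of the coercion rules worth remembering: == is not transitive, which === avoids by never converting types:

```javascript
// Both of these comparisons are true...
var a = ('' == 0);    // true: '' is converted to the number 0
var b = (0 == '0');   // true: '0' is converted to the number 0
// ...yet the two "equal" strings are not equal to each other:
var c = ('' == '0');  // false: same type, so plain string comparison
// === never coerces, so it avoids the surprise:
var d = ('' === 0);   // false: different types
```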

Saturday, April 4, 2009

Overflow

The overflow property controls the behavior of a container when its content expands beyond its content area. It applies to the container of the content, not to the container of that container. Normally overflow does not happen, because the container of the content can expand vertically or horizontally. For text content in non-replaced elements, horizontal overflow will not happen because the text can wrap onto the next line, but vertical overflow is possible. Replaced elements, however, can overflow horizontally.

Horizontal Formatting of block elements

The "seven properties" of horizontal formatting are: margin-left, border-left, padding-left, width, padding-right, border-right, and margin-right. Padding and margins are set to 0 by default, and width is set to auto. This means the block element will try to expand as much as possible into its containing block's content space.

If width is auto, it makes no difference whether margin-left and margin-right are set to auto or just 0; either way, their computed value will be the same 0. But if width has a value and margin-left or margin-right is set to auto, that margin will expand or contract automatically. If both margins are set to auto and width has a value, the block's content will be horizontally centered.

The Containing Block

Every element is laid out with respect to its containing block; in a very real way, the containing block is the "layout context" for an element. For an element in the normal, Western-style flow of text, the containing block is formed by the content edge of the nearest block-level, table-cell, or inline-block ancestor box. You don't need to worry about inline elements, since the way they are laid out doesn't depend directly on containing blocks.

vertical-align

In CSS, the vertical-align property applies only to inline elements and replaced elements such as images and form inputs. vertical-align is not an inherited property. It can be one of baseline | sub | super | top | text-top | middle | bottom | text-bottom | <length> | <percentage> | inherit. The initial value is baseline.