This is my second post on applying the Repository, Specification and Unit of Work patterns using persistence-ignorant Plain Old CLR Objects (POCO) with the upcoming ADO.NET Entity Framework 4.0. This post will explain how to implement the Repository, Specification and Unit of Work patterns with Entity Framework 4.0. Sample project code is included.
When I began preparing this post and its source code, I was using the .NET Framework 4.0 Beta 1, Visual Studio 2010 Beta 1 and the ADO.NET Entity Framework Feature CTP 1. However, the .NET Framework 4.0 Beta 2 and Visual Studio 2010 Beta 2 made their debut on 21 October 2009, and I decided to upgrade the projects to the latest Beta 2 versions. So, the requirements to run the sample projects that come with this post have changed to .NET Framework 4.0 Beta 2 and Visual Studio 2010 Beta 2, which you can download at http://msdn.microsoft.com/en-us/vstudio/dd582936.aspx. I am using Visual Studio 2010 Ultimate Beta 2, which is downloadable as an ISO file at http://www.microsoft.com/downloads/details.aspx?FamilyID=dc333ac8-596d-41e3-ba6c-84264e761b81&displaylang=en. However, the Entity Framework Feature Community Technology Preview 2 was not yet available when I finished my code development on Visual Studio 2010 Beta 2. It only became available on 04 November 2009 and is downloadable at http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=13fdfce4-7f92-438f-8058-b5b4041d0f01. To learn more about the Entity Framework Feature CTP 2, please follow http://blogs.msdn.com/adonet/archive/2009/11/04/ado-net-entity-framework-community-technology-preview-released.aspx.
Because the Entity Framework Feature CTP 1 only installs on machines with .NET Framework 4.0 Beta 1 and Visual Studio 2010 Beta 1, I had to find a workaround to make sure my Northwind POCO entities and Northwind ObjectContext could still be generated from the two .tt files, Northwind.Types.tt and Northwind.Context.tt, which were produced by the Entity Framework POCO T4 generator in Feature CTP 1. The solution is to copy the EF.Utility.ctp.CS.ttinclude T4 include file from the includes directory of the Feature CTP 1 installation folder into the two projects, MyCompany.Data.AppName and MyCompany.PocoEntities.AppName. These two .tt files depend on EF.Utility.ctp.CS.ttinclude for the automatic code generation of the entities and object context.
Repository Pattern with ADO.NET Entity Framework 4.0 Beta 2
Okay. It is time for the technical stuff after much ado in the introduction and Entity Framework history. You may ask why we need this Repository pattern at all. The Repository pattern is a key pattern in Domain Driven Design. According to Martin Fowler,
“A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes. Conceptually, a Repository encapsulates the set of objects persisted in a data store and the operations performed over them, providing a more object-oriented view of the persistence layer. Repository also supports the objective of achieving a clean separation and one-way dependency between the domain and data mapping layers”.
Based on the above definition, the Repository only accepts and returns domain objects, never data-specific infrastructure objects such as the DataSet, DataRow or DataReader. The repository provides an in-memory-like collection interface for accessing domain objects. As far as the client component is concerned, it just uses the repository as a collection. The repository provides a clean separation of concerns by abstracting the Create, Read, Update and Delete operations performed against the data store, transparently translating them into the data access calls relevant to that store.
In effect, the domain model is decoupled from the need to store objects and access the underlying persistence infrastructure. In Domain Driven Design, as its name implies, we design our applications from the point of view of the domain, based on feedback and requirements gathering. No database is involved at this point, and by bringing the elements of a database into the picture, we would risk compromising the purity of the domain model just to support the persistence mechanism we have chosen.
Continuing our explanation, we get to another interesting pattern in Domain Driven Design: the Specification pattern, which helps formalize and declare conditions as a set of specifications that encapsulate business logic. With a specification, we split the logic of how a selection is made from the objects we are selecting. Having this small piece of logic sit on its own gives us a huge advantage over the traditional query pattern. Every time a criterion is added or changed, we only need to add a new concrete specification type or change an existing one, without having to add or modify Get/Find methods on the repository, which would otherwise keep increasing the number of Get/Find permutations. Please refer to Figure 1.0 for an overview of the Repository. This diagram is taken from Domain Driven Design Quickly by Abel Avram & Floyd Marinescu.
In Domain Driven Design, we have the notions of entity, value object, aggregate and aggregate root. What are entities and value objects? Domain Driven Design splits the idea of a business object into two distinct types: entity and value object.
An Entity is an object that has identity. This identity makes the object unique within the system. No two Entities are the same, no matter how similar their attributes, unless they share the same identity.
The identity that uniquely describes the Entity can be represented by anything, as long as it projects uniqueness. It can be a numeric identifier, e.g. a customer number, or it can be a GUID. Examples of Entities are Customer, Product and Order.
A Value Object can be defined as an object that lacks identity. The idea of a Value Object is to represent something by its attributes only. If two value objects have the same attributes, then they are identical. Examples of Value Objects are addresses, monetary amounts and dates. I can have multiple copies of a value object representing the date 01 Jan 2009 and they are all the same.
Aggregates are groups of Entities and Value Objects that belong together, and the Aggregate Root is the thing that holds them together. In the object-oriented world, we have one object referencing another, and it is no different in DDD; for example, a Customer object references its Order objects. Let's take an example. There are Order and Customer entities in an order entry system, and they can exist in the system independently. However, there are entities like Order Lines which depend on the Order entity. An Order Line can't exist in the system without its parent entity, Order, and it doesn't belong to any other Entity. In this case, the Order and its Order Lines can form one Aggregate, with the Order itself as the Aggregate Root. The Aggregate Root controls access to its children, the Order Lines.
The Aggregate Roots are the Entities that the Repository deals with. So, the Repository behaves like a collection of Aggregate Roots and acts as a facade between the domain and the persistence mechanism. In the above order entry system, we would have a Customer Repository and an Order Repository, but not an Order Line Repository, because access to the Order Lines is controlled by the Aggregate Root, Order. An Aggregate Root must be an Entity and cannot be a Value Object; the Repository doesn't know how to retrieve an Aggregate Root if it doesn't have identity.
The Repository Pattern Implementation
After all the examples and explanation, I am sure you are clear on the Repository. Let us go into our IRepository interface. I chose a generic repository, defined by the generic interface IRepository<T>, to maximize reuse. Later we will have a generic implementation of this interface that makes use of the Entity Framework to do most of the heavy lifting of object persistence and access internally. The IRepository<T> interface contains the method signatures for the set of operations the Repository provides for retrieving and updating entities.
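The original interface listing did not survive in this text; a minimal sketch of what an IRepository<T> along these lines might look like (member names are illustrative, not necessarily the exact ones from the sample projects):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public interface IRepository<T> where T : class
{
    IQueryable<T> GetQuery();                                // exposes IQueryable for further composition
    IEnumerable<T> GetAll();                                 // retrieves every entity of type T
    IEnumerable<T> Find(Expression<Func<T, bool>> criteria); // retrieves entities matching a predicate
    T Single(Expression<Func<T, bool>> criteria);            // retrieves exactly one matching entity
    void Add(T entity);                                      // registers a new entity for insertion
    void Delete(T entity);                                   // registers an entity for deletion
}
```

The predicates are declared as Expression<Func<T, bool>> rather than plain delegates so that the Entity Framework query provider can translate them into SQL.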
For example, a simple Linq query against the repository (reconstructed here, as only a fragment of the original listing survives, and assuming the repository exposes an IQueryable) might look like:

```csharp
var order = (from o in orderRepository.GetQuery()
             where o.OrderId == 1
             select o).FirstOrDefault();
```
Next, we declare the generic MyCompany.Data.Repository abstract base class to make it easy to create different implementations of IRepository generic interface. Figure 1.2 is the Repository abstract base class diagram that any specific or concrete Repository type must inherit from.
Figure 1.3 shows the EntityRepositories that implement the Repository abstract base class, the Northwind entity data model (NorthwindDataModel) and the Northwind ObjectContext (NorthwindContainer) generated from the T4 template Northwind.Context.tt.
Figure 1.4 shows the type of T4 generator to select from the Add New Item dialog box. The Entity Framework POCO Code generator produces two T4 templates: Northwind.Context.tt, which is responsible for generating the ObjectContext for the Northwind Entity Data Model, and Northwind.Types.tt, which is responsible for generating the POCO types for the Northwind Entity Data Model.
Note that I have only exposed a few of the widely used features of System.Data.Entity.ObjectContext, such as CreateObject<T>().
The custom configuration types keep and support the reading of the configuration information for the ObjectContext, the EntitiesRepository and the lifetime manager of the ObjectContext from the application configuration file. These types are located in the MyCompany.Configuration namespace under the MyCompany.Data.Entity project. If you are familiar with .NET's System.Configuration, then you will quickly figure out what the code and types inside MyCompany.Configuration and MyCompany.Data.Entity do. Please refer to Figure 1.5 for a list of the framework configuration base types and the configuration types used in our entity framework.
Should Repository Expose IQueryable?
There is an argument over whether we should expose IQueryable from the repository. As we know, IQueryable doesn't store the result of the execution; in the context of Entity Framework and Linq to SQL, it stores only the query needed to get the result, in a data structure called an expression tree.
If we pass the IQueryable object from the data layer to the business layer and finally to the presentation layer, we are relying on deferred execution, because execution doesn't occur until we call an operator like ToList(), ToDictionary() or Count() that must return a concrete value rather than another deferred IQueryable<T>.
The main concern is not deferred execution itself; deferred execution is good. It is the leak of data access concerns into wider levels of the application. There are a number of reasons behind this. First of all, how the IQueryable is resolved depends on the query provider underneath it: what works for Linq to Objects may not work with Linq to SQL, because not every expression can be parsed into SQL. Secondly, by allowing the query to be specified at any point in the application, a developer may write something like orderRepository.GetAll() that returns thousands or millions of orders and hurts application performance, without realizing it or through misuse. Hence, this leads to a loss of control over when and where the execution takes place.
Besides, it poses problems when developers define IQueryable queries in different places, which makes them harder to maintain and to reuse. Furthermore, if the queries are distributed across different layers and the domain model changes, several places need to be updated as well.
I find that exposing IQueryable provides Linq facilities which come in handy when we want to run a simple query over the repository anywhere in the application, and it allows chaining of extension methods to specify complex conditions.
Anyway, it all depends on how we want to use it, the architecture we have defined for the application, the scale of the software project and so on. For example, when the Repository pattern is used in a large-scale project, it is better to have all the queries defined and executed in the repository instead of passing queries across different layers, because this kind of application generally has hundreds of pieces of query logic. Having them in one centralized location promotes easy discovery of existing and new queries. It also prevents developers from coming up with queries that have a lot of similarities. This, of course, requires discipline on the developers' part, as well as development practices like proper documentation of the API and code reviews, to make sure all the standards and guidelines are followed properly.
In a small or medium project with a small team and codebase, exposing IQueryable shouldn't be a big deal compared to the benefits it provides.
Later, we will discuss the Specification pattern, which allows the encapsulation of business logic into a reusable business component.
Generic Repository vs. Specific Repository
A generic repository allows reuse of code because someone can write code that operates on Repository<T> directly, and with a Dependency Injection container, we can use various repositories in a polymorphic fashion.
However, from the DDD perspective, finding a place where this polymorphism would actually be used is extremely difficult because of the abstraction provided by the generic repository.
A generic repository widens the contracts that are exposed to the domain layer. Each generic contract may not be applicable to every domain entity in the domain model, as different entities may have different requirements from each other. For example, one can't delete a customer but can delete an order that has been placed. The entities in the domain model may all have different create, read, update and delete requirements. By having the client object call against the generic repository, Repository<T>, we will face problems at the implementation stage.
Hence, we introduce an anti-pattern: we end up throwing exceptions from the methods that are not implemented for a particular concrete type of the Repository<T>.
A query over the generic repository doesn't give any clue what the query actually does. For example, would you be able to tell what the actual query is by looking at the following line?
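The original one-line listing was lost; in the same spirit, it would have been something like this, where criteria is an expression built somewhere else by the caller:

```csharp
var customers = customerRepository.Find(criteria);
```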
Of course, I can trace it to the condition being passed by the caller, which can be in the service or presentation layer, but it is not intuitive enough to let me know what it does just by looking at it.
The Repository represents the domain's contract. This is crucial, as the contract tells the user every possible way the repository will interact with the data store. However, this is only possible if the repository's contracts are specific and unique enough. Consider the following comparison between the generic and specific repository approaches.
If we were to code/operate directly on the generic repository, then the code in the client object will look like
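The original listing is missing here; a reconstructed sketch of a client querying the generic repository directly (names are illustrative) could be:

```csharp
// the client must know, and repeat, the selection criteria itself
var customer = customerRepository
    .Find(c => c.CustomerID == "ALFKI")
    .FirstOrDefault();
```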
If we were to code/operate directly on the specific repository, then the code in the client object will look like
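Again the listing is missing; against a specific repository, the same client code would plausibly read:

```csharp
// the intent is explicit and the criteria live inside the repository
var customer = customerRepository.GetCustomerById("ALFKI");
```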
In the second example, we know what the contract to the data store actually is. It has a GetCustomerById() method which expects a customer id (the width of the contract is minimized). This is very straightforward and saves us the time of tracing the Find() method as in the first example. The composition approach in this second example gives us better control over which operations of the generic repository to expose. Since no delete operation is required for the customer, the specific customer repository doesn't provide one to the client object. This makes the customer repository simpler to understand and provides insight into what it can do.
Since the repository is part of the domain model being modeled, and the domain is not generic, using a specific repository is desirable. This leads to a clearer call pattern between the client object and the repository itself, as shown in the second example.
The generic repository allows reuse and has its place in DDD. In order to tap into its reusability while minimizing the width of the repository's contract within the domain, composition is used instead of inheritance, as shown in the second example.
The disadvantage of the specific repository approach is the growing number of Get methods as the query logic grows with new or changing requirements. However, this problem can be solved with the Specification pattern, by encapsulating all this query logic into a set of specifications in a reusable business component.
In the sample solution that comes with this post, the composition approach is used to tap the advantages of both the generic and the specific repository approaches. The generic repository is encapsulated within the specific repository instead of the concrete repository type inheriting from the generic repository type, so the interfaces or contracts of the generic repository are never exposed to the domain layer. The so-called specific repositories are the CustomerRepository and OrderRepository that you can find in the domain/business layer project, MyCompany.Business.AppName. The generic repository is encapsulated within the specific repository as in the following code segment.
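The original code segment did not survive; a sketch of the composition approach, assuming an IRepository<Customer> abstraction along the lines discussed earlier (method names are illustrative), might look like:

```csharp
public class CustomerRepository
{
    // the generic repository is composed, not inherited from,
    // so its full contract is never exposed to the domain layer
    private readonly IRepository<Customer> repository;

    public CustomerRepository(IRepository<Customer> repository)
    {
        this.repository = repository;
    }

    // only the operations the domain actually needs are exposed
    public Customer GetCustomerById(string customerId)
    {
        return repository.Single(c => c.CustomerID == customerId);
    }

    public void Add(Customer customer)
    {
        repository.Add(customer);
    }

    // note: no Delete() is exposed, since customers cannot be deleted
}
```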
So far in this post, I have not touched on how the sample solution that comes with it is organized and structured. So, let me take this opportunity to give you a high-level overview of the application architecture of my sample solution.
Domain Driven Design promotes a layered design. Figure 1.6 below shows the common application architecture diagram taken from Microsoft's Application Architecture Guide 2.0.
The application architecture is broken into four logical layers: the presentation layer, service layer, business layer and data layer.
The presentation layer is where the application's user interfaces reside. Its main purpose is to display the user interfaces and manage user interaction. This layer can be built with WinForms, ASP.NET or WPF technology.
Service/Application layer. This layer is not shown in the diagram, but it is employed when the application acts as a provider of services to other applications, as well as implementing features to support clients directly, e.g. web services. In this layer, we define the service interfaces, implement them (as in Windows Communication Foundation) and provide the translator components that translate between the business layer's data format and the external data contract. It is important that this layer never exposes the internal entities used by the business layer to the outside world.
The Business/Domain layer contains the domain model and our repositories. The domain model consists of classes that represent actual concepts from reality; in DDD terminology, they are the Entities and Value Objects mentioned previously. The Repository is not part of the domain model, but it is part of the domain layer because it represents the store view of the domain objects, underpinned by some form of persistence framework in the data layer. Whatever business processes and rules exist in the problem domain are contained here.
The Data/Infrastructure layer is where the persistence framework resides. In our sample code, this is the Entity Framework and the repository implementation. The class library contains classes which implement the domain's Repository interfaces and has dependencies on the core Entity Framework classes and libraries.
Figure 1.7 below shows how my sample solution has been structured and laid out according to the common application architecture.
The presentation layer is not included in this sample solution, as I have only made the unit tests for the Repository available. You can put your presentation project under the ApplicationSource folder. If your presentation is very complex and includes a lot of UI custom controls and libraries (Models/Views/Presenters and so on), then you can put your presentation projects under a new logical folder called, say, PresentationSource.
I have another logical folder, FrameworkSource, which stores codebase or libraries that can be reused and shared across different applications; the assemblies in this folder are not meant to be application specific. The IRepository interface, the Repository abstract base class, the ISpecification interface and so on in the MyCompany.Data.Entity assembly provide abstractions that are meant to be coded against directly, while also serving as interfaces to be implemented by their concrete types in the application-specific assembly. There are other class types in the Configuration folder under this project to support the custom configuration of the repository and the lifetime management of the ObjectContext. It also includes some default ObjectContext lifetime managers in the Provider folder which can be used out of the box. ObjectContext lifetime management will be discussed further in the Unit of Work pattern section.
There are other logical solution folders, like SharedLibray, which is meant to store the third-party libraries referenced by the application project. The Database logical folder stores the database scripts and other database-related artifacts used by the application project. The UnitTests logical folder, as its name implies, contains all the unit tests for the application project.
Overall, there is a clear separation of concerns between layers, except that the domain model entities (the Northwind POCO entities in MyCompany.PocoEntities.AppName) are shared by the data and business layers. The domain entities are moved into their own assembly so that both the business and data layers rely on the domain entities as shared contracts.
The Specification Pattern
According to Martin Fowler, the idea of Specification is to separate the statement of how to match a candidate, from the candidate object that it is matched against. As well as its usefulness in selection, it is also valuable for validation and for building to order. For the wiki explanation, please click here. You can also follow the Specification white paper by Martin Fowler if you want to indulge yourself deeply into this programming pattern.
In simple terms, a specification is a small, independent piece of logic that answers the question "does this match?". With the Specification pattern, we isolate the logic that performs the selection into a reusable business component that can be passed around, separate from the entity we are selecting.
There is a link between specifications and repositories, in that repositories often act as the collections against which specifications are applied to produce a set of matching results. Good candidates for the Specification pattern are the validation and business rules in a real-world business application. For example, there could be a business rule that distinguishes promotional from non-promotional products, where a promotional product has a unit price of more than 100 and a quantity in stock of more than 100. These promotional and non-promotional products can be defined with the Specification pattern, which can then be reused not only to query them but also in business operations that need to print out the promotional product items on an ad-hoc basis.
Let's look at a hypothetical business entity example.
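The entity listing is missing from this text; based on the discussion that follows, it would have looked roughly like this (only the relevant members are shown):

```csharp
public class Product
{
    public int ProductID { get; set; }
    public string ProductName { get; set; }
    public decimal UnitPrice { get; set; }
    public short UnitsInStock { get; set; }

    // the selection criteria are baked into the entity itself
    public bool IsPromotionalProduct
    {
        get { return UnitPrice > 100 && UnitsInStock > 100; }
    }
}
```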
The above business entity encapsulates the condition or criteria (UnitPrice > 100 && UnitsInStock > 100) of what defines the product as a promotional item inside the public bool property, IsPromotionalProduct. This couples the selection criteria to the entity and forces the domain logic to live in the Product entity. As the number of questions we want to ask increases, the Product entity becomes polluted.
Even if we are not so concerned with the number of these properties polluting the entity, the selection criteria for each of them may change over time. For example, the selection criteria for the IsPromotionalProduct property may change to (UnitPrice > 50 && UnitsInStock > 50) when business is not good.
So, the above IsPromotionalProduct property might be better written as follows.
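The original rewrite did not survive; one plausible form extracts the criteria into their own named piece of logic so that a threshold change happens in only one place (ProductRules is a hypothetical helper, not from the sample projects):

```csharp
public static class ProductRules
{
    // the promotional criteria live in one place and can change independently
    public static bool IsPromotional(Product product)
    {
        return product.UnitPrice > 100 && product.UnitsInStock > 100;
    }
}

public partial class Product
{
    public bool IsPromotionalProduct
    {
        get { return ProductRules.IsPromotional(this); }
    }
}
```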
Following the Specification pattern, we can define our business rules in a reusable business logic component that can be passed around to satisfy certain business criteria. In other words, a specification can be used as a parameter to a service, to filter a set of business objects, or as a business rule applied to a business object before it is persisted to the data store.
Another typical use in such a system is to get a list of promotional product items. To implement this effectively, the IsPromotionalProduct predicate needs to be expressed in a form (a lambda expression) that can select the records from the data store.
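The missing listing presumably queried the repository with the predicate inline, something like:

```csharp
// the promotional criteria are repeated inline as a lambda expression,
// which the Entity Framework query provider can translate into SQL
var promotionalProducts = productRepository
    .Find(p => p.UnitPrice > 100 && p.UnitsInStock > 100)
    .ToList();
```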
In the above example, the IsPromotionalProduct predicate, (p => p.UnitPrice > 100 && p.UnitsInStock > 100), is subject to duplication in other parts of the application.
As the number of queries we have to perform on the Product Repository increases, the number of specialized Get methods increases as well, i.e. interface bloat. For example, GetDiscontinuedProducts, GetProductFallBelowReorderLevel and so on.
The way to avoid this is to use the Specification pattern, where we extract the selection criteria or logic into a Specification class. The basic implementation of the Specification has a single method, IsSatisfiedBy. The actual implementation varies in how it is defined, but the simple version is:
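A minimal sketch of that simple version:

```csharp
public interface ISpecification<T>
{
    // answers the question "does this candidate match?"
    bool IsSatisfiedBy(T candidate);
}
```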
However, in our sample solution, we are going to harness the power of the lambda expression as a predicate. Hence, our Specification interface ends up as the following:
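The interface listing is missing here; given the description in the next paragraph, it would be along these lines:

```csharp
using System;
using System.Linq.Expressions;

public interface ISpecification<T>
{
    // the matching criteria as an expression tree, so Linq providers can parse it
    Expression<Func<T, bool>> Predicate { get; }
}
```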
The generic interface ISpecification<T> has one property, Predicate, that returns the specification's matching criteria as an Expression<Func<T, bool>>. Note that it is critical that the matching criteria are of type Expression<Func<T, bool>>; this ensures we have access to the expression parsing power of Linq. Simply using a Func<T, bool> or a Predicate<T> would break this implementation. We also need to create a generic Specification class that allows us to create Specification instances.
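The class listing did not survive; a sketch consistent with the surrounding description:

```csharp
using System;
using System.Linq.Expressions;

public class Specification<T> : ISpecification<T>
{
    private readonly Expression<Func<T, bool>> predicate;

    // the matching criteria are supplied as a lambda expression
    public Specification(Expression<Func<T, bool>> predicate)
    {
        if (predicate == null) throw new ArgumentNullException("predicate");
        this.predicate = predicate;
    }

    public Expression<Func<T, bool>> Predicate
    {
        get { return predicate; }
    }
}
```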
Notice that the generic Specification class handles all of the matching work for the specification. Any inheriting specification simply needs to specify its relevant matching criteria in the form of a lambda expression and pass it to the Specification constructor.
Basically, the Specification constructor takes in the Expression<Func<T, bool>> predicate and exposes it through the Predicate property.
There are many ways you can go about creating your specifications. You can inherit your specific product specification from the generic Specification class, or create a static ProductSpecification class that returns all the product-specific specifications via its methods. Below is the code of a simple specification implementation. Notice that it has no responsibility other than the selection criteria it represents.
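The listing is missing; a simple implementation in the static-class style just described might be (the Discontinued flag follows the Northwind Product schema):

```csharp
public static class ProductSpecification
{
    // a promotional product: unit price over 100 and more than 100 units in stock
    public static Specification<Product> PromotionalProduct()
    {
        return new Specification<Product>(p => p.UnitPrice > 100 && p.UnitsInStock > 100);
    }

    // a product that has been discontinued
    public static Specification<Product> DiscontinuedProduct()
    {
        return new Specification<Product>(p => p.Discontinued);
    }
}
```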
Anyway, the above samples are just simple implementations of the Specification pattern. The true power of specifications lies in the ability to compose different specifications using AND, OR and NOT relationships between them. In our product specifications, you may ask: what if I want to know which promotional product items are also discontinued product items? In this scenario, the Composite Specification pattern allows us to chain specifications together to create complex business rules that involve several rules nested together. Our code may look something like the following.
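The original snippet is gone; assuming the Specification class overloads the & operator (the sample projects do not include these overloads, as noted below), the composite might be used like this:

```csharp
// composite: promotional AND discontinued
var spec = ProductSpecification.PromotionalProduct()
         & ProductSpecification.DiscontinuedProduct();

var products = productRepository.Find(spec.Predicate).ToList();
```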
You will need to overload the C# binary & and | operators in the generic Specification class. If you prefer, you can use And and Or methods as the logical operators instead. You can find references on how to overload these operators to work with the Specification class on the web. I have not included the operator overloading code in the sample projects that come with this post.
ObjectContext Lifespan Management & Scope with Unit of Work Pattern
Let me explain what the Unit of Work pattern is. According to Martin Fowler, a Unit of Work "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems."
A Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you're done, it figures out everything that needs to be done to alter the database as a result of your work.
Generally speaking, the Unit of Work pattern sounds something like transactions, and I am sure we all know what those mean, especially those of us developing business applications. In ADO.NET, we call BeginTransaction on the DbConnection to get a DbTransaction object, ensuring that a logical group of operations is either committed successfully as a whole or not committed at all if any operation within the group throws an exception or fails. A partial commit is meaningless from the business operation point of view. For example, suppose I have an e-portal that receives an order and performs order processing once payment is made. The order processing consists of creating an entry for the authorized payment, reserving the order by deducting the stock quantity in the database, and finally creating a fulfillment record in the database for the order to be dispatched. If one of these operations fails, the whole order processing fails. Without a transaction, the order processing would be left in an inconsistent state, and if we had to roll back the changes manually, we would have to write extra code to revert them, which is tedious and cumbersome. Since .NET 2.0, we have had TransactionScope. It can be used to make a block of code transactional by placing the code inside a using block. It is simpler and more straightforward than operating on a DbTransaction, which requires more code. Code executed within the scope automatically participates in the transaction. The transaction commits when the Complete() method is called, persisting all the changes, and aborts if an error happens within the scope.
So, DbTransaction and TransactionScope provide the mechanisms to achieve what the Unit of Work pattern defines. TransactionScope, however, provides an implicit programming model, in which transactions are automatically managed by the System.Transactions infrastructure.
In the .NET Entity Framework, we actually get the unit of work for free, implicitly, from the ObjectContext instance. This is possible because of the change tracking provided by the Entity Framework, whereby the ObjectStateManager is notified of these changes. An ObjectStateManager instance is associated with each instance of the ObjectContext; its purpose is to maintain object state and identity management for entity type and relationship instances. System.Data.Objects.ObjectContext has a SaveChanges() method which can accept a System.Data.Objects.SaveOptions value. Setting its value to None tells the Entity Framework to execute the necessary database commands but hold on to the changes, so they can be replayed if necessary. Changes made through the ObjectContext are committed to the actual persistence store only once AcceptAllChanges() is issued; the Entity Framework then resets the change tracking in the ObjectStateManager instance associated with the ObjectContext. This inherent change tracking provides a way for us to recover from failure. For example, say we have a web application that inserts or updates order confirmation details once an API call to the payment gateway returns a success flag. Prior to the API call to the payment gateway, we can have inserted or updated the order confirmation details in the ObjectContext with a call to SaveChanges() with the System.Data.Objects.SaveOptions parameter set to None. Once the success flag is returned from the payment gateway's API call, the AcceptAllChanges() method is invoked to persist the changes in the persistence store. Otherwise, we can simply discard the changes by disposing of the ObjectContext.
If the ObjectContext instance is shared across the entire application, we can refresh the ObjectContext with the original values from the persistence storage through a call to the Refresh() method, setting the RefreshMode parameter to StoreWins. Any changes made to the entity objects in the ObjectContext will then be replaced with values from the persistence storage. However, this is only feasible if you don't mind your entity objects ending up with values different from what you originally read, because within the interval between the last read and the refresh, someone else may have changed them. A more robust way to undo the changes is to enclose the ObjectContext work and the API call to the payment gateway in a TransactionScope instance. Within the TransactionScope, we insert or update the order confirmation object and then call SaveChanges(SaveOptions.None) on the ObjectContext instance. We then check the approval status flag returned from the payment gateway. If the approval is successful, we complete the TransactionScope and invoke AcceptAllChanges() on the ObjectContext instance. If the block of code that completes the scope and accepts all the changes in the ObjectContext is never reached, then on exiting the scope the changes made through the ObjectContext will be undone. The following code segment shows how this can be achieved.
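A sketch of the TransactionScope approach just described, again assuming a NorthwindObjectContext and a hypothetical paymentGateway service:

```csharp
using System;
using System.Data.Objects;
using System.Transactions; // reference System.Transactions.dll

using (var scope = new TransactionScope())
using (var context = new NorthwindObjectContext())
{
    var order = new Order { CustomerID = "ALFKI", OrderDate = DateTime.Now };
    context.Orders.AddObject(order);

    // Execute the database commands inside the ambient transaction,
    // but do not reset the change tracker yet.
    context.SaveChanges(SaveOptions.None);

    if (paymentGateway.Authorize(order)) // hypothetical gateway call
    {
        // Commit the ambient transaction, then accept the changes.
        scope.Complete();
        context.AcceptAllChanges();
    }
    // If scope.Complete() is never reached, exiting the using block
    // rolls the transaction back and the changes are undone.
}
```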
The Entity Framework thus inherently provides unit of work support, and introducing the UnitOfWorkScope makes it more explicit.
Before I go into the Unit of Work pattern implementation, I will briefly discuss the pluggable model for the ObjectContextLifetimeManager used in the sample code, a simple way to manage the lifespan of the ObjectContext instances in the application. Please refer to Figure 1.8 for the object model that depicts the interface, abstract and concrete types of ObjectContextLifetimeManager used to manage the lifespan of the ObjectContext instances encapsulated in the Repositories.
Scenario of using StaticObjectContextLifetimeManager
Since the ObjectContext instances are meant to live until the application terminates, there is nothing to implement in its Dispose() method to clean them up; as the application terminates, the system resources used by these instances will be released. Each Repository instance has access to its context name, which is supplied to the ObjectContextLifetimeManager interface instance to return the correct ObjectContext instance that the Repository requires. The following code segment shows how the Repository abstract base class has its correct ObjectContext instance returned through its protected read-only ObjectContext property. The Repository abstract base class implements the IDisposable interface to release the resources allocated by the derived ObjectContextLifetimeManager instance.
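The original code segment is not reproduced here, but based on the description it can be sketched as follows. The IObjectContextLifetimeManager interface, its GetObjectContext(name) method and the constructor shape are assumptions reconstructed from the surrounding text, not the sample project's exact code.

```csharp
using System;
using System.Data.Objects;

// Assumed shape of the pluggable lifetime manager contract.
public interface IObjectContextLifetimeManager : IDisposable
{
    ObjectContext GetObjectContext(string contextName);
}

public abstract class Repository<T> : IDisposable where T : class
{
    private readonly IObjectContextLifetimeManager _contextManager;
    private readonly string _contextName;

    protected Repository(IObjectContextLifetimeManager contextManager,
                         string contextName)
    {
        _contextManager = contextManager;
        _contextName = contextName;
    }

    // Each repository supplies its context name to the lifetime manager,
    // which returns the correct ObjectContext instance.
    protected ObjectContext ObjectContext
    {
        get { return _contextManager.GetObjectContext(_contextName); }
    }

    public void Dispose()
    {
        // Delegate clean-up to the lifetime manager; for the
        // StaticObjectContextLifetimeManager this is a no-op.
        _contextManager.Dispose();
    }
}
```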
Scenario of using ScopeObjectContextLifetimeManager
The UnitOfWorkScope is used to implement the Unit of Work pattern, and it has the same semantics as the TransactionScope discussed previously. Code that executes within a UnitOfWorkScope automatically participates in that unit of work. The ScopeObjectContextLifetimeManager in the sample project allows us to use one or more ObjectContext instances for a series of operations that atomically belong together from the business perspective, like the e-portal order processing discussed previously. The ScopeObjectContextLifetimeManager achieves this by relying on the UnitOfWorkScope to get references to the ObjectContext instances currently engaged in the unit of work before returning them to the Repository. The following shows the code segment for the UnitOfWorkScope.
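Since the original code segment is not included here, the following is a sketch of what the UnitOfWorkScope could look like given the behavior described. The thread-static ambient scope and the IUnitOfWork members are assumptions reconstructed from the text.

```csharp
using System;
using System.Data.Objects;

// Assumed unit-of-work contract used by the scope.
public interface IUnitOfWork : IDisposable
{
    ObjectContext GetObjectContext(string contextName);
    void Commit();
}

public sealed class UnitOfWorkScope : IDisposable
{
    // One ambient scope per thread, mimicking TransactionScope's
    // ambient-transaction behavior.
    [ThreadStatic]
    private static UnitOfWorkScope _current;

    private readonly IUnitOfWork _unitOfWork;

    public UnitOfWorkScope()
    {
        _unitOfWork = UnitOfWorkFactory.GetUnitOfWork();
        _current = this;
    }

    // Repositories (via ScopeObjectContextLifetimeManager) look up the
    // ambient scope so they all share the same ObjectContext instances.
    public static UnitOfWorkScope Current
    {
        get { return _current; }
    }

    public ObjectContext GetObjectContext(string contextName)
    {
        return _unitOfWork.GetObjectContext(contextName);
    }

    // Persistence is delegated to the encapsulated UnitOfWork.
    public void Save()
    {
        _unitOfWork.Commit();
    }

    public void Dispose()
    {
        _current = null;
        _unitOfWork.Dispose();
    }
}
```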
The UnitOfWorkScope actually mimics part of the behavior of the .NET Framework's TransactionScope class, which allows our Repository instances to be aware of the scope they belong to and, at the same time, ensures they share the same ObjectContext instance they are supposed to use in order to achieve the Unit of Work pattern. However, the responsibility for persisting the changes to the underlying store is delegated to a derived type of the UnitOfWork class. If you look at the code segment of UnitOfWorkScope, its Save() method merely calls the Commit() method of the UnitOfWork derived type instance to do the data persistence. The UnitOfWorkScope merely encapsulates the UnitOfWork instance and initializes it through the UnitOfWorkFactory. The UnitOfWorkFactory is a simple factory class containing the logic to return an instance of a UnitOfWork derived type through its GetUnitOfWork() method. The following code segment shows the UnitOfWorkFactory implementation.
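The original factory code is not reproduced here; a minimal sketch, consistent with the lifetime managers discussed in this post, could look like the following. The concrete type names (AspNetUnitOfWork aside, which the text does name) and the HttpContext-based selection logic are assumptions.

```csharp
using System.Web;

public static class UnitOfWorkFactory
{
    // Simple factory logic: pick the UnitOfWork derived type that
    // matches the hosting environment.
    public static IUnitOfWork GetUnitOfWork()
    {
        if (HttpContext.Current != null)
        {
            // ASP.NET scenario: ObjectContext instances live in
            // HttpContext.Current.Items for the duration of the request.
            return new AspNetUnitOfWork();
        }

        // Non-web scenario: ObjectContext instances are scoped to the
        // current thread (hypothetical type name).
        return new ThreadScopedUnitOfWork();
    }
}
```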
We now proceed to the OrderRepository code segment, taken from the MyCompany.Business.AppName domain layer project in the sample solution, to see how the OrderRepository makes use of the UnitOfWorkScope to achieve a business transaction.
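In the absence of the original segment, the usage can be sketched roughly as below. The OrderRepository and ProductRepository members shown (GetById, Add) are illustrative assumptions, not the sample project's exact API.

```csharp
using System;

// Group several repository operations into one business transaction.
using (var scope = new UnitOfWorkScope())
{
    var orderRepository = new OrderRepository();
    var productRepository = new ProductRepository();

    // Both repositories resolve the same ObjectContext from the ambient
    // scope, so the changes below belong to one unit of work.
    Product product = productRepository.GetById(42);
    product.UnitsInStock -= 1;

    orderRepository.Add(
        new Order { CustomerID = "ALFKI", OrderDate = DateTime.Now });

    // Persist both changes together; Save() delegates to the
    // encapsulated UnitOfWork's Commit() method.
    scope.Save();
}
```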
Compared to the StaticObjectContextLifetimeManager, which keeps its ObjectContext instances global and shared by all threads, the ScopeObjectContextLifetimeManager has its ObjectContext instances shared only by code executing on the current thread.
Scenario of using AspNetObjectContextLifetimeManager
This is the last ObjectContextLifetimeManager we are going to discuss. There is not much to explain here, as it caters for the ASP.NET web application scenario. It works the same way as the ScopeObjectContextLifetimeManager; the only difference is that it relies on HttpContext.Current to store the ObjectContext instances, ensuring their lifespan is long enough to service each user request and that the business transaction is achieved through the UnitOfWorkScope that manages them. The AspNetUnitOfWork stores them in the HttpContext.Current.Items dictionary so that the Repositories engaged in the business transaction always get the same ObjectContext instance for a particular HTTP user request.
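The per-request storage can be sketched as follows, keyed by context name in HttpContext.Current.Items. The IObjectContextLifetimeManager interface, the ObjectContextFactory helper and the method names are assumptions for illustration.

```csharp
using System.Data.Objects;
using System.Web;

public class AspNetObjectContextLifetimeManager : IObjectContextLifetimeManager
{
    public ObjectContext GetObjectContext(string contextName)
    {
        var items = HttpContext.Current.Items;

        // Create the context once per HTTP request, so every repository
        // participating in that request shares the same instance.
        var context = items[contextName] as ObjectContext;
        if (context == null)
        {
            context = ObjectContextFactory.Create(contextName); // hypothetical
            items[contextName] = context;
        }
        return context;
    }

    public void Dispose()
    {
        // The context lives for the duration of the request; dispose it
        // at EndRequest rather than per repository.
    }
}
```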
Download the sample project code, repository-specification-unitofwork-v1-1.pdf, by right-clicking here and choosing Save Target As. Rename the extension from .pdf to .zip once you have downloaded the file. Let me know if you have any problems running the code due to anything I may have overlooked.
The sample project code does not take into account the need for optimistic concurrency and always assumes last-in-wins. So, in my next post I may write about concurrency handling or the self-tracking entities made available in Entity Framework 4.
Please be advised that this code is not really production ready; it needs a lot of testing and experimenting, and it may not be suitable for all application needs and scenarios. So, please go through the code and give me comments and feedback on this post, so that I will know about the mistakes I have unknowingly made and can improve further.
Last but not least, I would like to thank the authors of the blogs and web sites listed in my references section. Without them, I would not have been able to write so much. I also want to thank those who have guided me in my past careers.
See you in my next post.
Bye & stay tuned...
References
- ADO.NET Team Blog -> http://blogs.msdn.com/adonet/default.aspx
- Entity Framework Design -> http://blogs.msdn.com/efdesign/
- Domain Driven Design Quickly by Abel Avram & Floyd Marinescu -> http://www.infoq.com/minibooks/domain-driven-design-quickly
- Charlie Calvert’s Community Blog on Linq and Deferred Execution -> http://blogs.msdn.com/charlie/archive/2007/12/09/deferred-execution.aspx
- Greg Young : DDD : The Generic Repository -> http://codebetter.com/blogs/gregyoung/archive/2009/01/16/ddd-the-generic-repository.aspx
- Entity Framework 4.0 Beta 1 – POCO, ObjectSet, Repository and UnitOfWork -> http://devtalk.dk/2009/05/26/Entity+Framework+40+Beta+1+POCO+Lazy+Load+And+Code+Generation+Of+ObjectContext.aspx
- Microsoft’s Application Architecture Guide 2.0 -> http://apparchguide.codeplex.com/