Rapid C# Windows Development
Visual Studio 2005, SQL Server 2005, & LLBLGen Pro

Quickly Build Robust, Database-Driven Applications

Joseph Chancellor
© Joseph Chancellor 2006
© Joseph Chancellor 2006. All rights reserved. No part of this book may be reproduced, stored in a database of any kind, or transmitted in any way, form, or means without the consent of the author, except for brief quotations for reviews or articles. The information in this book is provided without any express or implied warranty. The author, publisher, and any distributor are not responsible for any damages or loss arising directly or indirectly from the use of this book.

First Edition, February 2006. Published by Lulu.com.

Cover Photo: QiangBa DanZhen, iStockPhoto.com photographer, http://www.photosource.com/2650
Back Photo: Elena Slastnova, iStockPhoto.com photographer, http://hiero.ru/Yougen
Cover and Book Design: Joseph Chancellor

Visual Studio .NET 2005, SQL Server 2005, ClickOnce, Visual C#, and IntelliSense are trademarks of the Microsoft Corporation. LLBLGen and LLBLGen Pro are trademarks of Solutions Design.
Table of Contents

1: Introduction.....1
    Commanding the Army.....1
    Prerequisites.....2
    O/R Mappers.....4
    Strongly-Typed Objects.....5
    Native Language Filter Construction.....7
    Strongly-Typed DataSets vs. LLBLGen Pro.....8
    N-Tier Application Design.....11
    Stored Procedures, Inline, and Dynamic SQL.....11
    Data Type Conversion.....14
    Visual Studio Advantages.....14
    Summary.....14
2: The O/R Mapper.....17
    Preparing the database.....17
    Our schema ... AdventureWorks.....18
    LLBLGen Pro Objects.....18
    Scanning the Schema.....19
    Creating Entities.....21
    Entity Options.....22
    Adding Views.....26
    Creating Typed Lists.....28
    Adding Stored Procedures.....30
    Self Servicing vs. Adapter.....31
    One Class vs. Two Classes.....32
    Generating the Code.....34
3: Solution Setup.....37
    Creating the solution.....37
    Adding references.....39
    App.config settings.....40
    LLBLGen Pro Project Overview.....41
4: MDI Parent Form.....43
    MDI Parent.....43
5: Simple Forms.....49
    Calling a Stored Procedure.....49
    Alternative Approach Using Entities and Relationships.....54
    Using Typed Views.....58
    Using Views as Entities.....63
    Using Typed Lists.....64
    Summary.....64
6: Search Form.....65
    Order Search Form.....65
7: Edit Form.....75
    Entity Extensions.....75
    Form Layout.....76
    Saving Entity Data.....81
    Creating New Entities.....82
    Deleting an Entity.....83
8: Validation.....85
    Field Validation.....85
    Entity Validation.....91
9: Transactions.....95
    Preparing for the Unexpected.....95
    Isolation Levels.....97
10: Tuning.....101
    Enhancing Performance.....101
    SQL Server Profiler.....101
    LLBLGen Pro Tracing.....105
    Simple Prefetching.....106
    Complex Prefetching.....107
    Multi-threading.....108
    Code Cleaning.....112
11: Deployment.....115
    Measure Twice, ClickOnce.....115
    Certificates.....116
    Strong Name Key.....117
    ClickOnce Configuration.....117
    Deployment.....121
    Installation.....121
12: Regenerating Code.....123
    Inevitable Changes.....123
    Minor Changes.....123
    Major Changes.....129
13: Appendix 1.....133
14: Afterword.....135
Introduction

“A good plan, violently executed now, is better than a perfect plan next week.”
— George S. Patton (1885–1945)
“Never put off until run time what can be done at compile time.”
— David Gries, Compiler Construction for Digital Computers
Commanding the Army

After a long and bloody fight for the territory of ancient China, Liu Bang finally prevailed over his rivals. Born into a peasant family, Bang became the first emperor of the Han Dynasty. Known as a bold and arrogant man, he discussed his recent military successes with his finest general.

“How many soldiers could a person like myself command?” the Emperor asked confidently.

“Your Majesty could command but 100,000 men,” the general stated matter-of-factly.

“Then how about you?” the Emperor asked.

“In my case,” the general replied, “the more, the better.”

The Emperor laughed with surprise. “If that is so, then why are you my subject?” he asked.

“Your Majesty’s talent lies not in the commanding of troops,” the wise general replied. “Your Majesty’s talent lies in the commanding of generals.”

As a software developer, you also command a vast army of systems and architecture with almost limitless possibilities. Unfortunately for the developer, however, the technologies change almost hourly. Unlike the soldiers of yesterday, the code warrior of today is likely to find his or her weapons obsolete and battle plans insufficient for new challenges that did not even exist last week.

The concept for this book arose from a set of new programming tools and techniques that are not yet widely adopted in the developer community. Much of the reason for this oversight is a lack of understanding, and it is the aim of this book to help correct that deficiency. These tools are very similar to the Chinese general of the Han Dynasty: they are ruthlessly effective at what they do, and with them you can literally save hours, days, and weeks of development time and code maintenance. Do not make the mistake of becoming a micro-managing Emperor: you do not need to do it all yourself. You are not trying to directly command the most troops (or personally write the most lines of code). You are trying to win the war! And you are about to take a major step forward.
Through the exercises in this workbook, you will learn how to use C# with Visual Studio .NET 2005, SQL Server 2005, and an invaluable tool called an O/R Mapper (in this case, LLBLGen Pro) to rapidly develop database-driven applications. You will gain an understanding of the benefits of using these technologies, and you will see the complete process from start to finish, including scanning the database schema, generating code, adding business logic, and building the user interface. We will also cover validation, transactions, performance tuning, multi-threading, and deployment. While there are books, websites, and documentation that cover each of these technologies individually, this book aims to demonstrate how they can be used together as a rapid and robust solution, giving the developer a practical walk-through with a multitude of explanations, diagrams, and screenshots. There are countless other methods and languages that a programmer could use to meet his or her database application needs. However, the methods in this book are expedient and have been proven to work. Judge for yourself as we walk through building a Windows application together.
Chapter 1
Who is this book for?

• Beginners who have some experience programming in C# and are familiar with basic programming terminology.
• Intermediate developers who want to learn new techniques to shorten their development time.
• Technology strategists who are investigating this approach in order to choose a platform for a project.
• Those who want to put together a quick-and-dirty proof of concept for a database-driven application.
• Those who have some familiarity with SQL programming. (You do not need to be an expert, but you do need a basic understanding of SQL syntax to use and understand the generated framework.)
• Those wishing to learn cutting-edge development skills and techniques.
• Small development teams. Using these methods, even a single developer can build a powerful database-driven application and deploy it to their organization in a short amount of time. No large budget or department of developers required.
Who isn’t this book for?

• Non-Windows developers.
• Developers who are completely unfamiliar with object-oriented programming. We are not going to explain all the details of OOP, and you may not understand the concepts or code if object-oriented programming is completely new to you.
• Developers with no SQL programming experience. While LLBLGen Pro will help you compensate for weakness in SQL, you may have a difficult time understanding and using the framework to write queries if you have no understanding of SQL syntax. Fundamentally, LLBLGen Pro is a wrapper (and more) for SQL, and knowing the basics will ensure that you can take full advantage of the tools.
• Those who are so familiar with SQL that they think in stored procedures and refuse to learn new techniques. You will want to avoid the temptation to just write a stored procedure instead of figuring out a way to use the generated framework. Like anything else, learning these new techniques takes time. Be patient, and you will achieve the results you want.
How to read this book

This book is a practical walk-through that builds a sample application, step by step, with directions, screenshots, and code samples. It is intended to provide all the information necessary for a beginner to fully learn and grasp the tools. To gain maximum benefit, be sure to follow along with the sample application, performing the directions on your machine as they are explained in the text. However, if you are an intermediate user more familiar with the underlying concepts, you may elect to simply read the chapters and examine the code samples until you understand what is presented. Optionally, you can create your own project while reading through the book, using the concepts presented and making substitutions where necessary to apply them to your database schema and your application requirements.
Prerequisites

In order to fully utilize this book, you will need:

Visual Studio .NET 2005 (highly recommended)
In full release as of November 2005, this software can be purchased from Microsoft at http://msdn.microsoft.com/howtobuy. Visual Studio .NET 2005 comes in a variety of versions and prices. Look for a version that includes C# and that allows you to develop class libraries and Windows applications. (You can use older versions of Visual Studio, but you will miss some of the newer, time-saving features. Refer to Figure 1.1 for a detailed comparison of the versions of Visual Studio .NET 2005 available and the versions which include the features necessary to take full advantage of this book.)

Figure 1.1. Visual Studio .NET 2005 version and feature comparison. Source: http://msdn.microsoft.com/vstudio/products/compare/
*This book focuses on Windows forms, although many of the techniques presented are also transferable to web forms.

SQL Server 2005 (or other compatible database required)
In this book we will be building database-driven applications with SQL Server 2005. Microsoft offers a variety of SQL Server products, including a limited version that is available for free and developer versions that accompany specific versions of Visual Studio .NET 2005. Please note that we will not be covering the creation of your database or the design of your schema; we assume that you have already developed your database or have one available. Instead of SQL Server 2005, you can also use previous versions of SQL Server, Microsoft Access, Firebird, Oracle, or MySQL, and for the most part the process will be the same. Note, however, that SQL Server 2005 was used exclusively in the creation of this book, so if you use another database application, your results may vary.

LLBLGen Pro (required)¹
Version: 1.0.2005.1. LLBLGen Pro is an O/R mapper. This tool will take an existing database schema and generate a data access tier (and more!) in a matter of seconds. LLBLGen Pro is available from http://www.LLBLGen.com for about $270 USD (EUR 229) and can be used by your entire development team. Although the product is an extra item to purchase, it is an invaluable tool for developers working with databases (a fully-functional 30-day demonstration version is also available and will allow you to work through the exercises in this book). Keep in mind that there is a free version of LLBLGen available, but we will not be using it because it uses stored procedures exclusively and does only a fraction of what the current version will do. The original version was entirely reengineered, rewritten, and released as LLBLGen Pro. You will not be able to follow any of the code samples in this book without the retail version of LLBLGen Pro.

1) Please note that the products discussed in this book are recommended on their merit alone; the author is not employed by Solutions Design (the creators of LLBLGen Pro) or any other software company, and does not receive any kind of commission or compensation from any of these companies.

Figure 1.2. Lines of code generated/automated versus hand-coded in each project in this book

You may be asking yourself, “Aren’t those programs expensive? Why not do it by hand?” Our answer is: these programs save time! We prioritize speed (but not at the expense of reliability or maintainability), and we therefore recommend that you consider how much time and energy these programs can save you, not only their prices. The projects developed in this book serve as a good example (Figure 1.2). While you could write the same Windows application by hand without Visual Studio, the automation that Visual Studio affords is tremendous. In our example application alone, about 68% of the code in the Windows application project is generated automatically. Most of this code is written as elements in the GUI are configured visually; only 32% of the code was written entirely by hand. The difference is even more pronounced using LLBLGen Pro in the application’s class library, which contains the business logic and data access layer (these terms are discussed later). In this project, over 98% of the code is automatically generated. What the developer adds by hand amounts to a mere 1,139 lines of code, less than 2%. (Keep in mind that this application is in the beginning stages of development, so you will be adding much more custom code.) Another benefit of the code written by Visual Studio and LLBLGen Pro is that it is well commented, well spaced, and easy to read, which adds to the line count of generated code. The main point, however, is that with these tools you can take a great leap forward on Windows projects and projects that use databases. Automating repetitive code is a major part of rapid development, and it is worth paying for. Now, let’s take a look at the other concepts and principles that make this method of development a desirable choice.
O/R Mappers

An O/R Mapper creates classes defining objects that correspond to the structure of your database. Every row becomes an entity, and every table becomes an entity collection. The fields of the database table become public properties of the entity object. The framework also builds in constructors and other useful methods to find entity objects, set their properties, and save them back to the database with just a few lines of code.
Figure 1.3. Two related database tables
If you are unfamiliar with O/R Mappers, take a look at Figure 1.3. Here are two tables from a database. The tables are named Individual and AddressBook, and you will notice a relationship between the two. After running an O/R Mapper on this schema, you will get a class library that you can immediately begin referencing in your projects. To use a row from this database, you could write the following code in your project (Example 1.4).

1 // C# Example
2 IndividualEntity MyIndividual = new IndividualEntity(23);
3 MyIndividual.FirstName = "Joe";
4 MyIndividual.AddressBook[0].City = "New York";
5 MyIndividual.Save();

Example 1.4. O/R generated code example
Let’s walk through this code step-by-step. The code in Line 2 automatically retrieves the Individual record with an IndividualID of 23 and loads it into a custom object called an IndividualEntity (this name comes from the original table). As you can see in Line 3, all of the fields of the original table are properties that can be accessed and changed.² In Line 4, a related record in the AddressBook table is accessed and a property changed. And in Line 5, you see how easy it is to save those changes back to the database. Notice that all of these actions can be performed without writing any extra code by hand anywhere else. The generated code from the O/R Mapper handles all of the steps that you would normally have to code yourself, saving you from having to:

• Find the database server.
• Log in and open a connection to the database server.
• Select the particular database containing the information you want.
• Find the correct table.
• Find the correct row.
• Read all the values for that row.
• Convert every value from its SQL data type into the corresponding .NET data type while checking and handling the possibility of a null value.
• Present those values in a strongly-typed format, so the consumer knows exactly what kind of object to expect (string, integer, array, etc.) and there are no surprises at run time.
• Create a container to temporarily hold the values while they are being modified.
• Retrieve data from another row in a related table.
• Manage which values have changed and issue the appropriate INSERTs, UPDATEs, and DELETEs in the appropriate tables in the database to reflect those changes.
• Close the connection.
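To make one of those chores concrete, here is a minimal, self-contained sketch of the null-handling step done by hand with an ADO.NET DataTable. The Individual table comes from Figure 1.3, but the MiddleName column is invented for this illustration; it is not part of the book's sample schema.

```csharp
using System;
using System.Data;

class Program
{
    static void Main()
    {
        // Simulate one row as ADO.NET would return it from a query.
        // A SQL NULL arrives in .NET as DBNull.Value, not as null.
        var table = new DataTable("Individual");
        table.Columns.Add("MiddleName", typeof(string));
        table.Rows.Add(DBNull.Value);

        object raw = table.Rows[0]["MiddleName"];

        // A naive (string)raw cast would throw InvalidCastException here,
        // so every nullable field read needs its own DBNull check.
        string middleName = raw == DBNull.Value ? null : (string)raw;
        Console.WriteLine(middleName ?? "(no middle name)");
    }
}
```

The generated entity classes perform this translation automatically for every nullable column, which is exactly the kind of repetitive plumbing the list above describes.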
Whew! That’s a lot of time saved! And that’s just the beginning. For those who are not familiar with all of the aspects of programming, let’s elaborate on what it means to be strongly-typed and why strongly-typed objects are so helpful to developers.
2) More specifically, all columns in the table can be read, and columns that are non-key, non-calculated fields can be changed. These concepts are explained later.

Strongly-Typed Objects

A major feature of using an O/R Mapper to auto-generate your code is the use of strongly-typed objects. Instead of exposing simple and generic properties and methods, your generated code exposes the specific objects you will actually be using. To understand the advantages of working in this manner, consider this analogy.

When Bob goes home from work every day, he puts his keys on the table, drops his briefcase on the floor, and heads straight to the kitchen to make himself dinner. Every day, Bob does the same thing: he grabs a frozen dinner from the freezer and pops it in the microwave. Bob has performed these actions so often that he does not really think about doing them each day. He just opens the freezer, grabs something, and sticks it in the microwave. We could express this particular freezer-to-microwave exchange in C# as the following:

1 Object MyDinner;
2 MyDinner = House.Freezer.GetObject();
3 House.Microwave.Cook(MyDinner);

Example 1.5
You do not have to be a programmer to know that this situation is a disaster waiting to happen! Most of the time, whatever Bob grabs from the freezer will probably be a frozen dinner, or at least something edible. But can he really rely on mindlessly pulling anything from the freezer? If Bob has kids, he will not know from one day to the next whether he will find stuffed animals in the fridge, pop-tarts in the DVD player, or razor-sharp toys peppered along the staircase. Bob would be wise to check exactly what it is he is pulling out of the freezer before he sticks it in the microwave. Now here is the same code improved by using a more specific object:

1 FrozenDinner MyDinner;
2 MyDinner = (FrozenDinner)House.Freezer.GetObject();
3 House.Microwave.Cook(MyDinner);

Example 1.6
We have improved this code by using a FrozenDinner object. Now, if we try to grab something from Bob’s freezer that is not a FrozenDinner, we will get an error when we try to cast it. We could improve this even further by checking the type of the object before we cast it and accounting for the possibility of a non-FrozenDinner object. If we used the as keyword instead of casting the object, we would eliminate the chance of a casting exception, but then we would need to check for a null FrozenDinner object before trying to cook it.

In the .NET world, this kind of situation is very common when you access data from external sources. One of the more common objects you will use is a DataTable. In .NET, DataTable objects are wonderful and extremely flexible. But when you read data from a database into a DataTable and you need to get specific with the contents of a particular field on a particular row, the .NET framework only gives you an object of type … well … object. A plain object is about as generic as you can get, and the .NET framework does this intentionally to give you maximum flexibility. But if you assume that the object is a string, and will always be a string, you are entering the world of assumptions, and code that is built on assumptions is brittle and unpredictable. If you accidentally change the name of a column in your database or reverse the order of columns in a set of records, you are asking for trouble. Minor changes can doom your code, and the worst part is that you will not know things have gone wrong until the code is running and fails miserably, simply because what you always assumed would be a string happened to be an integer or a boolean.

Now let’s return to Bob’s frozen dinner for a moment. We would eliminate a lot of guessing and unnecessarily complex code if the method that gave us the frozen dinner simply returned an object of type FrozenDinner and not an object of type object.
That would save us the trouble of casting it and accounting for all the possibilities of a non-FrozenDinner object. Consider this final version of C# code:

1 FrozenDinner MyDinner;
2 MyDinner = House.Freezer.GetFrozenDinner();
3 House.Microwave.CookFrozenDinner(MyDinner);

Example 1.7
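The freezer analogy is not just a metaphor: the generic-object problem appears verbatim when reading a DataTable, whose row indexer is statically typed as object. The following self-contained sketch uses invented column names (IndividualID, FirstName) echoing Figure 1.3; it is an illustration, not code from the book's sample application.

```csharp
using System;
using System.Data;

class Program
{
    static void Main()
    {
        // A DataTable as ADO.NET might fill it from a query.
        var table = new DataTable("Individual");
        table.Columns.Add("IndividualID", typeof(int));
        table.Columns.Add("FirstName", typeof(string));
        table.Rows.Add(23, "Joe");

        // The indexer's static type is object -- the compiler cannot tell
        // a string column from an int column.
        object cell = table.Rows[0]["IndividualID"];

        // Assuming "it is always a string" is only caught at run time.
        string wrong = cell as string;
        Console.WriteLine(wrong == null
            ? "Not a string after all: it is " + cell.GetType().Name
            : wrong);
    }
}
```

A generated entity, by contrast, would expose IndividualID as an int property, so picking the wrong type would not even compile.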
In the real world, our database solution might entail extending a DataTable and specifically defining the type of every column. It might also entail creating a custom class and writing methods that read the data from the database table and add it to the properties of the custom class. Repeating this process by hand, for every table, stored procedure, and 6
view in your database, while a wonderfully effective programming practice, would take you the rest of your natural life. Just consider how long it would take to manually write a new method for every kind of object Bob might possibly want to take from his freezer. Unfortunately, these painfully slow methods of development are very common. But there are several ways to automate this process, which we will discuss in a moment.

The other general principle related to strong typing worth mentioning is that compile-time errors are always preferable to run-time errors. And of course, having no errors is the most desirable! Until computers begin to write their own code without human input there will always be errors of one kind or another. But not all errors are created equal. Compile-time errors, which arise during compilation, are easier to fix because they happen 100% of the time. If you have one of these, you cannot compile your application no matter how many times you try. You must fix the problem before continuing. On the other hand, code containing a potential run-time error will compile 100% of the time, but may only occasionally throw an error while executing. These kinds of errors are harder to test and harder to catch because they are inconsistent and only occur when a particular function is called and specific conditions are met.

Now, consider the three different frozen dinner code examples mentioned earlier. In the first example, we will never get any kind of error when compiling or executing (which is good), but unfortunately for Bob’s household, we could end up with strange items in the microwave (not good). In the second example we have prevented putting non-FrozenDinner objects in the microwave. We will have no compilation errors (good), but have a real chance of run-time errors if the object in the freezer is not a frozen dinner (not good).
In the last example we get the possibility of compile-time errors if we code incorrectly (very easy to fix), no run-time errors (very good!), and still no strange objects in the microwave (what we hoped for). Although there are many ways to solve this problem, the third method is the most reliable, error-free solution.
Compile-time errors are always preferable to run-time errors
Native Language Filter Construction

A new feature in the latest version of LLBLGen Pro is native language filter construction. In past versions of LLBLGen Pro, creating a SQL query programmatically required excessive verbosity, as you can see in Example 1.8. Each part of a query in this example is represented by objects which must be instantiated and combined in ways that are neither intuitive nor easy to read. While these early versions were effective, they demanded that the developer take time to become completely comfortable with the syntax.

PredicateExpression MyPredicate = new PredicateExpression();
MyPredicate.Add(AW.Data.FactoryClasses.PredicateFactory.CompareValue(
    EmployeeFieldIndex.HireDate, ComparisonOperator.GreaterEqual,
    HireDateAfter));
MyPredicate.Add(AW.Data.FactoryClasses.PredicateFactory.CompareValue(
    EmployeeFieldIndex.SalariedFlag, ComparisonOperator.Equal,
    true));

Example 1.8. Old way of creating predicates
In the newest version of LLBLGen Pro, however, queries can be constructed in an intuitive manner more consistent with normal C# syntax. Consider Example 1.9, where the same predicate has been written in native language. The end result is more readable code and less time spent learning the syntax.

IPredicate MyPredicate = (EmployeeFields.HireDate >= HireDateAfter) &
                         (EmployeeFields.SalariedFlag == true);

Example 1.9. Creating predicates with native language construction
Figure 1.10. A strongly-typed DataSet in a C# Windows project
Native language filter construction in C# is a helpful and time-saving new feature in LLBLGen Pro, and throughout this book, we will use this technique in code examples.
Strongly-Typed DataSets vs. LLBLGen Pro

Let's take a brief look at the code-generating tools integrated into Visual Studio and how they compare to LLBLGen Pro. Visual Studio allows you to auto-generate a strongly-typed DataSet that mirrors your database’s schema. As you might expect, this approach takes existing data objects and extends them, specifying the columns and data types explicitly. To do this, you simply click the option to add a data source to your project, point it to your database, and then add the tables, views, and stored procedures that you want to use. Visual Studio will generate an XSD file (Figure 1.10) that will contain descriptions of all the database objects and their properties. In Figure 1.11, you can see the Northwind database schema as viewed through the DataSet Designer in Visual Studio. Visual Studio will even add the relationships between the tables automatically.
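Consuming such a generated DataSet looks roughly like the sketch below. The class names (NorthwindDataSet, CustomersTableAdapter, and the CompanyName column) are illustrative assumptions; the actual names depend on what you added to your data source.

```csharp
// Sketch only: these are the kinds of classes Visual Studio generates from
// the XSD; a TableAdapter runs the SELECT stored in the DataSet definition.
NorthwindDataSet ds = new NorthwindDataSet();
NorthwindDataSetTableAdapters.CustomersTableAdapter adapter =
    new NorthwindDataSetTableAdapters.CustomersTableAdapter();

adapter.Fill(ds.Customers);  // executes the predefined SELECT

// Each row is strongly typed -- no casting from object:
foreach (NorthwindDataSet.CustomersRow row in ds.Customers)
{
    Console.WriteLine(row.CompanyName);  // a string property, not object
}
```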
Figure 1.11. A look at the schema of a strongly-typed DataSet
Figure 1.12. A peek inside a strongly-typed DataSet definition file
To utilize this DataSet you can simply use a TableAdapter, which is an object that is used to retrieve data from the database in order to fill the DataSet. If you open the XSD file in Notepad, you can see the underlying structure of the DataSet (Figure 1.12). Notice that SQL statements are included in the description of this DataSet. Each table description contains the specific SQL statement necessary to perform a SELECT, INSERT, UPDATE, or DELETE action.

Although both the LLBLGen Pro method and Visual Studio's strongly-typed DataSet objects are solutions that provide a data access layer, they are fundamentally different approaches. Let’s take a look at some of the reasons why LLBLGen Pro is the more usable of the two methods.

Strongly-typed DataSet advantages: Using strongly-typed DataSet objects is a good technique that implements a number of our best practices. First, you can quickly and easily reap the rewards of using strongly-typed objects and have assistance in creating your SQL statements and stored procedures. The strongly-typed DataSet also automates the process of consuming the data in your C# code; thus, both the query-generating and consuming features save time. The custom DataSet can also be referenced across your projects, becoming a handy and reusable data access layer. A key advantage of this method over LLBLGen Pro is that you do not need any other third-party software: this functionality is built into Visual Studio.

Strongly-typed DataSet disadvantages: Users of strongly-typed DataSet objects encounter several setbacks, all of which LLBLGen Pro helps address:

• Query limitation: This disadvantage stems from the fact that SQL statements used by strongly-typed DataSet objects are created when the DataSet is designed, not when it is used. With LLBLGen Pro objects, however, the SQL code is generated as the object is used.
You can find the SQL statements by looking inside the DataSet's XSD file, but in looking through the code that LLBLGen Pro produces you will find no SQL statements anywhere. The LLBLGen Pro dynamic query engine for SQL Server creates the statements only when the data is retrieved from the database. Because the SQL statements are all created when the DataSet is defined, if you need a new query that has not been defined in advance, you will always need to redesign your DataSet by adding the new query before you can consume it elsewhere in your code. With LLBLGen Pro you can create and consume the query in your C# code by using the LLBLGen Pro framework—all without changing the underlying data layer. You would only need to change your data layer (by regenerating your code) if you made changes to your schema or added a new type of database object.

• Difficulty propagating schema changes: When you do make changes to your schema, strongly-typed DataSet objects do not provide a built-in method for propagating schema changes to your DataSet definition and updating your SQL statements. You would need to delete the corresponding DataSet table, re-add it from the database, and then re-create all of the custom SQL queries that you had defined for that table (since only one query is added by default). Fundamentally, there is no built-in way for Visual Studio to refresh your DataSet schema with changes from your database's schema. If you forget to drop and add each table that changed, or refresh the list of columns that have new names or data types, your code will mysteriously begin
to throw exceptions. LLBLGen Pro makes it easy to bring those changes to your code without losing any of your customization. In Chapter 12, you will see how LLBLGen Pro automatically scans your entire database and makes the necessary changes to your project for you.

• Limited data layer functionality: In order to traverse many-to-many relationships between tables, DataTable objects require you to use intermediate tables. LLBLGen Pro allows you to traverse relationships between Entities more easily than you can using DataTable objects. You actually can traverse a many-to-many relationship by skipping over the intermediate table—an action that is not possible with a strongly-typed DataSet.

• No help with business logic layer: A strongly-typed DataSet does not help you with your business logic layer. LLBLGen Pro can optionally create a business logic layer shell that saves you the trouble of creating one yourself. We will use this method in our walk-through.

• Requirement of adapters in order to get data: In order to retrieve data for your strongly-typed DataSet, you must first create a TableAdapter object. If you use the self-servicing template when generating LLBLGen Pro code, you do not need any extra object like a .NET TableAdapter to retrieve data. It is as if your DataRow objects fill themselves with data automatically, and this means you have to write fewer lines of code.

• No support for custom SQL queries: Even with the Visual Studio automated tools, you will still need to get your hands dirty writing SQL: some queries are too complex to use Query Manager. The LLBLGen Pro framework allows you to generate complex queries and replace the logic of many stored procedures without writing any SQL yourself.
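The self-servicing point above can be sketched in a few lines. This is not from the book's own walk-through; the entity names (CustomerEntity, OrderEntity) and the "CHOPS" key are illustrative assumptions about what LLBLGen Pro would generate for a schema with Customer and Order tables:

```csharp
// Sketch, assuming LLBLGen Pro self-servicing code generated for a schema
// with Customer and Order tables; the class and key names are hypothetical.
CustomerEntity customer = new CustomerEntity("CHOPS");  // fetched by primary key

// No TableAdapter needed: accessing the related collection loads it on demand.
foreach (OrderEntity order in customer.Orders)
{
    Console.WriteLine(order.OrderDate);
}
```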
These features make LLBLGen Pro a compelling choice for database access. But by now, you may be wondering to yourself, “If O/R Mappers are so great, why isn’t Microsoft using this approach?” In fact, Microsoft is working on it right now.

Originally, Microsoft was developing a project they dubbed “ObjectSpaces,” which was a method of representing relational data as objects. This project was eventually assimilated into new projects not yet completed3. One of those projects is named WinFS, which will contain technology that is similar to O/R Mappers. Quentin Clark, the project management team leader for WinFS, stated in his blog, “We are in the process of building-out the next version of ADO.NET to have new features that provide a data model, object-relational mapping, and flexible query infrastructure. The new data model is about entities, and the WinFS data model of Item types is built on that model.”4 WinFS has been under development since 2002 and is currently in Beta 1. The final release date for WinFS is uncertain, but it will certainly be after the next version of Windows (Vista) is released.

The other main project from Microsoft is named LINQ, which stands for Language INtegrated Query. According to the Microsoft Developer Network, LINQ is “a codename for a set of extensions to the .NET Framework that encompass language-integrated query, set, and transform operations. It extends C# and Visual Basic with native language syntax for queries and provides class libraries to take advantage of these capabilities.”5 LINQ will allow the developer to use queries written in C# statements against many different kinds of objects loaded into memory. The scope of this project is much wider than sorting relational data: LINQ commands can sort data regardless of its source. At the moment, the developer has no uniform way to accomplish this; most of the time sorting and filtering operations are relegated to the database.
LINQ will allow many of those operations to occur in the C# code where the .NET code is running instead of in the database where SQL Server is running. LINQ will work with LLBLGen Pro generated code and therefore will be of great benefit to the developer when it is finally released.

In addition to LINQ, Microsoft is creating a project specifically aimed at relational data, called DLINQ. DLINQ is an O/R mapper from Microsoft that will generate classes corresponding to your database’s tables and will use lazy loading (or “deferred query execution,” in Microsoft’s terminology) to dynamically generate SQL to retrieve, modify, and delete data. The DLINQ project is simply a lightweight, Microsoft-built O/R Mapper. At the 2005 Microsoft Professional Developers Conference, Microsoft gave a demo of the project to attendees. Because the DLINQ project is still in development, it currently offers far fewer features than LLBLGen Pro and will probably require several more versions before it has a feature set similar to what LLBLGen Pro offers today.

The good news is that you do not have to wait for Microsoft to release WinFS, LINQ, or DLINQ in order to take advantage of O/R Mapping. LLBLGen Pro is already available and has proved to be a mature and stable product.

3) See http://msdn.microsoft.com/data/objectspaces.aspx.
4) See http://blogs.msdn.com/winfs/archive/2005/08/29/457624.aspx. The scope of WinFS is larger than just O/R Mapping, but its inclusion in WinFS demonstrates that it is a useful technique for handling relational data programmatically.
5) http://msdn.microsoft.com/netframework/future/linq/
N-Tier Application Design

Programmers who create any kind of large programming project know the importance of keeping code organized. N-Tier application design helps you separate your code into logical layers. Generally, you would have a data access layer, a business logic layer, and a user interface layer; some developers use more layers and some use fewer. Here are the details about what each layer would contain in this three-layer design:

Data access layer: This section contains all the code necessary to execute SELECT, INSERT, UPDATE, and DELETE statements in the database, access views, run stored procedures, and convert between database data types and the data types of your programming language.

Business logic layer: This section would contain code that implements the rules of your business, above and beyond simple database access. For instance, code in this layer might validate the various parts of an order before the order is saved into a database or define the process for handling an order once it is saved into the database.

User interface layer: This layer contains code that puts data into specific controls in order to display it for your users. Ideally, it contains only what is user-interface specific. In the .NET world, web application controls are different from windows application controls, so this code cannot necessarily be re-used between different kinds of applications.

The advantages of the three-tier design are the following:
• Using a three-tier method, you would be able to change your application to a completely different user interface without losing the other two layers of your code.

• Business rules are saved in a central location. If you decide to change the process for validating an order, you only have to change it in one place.

• If you change your database (from SQL Server to Oracle, for example), you would theoretically be able to change only the data access layer and not the rest of your application. Unfortunately, in most situations this is not a practical reality, as database-specific code has a way of creeping up to the user interface layer.
While these methods are acknowledged best practices, most developers find that they do not have the time to design the application this way; rules are violated frequently out of the need to just get the product out the door. However, LLBLGen Pro helps accomplish the ideal design in the following ways:
• LLBLGen Pro will completely auto-generate the data access layer. You do not need to do any hand-coding in this layer.

• LLBLGen Pro will optionally generate a business logic layer shell for you to get started with your business layer. You do not have to understand the intricacies of inheritance and making custom classes to take advantage of business logic classes; in LLBLGen Pro these classes are intuitively organized, powerful, and infinitely extendable.

• LLBLGen Pro collections and entity objects are bindable, making them extremely easy to add to .NET controls. This can reduce the size of your UI layer.

• Changing the database application and the data access layer without affecting the business logic and UI layers is actually possible with LLBLGen Pro. If you migrated your database schema and stored procedures to a new database application, refrained from using any database-specific features (there are not that many), and your schema matched exactly, you could actually re-generate your LLBLGen Pro data access layer and not make any other changes to your application!
Stored Procedures, Inline, and Dynamic SQL

To help you understand how LLBLGen Pro works, let's take a look at how database information is integrated into a .NET application. There are several different methods you can use to access data in your SQL Server database from your code. These methods are inline SQL, stored procedures, and dynamic SQL.
Inline SQL: This is the most common method of database access seen in most tutorials, but it is by far the worst method. Inline SQL is illustrated in Example 1.13.
SqlConnection conn = new SqlConnection(connection);
SqlDataAdapter adapter = new SqlDataAdapter();
adapter.SelectCommand = new SqlCommand(
    "SELECT * FROM Orders WHERE OrderID=" + tbInput.Text, conn);
adapter.Fill(dataset);
return dataset;

Example 1.13. Inline SQL example
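For contrast, here is the same query written with a parameterized command (standard ADO.NET, shown as a sketch; connection, tbInput, and dataset are assumed to exist as in the original snippet). Because the input travels as a parameter, it can no longer change the shape of the statement:

```csharp
// Parameterized version of Example 1.13: the user's input is bound to
// @orderId instead of being concatenated into the SQL text.
SqlConnection conn = new SqlConnection(connection);
SqlDataAdapter adapter = new SqlDataAdapter();
adapter.SelectCommand = new SqlCommand(
    "SELECT * FROM Orders WHERE OrderID = @orderId", conn);
adapter.SelectCommand.Parameters.AddWithValue("@orderId", tbInput.Text);
adapter.Fill(dataset);
return dataset;
```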
From a security standpoint, a user could enter any kind of data into an input field and have it concatenated directly into the SQL statement. This presents an alarming security risk. You do not want users executing arbitrary code in your database. Hackers who are familiar with SQL injection attacks6 can alter the above query to do just about anything they want. Also, from a maintainability standpoint, if you decide to change the name of the Orders table to Order, you do not have any way to propagate this change to your data layer. The code will always compile correctly, but it will throw an exception when the query executes and the Orders table is not found.

Stored procedures: Most security experts recommend using stored procedures exclusively for database access. From a security perspective, stored procedures are immune to SQL injection attacks, since parameters entered into a stored procedure cannot change the query itself. From a maintainability standpoint, you do not have SQL strings inside of your code when you use stored procedures, but you will still have stored procedure names in your code. If you decide to rename a stored procedure, just as in the above example, you will not automatically propagate the stored procedure name change to your code. The code will compile correctly, but will throw an exception when it is executed and the stored procedure is not found.

The stored procedure method is also a management challenge due to the difficulty of maintaining a vast library of stored procedures. The previous version of LLBLGen Pro relied on auto-generated stored procedures to carry out the SELECT, INSERT, UPDATE, and DELETE operations. For each table, there were five stored procedures generated (one extra for selecting all the records). In essence, the auto-generated stored procedures and custom stored procedures create another layer of code that must be maintained.
Maintaining code in SQL Server can be much more difficult than in Visual Studio for several reasons: the lack of IntelliSense, a difficult debugging environment, cryptic (and unhelpful) error messages, and the inability to organize similar functions into logical objects.

Yet another disadvantage of stored procedures is that most C# code that calls a stored procedure returns an untyped DataSet or DataTable. This again creates the problem of not knowing 100% of the time what kind of object will be inside each column and row. Although the code will compile, when executed it may throw exceptions if different types of data are in different places.

From a permissions standpoint, stored procedures do offer tighter security. You can allow users access to only the stored procedures, and they would not be able to access the underlying tables to make any other kinds of changes (e.g., change the structure of the table). Setting permissions with dynamic SQL (discussed in the next section) is a little more complex because you need to set SELECT, INSERT, UPDATE, and DELETE permissions individually on the tables themselves.

Another commonly cited advantage of stored procedures over other SQL statements is performance. The architecture of SQL Server is believed to speed up stored procedures by precompiling the query when it is created. However, MSDN documentation7 states the following:

SQL Server 2000 and SQL Server version 7.0 incorporate a number of changes to statement processing that extend many of the performance benefits of stored procedures to all SQL statements. SQL Server 2000 and SQL Server 7.0 do not save a partially compiled plan for stored procedures when they are created. A stored procedure is compiled at execution time, like

6) For more information about SQL injection, see http://msdn.microsoft.com/msdnmag/issues/04/09/SQLInjection/ or query Google for “SQL injection attacks”.
7) See http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da_0nxv.asp or query Google for “SQL Server” and “statement processing”.
any other Transact-SQL statement. SQL Server 2000 and SQL Server 7.0 retain execution plans for all SQL statements in the procedure cache, not just stored procedure execution plans. The database engine uses an efficient algorithm for comparing new Transact-SQL statements with the Transact-SQL statements of existing execution plans. If the database engine determines that a new Transact-SQL statement matches the Transact-SQL statement of an existing execution plan, it reuses the plan. This reduces the relative performance benefit of precompiling stored procedures by extending execution plan reuse to all SQL statements.

The newer versions of SQL Server blur the lines between stored procedures and other SQL statements from a performance standpoint. The same query will be cached regardless of whether it is a stored procedure or a SQL statement. SQL statements also have the advantage of being able to vary, whereas the logic of a stored procedure is fixed. While changing the structure of the query means that a dynamic query will not take advantage of the caching feature the first time it is executed (though it will on subsequent executions), the dynamic query has the capability of altering itself to fit the exact need. The SQL statement therefore could give the developer an opportunity to solve a problem more efficiently than an existing stored procedure might. Keep in mind, though, that neither method is better than the other all of the time; both techniques have strengths and weaknesses. The main point here is that stored procedures do not always have a strong performance advantage over SQL statements.

The last disadvantage of stored procedures is that complex procedures can get very ugly very quickly. Implementing a complex search with many different optional parameters, optional joins, and a variety of sorting methods would be exceptionally difficult with just one stored procedure.

Dynamic SQL: The least common and probably least understood method of database access is dynamic SQL.
With this method, a custom component automatically generates the proper SQL statements for you as you use the object. Many developers do not even know this option exists, and understandably so. If you do not have time to code according to the best practices, you certainly will not have time to write your own component that generates SQL statements automatically. The good news is that you do not have to. The developers of LLBLGen Pro have built the components for you, and their components are smart enough to speak to a number of different databases. You can think of them as universal translators for databases. This frees you up to interface with the components in a generic, non-database-specific manner.

If you follow the development of Internet applications, the transition to dynamic SQL will seem quite natural. Just as HTML and JavaScript code in many of today’s web applications is not always written by hand, but is instead encapsulated into ASP.NET server controls which generate the correct code automatically, so too are new objects generating SQL to speak to databases, freeing the developer from having to write all this extra code manually. As the queries are generated, instead of being concatenated together, they are parameterized, just like stored procedures. This makes these statements immune to SQL injection attacks. Thus, with dynamic SQL, developers get the security advantages of stored procedures without the hassle of maintaining another layer of code in the database.

In addition, to help with maintaining your code, LLBLGen Pro generates the equivalent of a master index to all the objects in your database and all of their fields, saving you from having to hard-code strings in your code with table names, stored procedure names, and field names.
These enumerators and other LLBLGen Pro objects allow you to consume data in a way that will not compile when the schema changes and you have refreshed the data access layer; therefore LLBLGen Pro helps prevent schema changes from causing unanticipated exceptions in your code when it is executed.

Complex querying is also much easier with dynamic SQL than with stored procedures. A dynamic query can grow to include joins where necessary or can alternatively use a variety of branching logic, optional parameters, and any possible sorting option in the same method. You have near-infinite flexibility with dynamic SQL, limited only by your own ingenuity and creativity. For database access, dynamic SQL is a powerful, under-used, and often misunderstood method of talking to your database. By the end of this book, you will be an expert user of this type of access and will enjoy all of its advantages over the less efficient inline SQL and stored procedure methods.
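As a sketch of that flexibility, a search method can assemble its filter from whichever parameters the caller actually supplied, using the predicate syntax shown in Example 1.9. The field and variable names here are illustrative assumptions, not generated code from a real project:

```csharp
// Sketch: build a filter only from the parameters that were provided.
// EmployeeFields is the kind of generated class shown in Example 1.9;
// hireDateAfter (DateTime?) and salariedOnly (bool) are hypothetical inputs.
IPredicateExpression filter = new PredicateExpression();

if (hireDateAfter.HasValue)
{
    filter.Add(EmployeeFields.HireDate >= hireDateAfter.Value);
}
if (salariedOnly)
{
    filter.Add(EmployeeFields.SalariedFlag == true);
}
// The dynamic query engine turns 'filter' into parameterized SQL at run time,
// so each combination of inputs produces exactly the query it needs.
```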
Stored procedures do not always have a strong performance advantage over SQL statements.
Data Type Conversion

Another problem that arises when talking to databases involves the conversion of database values into their .NET counterparts. In the database world, a column of type integer can contain a whole number (like 5) or be null (if the schema allows). In previous versions of the .NET framework, an integer variable simply could not represent a null value, so reading a null from the database into one would throw an exception. When you read values from a database, you would have to account for the possibility of a null value in your class library. This accounting could take a lot of extra code to handle the type conversions and creates more opportunities for mistakes. In version 2.0 of the .NET framework, Microsoft addresses this difficulty with the addition of nullable types8. These types make it much easier to handle null values in .NET than in previous versions.

LLBLGen Pro has used its own solution to the nullable dilemma from the beginning of the .NET framework in order to shield the developer from having to manage this complexity. In LLBLGen Pro, the SQL data is automatically converted to .NET objects in the data access layer with default values. A null integer would become a .NET integer with a value of 0. For many situations, the minute difference between null and zero is unimportant. However, for those occasions when you actually care about whether the original value was zero or null, you can call a specific function that will tell you this value. With LLBLGen Pro, if you need to assess nulls, you can handle them, but nulls will not keep you from getting your project done.
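The nullable-types feature mentioned above looks like this in C# 2.0. The default-value behavior of GetValueOrDefault() is analogous to the null-becomes-zero conversion that LLBLGen Pro performs for you (the column name is invented for illustration):

```csharp
using System;

class NullableDemo
{
    static void Main()
    {
        int? unitsInStock = null;   // a nullable int can represent a database NULL

        // null collapses to the default value, 0 -- much like LLBLGen Pro's
        // default-value conversion in the data access layer.
        Console.WriteLine(unitsInStock.GetValueOrDefault());  // prints 0

        // ...but you can still ask whether a value was really there.
        Console.WriteLine(unitsInStock.HasValue);  // prints False

        unitsInStock = 5;
        Console.WriteLine(unitsInStock.Value);  // prints 5
    }
}
```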
Visual Studio Advantages

With every version of Visual Studio, Microsoft adds new features that often become essential for developers. Here are some of the best features from previous versions and the 2005 version:
• IntelliSense: Once you use IntelliSense you will never want to go back! If you have never used it, think of it as automatic spell checking, grammar checking, and word completion. With IntelliSense, you simply write better code, in less time, and with fewer mistakes.

• Debugging: Visual Studio has some of the best debugging tools around to help you figure out what is going wrong with your code and to see what is happening step-by-step.

• Design-time binding to your custom objects: You will probably want to use the built-in binding support to set up your controls visually. With a DataGridView, you can add columns based on the properties of the objects the control will contain before you even have to write a line of code.

• New! One-click deployment: Now, you can deploy your application to a file share or FTP site from inside Visual Studio. When the application is run, if there is a newer version available, your users will be prompted to install the update. One-click deployment gives Windows applications almost as much ease of deployment as web applications!
Summary

In summary, here are the reasons the combination of LLBLGen Pro, SQL Server 2005, Visual Studio 2005, and the associated techniques is so powerful:
• N-Tier design: Using N-Tier design is an effective way to break your code into layers that accomplish different tasks and are easily reusable.

• Customizable business layer templates: LLBLGen Pro generates templates for you to customize with your own code, so you do not have to create your own files, classes, or namespaces; you can start customizing immediately. LLBLGen Pro also generates classes for adding your own custom validation logic.

• Reusable core code: If you decide to switch to a web application, you can use the same generated classes and business logic. All you need to do is change the UI-specific code. This book will show you how to develop so that the maximum amount of your custom code is reusable.
8) For more information about using nullable types in C#, see http://msdn.microsoft.com/vcsharp/2005/overview/language/nullabletypes/.
• No SQL or stored procedures required: You can write a complete database application without using ANY SQL statements or stored procedures. This way, your database does not become cluttered with millions of stored procedures. Best of all, your code and logic are all in one place, not scattered throughout your classes and your database.

• Dynamic SQL generation: As discussed above, dynamic SQL is an efficient alternative to stored procedures and inline SQL.

• No data type conversion hassles: You do not have to worry about converting between SQL data types and .NET data types, or checking for nulls. The generated code does this for you automatically, though you have the option of specifically checking for nulls if you need to do so.

• Consistent and bug-free code: If twelve different programmers write your data-access code, they are probably not going to be 100% consistent with naming conventions and structure, not to mention humans are guaranteed to make mistakes. Generated code is 100% consistent and reliable.

• Strongly-typed code that allows fewer errors: Almost all the code generated for you is strongly-typed, which means that errors are discovered when the code is compiled—before it is run.

• Lightweight and powerful code: While it is possible to generate strongly-typed DataSet objects in Visual Studio, LLBLGen Pro code is smaller, cleaner, and infinitely more powerful. Like DataSet objects, all generated Entity and EntityCollection objects are bindable. And with new features in Visual Studio .NET 2005, you can do more binding visually.

• Easier and less error-prone schema changes: Schema changes are a hassle because it takes time to figure out how the changes will affect existing code and logic, both inside and outside the database. With the O/R Mapper, changes to the schema are updated into the generated code, and you immediately see where breaking changes have occurred.
• Database independent code: LLBLGen Pro abstracts your database from your application, so you are not locked into one database application. This flexibility means that you could switch to a completely different database with minimal to no impact on your custom code. If you stick to functionality common to all databases, you will not need to change a thing.
• Visual workspace: Much of the work in Visual Studio can be done visually, without having to code by hand. The visual nature of these tools saves you time and frustration, and the results are much more predictable. Visual Studio .NET 2005 adds even more visual features, and we will show you how to take advantage of this new functionality.
• Powerful pre-built controls: With every new version of Visual Studio, the controls get better and better. As you learn how to use Visual Studio 2005's bindable objects and DataGridView controls, you will be able to present your data more professionally and efficiently.
• Easy deployment: The newest version of Visual Studio has useful deployment options that make a Windows application as easy to deliver to clients as a web application. Client machines check for new versions of the software automatically, greatly reducing the burden on the developer.
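The strong-typing benefit above is easiest to see by contrast. The sketch below is not LLBLGen Pro output; it is a minimal hand-written illustration (the OrderEntity class and its Freight field are invented for the example) of why typed properties catch errors at compile time where untyped DataRow access does not.

```csharp
using System;
using System.Data;

// Hand-written stand-in for a generated entity class (illustration only,
// not actual LLBLGen Pro output).
class OrderEntity
{
    private decimal _freight;
    public decimal Freight
    {
        get { return _freight; }
        set { _freight = value; }
    }
}

class TypingDemo
{
    // Untyped access: the column name is a string and the value is an
    // object, so a typo or a bad cast only fails at run time.
    public static decimal UntypedFreight(DataRow row)
    {
        return (decimal)row["Freight"];
    }

    // Typed access: misspelling the property or assigning the wrong
    // type is caught by the compiler before the program ever runs.
    public static decimal TypedFreight(OrderEntity order)
    {
        return order.Freight;
    }
}
```

Both methods return the same value, but only the untyped one can blow up at run time over a misspelled column name.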
In the next chapter, we will take a look at what an O/R Mapper is and how you use it to generate your data access layer and business layer.
2
The O/R Mapper

"Sharpening the ax will not delay the work of cutting the wood." Chinese proverb

Chapter Goals
• Prepare the database.
• Get acquainted with LLBLGen Pro object types.
• Create a new LLBLGen Pro project and scan your schema.
• Add database objects and configure them.
• Discuss code generation options.
• Generate your data access layer and business logic layer.

As mentioned earlier, it is beyond the scope of this book to help you in designing a schema from scratch. But there are things that you can change in your schema to make sure you get the maximum possible benefit from the O/R Mapper and spend the least amount of time writing custom code, which we will cover in the next section.
Preparing the database

Here are the tasks you need to perform on your schema before you begin development on your application:

Clean up your database: Delete unused stored procedures, views, and tables. You do not have to add these items to your generated framework, but having to scroll through an unnecessarily long list is not a good use of your time. Tidy up your database beforehand for maximum efficiency.

Give your tables non-prefixed, singular names based on their real-world counterparts: Do not use names like "LD5_223" or "TBL_Orders". Name the table with order data Order, not Orders, as you want to be using an OrderEntity, not an OrdersEntity. Your code will be much more readable if your tables are named well.

Put primary keys on all your tables: An O/R Mapper cannot find the row if it does not know its uniquely identifying information. Even if it is just an auto-numbering ID field, go ahead and give your tables primary keys. Remember not to overlook fields that are unique and might serve as a natural primary key, as you want to prevent duplicate data wherever possible with good schema design.

Define all your relationships in advance: An O/R Mapper can detect the relationships between tables and let you traverse these relationships quickly and easily, but only if you define the relationships in your database. For maximum effectiveness, make sure that all of your table relationships are defined before using the O/R Mapper. If your schema is already set and you cannot change it, you can tell LLBLGen Pro about the relationship even though it does not exist in the database. However, if at all possible, it is highly recommended to define all relationships in your database ahead of time.

Normalize to your heart's content: Most of the time when you design a schema, you have to balance normalization and usability.
If you split everything off in separate tables to keep from repeating data, nobody is going to want to use your database because they will not be able to find any of the information! You do not want users to have to pull five tables together just to figure out a customer's email address! However, when you use an O/R Mapper, it does not matter whether you have 5 tables or 50; they are all generated into their own objects and, most importantly, they are connected together with relationships. Instead of having a meaningless integer when you read a column that contains a foreign key, you can move across to the row in that table to look up the name without any extra hassle. If you tend to be conservative with your schema, consider doing more normalization than you might ordinarily do.

Remember that nothing can fix bad database design: Instead of assigning auto-numbering ID fields to every table, think through the data that the table will contain and use primary keys that will ensure you do not end up with duplicate data. A little foresight will save you from having to spend days cleaning up data due to poor schema design.

Figure 2.1. AdventureWorks schema
Our schema: AdventureWorks

Once you have finalized your schema, you are ready to use LLBLGen Pro to create your data access and business logic layers. For the purposes of this book, we will be using the AdventureWorks example database included with SQL Server 2005. Microsoft finally replaced the Northwind database with an example that is more up-to-date and has much better schema design! We will only be working with a few of the tables from this database—not the entire database. Figure 2.1 is a diagram of the tables we will be using in our example application.
LLBLGen Pro Objects

Before we use LLBLGen Pro, let's take a look at what this tool will provide us. Here are the classes we will be getting automatically as we scan our schema and generate our code.

Entities: For every table you add to your project, LLBLGen Pro will generate a specific entity class to represent that table. Each instantiated entity corresponds to one row in a specific table in your database. For example, in the AdventureWorks schema, LLBLGen Pro would create an object type called EmployeeEntity. One EmployeeEntity object
would represent one row in the Employee table. We can create a new row, retrieve an existing row, update a row, or delete it by interacting with this EmployeeEntity object. Each column in the Employee row is exposed as a property of the EmployeeEntity object. LLBLGen Pro refers to these properties as fields.

Collections: For every entity class, LLBLGen Pro also creates an entity collection. An entity collection contains entity objects, just like a table contains rows. From the Employee table in the AdventureWorks database, LLBLGen would create an EmployeeCollection that holds EmployeeEntity objects. These collection classes eliminate the need for DataTable objects and are both strongly-typed and bindable. Instead of running a stored procedure and getting a DataTable, you can create criteria using the LLBLGen Pro framework and retrieve a collection of entities that match your criteria.

Entity relationships: Built into each entity are all relationships in the database involving that table. These relationships help you navigate between related tables. Using an EmployeeEntity, you can immediately retrieve the related ContactEntity (using the 1:n relationship between the Employee table and the Contact table) or an AddressCollection (using the m:n relationship through the EmployeeAddress table). You will get either a single entity or an entity collection depending on the type of relationship. This built-in property saves you the trouble of navigating to that table and filtering out the unnecessary rows yourself.

Typed views: Views in the database can be wrapped as strongly-typed DataTable objects. This means that LLBLGen Pro will create a new class that inherits from a .NET DataTable and specifically defines the contents of every column in that view. Typed views are read-only. A new feature in the latest version of LLBLGen Pro also allows you to add a view from the database as an entity as well as a typed view.
We will discuss the differences between these two methods in later chapters.

Typed lists: Typed lists are the only objects created by LLBLGen Pro that do not correspond one-to-one with database objects. When you generate your code, you have the option of creating your own strongly-typed lists of columns from either one table or multiple tables. We could, for example, create a list from the Employee and Contact tables, but only use the columns BirthDate, FirstName, and LastName. We could add the criteria "Gender = F", and fill our typed list with only these rows. Typed lists are handy when you need very specific information that does not necessarily correspond to a single table/entity, or when you only want to grab a subset of information for a given set of tables. Like typed views, typed lists are read-only.

Stored procedure caller classes: Stored procedures that you select will be wrapped in a layer of code, making them easy to access. This can make the migration to LLBLGen Pro much easier, as you can gradually wean yourself off unnecessary stored procedures without having to migrate all at once. While the parameters of the stored procedure are strongly-typed, remember that the result set is still an untyped DataSet. Despite this disadvantage, if you only used LLBLGen Pro to expose your stored procedures in your data access layer with a consistent naming scheme, it would still save you hours of development time.

Now that we have a good idea of what items we can have generated with LLBLGen Pro, let's look at the schema scanning and code generating process.
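The object model described above can be pictured in miniature. The sketch below is not the generated LLBLGen Pro code (its actual self-servicing API appears in Example 2.21 later in this chapter); it is a hand-written, self-contained illustration of the shape of the generated classes: one entity per table, one collection per entity, and a relationship exposed as a property.

```csharp
using System;
using System.Collections.Generic;

// Hand-written miniature of the generated object model (illustration only).
class ContactEntity
{
    private string _firstName;
    public string FirstName
    {
        get { return _firstName; }
        set { _firstName = value; }
    }
}

class EmployeeEntity
{
    private int _employeeId;
    private ContactEntity _contact;   // relationship -> a single related entity

    public int EmployeeId
    {
        get { return _employeeId; }
        set { _employeeId = value; }
    }

    public ContactEntity Contact
    {
        get { return _contact; }
        set { _contact = value; }
    }
}

// A table maps to a strongly-typed, bindable collection of entities.
class EmployeeCollection : List<EmployeeEntity>
{
}
```

With this shape, code like `employees[0].Contact.FirstName` replaces joining tables and indexing untyped DataRow columns by string name.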
Scanning the Schema

Scanning your database schema is relatively straightforward. The designers of LLBLGen Pro helped make this process a point-and-click affair. Simply open LLBLGen Pro and go to File > New Project (Figure 2.2). Fill in the name of the project ("AW.Data"), your name, and a location to store the file. We recommend saving the file in My Documents\Visual Studio 2005\Projects\AW.Data since this location is also where you will be generating your code. Select SQL Server 7/2000/2005 Driver (SqlClient) as the database driver. Enter the name of your SQL Server and click Connect. Select the AdventureWorks database. Leave the checkboxes under the "Elements to read from the database schemas" heading as they are, with Tables, Views, and Stored Procedures selected. Click Create to start the process of reading your database's schema.
Figure 2.2. New LLBLGen Pro project options
During this process, the application is reading your schema and saving the information into your project file. Once you have finished this scan, you do not have to be connected to your database in order to complete the code generation process. The only time you will need to connect to the database again is if you make schema changes and need to regenerate your code. Working in this disconnected manner is both convenient and expedient. Once the scanning process has completed, you should see a screen like the one in Figure 2.3. From here we will add objects from the database and configure them.
Figure 2.3. LLBLGen Pro main project screen
Figure 2.4. Adding entities to the project
Creating Entities

The first set of items we need to add to our project is entities. We now need to tell LLBLGen Pro exactly which tables from our database we would like to use (Figure 2.4). Right-click on Entities in the left bar, and select Add New Entities Mapped on Tables from Catalog(s). The Designer lets you select from a list of all the tables in your database. At this step, you will generally want to add all the tables you think you might ever access. Only leave a table out if you have a specific reason not to include it or you know for certain you will never use it. When you select tables, LLBLGen Pro will examine all the relationships between the tables and create methods to navigate between the entities. The more tables you add, the more relationships LLBLGen Pro will find—including relationships you had not anticipated! For this walk-through, select only the tables listed in Table 2.5. Adding tables is as simple as checking the boxes (Figure 2.6) and clicking Add to project when you are done. You will see the tables we just added under the Entities node (Figure 2.7).

Address              Individual
AddressType          Product
Contact              SalesOrderDetail
CountryRegion        SalesOrderHeader
Customer             ShipMethod
CustomerAddress      SpecialOffer
Employee             SpecialOfferProduct
EmployeeAddress      StateProvince

Table 2.5. List of tables to add as entities from the AdventureWorks database
Figure 2.6. Table selection screen
Figure 2.7. Newly added entities
Entity Options

Now let's take a look at all of the options that you can configure for every entity in your project.
Figure 2.8. Entity edit screen
Right-click on the Address entity and select Edit/Properties. From the entity edit screen (Figure 2.8) you will be able to set specific names for each property and relationship. Because each column in the table becomes an entity property with the same name, as long as you named your database fields well, you should not need to change anything here. This list will also tell you the database type of each column and the corresponding .NET type that the column will become in your class. LLBLGen Pro also detects whether or not the column is read-only, the length of the field, and whether or not it can be null. Through this cross-checking process, the data layer can catch some errors even before the data is saved in the database.

Select the Fields on relations tab (Figure 2.9). This screen shows a list of all the detected relationships for this entity and the name of the property that you use to access the related entity or entity collection. For each relationship, you will see the name of the relationship (field name), the two tables that the relationship exists between, and the type of relationship (1:1, 1:n, m:1, m:n). For this entity there are 14 different relationships! The name given to this field name/property is based on the name of the table on the other side of the relationship. For the first relationship in the list, the name is StateProvince. Notice that the 4th relationship in this list, ContactCollectionViaSalesOrderHeader, is a many-to-many relationship that we probably would not have created on our own. It traverses the SalesOrderHeader table and allows us to retrieve a collection of Contacts directly from an Address. These extra relationships are helpful additions that do not cost you any extra time, but that might come in handy later when you are trying to solve a specific problem.

Select the Relations tab. The Relations tab allows us to look specifically at the relationship objects themselves (Figure 2.10).
Having our relationships already defined in our database certainly makes our job easier since we do not have to define them manually.
Figure 2.9. Entity relationships mapped as fields setup screen
This tab will be important if you want to use a particular relationship that does not exist in your database's schema; you can manually add new relationships here by clicking Add new custom 1:1/1:n/m:1 relation or Add new custom m:n relation.
Figure 2.10. Entity relationships setup screen
Figure 2.11. Fields on related fields edit screen
Select the Fields on related fields tab. This tab allows you to have a field from a related entity visible as if it were part of this entity. For example, in the Address table, we have a foreign key that points to a row in the StateProvince table. Normally, if we want to know the name of the state or province, we have to look it up. However, if we map the name of the state or province as a field in our Address entity, we will have easy access to the value without traversing a relationship to another entity. (Note: When using fields mapped to related fields it is important to use prefetching, which is discussed in Chapter 10.)

Add the following fields on related fields to the Address entity from the StateProvince table by clicking on Add new and selecting the correct field from the Mapped on field list: StateProvinceCode, Name, and CountryRegionCode (Figure 2.11).

Also, make the following changes to the Employee entity: From the main project screen, right-click on the Employee entity, and select Edit/Properties. Select the Fields on relations tab. Change the Employee field name to "Manager". Change the Employee_ field name to "Manages".
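Conceptually, a field mapped on a related field is just a pass-through property. The sketch below is a hand-written illustration of the idea, not the code LLBLGen Pro actually generates: the Address entity gains a StateProvinceName member that reads through the relationship for you.

```csharp
using System;

// Illustration only: a minimal model of a "field mapped on a related field".
class StateProvinceEntity
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }
}

class AddressEntity
{
    private StateProvinceEntity _stateProvince;   // relation to StateProvince

    public StateProvinceEntity StateProvince
    {
        get { return _stateProvince; }
        set { _stateProvince = value; }
    }

    // Mapped related field: callers read the province name directly,
    // without traversing the relationship themselves.
    public string StateProvinceName
    {
        get { return _stateProvince.Name; }
    }
}
```

In the real generated code the related row still has to come from the database, which is exactly why the note above points you to prefetching.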
Figure 2.12. Typed view selection screen
Adding Views

Now we will choose the views in our database to add to our project. For now, we will be working primarily with one view. As we mentioned earlier, with LLBLGen Pro you can consume a view in two ways.

First, you can add a view as a typed view. From the main project screen, right-click on Typed Views and select Add New Typed Views from Catalog. We are presented with a list of all the views in our AdventureWorks database (Figure 2.12). Select vIndividualCustomer and click Add to project. Right-click on the vIndividualCustomer that was just added and select Edit/Properties (Figure 2.13). The only items you will customize with a typed view are the name of the typed view and the field names. If these items were poorly named in the database, correct them here. Change the name of the view to CustomerView.

The other way of using a view is to add it as an entity. By adding it as an entity, you will end up with a collection instead of a typed DataTable. You also have the capability of adding relationships to other entities, which will make filtering rows easier. Let's add the same view as an entity as well. Right-click on Entities and select Add New Entities Mapped on Views From Catalog(s) (Figure 2.14). Select the vIndividualCustomer view and select Add to project. Right-click on the new vIndividualCustomer entity and select Edit/Properties. Change the name of the entity to CustomerViewRelated.
Figure 2.13. Typed view field names edit screen
On the Entity Fields tab, select the CustomerID row, and check the boxes for Is Readonly and Is part of the primary key (Figure 2.15). On the Relations tab, click Add new custom 1:1/1:n/m:1 relation (Figure 2.16). Select Primary key side, and select SalesOrderHeader as the related table. Make sure both the Primary Key field name and the Foreign Key field name are displaying CustomerID. Adding the relation will make it possible to retrieve a row in this view directly from an order in the SalesOrderHeader table.
Figure 2.14. Adding a view as an entity
Figure 2.15. Entity field configuration screen for view mapped as an entity
Figure 2.16. Adding a relationship to a view mapped as an entity
Creating Typed Lists

Now we will create a typed list. From the main project screen, right-click on Typed Lists and select New Typed List. Enter the name "CustomerList" and click OK.
Figure 2.17. Typed list configuration screen
You are presented with a list of all the entities in your project (Figure 2.17). From the list of entities, you can choose one or more entities to add to your typed list. You will then edit the entire list of columns from all the tables you have selected in order to include only the columns that you want to appear in your typed list. Select the Customer entity, and click the Add>> button.

Table.Field Name           Field Alias
Customer.CustomerID        (same)
Contact.Title              (same)
Contact.FirstName          (same)
Contact.MiddleName         (same)
Contact.LastName           (same)
Contact.Suffix             (same)
Contact.Phone              (same)
Contact.EmailAddress       (same)
Contact.EmailPromotion     (same)
AddressType.Name           AddressType
Address.AddressLine1       (same)
Address.AddressLine2       (same)
Address.City               (same)
StateProvince.Name         StateProvinceName
Address.PostalCode         (same)
CountryRegion.Name         CountryRegionName

Table 2.18. List of columns in CustomerList typed list
You can only add entities to the list that are related. Now that you have chosen one entity, the other unrelated entities will be grayed out. These tables will define the source from which we will select the columns we want. In the order that they are listed, add the following tables: Individual, Contact, CustomerAddress, AddressType, Address, StateProvince, CountryRegion.

Now we can select the specific columns that we are interested in. We will be choosing columns in such a way that the result will be similar to the columns in the vIndividualCustomer view. Click on the Fields mapped on entity fields tab, and select the columns according to Table 2.18, making name changes where indicated. As you check the box next to the list of available field names, they are added to the collection at the bottom of the form (Figure 2.19). These name changes are necessary since there are multiple fields with the column name "Name", and each field name in the typed list must be unique.
Figure 2.19. Typed list field selection screen
That is all there is to creating a typed list. Our effort will produce a strongly-typed DataTable with these 16 columns. We can create any criteria that we like when we use this list, but the rows that are returned will always contain these 16 columns. Because we know the exact schema of the result set, a typed list gives us an advantage over a stored procedure. No surprises at run-time!
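Under the hood, a typed list behaves like a DataTable subclass whose columns are fixed and exposed through typed accessors. The miniature below is a hand-written sketch of that idea with only three of the sixteen columns; the real generated class's name and members come from the designer, not from this example.

```csharp
using System;
using System.Data;

// Illustration only: a "typed list" as a DataTable with a fixed,
// known-at-compile-time schema.
class CustomerListTable : DataTable
{
    public CustomerListTable()
    {
        // The schema is baked in, so every fetch returns these columns.
        Columns.Add("CustomerID", typeof(int));
        Columns.Add("FirstName", typeof(string));
        Columns.Add("LastName", typeof(string));
    }

    // Typed accessor: callers never deal with string column names or casts.
    public string GetLastName(int rowIndex)
    {
        return (string)Rows[rowIndex]["LastName"];
    }
}
```

Because the schema is part of the class, a caller that asks for a nonexistent column fails at compile time rather than at run time, which is the "no surprises" advantage over a stored procedure's result set.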
Adding Stored Procedures

Now we will add a stored procedure to the project. From the main project screen, right-click on Retrieval Stored Procedure Calls and select Add Stored Procedure Calls. Select uspGetEmployeeManagers and uspGetManagerEmployees and click Add to project. Right-click on uspGetEmployeeManagers and select Edit/Properties (Figure 2.20). In the edit screen of a stored procedure, you can change the stored procedure's name or the names of the parameters. Again, if these names were poorly labeled in the database, take time to correct them here. Note that changing parameter names here only changes how the stored procedure is referenced in your generated code. Changing the name in your LLBLGen Pro project does not actually change the stored procedure's name in your database.
Figure 2.20. Stored procedure parameter list
We have finished adding database objects to our LLBLGen Pro project and now we are ready to generate our code. Before doing so, let’s look at the different code generation options available.
Self Servicing vs. Adapter

LLBLGen Pro offers two different template groups to select when generating code: Self-Servicing and Adapter.

In the Self-Servicing template group, entity objects are responsible for their own persistence (saving their own changes to the database). The entity class itself contains logic to know which fields were updated and where to find the database, table, and row necessary to persist itself. All you need to do is call the Save() method. As you navigate the relationships of the objects, the data is retrieved from the database as it is needed, without your explicit direction. This is referred to as lazy loading. Lazy loading is useful because it abstracts the specific database call from the upper layers of code. However, when it is used incorrectly, data access can execute in a grossly inefficient manner (many small queries to your database instead of one large query). If you pay attention to how the data is being retrieved as you use the object, you can use a process called prefetching to consolidate database queries and increase performance. Prefetching is discussed in Chapter 10.

Using the Self-Servicing template group is similar to an office environment where coworkers are empowered and expected to know what needs to be done to do their individual jobs. You do not need to tell them all the details. Just point them in the right direction, and they will work out the details on their own. The Self-Servicing template is strongly entity-focused. You can see what using the self-servicing code is like when fetching a contact in Example 2.21.

1 ContactEntity MyContact = new ContactEntity(34);
2 MyContact.FirstName = tbFirstName.Text;
3 MyContact.Save();

Example 2.21. Self-servicing code example
In the above example, as soon as the entity is instantiated in Line 1, all the data is fetched from the database and loaded into the object. In Line 2, we can immediately begin using or updating the entity's properties. To save the changes back to the database, we simply call the Save() method in Line 3.

The Adapter template group uses an object called a DataAccessAdapter to interact with the database. Instead of database queries happening behind the scenes automatically, every database transaction will be explicitly called through the DataAccessAdapter object. In this way, the DataAccessAdapter is similar to the built-in TableAdapter object in .NET. With this adapter, you have more control over the database connection and can choose when to open it, how long it remains open, and when to close it.
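The lazy loading described for the Self-Servicing group boils down to one simple pattern: a property getter that fetches data on first access and caches it afterwards. The sketch below is a self-contained, hand-written illustration of that pattern, not LLBLGen Pro's actual implementation; the FetchCount field exists only to make the behavior observable.

```csharp
using System;

// Illustration only: the essence of lazy loading.
class LazyCustomer
{
    private string _name;
    private bool _isFetched;
    public int FetchCount;   // exposed only to make the behavior visible

    private void FetchFromDatabase()
    {
        // Stand-in for the real database query.
        _name = "Loaded from database";
        _isFetched = true;
        FetchCount++;
    }

    public string Name
    {
        get
        {
            if (!_isFetched)          // the query runs only on first access
                FetchFromDatabase();
            return _name;             // later reads use the cached value
        }
    }
}
```

Every relationship navigated this way is one more hidden round trip, which is exactly why the text warns about many small queries and points to prefetching as the remedy.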
The Adapter template group is more like a classroom, where the teacher is firmly in control and tells the students exactly what to do. Students are not empowered to simply "take care of themselves". The students follow the teacher's specific instructions. The Adapter template group has the advantage of fine-grained control over every database query. In addition, it also supports multiple database types and multiple databases. If your data needs are complex, the Adapter template group might be your only option. The Adapter template organizes code around an object that provides data services and is independent of the entity objects themselves. In Example 2.22, we use Adapter code to fetch a contact.

1 DataAccessAdapter Adapter = new DataAccessAdapter();
2 ContactEntity MyContact = new ContactEntity(34);
3 Adapter.FetchEntity(MyContact);
4 MyContact.FirstName = tbFirstName.Text;
5 Adapter.Save(MyContact);

Example 2.22. Adapter code example
With Adapter code, we can instantiate the entity in Line 2, but until we call the FetchEntity() method in Line 3, the data will not be loaded into the object from the database. We must use the DataAccessAdapter to read and update the data (Line 5). If none of the templates meet your needs, you also have the ability to create your own templates or modify the existing templates to include your custom code.9

To summarize, here is an overview of each pre-built template group:

Self-Servicing
• Bundles persistence inside entity objects.
• Allows data to load itself automatically as it is needed without explicit commands.
• Does not require extra objects to perform data access.
• Only works for a single database type and a single catalog.
• Can be used easily and intuitively.

Adapter
• Exposes persistence as a service.
• Allows finer database control. Each database query is explicit.
• Can target multiple databases and catalogs.
• Requires an extra object and a few more lines of code.
One Class vs. Two Classes (Self-Servicing only)
If you use the Self-Servicing template, another decision you will need to make is whether to use a one-class scenario or a two-class scenario. In the general scenario (one class), only one set of entity classes is created. If you need to extend the generated framework, you can place your custom code inside the generated classes between special markers to ensure that it will not be overwritten if the code is regenerated. Outside of using special markers, your only other option would be simply to create your own set of custom classes that use the generated framework. You might create an AddressManager class that consumes all the entities associated with placing an order (Figure 2.23).

Figure 2.23. General scenario (one class) diagram

A better option in certain situations is to use a two-class scenario. In this case, two sets of entity classes are created. The first type is known as the base type and is exactly the same as the entity class in the one-class scenario, but with a different name. The extra entity class is an empty class that inherits from the base class. You can use this shell as an instant business layer for all of your custom code. This class is kept completely separate from the generated code and will never be overwritten when the code is regenerated, so you will not lose your changes. Figure 2.24 is an example of a two-class scenario. In this setup, you can place your business logic inside the AddressEntity object, instead of writing your own "manager" classes. Entity classes inherit all the functionality of the base classes, but are never overwritten when the code is regenerated. The two-class scenario is highly recommended to fulfill the N-tier best practice in the least amount of time.

Figure 2.24. Two Class scenario diagram

9) While template modification is outside the scope of this book, LLBLGen Pro customers can freely download the LLBLGen Pro Software Development Kit (SDK) from Solutions Design to get tools and instructions for creating templates and adding custom code to existing templates.
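The two-class layout is plain C# inheritance. The sketch below is a hand-written miniature of the idea; the class names are illustrative, not the exact names the generator emits.

```csharp
using System;

// Base class: the generator regenerates this file on every schema
// change, so it should never be edited by hand (illustration only).
class AddressEntityBase
{
    private string _city;
    public string City
    {
        get { return _city; }
        set { _city = value; }
    }
}

// Derived class: generated once as an empty shell, then never touched
// by the generator again. Custom business logic added here survives
// every regeneration of the base classes.
class AddressEntity : AddressEntityBase
{
    public bool IsInCity(string city)
    {
        return City == city;   // example of custom logic on generated fields
    }
}
```

Application code works only with the derived AddressEntity, so it gets both the generated fields and the custom logic through one type.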
Generating the Code

We are now ready to generate the code for our selected database objects. Click on the Project menu and select Generate. The code generation screen with the 4 main options that you will set for your project is shown in Figure 2.25.

In the Generator configuration to use option, select SelfServicing, two class scenario (Full) VS.NET 2005. The Full option in the two-class scenario will generate what is necessary to start the project the first time. It will create both the base classes (data layer) and the inherited classes (business layer). After generating the first time, you would select SelfServicing, two class scenario (Base classes only) for subsequent code generation tasks. Regenerating your code is covered in Chapter 12. Whenever you regenerate your code, it is always advised that you make a backup copy of your code or check it in to source control before you start, just in case you make the wrong selection. You will never lose your custom code if you make a backup beforehand.

In the Template set to use option, select C# template set for SQLServer. In the Root namespace option, enter "AW.Data". The namespace decision is important: you cannot change this option without regenerating your code! The namespace is so heavily embedded in your classes that it would take you days to change it manually. Choose a logical namespace that fits in well with your company's namespace conventions.
Figure 2.25. Generator configuration screen
Figure 2.26. Generator results summary
In the Destination root folder option, choose a location to save the generated code. (Recommended: “My Documents/Visual Studio/Projects/AW.Data”) Be sure that you use an empty folder, as LLBLGen Pro will create many new folders and files. Using the same folder for two different LLBLGen Pro projects is not possible without overwriting some of the other project’s data. The default location is My Documents/LLBLGen Pro Projects. A better option may be to save your project in the default folders of Visual Studio (My Documents/Visual Studio/Projects) within another folder specifying the name of your particular project. Click Start generator to begin the code generation process. You will receive a screen like Figure 2.26 when the process is finished, summarizing the generation tasks that were completed. Each file that was created by the program is listed here. In return for the small amount of configuration, we get a lot of useful code! You can now save your LLBLGen Pro project and close the application. When you navigate to the folder where you created your project, you should see something like Figure 2.27. We will take a look at how to use the specific output files in the next chapter. For now, you have successfully generated your first LLBLGen Pro project. Now, take a break, get a fresh cup of coffee, and congratulate yourself! You have just saved yourself weeks, if not months of development time!
Further Suggestions

• Check your newly generated files into source control or find an easy way to back up the files. You can always right-click on the folder and select Send To > Compressed (Zipped) Folder to save the code in a zipped file. Add the date to the zipped file's name so you can remember when the copy was made.

• Inside LLBLGen Pro, go to File > Preferences (Figure 2.28), and review the preferences you can configure for the LLBLGen Pro project. Note the Preferred generator configuration option, which will allow you to choose a default template for the code that you generate. Set it to SelfServicing, two class scenario (Base classes only). Also, I recommend changing the Preferred project folder and the Preferred destination root folder to My Documents\Visual Studio 2005\Projects.
Figure 2.27. Generator output
• Even if your database is well organized with clear table and column names, you will still need to spend time naming the relationships between your entities according to their function (like we did with the Employee entity’s relationships). Because LLBLGen Pro detects a large number of relationships, some of the default names will not be intuitive.
Figure 2.28. LLBLGen Pro preferences screen
In the next chapter, we will begin setting up our Visual Studio solution and project files.
3
Solution Setup

“Plans are only good intentions unless they immediately degenerate into hard work.”
Peter Drucker (1909-2005)

Chapter Goals
• Create your Visual Studio solution and project files.
• Add the necessary references to compile your code.
• Configure the App.config file.
• Build your solution for the first time.
• Acquaint yourself better with the files and folders of your generated code.

Before we begin, we will need to set up all the major project files for our application. The organization and structure of a normal Visual Studio project can be confusing, so we will walk through each part step-by-step.
Creating the solution Creating solutions and projects is easy, but mistakes in these initial stages could cause major frustrations later. Take time to ensure that you understand each step and that you save items with the correct names in the correct directories.
Figure 3.1. New project creation screen
Open Visual Studio .NET 2005 and go to File > New > Project (Figure 3.1). Choose Visual C# > Windows on the left, and Windows Application on the right.
Chapter 3
In the Name field, enter “AW.Win”. Keep the default location, inside My Documents/Visual Studio 2005/Projects. Make sure Create directory for solution is checked. Enter “AW” as the solution name. For those new to Visual Studio, a solution is your master file. Each solution can contain multiple projects. A project can be a C# Class Library, C# Windows Application, or a C# Web Application. In our case, we will have two main projects: AW.Data, the generated data/business layer, which is a C# Class Library Project; and AW.Win, a C# Windows Application project. You should now have the solution and your first project loaded. Inside Visual Studio’s Solution Explorer window, you should see a list similar to Figure 3.2.
Figure 3.2. Newly created AW solution and AW.Win project
Now we need to add the AW.Data project containing our LLBLGen Pro code to our solution. Go to File > Add > Existing Project. Navigate to the folder containing your generated code, and click Open.
Figure 3.3. AW solution, AW.Data and AW.Win project.
Looking at Solution Explorer again, you should see a solution tree similar to Figure 3.3.
Adding references

Now we need to add some references to the AW.Win project to enable use of our LLBLGen Pro generated code. First, we need to add a reference to the AW.Data project. This will enable us to use the output of the AW.Data class library. In Solution Explorer, on the AW.Win project, right-click References and select Add > Reference (Figure 3.4). Select the Projects tab, and highlight AW.Data. Click OK.
Figure 3.4. Add reference screen
We will also need to add a reference to some common LLBLGen Pro libraries. These libraries, which were installed when you installed LLBLGen Pro, contain classes you will use as you make your dynamic queries. If another developer will be using this solution, he or she either needs to install LLBLGen Pro on his or her individual machine, or have these DLL files available. Right-click References again and select Add > Reference. On the .NET tab, add the LLBLGen Pro .NET 2.0 ORM Support Classes library (SD.LLBLGen.Pro.ORMSupportClasses.NET20.dll) and the LLBLGen Pro .NET 2.0 Dynamic Query Engine for SQL Server (7.0/2000/2005/MSDE) (SD.LLBLGen.Pro.DQE.SqlServer.NET20.dll). Your AW.Win project’s references should now look like Figure 3.5.
Figure 3.5. AW.Win project references
App.config settings

When LLBLGen Pro needs to read information from the database, it uses a connection string to figure out where to find the database and which credentials to use. It is always a best practice to put this string in exactly one place in your application. That way, you can change it easily should the need arise (and it will). The connection string is normally placed into a special XML configuration file called the App.config file. If you look in your AW.Data project, you will see one that LLBLGen Pro created for you.

<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="Main.ConnectionString"
         value="data source=jchancellor2-nb;initial catalog=AdventureWorks;
                integrated security=SSPI;persist security info=False"/>
  </appSettings>
</configuration>

Example 3.6. App.config file contents
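The value of the key is a standard ADO.NET connection string: a semicolon-delimited list of key=value pairs. As a plain-C# sketch of that format (the helper below is purely illustrative, with no LLBLGen or ADO.NET dependency; it is not part of the generated code), the pairs can be pulled apart like this:

```csharp
using System;
using System.Collections.Generic;

static class ConnectionStringDemo
{
    // Split "key=value;key=value" pairs into a dictionary — the same shape
    // of data the framework reads out of Main.ConnectionString.
    public static Dictionary<string, string> Parse(string connectionString)
    {
        Dictionary<string, string> parts =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        foreach (string pair in connectionString.Split(';'))
        {
            int eq = pair.IndexOf('=');
            if (eq < 0) continue; // skip empty or malformed segments
            parts[pair.Substring(0, eq).Trim()] = pair.Substring(eq + 1).Trim();
        }
        return parts;
    }

    static void Main()
    {
        Dictionary<string, string> p = Parse(
            "data source=jchancellor2-nb;initial catalog=AdventureWorks;" +
            "integrated security=SSPI;persist security info=False");
        Console.WriteLine(p["initial catalog"]);     // AdventureWorks
        Console.WriteLine(p["integrated security"]); // SSPI
    }
}
```

Keys are matched case-insensitively here, mirroring how ADO.NET treats connection string keywords.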
Double-click on the App.config file in the AW.Data project to open it for editing (Example 3.6).
Figure 3.7. AW.Win project with App.config file
Notice the Main.ConnectionString key. You can change the name of the key that the class library references in the property settings of your LLBLGen Pro project when you generate your code. The value part of the key contains the connection string.

Whenever possible, using integrated security is preferable to using a SQL username and password. One reason for this is that you do not want to put any usernames or passwords in plain text where someone could easily steal your credentials and gain access to your database. With integrated security, the connection string reveals little useful information to a malicious user. By default in a .NET Windows application, our program will execute under the context of the Windows user who is logged in to the computer. When connecting to the database using integrated security, Windows passes along this context to the database for authentication. This means that for every user who runs our Windows program, either the individual’s Windows account or a security group to which the user belongs will require permission to use the tables in the SQL database that our program accesses. Your unique situation will determine what kind of security setup you use. Go ahead and change your connection string if you need to use different credentials than the ones that you used to scan your schema.

After making any changes, we can now copy the App.config file from the AW.Data project to the AW.Win project. Every project that references the AW.Data project will need an App.config file available with the connection string inside. In Solution Explorer, right-click on the App.config file in the AW.Data project and select Copy. Right-click on the AW.Win project and select Paste. Now Solution Explorer should look like Figure 3.7. Now that we have made all the necessary changes to our solution, let’s compile it for the first time. Right-click on your solution in Solution Explorer and select Build.
You should receive a “Build Successful” message in the status bar and the Output window (Figure 3.8).
Figure 3.8. Output window showing a successful build
LLBLGen Pro Project Overview

Let’s take a detailed look at the AW.Data project and see what each folder contains. It is important to know the general layout of the generated code so that you know where to add your custom logic and which folders you will normally need to back up in order to avoid losing your work. Also, to prevent LLBLGen Pro from overwriting your work when you generate, you can always set the “read-only” flag of particular files or folders to true. You can freely modify any of your LLBLGen Pro-generated code, but we will review the most common locations where you will add your customizations. While all the code in this project is initially generated by the tool, the icons below note which folders and files you will normally overwrite each time code is generated and which will contain most of your custom code and never be overwritten.
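The read-only flag mentioned above is an ordinary file attribute, so it can also be set programmatically. A small self-contained sketch (the file name below is arbitrary and used only for the demonstration, not a reference to your actual generated files):

```csharp
using System;
using System.IO;

static class ReadOnlyFlagDemo
{
    // Set or clear the read-only attribute on a file; returns the new state.
    public static bool SetReadOnly(string path, bool readOnly)
    {
        FileAttributes attrs = File.GetAttributes(path);
        attrs = readOnly ? attrs | FileAttributes.ReadOnly
                         : attrs & ~FileAttributes.ReadOnly;
        File.SetAttributes(path, attrs);
        return (File.GetAttributes(path) & FileAttributes.ReadOnly) != 0;
    }

    static void Main()
    {
        // Hypothetical file, standing in for a customized helper class.
        string path = Path.Combine(Path.GetTempPath(), "DBUtils.cs");
        File.WriteAllText(path, "// customized helper");
        Console.WriteLine(SetReadOnly(path, true));  // True
        SetReadOnly(path, false); // clear it again so the temp file can be deleted
        File.Delete(path);
    }
}
```

In practice you would simply toggle the flag in Windows Explorer, but this is the same attribute the generator checks before overwriting a file.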
PROJECT OUTPUT KEY
[icon 1] = Overwritten by the generator each time, rarely changed by hand.
[icon 2] = Initially generated once, but heavily developer customized.
FOLDERS

CollectionClasses: This folder contains the classes of all your strongly-typed collections. You will not modify anything inside this folder.

DaoClasses: This folder contains objects that manage database-related options behind the scenes. For instance, these classes take entity objects and use the dynamic query engine to make the SQL statements, as well as perform the actual query.

EntityBaseClasses: These are the data-layer entity objects. All of these entities have a suffix of “-base” in their names. You will never call these objects directly or change anything in this code.

EntityClasses: This folder contains your business object shells. You will spend most of your time here extending these classes. If you want to save your work, be sure to make a copy of this folder!

FactoryClasses: These classes contain factories which will help you create criteria for queries and objects that will define how you want items sorted. We will be using these classes, but not modifying them.

HelperClasses: These classes are called by the data layer. These classes create connections to the database, provide transaction support, and define the default values of types, among other actions. If you want to modify how your connection string is read when a database connection is opened, you can modify DBUtils.cs. You can also modify TypeDefaultValue.cs if you would like to specify different default values for .NET types. For instance, if you would rather have a null integer return the minimum possible integer value instead of 0, you can make that change here. If you do make changes here, remember to back them up, and to set the read-only flag to true, as these files are normally overwritten by the generator.

RelationClasses: These classes contain definitions for all of the relations that exist between each entity. You will never modify these classes.

StoredProcedureCallerClasses: These classes expose the stored procedures that we selected. They are defined in only two files: ActionProcedures.cs and RetrievalProcedures.cs. If you want more procedures accessible in your code, do not add them here manually! It is better to add them to your LLBLGen Pro project and regenerate your code.

TypedListClasses: This folder contains all the typed lists we added to our project. Every typed list will have its own class in its own file. Again, if you would like a new one, make it in your LLBLGen Pro project.

TypedViewClasses: Like the typed lists, typed views are inside this folder—one per file. Add new views in your LLBLGen project.

ValidatorClasses: Validator classes are shells that LLBLGen Pro generates to help you implement field validation. These classes allow you to accept or reject changes when the properties of your entities are changed. Each entity has its own validator class in its own file. If you choose to take advantage of field validation, these files will be heavily modified.
FILES

There are two more files in the root of your AW.Data project that you should be aware of:

AssemblyInfo.cs: If you want to add more information about your assembly, such as your company’s name or version information, you can add it to this file.

ConstantsEnums.cs: This file contains an index of every field in every table/entity, typed list, and typed view in your project. Instead of typing in a static string as you work with your objects, you should always use these enumerators. This index is regenerated when your code is regenerated. If you consistently reference this index, you will notice when your schema changes.

In the next chapter, we will begin building the first form of our Windows application.
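The payoff of using the generated index instead of strings can be sketched in plain C# (the enum below is illustrative, NOT the real generated index in ConstantsEnums.cs): an enum member fails at compile time when the schema changes, while a hard-coded string fails only at runtime.

```csharp
using System;

// Illustrative stand-in for the generated field index — not LLBLGen's enum.
enum EmployeeFieldIndex { EmployeeId, ManagerId, Title }

static class FieldIndexDemo
{
    // Resolve a column name from the enum rather than a hard-coded string.
    public static string ColumnName(EmployeeFieldIndex field)
    {
        return field.ToString();
    }

    static void Main()
    {
        // If ManagerId were renamed in a regenerated index, this line would
        // stop compiling; the string "ManagerId" would only break at runtime.
        Console.WriteLine(ColumnName(EmployeeFieldIndex.ManagerId)); // ManagerId
    }
}
```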
4
MDI Parent Form

“When I was a kid my parents moved a lot, but I always found them.”
Rodney Dangerfield (1921-2004)

Chapter Goals
• Use an MDI parent form to launch other forms in your application.
• Add three blank child forms.
• Add menus and event handlers to launch the correct form when clicked.
• Configure a menu item to list all open child forms.

In this chapter, we will put together the shell of our database application. There are many different approaches to building a graphic user interface, and the .NET framework gives a user interface designer many options. We will be sticking with an approach that is simple and extendable.
MDI Parent

Our application will eventually have many forms to search and input data, so we will use the multiple-document interface features of Visual Studio. The first form that we will create will hold all of the other forms we will design in later chapters. This main form is known as the MDI parent. Because we want to give good, descriptive names to all of our objects in the project, we need to delete the form that was created for us and add our own form that will become the new MDI parent. In the AW.Win project, delete Form1.cs. Right-click on the AW.Win project and select Add > New Item. Choose Windows Form and give it the name frmMain.cs. Open the Program.cs file. Change the Main method to call our new frmMain form when the application starts (Example 4.1).

1 static void Main()
2 {
3     Application.EnableVisualStyles();
4     Application.SetCompatibleTextRenderingDefault(false);
5     Application.Run(new frmMain());
6 }

Example 4.1. Updated Main() method
The Main() method in the Program.cs file specifies which form will start when the application launches. Line 5 is the only line that needs to change. We also want to change some of the properties of the frmMain object to make the form a little more attractive.
Chapter 4
While selected on frmMain, look at the Properties Window, and change the following properties of the form to the values listed in Table 4.2.

Property          Value
IsMdiContainer    True
StartPosition     CenterScreen
Text              Adventure Works
WindowState       Maximized

Table 4.2. Properties to set for form frmMain
Setting the IsMdiContainer to true will allow us to add other forms to this form as MDI children. As you set this property, you will notice the background and border change slightly to indicate that it is now a container for other forms (Figure 4.3).
Figure 4.3. Form frmMain with IsMdiContainer set to true
Now we will build the menu structure for our application. Drag a MenuStrip object onto the form and add menus and sub-menus to give you the structure in Figure 4.4. The “&” symbol will underline the next letter in the menu name and allow users to press Alt plus that letter to activate the item.
Figure 4.4. Menu structure for frmMain
Now frmMain should look like Figure 4.5.
Figure 4.5. Form frmMain with menus
Select the MenuItem you created that is labeled Window. Verify that the name of this object is windowToolStripMenuItem. Select the MenuStrip object (the entire menu object that contains all the other menus), and set the MdiWindowListItem property to windowToolStripMenuItem. By setting the MdiWindowListItem property, any MDI child forms of frmMain will automatically be added to the Window menu and removed when the child form is closed. In previous versions of Visual Studio, you would have to code these actions by hand. Now we will create an event handler that will close the application when the user clicks the Exit menu. Double-click the Exit menu object. Visual Studio should create the event handler for the menu’s Click event, and switch to code view. Add the code in Line 3 to the exitToolStripMenuItem event handler (Example 4.6).

1 private void exitToolStripMenuItem_Click(object sender, EventArgs e)
2 {
3     this.Close();
4 }

Example 4.6. Menu exitToolStripMenuItem’s Click event handler
Now we have a fully functioning MDI parent form that we can use as a launching pad for our other forms. Let’s compile and test the application to see how it functions. Press F5 to compile and launch the application. At this point, the only actions you can perform in the application are to browse the menus and close the program. It is not much, but it is a great start! Go ahead and exit the application, and we will add a few more forms before we finish this chapter.

Property          Value
FormBorderStyle   FixedToolWindow
ShowInTaskbar     False
StartPosition     CenterParent
Text              (frmOrderSearch) Order Search
                  (frmOrderEdit) Order Edit
                  (frmCustomers) Customer List
                  (frmOrganization) Adventure Works Organization
                  (frmVacationBonus) Vacation Bonus Utility

Table 4.7. Properties to change on the five new child forms
Add five new Windows forms named frmOrderSearch, frmOrderEdit, frmCustomers, frmOrganization, and frmVacationBonus. Set the properties listed in Table 4.7 for each of the new windows you just created. Now we will create event handlers for the appropriate menus and launch the corresponding form. Switch to the design view of frmMain and double-click on all the menus to automatically create the Click event handlers for each one. Add the section of code from Example 4.8 below the code you added in Example 4.6. Remember that double-clicking on the menu will both create the menu’s Click event handler and associate that event handler with the menu item in the menu strip. If you add the code manually in code view, make sure you also set the menu’s Click event to point to the correct event handler. Otherwise, the event handler will not run when you click the menu.

1  public void LaunchChildForm(Form ChildForm)
2  {
3      ChildForm.MdiParent = this;
4      ChildForm.WindowState = FormWindowState.Normal;
5      ChildForm.Show();
6  }
7  private void ordersToolStripMenuItem_Click
8      (object sender, EventArgs e)
9  {
10     LaunchChildForm(new frmOrderSearch());
11 }
12 private void customersToolStripMenuItem_Click
13     (object sender, EventArgs e)
14 {
15     LaunchChildForm(new frmCustomers());
16 }
17 private void organizationToolStripMenuItem_Click
18     (object sender, EventArgs e)
19 {
20     LaunchChildForm(new frmOrganization());
21 }
22 private void vacationBonusUtilityToolStripMenuItem_Click(
23     object sender, EventArgs e)
24 {
25     LaunchChildForm(new frmVacationBonus());
26 }

Example 4.8. MenuItem’s Click event handlers and LaunchChildForm() subroutine
In the code above we simply created event handlers for our four menu options. Each handler instantiates the correct form and calls a subroutine, passing it the newly created form. In Line 3, the subroutine sets the MdiParent property to the current form (in this case frmMain), makes sure the WindowState property of the form is Normal (not Maximized or Minimized), and then displays it to the user. When you compile and run the application now, you should be able to bring up each one of the blank forms and get a list of all the open windows under the Window menu (Figure 4.9). But what happens if you select the same menu item more than once? We will make one final improvement to this parent form before we move on. The way our code is written in Example 4.8, if a user selects a particular
Figure 4.9. Form frmMain with working menus
menu option multiple times, we will create multiple copies of the form. Let’s rewrite the subroutine so that it checks to see if any forms of that type are already open, and if so, brings the existing form to the front. Modify the LaunchChildForm() subroutine we created in Example 4.8 to match Example 4.10.

1  public void LaunchChildForm(Form ChildForm)
2  {
3      bool FormAlreadyExists = false;
4      foreach (Form myForm in this.MdiChildren)
5      {
6          if (myForm.GetType() == ChildForm.GetType())
7          {
8              FormAlreadyExists = true;
9              ChildForm = myForm;
10             break;
11         }
12     }
13     if (FormAlreadyExists == true)
14     {
15         ChildForm.BringToFront();
16     }
17     else
18     {
19         ChildForm.MdiParent = this;
20         ChildForm.WindowState = FormWindowState.Normal;
21         ChildForm.Show();
22     }
23 }

Example 4.10. Updated LaunchChildForm() subroutine
In Lines 4-12 we loop through all the forms in the MdiChildren array, searching for any that are the same type as the form that was passed to us. If we find one, we set a flag to true, assign the existing form to our form variable, and exit the loop. In Line 13, if the flag has been set, we bring the child form to the front. If the flag is false, we assign the form to be a child of this form, set the WindowState property, and display it to the user. In this chapter we set up the basic shell of our application. In the next chapter, we will begin configuring some simple forms to display the results of stored procedures, typed views, and typed lists.
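The heart of the updated subroutine is the GetType() comparison: it matches only the exact runtime type, which is why one open copy of a form blocks a second copy of the same form but not a different form. A minimal sketch of that behavior (stub classes standing in for the real WinForms types, so the snippet has no WinForms dependency):

```csharp
using System;
using System.Collections.Generic;

class ChildStub { }                    // stands in for Form
class OrderSearchStub : ChildStub { }  // stands in for frmOrderSearch
class CustomersStub : ChildStub { }    // stands in for frmCustomers

static class TypeMatchDemo
{
    // Mirrors the foreach loop in LaunchChildForm: return the already-open
    // instance whose runtime type equals the candidate's, or null if none.
    public static ChildStub FindExisting(List<ChildStub> open, ChildStub candidate)
    {
        foreach (ChildStub f in open)
        {
            if (f.GetType() == candidate.GetType())
                return f;
        }
        return null;
    }

    static void Main()
    {
        List<ChildStub> open = new List<ChildStub>();
        open.Add(new CustomersStub());
        Console.WriteLine(FindExisting(open, new CustomersStub()) != null);   // True
        Console.WriteLine(FindExisting(open, new OrderSearchStub()) != null); // False
    }
}
```

Note that even though both stubs share a base class, GetType() does not treat them as equal; only another instance of the same concrete class matches.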
5
Simple Forms

“The secret to creativity is knowing how to hide your sources.”
Albert Einstein (1879-1955)

Chapter Goals
• Execute two stored procedures and display the results.
• Extend the business logic layer with a simple query.
• Learn to use SortExpression and SortClause objects.
• Recreate the logic of the stored procedure using entity objects.
• Consume a view as a typed view and as an entity.
• Use Relation objects to make joins between tables.
• Format DataGridView columns.
• Learn to use a typed list.

We are now ready to use our LLBLGen Pro classes and create a few functional forms. We will begin with a stored procedure since it is one of the easiest database objects to use and understand.
Calling a Stored Procedure

In our first form, we will be constructing an organizational chart based on the information in the Employee and Contact tables. We will start out by using two stored procedures to fetch data and display the results in a TreeView control. Later, we will try a different method of retrieving the same information. First we need to set up the form where we will be displaying the data. Open the design view of frmOrganization.cs. In the properties of the form, set the Width to “300” and the Height to “600”. Lay out the form according to Figure 5.1 and Table 5.2.
Figure 5.1. Form frmOrganization control layout (cropped)
Chapter 5
Control Type   Name             Properties to set
1 ComboBox     cbEmployee
2 Button       btnSearch        Text: Search
3 Panel        pnlTop           Dock: Top (Send to back)
4 TreeView     tvOrganization   Dock: Fill (Send to Front)

Table 5.2. Form frmOrganization controls to configure
The stored procedures that we will be using are uspGetEmployeeManagers and uspGetManagerEmployees, which we added in Chapter 2. Figures 5.3 and 5.4 show the sample output for those stored procedures based on entering a parameter of “158”.
Figure 5.3. Stored Procedure uspGetEmployeeManagers results (Input: EmployeeID = 158)
Figure 5.4. Stored Procedure uspGetManagerEmployees results (Input: EmployeeID = 158)
We would like to take these results and display them in the TreeView control. Notice that in both cases, we have to take a flat table and convert it into a meaningful hierarchy. This will involve some recursion, which can make the code difficult to read. Try not to get bogged down in the recursion, but instead focus on how a stored procedure is accessed and the data is used. Switch to code view, and add the code in Example 5.5 to the top of the class to import namespaces.

1 using AW.Data;
2 using AW.Data.EntityClasses;
3 using AW.Data.CollectionClasses;
4 using AW.Data.HelperClasses;
5 using SD.LLBLGen.Pro.ORMSupportClasses;

Example 5.5. Namespaces to import on form frmOrganization
We will use these namespaces throughout the code in this book, so do not forget to import them on other forms. The AW.Data namespace contains all the generated classes. The AW.Data.EntityClasses namespace contains all the tables we have mapped as entities. The AW.Data.CollectionClasses namespace contains all the collection classes that hold specific types of entities. The AW.Data.HelperClasses namespace allows us to write natural language queries. And finally, the ORMSupportClasses namespace contains common LLBLGen Pro objects. Now, add the subroutine in Example 5.6 to frmOrganization in code view.
1  private void CreateNode(DataRow row, TreeNodeCollection Nodes)
2  {
3      TreeNode Node = new TreeNode();
4      Node.Text = row["LastName"] + ", " + row["FirstName"]
5          + " [" + row["EmployeeID"] + "]";
6      Node.Tag = row["EmployeeID"];
7      Nodes.Add(Node);
8      DataRow Children = GetChildRow(row, row.Table);
9      if (Children != null)
10         CreateNode(Children, Nodes[0].Nodes);
11 }
12 private DataRow GetChildRow(DataRow row, DataTable table)
13 {
14     if (!(table.Rows.IndexOf(row) - 1 < 0))
15     {
16         return table.Rows[table.Rows.IndexOf(row) - 1];
17     }
18     else
19         return null;
20 }

Example 5.6. Private methods CreateNode() and GetChildRow()
The CreateNode() method takes a DataRow and a TreeNodeCollection. It creates a node based on the information in the row and adds it to the Nodes collection. In Line 8, we call the GetChildRow() method to get the next item in the list, and in Lines 9-10, call CreateNode() recursively on the child collection of the newly created node. The GetChildRow() method finds the index of the provided row and returns the next one in the table. We are assuming the rows will be in order from lowest in the hierarchy to highest, and we will work in reverse order from the highest. Also, add the private method in Example 5.7.

1  private TreeNode FindEmployeeRecursive(
2      int EmployeeID, TreeNodeCollection Nodes)
3  {
4      TreeNode FoundNode = null;
5      foreach (TreeNode Node in Nodes)
6      {
7          if (FoundNode != null)
8              return FoundNode;
9          if (Convert.ToInt32(Node.Tag) == EmployeeID)
10         {
11             FoundNode = Node;
12             break;
13         }
14         else
15         {
16             if (Node.Nodes.Count > 0)
17             {
18                 FoundNode = FindEmployeeRecursive(
19                     EmployeeID, Node.Nodes);
20             }
21         }
22     }
23     return FoundNode;
24 }

Example 5.7. Private method FindEmployeeRecursive()
The FindEmployeeRecursive() method searches through the TreeNodeCollection provided and finds the node that corresponds to the EmployeeID provided. Finally, create an event handler for the btnSearch button by double-clicking it in design view. Add the code in Example 5.8 to the button’s Click event handler.

1  private void btnSearch_Click(object sender, EventArgs e)
2  {
3      tvOrganization.Nodes.Clear();
4      TreeNode MasterNode = new TreeNode();
5      int EmployeeID = Int32.Parse(this.cbEmployee.Text);
6      DataTable Managers = AW.Data.StoredProcedureCallerClasses.
7          RetrievalProcedures.UspGetEmployeeManagers(EmployeeID);
8      DataTable Managees = AW.Data.StoredProcedureCallerClasses.
9          RetrievalProcedures.UspGetManagerEmployees(EmployeeID);
10     if (Managers.Rows.Count > 0)
11     {
12         TreeNodeCollection ManagersCol = new TreeNode().Nodes;
13         CreateNode(Managers.Rows[Managers.Rows.Count - 1],
14             ManagersCol);
15         TreeNode CEONode = new TreeNode();
16         DataRow CEORow = Managers.Rows[Managers.Rows.Count - 1];
17         CEONode.Text = CEORow["ManagerLastName"] + ", "
18             + CEORow["ManagerFirstName"]
19             + " [" + CEORow["ManagerID"] + "]";
20         CEONode.Tag = CEORow["ManagerID"].ToString();
21         CEONode.Nodes.Add(ManagersCol[0]);
22         MasterNode = CEONode;
23     }
24     foreach (DataRow row in Managees.Rows)
25     {
26         TreeNode Manager = FindEmployeeRecursive(
27             Convert.ToInt32(row["ManagerID"])
28             , MasterNode.Nodes);
29         TreeNode Employee = new TreeNode();
30         Employee.Text = row["LastName"] + ", "
31             + row["FirstName"] + " [" + row["EmployeeID"] + "]";
32         Employee.Tag = row["EmployeeID"];
33         Manager.Nodes.Add(Employee);
34     }
35
36     tvOrganization.Nodes.Add(MasterNode);
37     tvOrganization.Nodes[0].ExpandAll();
38 }

Example 5.8. Button btnSearch Click event handler
When the Search button is clicked, we clear out all the nodes in the tree view. In Line 4, we create a TreeNode to hold the nodes we will be creating. In Line 5, we create an integer from the information that was entered in the ComboBox. In Lines 6-9, we call both stored procedures and catch the results in DataTable objects. Note that each stored procedure call is one line of code! LLBLGen Pro takes the work out of wrapping the stored procedure in code ourselves, and you have the benefit of IntelliSense as you type the name of the procedure. In this way, you do not have to worry about entering the name as a hard-coded string yourself, and possibly typing it wrong. With the results in hand, if we have more than one manager returned, we start from the last row (the highest manager), and call the CreateNode() method, which will recursively add the subordinate managers. Because the stored procedure does not actually return the highest manager (the C.E.O.) as a separate row, we have to add it ourselves in Lines 15-21, if we would like it displayed. In Line 24, we loop through all the rows in the results of the other stored procedure, which returns everyone below our EmployeeID. For each row, we find the manager node for that employee by calling the FindEmployeeRecursive() method in Lines 26-28, and add a new employee node to the Nodes property of the manager node. To finish, we add the main node to the TreeView control in Line 36, and expand all the nodes in Line 37. Compile and run the application (press F5). Select Reports > Organization from the menu. You should be able to enter “158” in the ComboBox, click Search, and get results like those in Figure 5.9.
Figure 5.9. Form frmOrganization displaying stored procedure results
The hardest part of that code was not accessing the stored procedure, but formatting it for the TreeView! Now with each EmployeeID that is entered, we get a path to the top of the organization and a path to the bottom. Note that this code will not work on EmployeeID 109, the CEO, due to the way the stored procedure returns managers. But enter anyone else and you will find his or her position within the company. Let’s review the strengths and weaknesses of this particular approach. First, it was incredibly easy to use the stored procedure logic and pass it a parameter. LLBLGen Pro automatically detects the parameters for you—more logic you do not have to code yourself. Unfortunately, the results of the stored procedure are a generic DataTable. Our underlying problem is that we do not have any clues in our data layer that reveal the schema of the results to us. That is why we needed to give an example of the results to you, so you can see the names of the columns and know the structure of the data we are processing. IntelliSense cannot help in this situation. This causes us to have to put in around 13 hard-coded strings of column names in our code (several examples in Lines 17-20 of Example 5.8) in order to access the correct values from the DataTable. These strings contribute to the brittleness of our code and greatly increase our maintenance costs in the future. We have no way of knowing if the “ManagerID” column will be there or not until we run our code. In addition, we have no easy way of updating this code should our stored procedure change. If the columns do not exist or their names change, the application will still compile and the fields will be blank at best or throw exceptions at worst. The other disadvantage of this approach is that the stored procedure can only return a flat table of data to us, even though the data that the results represent has a hierarchical structure of parent and child rows.
We thus have to do extra work in order to reconstruct the data into the structure it originally had in the database. Let’s attempt a different solution using some of the built in features of the generated framework.
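The brittleness described above is not specific to LLBLGen Pro; it is inherent to string-keyed access on any generic DataTable. The following minimal sketch (using plain ADO.NET, with made-up data rather than the AdventureWorks stored procedure results) shows why such code compiles regardless of whether the column names are right:

```csharp
using System;
using System.Data;

class UntypedAccessDemo
{
    // Builds a generic DataTable shaped like a stored procedure result.
    public static DataTable BuildResults()
    {
        var results = new DataTable();
        results.Columns.Add("EmployeeID", typeof(int));
        results.Columns.Add("ManagerID", typeof(int));
        results.Rows.Add(12, 3);
        return results;
    }

    public static int ReadManagerId(DataTable results)
    {
        // A typo such as "MangerID" here would still compile cleanly
        // and only fail with an ArgumentException at run-time.
        return (int)results.Rows[0]["ManagerID"];
    }
}
```

The compiler never checks the string "ManagerID" against anything, which is exactly the maintenance hazard the text describes.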
Chapter 5
Alternative Approach Using Entities and Relationships

Since the tables themselves have relationships that make the data hierarchical, it follows that by using the entity objects (which correspond to the tables), we will find it easier to construct the hierarchical results that we need. When we added the Employee table as an entity, LLBLGen Pro also added the relationship between the ManagerID of one employee row and the EmployeeID of another employee row. We named the relationship that provides a manager row from an employee row Manager, and the relationship that provides a collection of all the employees managed by a given manager Manages. Let's change the code to use entity objects to populate data in the form instead of the stored procedures. Comment out ALL the code in the previous example, to keep the approaches separate. You might also surround the code with a "#region Stored Procedure" and "#endregion" so you can collapse it out of sight. Add the private methods in Example 5.10.

    private TreeNode FindLowestNode(TreeNode Node)
    {
        if (Node.Nodes.Count > 0)
            return FindLowestNode(Node.Nodes[0]);
        else
            return Node;
    }

    private TreeNode MakeNode(EmployeeEntity Employee)
    {
        TreeNode MyNode = new TreeNode(Employee.Contact.LastName + ", " +
            Employee.Contact.FirstName + " [" +
            Employee.EmployeeId.ToString() + "]");
        MyNode.Tag = Employee.EmployeeId;
        return MyNode;
    }

Example 5.10. Private methods FindLowestNode() and MakeNode()
The FindLowestNode() method finds the lowest child node by looking at the first node in each collection. This is helpful when adding managers. The MakeNode() method takes an EmployeeEntity, and creates a node with the name of the employee displayed and the EmployeeID saved in the Tag property. Also add the two private methods in Example 5.11.

    private TreeNode GetManagersRecursive(EmployeeEntity Employee)
    {
        TreeNode EmployeeNode = MakeNode(Employee);
        if (Employee.ManagerId != 0)
        {
            TreeNode ManagerNode = GetManagersRecursive(Employee.Manager);
            FindLowestNode(ManagerNode).Nodes.Add(EmployeeNode);
            return ManagerNode;
        }
        return EmployeeNode;
    }

    private TreeNode GetEmployeesRecursive(EmployeeEntity Employee)
    {
        TreeNode EmployeeNode = MakeNode(Employee);
        if (Employee.Manages.Count > 0)
        {
            foreach (EmployeeEntity Subordinate in Employee.Manages)
            {
                EmployeeNode.Nodes.Add(GetEmployeesRecursive(Subordinate));
            }
        }
        return EmployeeNode;
    }

Example 5.11. Private methods GetManagersRecursive() and GetEmployeesRecursive()
The GetManagersRecursive() method finds all the managers for a given employee entity, adding each one as a node. The GetEmployeesRecursive() method finds and adds all the employees that are managed by a given employee. Both are recursive, and will navigate all the way up to the CEO, and all the way down to the bottom. Before we can run this code, we have to add a new Click event handler for btnSearch. Add the code in Example 5.12 to the btnSearch Click event handler.

     1 private void btnSearch_Click(object sender, EventArgs e)
     2 {
     3     tvOrganization.Nodes.Clear();
     4     TreeNode MasterNode;
     5     EmployeeEntity Employee = new EmployeeEntity(
     6         Int32.Parse(cbEmployee.Text));
     7     TreeNode EmployeeNode = MakeNode(Employee);
     8     if (Employee.Manages.Count > 0)
     9     {
    10         foreach (EmployeeEntity Subordinate in Employee.Manages)
    11         {
    12             EmployeeNode.Nodes.Add(GetEmployeesRecursive(Subordinate));
    13         }
    14     }
    15     if (Employee.ManagerId != 0)
    16     {
    17         TreeNode ManagersNode = GetManagersRecursive(Employee.Manager);
    18         FindLowestNode(ManagersNode).Nodes.Add(EmployeeNode);
    19         MasterNode = ManagersNode;
    20     }
    21     else
    22     {
    23         MasterNode = EmployeeNode;
    24     }
    25     MasterNode.ExpandAll();
    26     tvOrganization.Nodes.Add(MasterNode);
    27 }

Example 5.12. Modified button btnSearch Click event handler
We begin by clearing out the nodes in Line 3. In Line 4, we declare the node variable that will hold the root of all the other nodes. In Line 5, we create an instance of the employee entity with the ID that was entered into the ComboBox. Then we loop through all the subordinate employees, calling the GetEmployeesRecursive() method on each employee entity and adding the resulting node to our node collection in Lines 8-14. In Lines 15-20, we do the same for each manager.
If there are no managers, our employee entity becomes the root node in Lines 21-24. Then we expand all the nodes, and add the root node to the TreeView control. Let’s give it a try! Compile and run the application, and select Reports > Organization. Try entering a few EmployeeIDs like 12, 50, and 109. Now that we are using the entity objects instead of the stored procedure we can enter the CEO’s ID (109), and get a complete list of everyone in the company (Figure 5.13).
Figure 5.13. Form frmOrganization results using employee entities and relationships
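The downward walk that GetEmployeesRecursive() performs can be sketched independently of WinForms and LLBLGen Pro. The flat rows and IDs below are made up for illustration, not taken from AdventureWorks:

```csharp
using System;
using System.Linq;

class OrgChartSketch
{
    // Flat rows as (EmployeeId, ManagerId) pairs; 0 means no manager.
    static readonly (int Id, int ManagerId)[] Rows =
        { (109, 0), (12, 109), (50, 12), (51, 12) };

    // Walks downward from one employee, indenting each level,
    // mirroring what GetEmployeesRecursive() does with TreeNodes.
    public static string Describe(int id, int depth = 0)
    {
        var line = new string(' ', depth * 2) + id + Environment.NewLine;
        foreach (var child in Rows.Where(r => r.ManagerId == id))
            line += Describe(child.Id, depth + 1);
        return line;
    }
}
```

Calling Describe with the top ID visits every row exactly once, which is why entering the CEO's ID in the form yields the whole company.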
Changing our code to use the entity objects was not too difficult. Like the previous solution, the hardest part of this was the recursive methods, although they were simplified this time since we are using the same basic object throughout (the EmployeeEntity). Looking back through our code, we have eliminated all hard-coded strings. Instead, everything is being checked at compile-time, and the code will be much easier to maintain. If we decide to change our schema, breaking changes will be recognized at compile time after quickly refreshing our data layer (discussed in Chapter 12). As another benefit, we do not need either of the two stored procedures anymore. For those of you who are familiar with dynamic SQL and lazy loading, you know that the current code as it stands right now is grossly inefficient in the way it accesses the database and executes queries. We will revisit this example in Chapter 10, when we talk about performance. For now, we are happy that it is working. This solution can be complex because the information we need to display in the form is scattered between the employee row and the contact row. Accessing both rows is easy with our generated framework since all of the relationships are added automatically; simply using Employee.Contact.FirstName will retrieve the correct row and field from the Contacts table. Crossing over to this new row and table, however, results in another query executing on the database. If we merged the rows together in a view, we could cut down on the number of queries we would have to execute. We will demonstrate using a view shortly. Before we finish, let's make the selection of employees easier by populating the ComboBox with a list of employees. To accomplish this task we will need to add our first extension to the business logic layer. Open the EmployeeEntity.cs file in the AW.Data project in the EntityClasses folder.
Simple Forms
Import the AW.Data.HelperClasses namespace at the top of the class with the other namespaces. Find the section marked Custom Entity Code and insert the code in Example 5.14.

     1 public static EmployeeCollection GetEmployees()
     2 {
     3     RelationCollection Relations = new RelationCollection();
     4     Relations.Add(
     5         EmployeeEntity.Relations.ContactEntityUsingContactId);
     6     ISortExpression LastFirstAlpha = (ContactFields.LastName | SortOperator.Ascending)
     7         & (ContactFields.FirstName | SortOperator.Ascending);
     8     EmployeeCollection Employees = new EmployeeCollection();
     9     Employees.GetMulti(null, 0, LastFirstAlpha, Relations);
    10     return Employees;
    11 }
    12 public string EmployeeDisplayName
    13 {
    14     get
    15     {
    16         return this.Contact.LastName + ", " + this.Contact.FirstName;
    17     }
    18 }

Example 5.14. Class EmployeeEntity's GetEmployees() method and EmployeeDisplayName property
The GetEmployees() method is marked as public and static, which indicates that it is available outside the class and without instantiating a member of the class. This will make it easy to use this method later in our form. The GetEmployees() method is basically a search method that returns all employees in the database sorted first by last name and then by first name. In Line 3 we create an object known as a RelationCollection, which is a class from the ORMSupportClasses library that contains the relations necessary to perform other logic. In this method, we want to sort by using a field in another table. To do so, we need to add the ContactEntityUsingContactId relation from the EmployeeEntity class, which we do in Lines 4-5. Every relation an entity contains is available inside each entity's Relations collection. We choose the correct one based on which related fields we will be accessing later. These relations will become the JOIN statements in SQL when the query is executed. Next we create a SortExpression object in Lines 6-7. SortExpression objects simply tell the database how to sort the data. We use the ISortExpression interface, give a name to the object, and, on the other side of the equal sign, define our sorting expression. We use field objects to specify which field we want to sort by. Note that field objects are different from the field indexes that we have used earlier. Field objects are located in the HelperClasses namespace, while field indexes are enumerators available in the root namespace of the generated code. We select the proper field, type a "|", and then the correct SortOperator enumerator indicating the direction of the sort. We can specify multiple sorts by using "&". The entire expression is saved in the LastFirstAlpha SortExpression object. Next, in Line 8, we create a collection of employees.
We fill the collection in Line 9 by calling the GetMulti() method and entering null for the predicate (we will discuss predicates in the next chapter), 0 for the maximum number of items to return, our SortExpression object, and our RelationCollection object. If we do not provide a predicate, we will receive all rows in the database. In Lines 12-18, the EmployeeDisplayName property will make it easier to bind a collection of employee objects to a control, by exposing the name of the related contact in a single property. This property will keep us from having to concatenate the fields ourselves on the form. Go back to the frmOrganization form in the AW.Win project. In the form Load event, add the code in Example 5.15.
    1 private void frmOrganization_Load(object sender, EventArgs e)
    2 {
    3     cbEmployee.DataSource = EmployeeEntity.GetEmployees();
    4     cbEmployee.DisplayMember = "EmployeeDisplayName";
    5     cbEmployee.ValueMember = EmployeeFieldIndex.EmployeeId.ToString();
    6 }

Example 5.15. Form frmOrganization Load event handler
In the Load event of the form, we call the GetEmployees() method, and bind the result to the ComboBox. Then we set the DisplayMember and ValueMember to display the correct properties. Notice that we use the field index in Line 5 instead of a string, so we will be notified if the schema changes. Remember that the property in Line 4 is the calculated extension property of our entity that we added in Example 5.14 and is not a real property in our database. Therefore, the name of the property will not be in our field index. Change Lines 5-6 of Example 5.12 to the following:

    EmployeeEntity Employee = new EmployeeEntity(
        Convert.ToInt32(cbEmployee.SelectedValue));

This change will ensure that we get the EmployeeID from the correct location in the control now that we are populating the control with the employee names. Run the application and open the form. You will get the results in Figure 5.16.
Figure 5.16. Form frmOrganization with working ComboBox control
Now that we have extended the business logic layer, the form is more intuitive to use! We will use the principle of adding logic to the business layer as often as we can to make the code in our user interface as brief as possible. Now we can move on to demonstrating how to work with a view.
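As an aside, the ordering that the LastFirstAlpha SortExpression asks the database to produce (last name, then first name, both ascending) behaves like a LINQ OrderBy/ThenBy over in-memory objects. This is a rough plain-C# analogue with an illustrative Contact record, not LLBLGen Pro code; the real sort happens in SQL on the server:

```csharp
using System;
using System.Linq;

class SortAnalogue
{
    public record Contact(string LastName, string FirstName);

    // Last name ascending, then first name ascending -- the same
    // ordering LastFirstAlpha requests from the database.
    public static Contact[] SortContacts(Contact[] contacts) =>
        contacts.OrderBy(c => c.LastName)
                .ThenBy(c => c.FirstName)
                .ToArray();
}
```

The "&" in the SortExpression plays the same role ThenBy plays here: it appends a secondary sort key rather than replacing the first.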
Using Typed Views

In Chapter 2, we added a view named vIndividualCustomer to our LLBLGen Pro project. This view is designed to make it easier to access customer data by consolidating data from the Customer, Individual, Contact, CustomerAddress, AddressType, StateProvince, and CountryRegion tables into one flat table. Before we can use the typed view, we need to set up our frmCustomers form. Open the design view of frmCustomers.cs. In the properties of the form, set the Width to "800" and the Height to "400". Drag a DataGridView to the form and change the name to "dgvResults". Set the Dock property of dgvResults to "Fill".
Your form should now resemble Figure 5.17.
Figure 5.17. Form frmCustomers with blank DataGridView control
Double-click on the form's title bar (not the DataGridView) to have Visual Studio automatically create the form's Load event handler. Import the namespaces in Example 5.5, along with the AW.Data.TypedViewClasses namespace, which contains all of our typed view classes. Add the code in Example 5.18 to the form's Load event.

    private void frmCustomers_Load(object sender, EventArgs e)
    {
        CustomerViewTypedView Customers = new CustomerViewTypedView();
        Customers.Fill();
        dgvResults.DataSource = Customers;
    }

Example 5.18. Form frmCustomer Load event handler
The code in this form is brief. We simply instantiate the typed view class, then call the Fill() method, with no parameters. This code will retrieve all the rows in the view. The results are then bound to the DataGridView. Compile and run the application, and select Reports > Customers from the menu.
Figure 5.19. Form frmCustomer typed view results
Figure 5.20. Adding a CustomerViewTypedView typed DataTable from the Toolbox
You should get results like Figure 5.19. Not bad for only 3 lines of code! Try clicking on a column header and you will notice that you can sort the items in the grid. Sorting the columns is a nice feature to have, and it did not cost us any extra work. A disadvantage to these 3 lines of code, however, is that our columns are being created automatically and are not taking advantage of the space. Some columns are too large for the data they contain and some are too small. Normally, we would not want to show every column in the view but select a subset of columns to display. We need to set up our columns in advance instead of creating them automatically. If we were working with a stored procedure, setting up columns in advance would be difficult, as we would have no idea of the contents of a generic DataTable until run-time. With this typed view, however, we are working with a typed DataTable, meaning that we know exactly which columns are in the view right now, and can use them to help set up our DataGridView. In design view of frmCustomers, look in the Toolbox (normally on the left side of the screen). Locate the CustomerViewTypedView DataTable (Figure 5.20).

Figure 5.21. Form frmCustomer's component tray with CustomerViewTypedView instance
Figure 5.22. Setting the DataSource property of DataGridView dgvResults
Click to highlight the CustomerViewTypedView object (your icon will change into a gear) and then click anywhere in your form. The item will be added to the component tray (Figure 5.21). When you have a data item like this sitting in the component tray, you can use it to set up your controls visually. Change the DataSource property of your DataGridView to point to customerViewTypedView1 (Figure 5.22). Now you should see a lot of columns in your DataGridView!
Figure 5.23. Columns left in DataGridView dgvResults
Right-click on the DataGridView and select Edit Columns. Remove columns until you are left with those in Figure 5.23. Configure the properties of each column according to Table 5.24.

    ColumnName          HeaderText   AutoSizeMode   DefaultCellStyle
    CustomerId          ID           AllCells
    FirstName           First        AllCells
    LastName            Last         AllCells
    EmailAddress                     Fill
    StateProvinceName   State        AllCells
    CountryRegionName   Country      AllCells

Table 5.24. DataGridView dgvResults columns to configure
Compile and run the application, selecting Reports > Customers. The grid should now look like Figure 5.25. Using the view from our database in order to display the results was easy; most of the time we spent was in configuring the columns to be displayed. Having strongly-typed results, however, allowed us to perform this configuration visually, saving as much time as possible and yet still returning predictable results. We are left, however, with one problem that might not be readily apparent. We have already mentioned how hard-coded strings in your code are fragile and unmaintainable. Although we have not created any of these directly, as Visual Studio was creating the code necessary to display our DataGridView, it created some on our behalf. Luckily, we can do a search to find these maintainability nightmares before they cause us trouble in the future. Go to Edit > Find and Replace > Find in Files (Ctrl + Shift + F). Enter criteria as shown in Figure 5.26, and press Find All.
Figure 5.25. Form frmCustomer results with formatted DataGridView columns
You should find 6 occurrences in the frmCustomers.Designer.cs file (one for each of the columns in the DataGridView). Double-click on a search result to automatically open the frmCustomers.Designer.cs file. Change every DataPropertyName value that contains a string to the corresponding field index. For example:

    this.customerIdDataGridViewTextBoxColumn.DataPropertyName = "CustomerId";

...should be changed to...

    this.customerIdDataGridViewTextBoxColumn.DataPropertyName =
        AW.Data.CustomerViewFieldIndex.CustomerId.ToString();
Figure 5.26. Find in Files criteria to locate hard-coded strings
If we use our field indexes instead of hard-coded strings, we will be alerted later if the schema changes, because the change would alter the field index and keep our code from compiling. Visual Studio has no way of knowing the field indexes are available, so we will have to code changes like this by hand. Making these minor changes may seem like a hassle now, but will make our code much more maintainable for the future. From here forward, after you configure DataGridView columns visually, you should do a search like the one above and change any hard-coded strings that you find. After making the changes to each column's DataPropertyName, compile and run the application again. Open the form to make sure the DataGridView still displays correctly. Now we will take a look at using a view as an entity.
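The compile-time safety argument can be seen in miniature below. The enum here is a hypothetical stand-in for the generated CustomerViewFieldIndex enumerator, not the actual generated code:

```csharp
class FieldIndexSketch
{
    // A stand-in for the generated CustomerViewFieldIndex enumerator.
    public enum CustomerViewFieldIndex { CustomerId, FirstName, LastName }

    // Using the enum member means a rename breaks the build right here,
    // while the literal string "CustomerId" would silently go stale.
    public static string DataPropertyName() =>
        CustomerViewFieldIndex.CustomerId.ToString();
}
```

Enum.ToString() yields the member name at run-time, so the bound property name stays synchronized with the one identifier the compiler checks.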
Using Views as Entities

In Chapter 2, we added the same view from our database to our project using two different methods. The first was just as a typed view. We also added it as an entity under the name CustomerViewRelated. Added as an entity, a view can contain relations (one of which we supplied in Chapter 2). We will use one of these relations now. Go back to frmCustomers in code view. Modify the frmCustomers Load event handler with the code in Example 5.27.

    1 private void frmCustomers_Load(object sender, EventArgs e)
    2 {
    3     SalesOrderHeaderEntity Order = new SalesOrderHeaderEntity(46117);
    4     CustomerViewRelatedEntity CustomerView = Order.CustomerViewRelated;
    5     CustomerViewRelatedCollection CustomerCollection =
    6         new CustomerViewRelatedCollection();
    7     CustomerCollection.Add(CustomerView);
    8     dgvResults.DataSource = CustomerCollection;
    9 }

Example 5.27. Form frmCustomers modified Load event handler using CustomerViewRelatedEntity
In this example, we use the view from the database as an entity (like a table) instead of as a typed view. Since an entity can have relationships, we first instantiate an order entity in Line 3. We then retrieve the corresponding CustomerViewRelatedEntity in Line 4, using the CustomerViewRelated property. Because we added this relationship between the view and the SalesOrderHeader table in Chapter 2, we save the time of having to narrow down the rows in the view ourselves. We only have a single entity, however, so in order to display the data in the DataGridView, we create a collection in Lines 5-6, and add the entity to it in Line 7. Then we bind the collection to the DataGridView in Line 8. Compile and run the application. Opening the form from the menu should give you the result in Figure 5.28.
Figure 5.28. Form frmCustomer results using CustomerViewRelatedEntity
This example is not the most useful, since we hard-coded the OrderID! It does serve as an example of why you might choose to add a view as an entity instead of a typed view. The relationships make it easy to navigate between the tables and filter the results. You also have the ability to use this view just like you would a normal entity to make INSERTs, UPDATEs, and DELETEs, with changes going to the correct tables that make up the view (editing, saving, and deleting entities is discussed in later chapters). A view entity is different from a typed view, whose results are read-only. If you have relationships that you could define from the columns of your view, adding the view as an entity would maximize your options. On the other hand, if you know the item will just be a stand-alone, read-only object, then adding it as a typed view would be the best solution. Next we will put our typed list to work and display it on a form.
Using Typed Lists

In Chapter 2, we added a typed list named CustomerList to our LLBLGen Pro project that almost exactly mirrors the results of the view we have been using. We will now use this typed list instead of the view. In the frmCustomers form in code view, import the namespace AW.Data.TypedListClasses at the top of your code. Modify the frmCustomers Load event handler with the code in Example 5.29.

    private void frmCustomers_Load(object sender, EventArgs e)
    {
        CustomerListTypedList Customers = new CustomerListTypedList();
        Customers.Fill();
        dgvResults.DataSource = Customers;
    }

Example 5.29. Form frmCustomer modified Load event handler using CustomerListTypedList
Here we instantiate the typed list, call the Fill() method, and bind it to the DataGridView. The typed list returns a strongly-typed DataTable. Like a typed view, a typed list is read-only. Running the application and displaying the form will give you the same results as Figure 5.25. Why, then, would one use a typed list instead of a typed view or a view added as an entity? A typed list is an object we defined in our LLBLGen Pro project, and does NOT correspond to any real object in our database. With a typed list, we have the benefits of a strongly-typed set of results without the hassle of creating a view in the database. Unlike a view that is added as an entity, however, no relationships with other entities are possible.
Summary We covered quite a few topics in this chapter. First, we used several stored procedures to make a hierarchy of managers and employees in the company. We discussed the disadvantages of using the stored procedure and achieved the same results using entity objects and navigating through the relationships between them. We also wrote our first dynamic query that used a field in another table via a relationship and sorted the data from the database. Then we worked with a view of customers that consolidates information from several tables. We explored the difference between a view added as a typed view versus a view added as an entity. We also examined a typed list that mirrored our view almost exactly but was merely an object we created in our LLBLGen Pro project. In the next chapter, we will create a search form that will run some complex queries in our database and learn to work with predicate objects and predicate expressions.
6: Search Form

"I find that a great part of the information I have was acquired by looking up something and finding something else on the way." Franklin P. Adams (1881-1960)

Chapter Goals
• Design the user interface for a search page.
• Write a search method that takes many kinds of optional parameters.
• Learn to work with Predicate and PredicateExpression objects.
• Use Relation objects to make joins between tables.
• Enable auto-completion on ComboBox controls.

We used simple dynamic queries in the last chapter to retrieve records from the database. In this chapter we will write a more complex query and design a new form to display the results.
Order Search Form

In this chapter, we will create a form that will allow a user to search through the SalesOrderHeader table using a variety of parameters. We want to give the user maximum flexibility while keeping our code as clean, compact, and reusable as possible. Before we can set up the form, we need to add an extension to the business logic layer in order to make binding easier on the DataGridView we will be using. Open the SalesOrderHeaderEntity.cs file in the AW.Data project. Add the code in Example 6.1 inside the region marked for custom entity code. Right-click on AW.Data and select Build.

    public string CustomerLastName
    {
        get { return this.CustomerViewRelated.LastName; }
    }
    public string CustomerFirstName
    {
        get { return this.CustomerViewRelated.FirstName; }
    }
    public string CustomerCity
    {
        get { return this.CustomerViewRelated.City; }
    }
    public string CustomerState
    {
        get { return this.CustomerViewRelated.StateProvinceName; }
    }
    public string CustomerCountry
    {
        get { return this.CustomerViewRelated.CountryRegionName; }
    }
    public string CustomerZip
    {
        get { return this.CustomerViewRelated.PostalCode; }
    }

Example 6.1. Class SalesOrderHeaderEntity new public properties
Chapter 6
Figure 6.2. Form frmOrderSearch TableLayoutPanel controls column and row measurements
Instead of having to retrieve the related customer record for each order manually in the GUI, we are exposing the information here as a property of the order business object itself. That way, when we bind this record to the DataGridView, we will be able to bind to these properties. We also could have added these properties in our LLBLGen Pro project as fields on related entities and they would already be present in our class. We will do this when we regenerate our code in Chapter 12. Now we can set up the Order Search form and its controls. Open the frmOrderSearch.cs file in design view. Set the form's Width to "700" and the Height to "600". Change the FormBorderStyle property to SizableToolWindow. We are now going to use some of the Windows controls that are new in .NET 2.0. One of these is called the TableLayoutPanel. It is meant to function like tables in HTML. You can set the widths and heights of the columns and rows with percentages or absolute values. If the values are percentages, the rows will grow and shrink as the container is resized. If you are familiar with HTML tables, TableLayoutPanel controls are easy to use and understand. We will be adding two of them to our form. Figure 6.2 is a diagram that will help you design your layout tables with the correct widths. Be sure to set both of the TableLayoutPanel controls' Dock property to Fill for the areas in which they reside. Add two TableLayoutPanel controls to your form. Configure the Columns and Rows collection of each to match Figure 6.2. Lay out the form to resemble Figure 6.3. Create and name your controls according to the chart in Table 6.4. (Hint: Set the Dock property of every control to Fill to have it fill the cell in which it resides. Also set the TextAlign property of the Labels to MiddleRight or MiddleCenter, depending on your preference.) On the DataGridView, set the properties to the values listed in Table 6.5. Drag a SalesOrderHeaderCollection object from the Toolbox to the form.
Set the DataSource of the DataGridView to the new SalesOrderHeaderCollection object in the component tray. Configure the columns for dgvResults to include only the columns in Table 6.6.
Figure 6.3. Form frmOrderSearch control layout
Now that we have configured the controls on the form, we can begin adding the logic necessary to perform the searching. For maximum reuse, we will be adding most of the logic to the business logic layer. Go back to the SalesOrderHeaderEntity.cs file in the AW.Data project.

         Control Type     Name          Properties to set
    1    DateTimePicker   dtpDateFrom   Checked = False; Format = Short; ShowCheckBox = True
    2    DateTimePicker   dtpDateTo     Checked = False; Format = Short; ShowCheckBox = True
    3    TextBox          tbFirstName
    4    TextBox          tbLastName
    5    TextBox          tbOrderID
    6    TextBox          tbCity
    7    ComboBox         cbState
    8    ComboBox         cbCountry
    9    TextBox          tbZip
    10   Button           btnSearch     Text = "Search"
    11   DataGridView     dgvResults    Dock = Fill

Table 6.4. Form frmOrderSearch controls to configure
    Property                      Value
    AllowUserToAddRows            False
    AllowUserToDeleteRows         False
    AllowUserToResizeColumns      False
    AllowUserToResizeRows         False
    BackgroundColor               White
    ColumnHeadersHeightSizeMode   AutoSize
    ReadOnly                      True
    RowHeadersVisible             False
    SelectionMode                 FullRowSelect

Table 6.5. DataGridView dgvResults properties to configure

    ColumnName          HeaderText   AutoSizeMode     DefaultCellStyle
    SalesOrderId        ID           AllCells
    SalesOrderNumber    #            AllCells
    OrderDate           Date         AllCells         DataGridViewCellStyle { Format=d }
    CustomerLastName    Last         DisplayedCells
    CustomerFirstName   First        DisplayedCells
    CustomerCity        City         DisplayedCells
    CustomerState       State        AllCells
    CustomerZip         Zip          AllCells
    CustomerCountry     Country      AllCells
    TotalDue            Total        Fill             DataGridViewCellStyle { Format=C2 }

Table 6.6. DataGridView dgvResults columns to configure
Import the AW.Data.HelperClasses namespace at the top of the class. Add the new method from Example 6.7 in the custom entity code region underneath the code you added in Example 6.1.

     1 public static SalesOrderHeaderCollection GetSalesOrderHeaderCollection
     2 (
     3     DateTime FromDate,
     4     DateTime ToDate,
     5     string FirstName,
     6     string LastName,
     7     int OrderID,
     8     string OrderNumber,
     9     string CityName,
    10     string StateName,
    11     string CountryName,
    12     string Zip
    13 )
    14 {
    15     RelationCollection Relations = new RelationCollection();
    16     IPredicateExpression Filter = new PredicateExpression();
    17     if (
    18         (FirstName != "") |
    19         (LastName != "") |
    20         (CityName != "") |
    21         (StateName != "") |
    22         (CountryName != "") |
    23         (Zip != "")
    24         )
    25     {
    26         Relations.Add(
    27             SalesOrderHeaderEntity.Relations.
    28             CustomerViewRelatedEntityUsingCustomerId);
    29     }
    30     if (FromDate != DateTime.MinValue)
    31     {
    32         Filter.Add(
    33             SalesOrderHeaderFields.OrderDate >= FromDate);
    34     }
    35     if (ToDate != DateTime.MinValue)
    36     {
    37         Filter.Add(
    38             SalesOrderHeaderFields.OrderDate <= ToDate);
    39     }
    40     if (FirstName != "")
    41     {
    42         Filter.Add(
    43             CustomerViewRelatedFields.FirstName % FirstName);
    44     }
    45     if (LastName != "")
    46     {
    47         Filter.Add(
    48             CustomerViewRelatedFields.LastName % LastName);
    49     }
    50     if (CityName != "")
    51     {
    52         Filter.Add(
    53             CustomerViewRelatedFields.City % CityName);
    54     }
    55     if (StateName != "")
    56     {
    57         Filter.Add(
    58             CustomerViewRelatedFields.StateProvinceName == StateName);
    59     }
    60     if (CountryName != "")
    61     {
    62         Filter.Add(
    63             CustomerViewRelatedFields.CountryRegionName == CountryName);
    64     }
    65     if (Zip != "")
    66     {
    67         Filter.Add(
    68             CustomerViewRelatedFields.PostalCode == Zip);
    69     }
    70     if (OrderID != 0)
    71     {
    72         Filter.Add(
    73             SalesOrderHeaderFields.SalesOrderId == OrderID);
    74     }
    75     if (OrderNumber != "")
    76     {
    77         Filter.Add(
    78             SalesOrderHeaderFields.SalesOrderNumber == OrderNumber);
    79     }
    80     ISortExpression Sort = new SortExpression();
    81     Sort.Add(
    82         SalesOrderHeaderFields.OrderDate | SortOperator.Ascending);
    83     SalesOrderHeaderCollection Orders = new SalesOrderHeaderCollection();
    84     Orders.SupportsSorting = true;
    85     Orders.GetMulti(Filter, 100, Sort, Relations);
    86     return Orders;
    87 }

Example 6.7. Class SalesOrderHeaderEntity public method GetSalesOrderHeaderCollection()
This method is long, but it is very powerful. Let's take a look at what it does step-by-step. Just like our last method, this one is available outside the class without instantiating it by using the keywords "public static." Those keywords make the method convenient to invoke in our GUI. The method takes 10 parameters, all of which are optional and will help us narrow down the orders that we are looking for. In Lines 3-12, we define the parameters this method will take. In Lines 15-16, we create the main objects we will use to make our query. First, we create a RelationCollection that will be needed if we have to search by any parameters that require a join. Also, we create the PredicateExpression that will hold all of our criteria. A PredicateExpression is an object-oriented representation of a WHERE clause in SQL syntax. It helps filter down the rows that we are looking for. A PredicateExpression object contains Predicate objects, which represent individual expressions that each evaluate to true or false. In other words, a Predicate object is a single criterion that we use to filter our results. In Lines 17-24, we check to see if any of the parameters entered will require the join to the CustomerViewRelatedEntity, and, if so, we add the join in Lines 26-28. In Line 30, we check to see if a DateTime value was supplied by comparing it to the minimum possible DateTime value. If a real value was supplied, we create the criterion in Lines 32-33. Notice that each criterion is enclosed in parentheses. We use the SalesOrderHeaderFields object from the AW.Data.HelperClasses namespace, the comparison operator ">=" indicating greater than or equal to, and the FromDate parameter that is supplied when the method is called. In Lines 35-39, creating a predicate for the ToDate parameter is essentially the same process, except that we specify "<=" to indicate less than or equal to.
In Lines 40-44, we check to see if the FirstName parameter is an empty string, and, if not, create a predicate using “%” to indicate LIKE in SQL syntax. The last name and city predicates are created in the same way. For the state, country, and postal code, we look for an exact match. We also create predicates for the SalesOrderID and the SalesOrderNumber. You have seen the SortExpression before; in Line 80, we are sorting by date. In Line 83, we create the SalesOrderHeaderCollection object that will hold the results. In Line 84, we enable sorting in the collection so that users can click the DataGridView columns. Finally, in Line 85, we execute the query by calling the GetMulti() method. We include the PredicateExpression, the SortExpression, and the RelationCollection. The number 100 indicates that we want a maximum of 100 records. It is a good idea to put an upper limit on the number of items to return, just in case the criteria are so broad that you end up with an enormous result set. In this case, we will never end up with more than 100 rows, and, if the user entered good criteria, we will end up with far fewer.
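The shape of this dynamic query can be illustrated outside of LLBLGen Pro. The sketch below uses plain LINQ to stand in for the PredicateExpression and RelationCollection machinery; the types, sample data, and the FindOrders name are invented for illustration and are not part of the generated framework.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Conceptual sketch only: plain LINQ standing in for a PredicateExpression.
// Each optional parameter contributes one predicate (one WHERE criterion)
// only when a "real" value was supplied for it.
List<(int Id, DateTime Date, string LastName)> FindOrders(
    IEnumerable<(int Id, DateTime Date, string LastName)> orders,
    DateTime fromDate, string lastName)
{
    IEnumerable<(int Id, DateTime Date, string LastName)> query = orders;
    if (fromDate != DateTime.MinValue)            // sentinel value: no date given
        query = query.Where(o => o.Date >= fromDate);
    if (lastName != "")                           // empty string: skip this criterion
        query = query.Where(o => o.LastName.StartsWith(lastName));
    return query.Take(100).ToList();              // cap results, like GetMulti(..., 100, ...)
}

var sample = new List<(int, DateTime, string)>
{
    (1, new DateTime(2004, 1, 10), "Smith"),
    (2, new DateTime(2004, 3, 5),  "Jones"),
    (3, new DateTime(2004, 6, 20), "Smithson"),
};
Console.WriteLine(FindOrders(sample, new DateTime(2004, 2, 1), "Smith").Count); // prints 1
```

This mirrors the pattern in Example 6.7: a criterion joins the filter only when its parameter carries a usable value, and the 100-row cap protects against overly broad searches.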
Search Form
Pat yourself on the back! That took a lot of work! But because we wisely added this code to our business logic layer, we can reuse it. Now, the code we need to add in the user interface to use this logic is very short. Open the code view for frmOrderSearch in the AW.Win project. Import the namespaces from Example 5.5 at the top of the class. Add the code from Example 6.8 to the btnSearch Click event handler.

 1 private void btnSearch_Click(object sender, EventArgs e)
 2 {
 3     DateTime DateFrom = DateTime.MinValue;
 4     if (dtpDateFrom.Checked)
 5         DateFrom = dtpDateFrom.Value;
 6     DateTime DateTo = DateTime.MinValue;
 7     if (dtpDateTo.Checked)
 8         DateTo = dtpDateTo.Value;
 9     int OrderID = 0;
10     string OrderNumber = "";
11     if (tbOrderID.Text != "")
12     {
13         try
14         {
15             OrderID = Convert.ToInt32(tbOrderID.Text);
16         }
17         catch
18         {
19             OrderNumber = tbOrderID.Text;
20         }
21     }
22     this.dgvResults.DataSource =
23         SalesOrderHeaderEntity.GetSalesOrderHeaderCollection(
24             DateFrom,
25             DateTo,
26             tbFirstName.Text,
27             tbLastName.Text,
28             OrderID,
29             OrderNumber,
30             tbCity.Text,
31             cbState.Text,
32             cbCountry.Text,
33             tbZip.Text);
34 }

Example 6.8. Button btnSearch Click event handler
We first create DateTime variables and assign them the minimum date value. We only assign the value of the DateTimePicker control to the variable if its check box has been checked. By looking at the Checked property of the DateTimePicker, we know whether the user wants to use that particular date criterion or not. In Lines 11-21, we try to convert the text entered in the tbOrderID control to an integer. If it is an integer, we add it to the OrderID variable. If not, we assume it is a SalesOrderNumber and add it to the OrderNumber variable. In this way, we can use one input box but search two different columns in the database. Other than these actions, we basically pass the text from the controls directly to the method we just created in the SalesOrderHeaderEntity class. Now that we have put all of our hard work into the business logic layer, our GUI code could not be easier.
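The try/catch conversion described above can also be expressed with int.TryParse, which avoids using an exception for control flow. A standalone sketch (the ParseOrderInput helper name is invented for illustration):

```csharp
using System;

// Sketch of the one-box-two-columns idea from Example 6.8, using
// int.TryParse instead of try/catch. Exactly one element of the
// returned tuple carries the user's input.
(int OrderID, string OrderNumber) ParseOrderInput(string text)
{
    if (text == "")
        return (0, "");        // nothing entered: no criterion at all
    if (int.TryParse(text, out int id))
        return (id, "");       // numeric: treat it as a SalesOrderID
    return (0, text);          // otherwise: treat it as a SalesOrderNumber
}

Console.WriteLine(ParseOrderInput("43659"));    // numeric input
Console.WriteLine(ParseOrderInput("SO43659"));  // order-number input
```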
Chapter 6
Figure 6.9. Form frmOrderSearch search results
Now build and run the application. You should be able to search with different criteria and get results back in the DataGridView (Figure 6.9). Notice that the headers are clickable and allow you to sort the results. Bound columns in the DataGridView give us this sorting capability. Also note that because we configured the columns in advance, they are now well-spaced with user-friendly column names. Before we finish this section, let’s add auto-completion to the state and country lists. With this feature, when the user begins typing, we will give them suggestions based on the items already in the database. To implement this, we first need to construct a method for retrieving lists of countries and states in the business logic layer. Add the code in Example 6.10 to the CountryRegionEntity.cs file in the AW.Data project in the region marked for custom entity code. Also, import the AW.Data.HelperClasses namespace at the top of the class.

1 public static CountryRegionCollection GetCountryRegionCollection()
2 {
3     ISortExpression Sort = new SortExpression();
4     Sort.Add(CountryRegionFields.Name | SortOperator.Ascending);
5     CountryRegionCollection Countries = new CountryRegionCollection();
6     Countries.GetMulti(null, 0, Sort);
7     return Countries;
8 }

Example 6.10. Class CountryRegionEntity public method GetCountryRegionCollection()
Add the code in Example 6.11 to the StateProvinceEntity.cs file in the AW.Data project in the region marked for custom entity code. Also, import the AW.Data.HelperClasses namespace at the top of the class.

1 public static StateProvinceCollection GetStateProvinceCollection()
2 {
3     ISortExpression Sort = new SortExpression();
4     Sort.Add(StateProvinceFields.Name | SortOperator.Ascending);
5     StateProvinceCollection States = new StateProvinceCollection();
6     States.GetMulti(null, 0, Sort);
7     return States;
8 }

Example 6.11. Class StateProvinceEntity public method GetStateProvinceCollection()
Change the AutoCompleteMode property of the cbState and cbCountry controls to SuggestAppend and the AutoCompleteSource to ListItems. Add the code in Example 6.12 to the form’s Load event handler.

 1 private void frmOrderSearch_Load(object sender, EventArgs e)
 2 {
 3     this.cbCountry.DataSource =
 4         CountryRegionEntity.GetCountryRegionCollection();
 5     this.cbCountry.DisplayMember =
 6         CountryRegionFieldIndex.Name.ToString();
 7     this.cbCountry.ValueMember =
 8         CountryRegionFieldIndex.CountryRegionCode.ToString();
 9     this.cbCountry.Text = "";
10     this.cbState.DataSource =
11         StateProvinceEntity.GetStateProvinceCollection();
12     this.cbState.DisplayMember =
13         StateProvinceFieldIndex.Name.ToString();
14     this.cbState.ValueMember =
15         StateProvinceFieldIndex.StateProvinceId.ToString();
16     this.cbState.Text = "";
17 }

Example 6.12. Form frmOrderSearch Load event handler
Now when you build and run the application, you are given suggested options in the country and state ComboBox controls (Figure 6.13). Our end users will certainly appreciate the extra help. When dynamic queries are as easy to make as they are with our generated framework, we can quickly and reliably add user-friendly features like auto-completion that we might not ordinarily have enough time to implement. In the next chapter, we will create a form to create, edit, save, and delete entity objects in the database.
Figure 6.13. ComboBox controls cbCountry and cbState with working auto-completion
7
Edit Form

“God, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.” Reinhold Niebuhr (1892-1971)

Chapter Goals
• Extend entity classes with properties that will perform lookups on related tables.
• Lay out the Order edit form and use ToolStrip and ToolStripButton controls to add a professional look.
• Write the logic for editing and saving an order.
• Write the logic for creating a new order and deleting an existing order.

So far we have read lots of information from the database and learned how to make specific queries to find exactly what we are looking for. Even so, we have only scratched the surface of what our robust business logic and data layer can do.
Entity Extensions

The next form that we create will allow us to edit the properties of an order. Before we can lay out the form, we need to further extend our entity classes. We are going to add a few properties to the business layer that will help as we bind controls in our Order Edit form. When we pull up a row from the SalesOrderHeader table, we will also display the related rows from the SalesOrderDetail table. These rows tell us the actual items ordered. To display the name of the product that was ordered, we have to take the ProductID from the row in the SalesOrderDetail table and look it up in the Product table. Instead of looking up the product in the GUI, we will extend the SalesOrderDetailEntity object to provide the name of the product as a new property. Open the SalesOrderDetailEntity class inside AW.Data and add the code in Example 7.1.

1 public string ProductName
2 {
3     get { return this.SpecialOfferProduct.Product.Name; }
4 }

Example 7.1. Class SalesOrderDetailEntity public property ProductName
Now we can reference the product name from a SalesOrderDetail row without looking it up in the GUI. Open the ContactEntity class inside AW.Data and add the code in Example 7.2.
Chapter 7
1 public string DisplayName
2 {
3     get { return this.LastName + ", " + this.FirstName; }
4 }

Example 7.2. Class ContactEntity public property DisplayName
This code simply exposes the first and last names of the contact as a single property. Having only a single property to display helps make binding quick and easy. Now we have one more property to add. Open the CustomerViewRelatedEntity class inside AW.Data and add the code in Example 7.3.

1 public string DisplayName
2 {
3     get { return this.LastName + ", " + this.FirstName; }
4 }

Example 7.3. Class CustomerViewRelatedEntity public property DisplayName
The next extensions to add to our classes are lists of items that we will use to populate list boxes in the GUI. We will need a list of shipping methods for the form, so we will add a method to the business logic layer that grabs a collection of these objects easily. Open the ShipMethodEntity class inside AW.Data and add the code in Example 7.4. Also, import the AW.Data.HelperClasses namespace at the top of the class. Right-click on AW.Data and select Build.

1 public static ShipMethodCollection GetShipMethodCollection()
2 {
3     ISortExpression Sort = new SortExpression();
4     Sort.Add(ShipMethodFields.ShipRate | SortOperator.Ascending);
5     ShipMethodCollection Methods = new ShipMethodCollection();
6     Methods.GetMulti(null, 0, Sort);
7     return Methods;
8 }

Example 7.4. Class ShipMethodEntity public method GetShipMethodCollection()
This method returns all rows in the ShipMethod table sorted by ShipRate. Since we pass null to GetMulti() instead of a Predicate object, we will grab every row in the table.
Form Layout

Now we can begin designing our Order Edit form. Open the frmOrderEdit form in design mode. First, drag one ToolStrip and one SalesOrderDetailCollection object to your form. Size the form to a Width of “650” and a Height of “400”. Lay out two TableLayoutPanels according to Figure 7.5.
Figure 7.5. Form frmOrderEdit TableLayoutPanel controls column and row measurements
Drag controls to the form, name them, and set properties according to Figure 7.6 and Table 7.7. (Note: See Appendix 1 for instructions on locating the icons used in the ToolStripButton controls.)
Figure 7.6. Form frmOrderEdit control layout

#   Control Type     Name             Properties to Set
1   ToolStripButton  tsbSave          DisplayStyle = ImageAndText; Text = “Save”
2   ToolStripButton  tsbClose         DisplayStyle = Text; Text = “Close”
3   ToolStripButton  tsbDelete        DisplayStyle = ImageAndText; Text = “Delete”
4   DateTimePicker   dtpOrderDate     Format = Short
5   DateTimePicker   dtpDueDate       Format = Short
6   DateTimePicker   dtpShipDate      Format = Short; ShowCheckBox = True
7   CheckBox         cbOnlineOrder    Text = “Online Order”
8   TextBox          tbPurchaseOrder
9   ComboBox         cbShipMethod     DropDownStyle = DropDownList
10  TextBox          tbCustomer       Enabled = False
11  TextBox          tbContact        Enabled = False
12  TextBox          tbSubtotal
13  TextBox          tbTax            TextAlign = Right
14  TextBox          tbFreight        TextAlign = Right
15  Label            lblTotal         TextAlign = MiddleRight
16  DataGridView     dgvDetails       BackgroundColor = White;
                                      AllowUserToAddRows = False;
                                      AllowUserToDeleteRows = False;
                                      AllowUserToResizeColumns = False;
                                      AllowUserToResizeRows = False;
                                      DataSource = salesOrderDetailCollection1;
                                      Dock = Fill;
                                      ColumnHeadersHeightSizeMode = AutoSize;
                                      ReadOnly = True;
                                      RowHeadersVisible = False;
                                      SelectionMode = FullRowSelect

Table 7.7. Form frmOrderEdit controls to configure
Set up the columns for dgvDetails according to Table 7.8.

ColumnName         HeaderText     AutoSizeMode  DefaultCellStyle
ProductName        Product        AllCells
OrderQty           #              AllCells
UnitPrice          Price/Unit     AllCells      Format: C2
UnitPriceDiscount  Discount/Unit  AllCells      Format: C2
LineTotal          Total          Fill          Format: C2

Table 7.8. DataGridView dgvDetails columns to configure
Now we need to add a member variable to the form to hold the order entity being displayed, and we need to create a new constructor that takes an order entity as a parameter. Switch frmOrderEdit to code view and add the highlighted lines from Example 7.9. Be sure to import the common AW.Data namespaces from Example 5.5.
 1 namespace AW.Win
 2 {
 3     public partial class frmOrderEdit : Form
 4     {
 5         SalesOrderHeaderEntity _order;
 6         public frmOrderEdit()
 7         {
 8             InitializeComponent();
 9         }
10         public frmOrderEdit(SalesOrderHeaderEntity Order)
11         {
12             InitializeComponent();
13             _order = Order;
14         }
         ...

Example 7.9. Form frmOrderEdit member variable and constructors
In Line 5, we declare the _order variable that will hold the order entity the form will display. Lines 10-14 define the new constructor, which takes an order entity as a parameter. Notice that we always need to call the InitializeComponent() method (Line 12) if we want all the controls on our form to operate correctly. Create a Load event handler for the form and add the event handler code and other private methods from Example 7.10.

 1 private void frmOrderEdit_Load(object sender, EventArgs e)
 2 {
 3     PopulateListBoxes();
 4     PopulateOrderData();
 5     PopulateOrderDetailData();
 6 }
 7 private void PopulateListBoxes()
 8 {
 9     cbShipMethod.DataSource = ShipMethodEntity.GetShipMethodCollection();
10     cbShipMethod.DisplayMember = ShipMethodFieldIndex.Name.ToString();
11     cbShipMethod.ValueMember = ShipMethodFieldIndex.ShipMethodId.ToString();
12 }
13 private void PopulateOrderData()
14 {
15     tbPurchaseOrder.Text = _order.PurchaseOrderNumber;
16     tbSubtotal.Text = _order.SubTotal.ToString("N2");
17     tbTax.Text = _order.TaxAmt.ToString("N2");
18     tbFreight.Text = _order.Freight.ToString("N2");
19     lblTotal.Text = _order.TotalDue.ToString("N2");
20     tbContact.Text = _order.Contact.DisplayName;
21     tbCustomer.Text = _order.CustomerViewRelated.DisplayName;
22     cbOnlineOrder.Checked = _order.OnlineOrderFlag;
23     dtpOrderDate.Value = _order.OrderDate;
24     dtpDueDate.Value = _order.DueDate;
25     if (_order.ShipDate != DateTime.MinValue)
26         dtpShipDate.Value = _order.ShipDate;
27     else
28         dtpShipDate.Checked = false;
29     cbShipMethod.SelectedValue = _order.ShipMethodId;
30 }
Chapter 7
31 private void PopulateOrderDetailData()
32 {
33     this.dgvDetails.DataSource = _order.SalesOrderDetail;
34 }

Example 7.10. Form frmOrderEdit Load event handler and related subroutines
In Lines 3-5, we call the subroutines that perform all the work in the form. In the PopulateListBoxes() subroutine, we fill the ship method ComboBox control. We use the field index to supply the DisplayMember and ValueMember properties. In the PopulateOrderData() subroutine, we assign all the values from the order entity to the controls on the form. Note that in Line 16, we call the ToString() method and supply “N2” as a format string, indicating that we would like two decimal places in the number as it is written into a string. Setting dates in Lines 23-24 is easy, but because the ShipDate field can be null, in Line 25 we have to check to see if the date supplied in our entity is a real value and either assign it to the control or set the Checked property of the control to false. Finally, we need to configure frmOrderSearch so that our new form is called when a row in the search results is double-clicked. Open the frmOrderSearch form in design mode. Create a handler for the dgvResults control’s CellContentDoubleClick event and add the code in Example 7.11.

1 private void dgvResults_CellContentDoubleClick(
2     object sender, DataGridViewCellEventArgs e)
3 {
4     SalesOrderHeaderEntity Order =
5         dgvResults.Rows[e.RowIndex].DataBoundItem
6         as SalesOrderHeaderEntity;
7     frmOrderEdit frm = new frmOrderEdit(Order);
8     ((frmMain)this.MdiParent).LaunchChildForm(frm);
9 }

Example 7.11. Form frmOrderSearch DataGridView dgvResults CellContentDoubleClick event handler
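The “N2” format string used in PopulateOrderData() can be seen in isolation. One caveat worth knowing: the group and decimal separators depend on the current culture, so this sketch pins CultureInfo.InvariantCulture to make the output stable (the form's own ToString("N2") calls use the user's culture).

```csharp
using System;
using System.Globalization;

// Standalone sketch of the "N2" numeric format string: group separators
// plus exactly two decimal places (the value is rounded to fit).
decimal subtotal = 23153.2339m;
string text = subtotal.ToString("N2", CultureInfo.InvariantCulture);
Console.WriteLine(text); // "23,153.23"
```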
The only trick to this code is catching the entity. It is exposed in a row’s DataBoundItem property as a generic object. You will need to cast it back to the original entity, and then you can use it without having to retrieve it from the database again.
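The cast described above can be shown without a grid at all. DataBoundItem is typed as object, and the `as` operator recovers the concrete type, yielding null instead of throwing when the object is of some other type (the stand-in values here are illustrative only):

```csharp
using System;

// Sketch of the cast in Example 7.11: a grid row hands back its bound
// item as a plain object; "as" recovers the concrete type, returning
// null (rather than throwing) when the runtime type does not match.
object dataBoundItem = "SO43659";       // stand-in for row.DataBoundItem

string order = dataBoundItem as string; // succeeds: it really is a string
int[] wrong = dataBoundItem as int[];   // fails quietly: null, no exception

Console.WriteLine(order);               // "SO43659"
Console.WriteLine(wrong == null);       // "True"
```

This is why the handler can pass the recovered entity straight to frmOrderEdit without another database round trip: the collection bound to the grid still holds the fetched entity objects.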
Figure 7.12. Form frmOrderEdit displaying an order
Now compile and run the application. You should get something like Figure 7.12 when you search for orders and double-click on a row in the results. At this point, we have the data displayed on the control in the form, but we have not written the saving logic yet. Let’s do that now.
Saving Entity Data

To be able to save an order, we have to create a handler for the save ToolStripButton control’s Click event. We will add the code for the close button as well. Add the code in Example 7.13 to the form.

 1 private void tsbSave_Click(object sender, EventArgs e)
 2 {
 3     if (SaveData())
 4     {
 5         this.Close();
 6     }
 7 }
 8 private bool SaveData()
 9 {
10     try
11     {
12         _order.Freight = Convert.ToDecimal(tbFreight.Text);
13         _order.PurchaseOrderNumber = tbPurchaseOrder.Text;
14         _order.SubTotal = Convert.ToDecimal(tbSubtotal.Text);
15         _order.TaxAmt = Convert.ToDecimal(tbTax.Text);
16         _order.OnlineOrderFlag = cbOnlineOrder.Checked;
17         _order.OrderDate = dtpOrderDate.Value;
18         _order.DueDate = dtpDueDate.Value;
19         if (dtpShipDate.Checked)
20             _order.ShipDate = dtpShipDate.Value;
21         else
22             _order.SetNewFieldValue(
23                 (int)SalesOrderHeaderFieldIndex.ShipDate, null);
24         _order.ShipMethodId = Convert.ToInt32(cbShipMethod.SelectedValue);
25         _order.Save();
26         return true;
27     }
28     catch (Exception err)
29     {
30         MessageBox.Show(err.Message);
31         return false;
32     }
33 }
34 private void tsbClose_Click(object sender, EventArgs e)
35 {
36     _order.Refetch();
37     this.Close();
38 }

Example 7.13. Form frmOrderEdit saving and closing logic
Aside from converting strings to decimals, there is not much to the saving process. The only tricky part is Lines 19-23. We check to see if the dtpShipDate control is unchecked, and, if so, we set the field to null using the SetNewFieldValue() method in Lines 22-23. After gathering the information back into the properties of our entity, we simply call the Save() method to persist the information to the database. In Line 34, we define the event handler for the tsbClose Click event. We call the Refetch() method, which will fetch a fresh copy of the data from the database, clearing out any changes that might have been made to the entity but were not saved. Build and run the application. Now you should be able to search for an order and save changes back to the database. Note that we are not performing any extra validation in the user interface—yet. Having a form without validation is not good; therefore, we will show you how to add validation in the next chapter. For now, let’s add a way to create a new order.
Creating New Entities

Adding the ability to create a new order will only take a few more lines of code. Open the frmOrderSearch form in design mode. Drag a new ContextMenuStrip to the form; it should be named contextMenuStrip1. Create a new menu item with the text “New Order”; it should be named newOrderToolStripMenuItem. Set the ContextMenuStrip property of dgvResults to contextMenuStrip1. Create a Click event handler for newOrderToolStripMenuItem by double-clicking on it and add the code in Example 7.14.

 1 private void newOrderToolStripMenuItem_Click(object sender, EventArgs e)
 2 {
 3     SalesOrderHeaderEntity Order = new SalesOrderHeaderEntity();
 4     Order.CustomerId = 17018;
 5     Order.ContactId = 4975;
 6     Order.BillToAddressId = 14810;
 7     Order.ShipToAddressId = 14810;
 8     frmOrderEdit frm = new frmOrderEdit(Order);
 9     ((frmMain)this.MdiParent).LaunchChildForm(frm);
10 }

Example 7.14. Form frmOrderSearch menu newOrderToolStripMenuItem Click event handler
To create a new entity, we simply call the default constructor in Line 3. In our case, we also set a few properties in Lines 4-7 that are required values but not yet supported in our form. Of course, you would not do this in a real application, but for now, it is the only way we will be allowed to save the entity we create. If we do not supply values and try to save the new entity, we will get an error because those required fields are null. In Line 8, we then pass this empty entity to the form. Now we need to alter our code slightly to handle the possibility of an unsaved entity being passed to the form. Go back to frmOrderEdit in code view, and alter the form’s Load event to match the code in Example 7.15.

1 private void frmOrderEdit_Load(object sender, EventArgs e)
2 {
3     PopulateListBoxes();
4     if (_order.IsNew == false)
5     {
6         PopulateOrderData();
7         PopulateOrderDetailData();
8     }
9 }

Example 7.15. Form frmOrderEdit modified Load event handler
Now we simply check the SalesOrderHeaderEntity object’s IsNew property to see if it is a new, unsaved object. If it is, we bypass loading the data into the controls, since there is nothing to load. We do not need to modify the procedure for saving the data, however. The same code will work for updating an existing record or inserting a new one. Build and run the application. You should now be able to create a new order and save it to the database (Figure 7.16). We are still not using any validation logic, so it is easy to create an unhandled exception (by entering non-numbers in the Freight field, for example). If you fill in all the fields with valid values, however, you will be able to save the entity.
Figure 7.16. Form frmOrderEdit displaying new, unsaved order
Deleting an Entity

The last feature we need to add is the ability to delete an order. Add the code in Example 7.17 to the tsbDelete button’s Click event handler.

 1 private void tsbDelete_Click(object sender, EventArgs e)
 2 {
 3     if (MessageBox.Show(
 4         "Are you sure you want to delete this order?",
 5         "Confirm delete", MessageBoxButtons.OKCancel,
 6         MessageBoxIcon.Warning) == DialogResult.OK)
 7     {
 8         _order.Delete();
 9         this.Close();
10     }
11 }

Example 7.17. Button tsbDelete Click event handler
In Lines 3-6, we display a MessageBox to the user, confirming that he or she really intends to delete the order. If the returned DialogResult is OK, we perform the deletion. Deleting an entity is as simple as calling the Delete() method. After deletion, the form is closed. Build and run the application. When deleting an order, you should get a warning box like Figure 7.18 when you click the Delete button.
Figure 7.18. Form frmOrderEdit delete warning
With only a little bit of code, we have displayed, edited, and saved entity data. We have also added the capability of creating and deleting orders. In the next chapter, we will finally add validation to the form and provide more robust exception handling.
8
Validation

“He who knows nothing doubts nothing.” Italian Proverb

Chapter Goals
• Discuss field validation.
• Extend the generated field validation classes.
• Create a utility class for validation-related functions in Windows forms.
• Handle field validation in the GUI with ErrorProvider objects.
• Discuss entity validation.
• Create classes to perform entity validation.

We have built this application rapidly, and, while the core logic of the application is working, we have not taken the necessary precautions to validate the data entering the system. Validation can be implemented at many different layers of the application architecture. The lowest level of validation is the schema of our database. As we define primary keys, constraints, and how fields and tables are organized, we ensure that improperly formatted or duplicate data cannot be entered into the database. These are important measures to take: a properly designed schema can save time by eliminating problems before they happen. Regardless of the logic in the business or GUI layer, “bad” data cannot be entered if the schema does not allow it. Since LLBLGen Pro scans your schema, it knows what kinds of values the database will accept even before trying to save them. LLBLGen Pro is aware of the data type of a column, whether the field is a calculated value or an autogenerated number (and consequently read-only), and the maximum length of the field. Outside of the database, we can specify additional logic in the business layer that goes beyond the design of the database schema. LLBLGen Pro gives us two options for validating object data: field validation and entity validation. We will discuss each kind of validation in detail and give examples of how to use it in this application.
Field Validation

LLBLGen Pro automatically creates validation classes that are called each time an entity’s properties are set. These classes make it easy to perform additional logic to determine whether or not a property change is allowed to take place. For example, let’s say that in our SalesOrderHeader table, we want to ensure that whatever is entered into the PurchaseOrderNumber field matches a valid purchase order number format, which for the purposes of this exercise we will define as having between 4 and 8 characters. We will use a simple regular expression to make this determination. Regular expressions are especially helpful when performing validation, and you can find large libraries of examples online.10 In the SalesOrderHeaderValidator class, in the ValidatorClasses folder of the AW.Data project, enter the code from Example 8.1 in the section marked for custom entity code. Note that Lines 1-2 need to come before the Validate() method.

10) See, for example, www.regexlib.com.
Chapter 8
 1 public static string PurchaseOrderError =
 2     "Purchase order number must be 4 - 8 characters.";
 3 public virtual bool Validate(int fieldIndex, object value)
 4 {
 5     bool Validated = true;
 6     string PurchaseOrderValidator = "^.{4,8}$";
 7     switch ((SalesOrderHeaderFieldIndex)fieldIndex)
 8     {
 9         case SalesOrderHeaderFieldIndex.PurchaseOrderNumber:
10             System.Text.RegularExpressions.Regex RegExVal =
11                 new System.Text.RegularExpressions.Regex(
12                     PurchaseOrderValidator);
13             if (RegExVal.IsMatch((string)value) == false)
14                 Validated = false;
15             break;
16         default:
17             break;
18     }
19     return Validated;
20 }

Example 8.1. Class SalesOrderHeaderValidator Validate() method
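The regular expression from Example 8.1 can be exercised on its own: `^` and `$` anchor the whole string, and `.{4,8}` accepts any 4 to 8 characters.

```csharp
using System;
using System.Text.RegularExpressions;

// Standalone check of the pattern used in Example 8.1.
var purchaseOrderFormat = new Regex("^.{4,8}$");

Console.WriteLine(purchaseOrderFormat.IsMatch("PO1234"));      // "True"  (6 characters)
Console.WriteLine(purchaseOrderFormat.IsMatch("PO1"));         // "False" (too short)
Console.WriteLine(purchaseOrderFormat.IsMatch("PO123456789")); // "False" (too long)
```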
In Lines 1-2, we declare a public string that defines the error message to display to the user if validation fails. Putting it here keeps us from having to retype it in the GUI. Later, we can move this string out of the code completely and load it from a resource file to allow for proper globalization. For now, though, we will simply leave it here. The Validate() method declared in Line 3 is called whenever a property is set (not when the entity is saved). It is passed the specific field that is being set and the value the field is being set to. Normally, this function simply returns true, and the supplied value becomes the property’s new value. However, you can add logic for each field you want to examine. In our case, we use the switch construction in Line 7 so it will be easy to go back later and add more field validation branches. If the PurchaseOrderNumber field is set to a value that does not match our regular expression in Line 10, our function will return false, and the change will not be applied to our entity. Be sure to note that until we add logic for the other fields, all other changes are automatically allowed and will persist to the database when the Save() method is called. In this way, you can add validation if you desire, but your application works even if you do not. Now we will add more logic to our class libraries that will help us perform GUI validation more easily. Create a new class file named Utility.cs in the AW.Data project. In Solution Explorer, right-click on References, and add a reference to System.Windows.Forms. Inside the new class, add the following code from Example 8.2.

 1 using System;
 2 using System.Collections.Generic;
 3 using System.Text;
 4 using System.Data;
 5 using System.Windows.Forms;
 6 using SD.LLBLGen.Pro.ORMSupportClasses;
 7
 8 namespace AW.Data.WinForms
 9 {
10     public class Validation
11     {
12         public static bool ValidatePropertyAssignment<T>(
13             Control ControltoValidate, int FieldToValidate,
14             T Value,
15             string ErrorMessage,
16             ErrorProvider myError,
17             EntityBase Entity)
18         {
19             bool Validated = true;
20             try
21             {
22                 IValidator Validator = Entity.Validator;
23                 if (Value.Equals(Entity.GetCurrentFieldValue(
24                     FieldToValidate)) == false &&
25                     Validator.Validate(FieldToValidate, Value) == false)
26                 {
27                     myError.SetError(ControltoValidate, ErrorMessage);
28                     Validated = false;
29                 }
30                 else
31                 {
32                     myError.SetError(ControltoValidate, "");
33                 }
34             }
35             catch (Exception err)
36             {
37                 myError.SetError(ControltoValidate, err.Message);
38                 Validated = false;
39             }
40             return Validated;
41         }
42     }
43 }

Example 8.2. Utility.cs supplemental method ValidatePropertyAssignment()
To use the ValidatePropertyAssignment() function, you specify the type of the property you are assigning, the actual Windows Forms control you are using to display the data, the field of the entity you are attempting to set, the value you are setting the property to, the error message that will be displayed if the validation fails, the error provider control that displays the error, and the entity object whose properties you are changing. The ValidatePropertyAssignment() method will help us perform validation in our GUI, and we will be referencing this method from any form in which we allow user input. Because this function is specific to Windows forms, we have included it in the namespace AW.Data.WinForms in Line 8. We declare it to be a public static function in Line 12, so we do not have to instantiate the class in order to use it. Basically, this function helps us figure out if a property can be changed to a certain value. If so, it will set the property to that value; if not, it will set an error on the control for us. We are going to take advantage of the new generics feature in .NET 2.0. This feature allows us to postpone specifying exactly which type of object we will be using until we actually call the function. In this case, a generic is not absolutely necessary, but it follows the recommendation of using strongly-typed objects wherever possible: if we try to compare objects that are not the same type, we will get a compile-time error. Now that we are implementing field validation, we can use the Validate() method to check whether a particular value will pass field validation before we actually set it. The Validate() method is called automatically when an entity’s property is changed, but by accessing the Validator property of the entity in Line 22, we can call the Validate() method directly in Line 25.
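The generic, strongly-typed comparison described above can be shown in miniature without the LLBLGen Pro types (the HasChanged helper is an invented stand-in, not part of Example 8.2):

```csharp
using System;

// Miniature of the generic pattern in ValidatePropertyAssignment<T>:
// the caller fixes T, so comparing a candidate value against the current
// value is strongly typed. Passing a string where a decimal is expected
// becomes a compile-time error rather than a runtime surprise.
bool HasChanged<T>(T currentValue, T newValue)
{
    return !newValue.Equals(currentValue);
}

Console.WriteLine(HasChanged<decimal>(12.50m, 12.50m)); // "False": unchanged, skip validation
Console.WriteLine(HasChanged<string>("PO1234", "PO99")); // "True": changed, validate it
```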
We pass the field and the new value to the Validate() method, and it returns true or false depending on whether or not the change is allowed. Also, as part of our evaluation, we check in Lines 23-24 to see if the new value is different from the current value. If the values are equivalent (meaning the value has not changed), we don’t attempt validation. This keeps validation from failing as a result of simply opening an entity with “bad” data, preventing us from dumping a validation problem on a user who did not originally cause the error. If the values are different and the new value fails validation, the method activates the error provider on the control with the supplied error message in Line 27, and sets the method result to false in Line 28. Any other condition results in the error being cleared from the control in Line 32. We also implement a try/catch structure to catch unexpected errors when assigning the property. In Line 37, these unexpected errors will also result in assigning the error provider to the control and displaying the error message. We will add one more method to this class to help us with validation. This new method will search for any errors in any controls on the form. Also inside Utility.cs, in the AW.Data.WinForms namespace, in the Validation class, add the ValidateForm() method in Example 8.3.

 1 public static bool ValidateForm(
 2     Control mycontrol, ErrorProvider MyError)
 3 {
 4     bool IsValid = true;
 5     foreach (Control ChildControl in mycontrol.Controls)
 6     {
 7         if (MyError.GetError(ChildControl) != "")
 8         {
 9             IsValid = false;
10             break;
11         }
12         if (ChildControl.Controls.Count > 0)
13         {
14             IsValid = ValidateForm(ChildControl, MyError);
15             if (IsValid == false)
16                 break;
17         }
18     }
19     return IsValid;
20 }

Example 8.3. Utility.cs supplemental method ValidateForm()
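Because a real Control hierarchy requires the WinForms runtime, the recursive descent in ValidateForm() can be sketched over a minimal tree instead. Here the control tree becomes an adjacency list and the ErrorProvider becomes a set of control names carrying errors; all names and shapes are illustrative, not the form's actual API.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// WinForms-free sketch of ValidateForm()'s recursion: walk the tree,
// fail fast on the first control that carries an error.
bool IsSubtreeValid(string control,
    Dictionary<string, List<string>> children,
    HashSet<string> controlsWithErrors)
{
    if (controlsWithErrors.Contains(control))
        return false;                  // this control has an active error
    if (!children.TryGetValue(control, out var kids))
        return true;                   // leaf control: nothing nested below
    return kids.All(k => IsSubtreeValid(k, children, controlsWithErrors));
}

var tree = new Dictionary<string, List<string>>
{
    ["form"]  = new List<string> { "panel", "tbTax" },
    ["panel"] = new List<string> { "tbFreight" },
};
var errors = new HashSet<string> { "tbFreight" };
Console.WriteLine(IsSubtreeValid("form", tree, errors)); // "False": nested error found
```

The point of the recursion is the same as in Example 8.3: an error on a control buried inside a container still blocks the save.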
Here we have a function called ValidateForm() that simply searches recursively through all the controls on the form looking for an error. It returns true if there are no errors, and false if any exist. To find out whether a control has an error, you call the GetError() method of the ErrorProvider and pass the control you are asking about; an empty string indicates there is no error. In this application, the built-in .NET method for performing this check was inadequate. We will see why as we implement these validation methods in our form.

First drag an ErrorProvider onto frmOrderEdit. It will sit in the component tray (Figure 8.4). Change the name to MyError.
Figure 8.4. Form frmOrderEdit component tray with ErrorProvider control MyError
Create a TextChanged event handler on frmOrderEdit for the tbPurchaseOrder TextBox and add the code from Example 8.5. Also import the AW.Data.ValidatorClasses namespace at the top of the class.
private void tbPurchaseOrder_TextChanged(object sender, EventArgs e)
{
    AW.Data.WinForms.Validation.ValidatePropertyAssignment<string>(
        tbPurchaseOrder,
        (int)SalesOrderHeaderFieldIndex.PurchaseOrderNumber,
        tbPurchaseOrder.Text,
        SalesOrderHeaderValidator.PurchaseOrderError,
        MyError,
        _order);
}

Example 8.5. TextBox tbPurchaseOrder TextChanged event handler
We have implemented an event handler that fires every time the control changes, so the user receives instant feedback as to whether or not the value is acceptable. You could optionally implement a Validating event handler instead, which fires when the user tries to move to the next control and moves focus back to the control until the value is acceptable. One caveat of using the Validating event, however, is that if the submitted value fails validation, you must set the cancel event argument (e.Cancel) to true. Setting this value tells the .NET Framework that the control has failed validation. By default, every control's CausesValidation property is set to true. This means that while a particular control is failing validation, none of your other controls will work unless you set their CausesValidation properties to false. For ToolStripButton controls, however, there is no CausesValidation property to change. This complicates validation slightly: the ToolStripButton Click events do fire, but you cannot perform all the actions you could normally perform if there were no validation errors (like closing a form). In the end, we will use the TextChanged and ValueChanged events to validate each control's contents instead of using the .NET validation features, so we will need to look for any errors ourselves before saving the objects. Notice that in each case, because we put almost all of our logic in the class library, we only have to call one function to implement validation for a particular field. Once again, proper planning means that an absolute minimum of code goes into the GUI.

The only remaining action to perform on our form is to modify the subroutine that saves the data. We want to check for errors before saving and notify the user if any errors exist. Modify the Click event for the tsbSave button so that it matches Example 8.6.
private void tsbSave_Click(object sender, EventArgs e)
{
    if (AW.Data.WinForms.Validation.ValidateForm(this, MyError))
    {
        if (SaveData())
        {
            this.Close();
        }
    }
    else
    {
        MessageBox.Show(
            "Please correct errors before saving.",
            "Error",
            MessageBoxButtons.OK,
            MessageBoxIcon.Error);
    }
}

Example 8.6. Button tsbSave modified Click event handler
Now in the tsbSave button's Click event handler, we call the ValidateForm() function, which returns true only if no control on the form has an error. If ValidateForm() returns false, we display a MessageBox notifying the user that he or she needs to fix errors before trying to save. When you build and run the application, you should see an error like that in Figure 8.7 if you enter data that fails validation.
Figure 8.7. Form frmOrderEdit with ErrorProvider control activated
Attempting to click the Save button with invalid data will result in a dialog box like that in Figure 8.8.
Figure 8.8. Form frmOrderEdit MessageBox when saving a form with errors
To summarize, field validation occurs when the property of an entity is set. You can use the classes that are generated for you and easily add your own logic. Field validation looks at properties individually when assessing their validity. Now let’s take a look at entity validation.
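Before moving on, here is a rough sketch of the alternative Validating-event approach mentioned earlier in this chapter. This is illustrative only: it assumes the tbPurchaseOrder and MyError controls from frmOrderEdit, and the required-field check is a placeholder for whatever rule you want to enforce.

```csharp
// Hypothetical Validating event handler for tbPurchaseOrder.
// Unlike the TextChanged approach used in this chapter, this event fires
// when the user tries to leave the control; setting e.Cancel keeps focus
// on the control until the value is acceptable.
private void tbPurchaseOrder_Validating(object sender,
    System.ComponentModel.CancelEventArgs e)
{
    if (tbPurchaseOrder.Text.Trim().Length == 0)  // placeholder rule
    {
        e.Cancel = true;  // tell .NET this control failed validation
        MyError.SetError(tbPurchaseOrder,
            "Purchase order number is required.");
    }
    else
    {
        MyError.SetError(tbPurchaseOrder, "");  // clear any prior error
    }
}
```

Remember the caveat noted above: with this approach, other controls on the form will not respond while validation is failing unless you adjust their CausesValidation properties.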
Entity Validation

Entity validation validates properties collectively, rather than individually, and occurs just before the entity is saved. In the case of our SalesOrderHeaderEntity, we might want to make sure that the ShipDate is not earlier than the OrderDate. Entity validation is more complex than simply checking each property individually as a value is assigned; instead, it involves comparing properties to one another before they are saved. While field validation classes are created for us automatically, we will need to create entity validation classes ourselves. Thankfully, creating these classes is not difficult. With a small amount of code, we will create a new class that implements the IEntityValidator interface. The structure of this class is fairly simple and straightforward.

Let's create a new entity validation class to validate SalesOrderHeaderEntity objects. It is a good idea to place the code for the entity validation class inside the business logic layer next to the entity class that it will be validating. Create a new class file named SalesOrderHeaderEntityValidator.cs and add the code in Example 8.9. Be sure to import the AW.Data.EntityClasses, System, and SD.LLBLGen.Pro.ORMSupportClasses namespaces at the top of the file.

 1 namespace AW.Data.EntityValidators
 2 {
 3     public class SalesOrderHeaderEntityValidator : IEntityValidator
 4     {
 5         public string ErrorMessage = "";
 6         private bool Validated = true;
 7         public bool Validate(object containingEntity)
 8         {
 9             SalesOrderHeaderEntity order =
10                 (SalesOrderHeaderEntity)containingEntity;
11             if (order.ShipDate != DateTime.MinValue &&
12                 order.ShipDate < order.OrderDate)
13             {
14                 ErrorMessage +=
15                     "Ship Date must be equal to/greater " +
16                     "than the Order Date. ";
17                 Validated = false;
18             }
19             if (order.DueDate < order.OrderDate)
20             {
21                 ErrorMessage +=
22                     "Due Date must be null or greater than " +
23                     "the Order Date. ";
24                 Validated = false;
25             }
26             return Validated;
27         }
28     }
29 }

Example 8.9. Class SalesOrderHeaderEntityValidator
In Line 1, we define the namespace where this class will appear. In Line 3, we define SalesOrderHeaderEntityValidator as the name of the class and indicate that it implements IEntityValidator. We only need to implement one method, Validate(), which takes an entity object and returns a Boolean value telling us whether or not the item passed validation. In Line 5, we define a public string that will hold error messages. In Lines 9-10, we cast the object that is passed to us to a SalesOrderHeaderEntity object. Now we can use any kind of logic we like to verify the order. In Line 11, we check whether the ShipDate field is set (not DateTime.MinValue) and is less than the OrderDate. If so, we add to the ErrorMessage string, detailing exactly why the entity failed validation, and then we set the Validated boolean to false. In Line 19, we also check that the DueDate field is not less than the OrderDate field, as this outcome would not make sense either.

An important thing to note about entity validator objects is that you have to explicitly attach them when you instantiate the entity if you want the validator to function; an entity does not know to use the validator by default. If we want an entity class to always use a validator, we can add the validator to every constructor in the entity class. With every constructor using the validator, the entity will always have validation no matter how it is created. Modify the first constructor in the SalesOrderHeaderEntity class (SalesOrderHeaderEntity.cs in the AW.Data project) to match Example 8.10.

1 public SalesOrderHeaderEntity() : base(
2     new PropertyDescriptorFactory(), new SalesOrderHeaderEntityFactory())
3 {
4     EntityValidators.SalesOrderHeaderEntityValidator val =
5         new AW.Data.EntityValidators.SalesOrderHeaderEntityValidator();
6     this.EntityValidatorToUse = val;
7 }

Example 8.10. Class SalesOrderHeaderEntity modified constructor
Using the validator is as simple as instantiating it (Line 4) and setting the EntityValidatorToUse property of the entity to the newly created object (Line 6). To use this validator for every order, add the above code to every other constructor in the class (there should be six others). Without this step, there may be no validator set, depending on which constructor is used when the object is created.

After updating the constructors, we need to modify the GUI to handle any errors that might occur during validation and display them to the user. Modify the SaveData() method in frmOrderEdit to match Example 8.11. Be sure to import the AW.Data.EntityValidators namespace at the top of the class.

 1 private bool SaveData()
 2 {
 3     try
 4     {
 5         _order.Freight = Convert.ToDecimal(tbFreight.Text);
 6         _order.PurchaseOrderNumber = tbPurchaseOrder.Text;
 7         _order.SubTotal = Convert.ToDecimal(tbSubtotal.Text);
 8         _order.TaxAmt = Convert.ToDecimal(tbTax.Text);
 9         _order.OnlineOrderFlag = cbOnlineOrder.Checked;
10         _order.OrderDate = dtpOrderDate.Value;
11         _order.DueDate = dtpDueDate.Value;
12         if (dtpShipDate.Checked)
13             _order.ShipDate = dtpShipDate.Value;
14         else
15             _order.SetNewFieldValue(
16                 (int)SalesOrderHeaderFieldIndex.ShipDate, null);
17         _order.ShipMethodId = Convert.ToInt32(cbShipMethod.SelectedValue);
18         if (_order.Validate())
19         {
20             _order.Save();
21             return true;
22         }
23         else
24         {
25             SalesOrderHeaderEntityValidator val =
26                 (SalesOrderHeaderEntityValidator)
27                 _order.EntityValidatorToUse;
28             MessageBox.Show(
29                 val.ErrorMessage,
30                 "Error",
31                 MessageBoxButtons.OK,
32                 MessageBoxIcon.Stop);
33             return false;
34         }
35     }
36     catch (Exception err)
37     {
38         MessageBox.Show(err.Message);
39         return false;
40     }
41 }

Example 8.11. Form frmOrderEdit modified SaveData() method
The changes to this method are in Lines 18-34. In Line 18, we call the Validate() method of the entity. If the entity passes validation, we save the entity and return true. If not, in Lines 23-34, we grab the SalesOrderHeaderEntityValidator from the EntityValidatorToUse property of the order entity, and show the error message string to the user in a message box.
Figure 8.12. Form frmOrderEdit MessageBox with failed entity validation
Running the application, if you set the ShipDate to a date earlier than the OrderDate, you should see a message like Figure 8.12. Entity validation is useful for comparing fields to check the validity of the data before the entity is saved. You must create your own classes to perform the validation, but you can easily add them to the business logic layer so they are used whenever the entity is used. Now we have implemented both field validation and entity validation for the SalesOrderHeaderEntity class. Although we have only checked a few fields, you can see how the process works for both kinds of validation, and you are better prepared to choose one or the other (or both) depending on your needs. It is worth spending time here creating different kinds of validation routines in your business objects, since it is much easier to validate data before it is saved than to clean out bad data that should never have been entered in the first place. By investing time here, you will have a business object that is powerful, easy to use, and difficult to misuse. In the next chapter, we will take a look at how to use transactions.
9
Transactions
"A common mistake people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools."
Douglas Adams (1952-2001), Mostly Harmless

Chapter Goals
• Discuss transactions and isolation levels.
• Learn how to use Transaction objects to perform several database operations together.
• Create a form that uses a transaction to change data.
Preparing for the Unexpected

Transactions are a way to execute a series of database operations such that if any of the statements generates an error, all of the changes are undone. If all of the statements execute correctly, the changes are committed to the database. Transactions are useful when having a process stop in the middle of a sequence without fully completing it would violate your business rules. For example, if you execute one query to move money from one account to a temporary account, and then another query to move the money from the temporary account into the new account, you want both queries to succeed or fail together. You would not want the first one to execute and the second one to fail, or you would end up with funds in the wrong account.

LLBLGen Pro has robust support for transactions. We will take a look at how transactions are used and some of the options you have when you use them. For our example, we will add bonus vacation hours to employees hired after a certain date. We would like an easy method that uses a transaction to add these hours, with the option of specifying a different number of hours depending on whether or not the employee is salaried. Since this process involves multiple reads and writes to the database, it is ideal for a transaction. First we need to extend our EmployeeEntity class to add this functionality. Add the code in Example 9.1 to the EmployeeEntity.cs file.

 1 public static int AddBonusVacationHours(
 2     DateTime HireDateAfter,
 3     int SalariedHours,
 4     int NonSalariedHours)
 5 {
 6     Transaction MyTransaction = new Transaction(
 7         System.Data.IsolationLevel.ReadCommitted, "MyTransaction");
 8     int RowsAffected = 0;
 9     try
10     {
11         EmployeeCollection Salaried = new EmployeeCollection();
12         MyTransaction.Add(Salaried);
13         IPredicateExpression SalariedPred =
14             new PredicateExpression();
15         SalariedPred.Add(
16             (EmployeeFields.HireDate >= HireDateAfter) &
17             (EmployeeFields.SalariedFlag == true));
18         Salaried.GetMulti(SalariedPred);
19         foreach (EmployeeEntity Employee in Salaried)
20         {
21             Employee.VacationHours += Convert.ToInt16(SalariedHours);
22         }
23         RowsAffected += Salaried.SaveMulti();
24         EmployeeCollection NonSalaried = new EmployeeCollection();
25         MyTransaction.Add(NonSalaried);
26         IPredicateExpression NonSalariedPred =
27             new PredicateExpression();
28         NonSalariedPred.Add(
29             (EmployeeFields.HireDate >= HireDateAfter) &
30             (EmployeeFields.SalariedFlag == false));
31         NonSalaried.GetMulti(NonSalariedPred);
32         foreach (EmployeeEntity Employee in NonSalaried)
33         {
34             Employee.VacationHours += Convert.ToInt16(NonSalariedHours);
35         }
36         RowsAffected += NonSalaried.SaveMulti();
37         MyTransaction.Commit();
38     }
39     catch (Exception err)
40     {
41         MyTransaction.Rollback();
42         throw new ApplicationException(
43             "Error adding vacation hours: " + err.Message);
44     }
45     finally
46     {
47         MyTransaction.Dispose();
48     }
49     return RowsAffected;
50 }

Example 9.1. Class EmployeeEntity public method AddBonusVacationHours()
In Lines 1-4, we define this function as a public static method that takes a DateTime value and two integers. We create an LLBLGen Pro Transaction object in Lines 6-7, specifying an isolation level and a name for the transaction. (We will discuss isolation levels in the next section.) Note that at this point the connection to the database is open and available to us. In Line 8, we declare an integer that will keep track of how many rows we have updated. Notice that we use a try/catch/finally structure for making the changes. This structure is important because we want to perform a rollback if there is any error, and we also want to ensure that the connection is eventually closed regardless of the outcome of our actions. We create an EmployeeCollection in Line 11 to hold all salaried employees. We add this collection to the transaction in Line 12, indicating to the framework that we want all database operations involving this collection to use the transaction. We create the predicate necessary to pull all salaried employees hired after the date provided in Lines 15-17, then fill the collection and loop through the results, incrementing each employee's vacation hours by the amount provided. Next, we call the SaveMulti() method of the collection to propagate the changes back to the database, adding the number of rows affected to our RowsAffected integer.
The process is almost exactly the same for the nonsalaried employees. When you call the SaveMulti() method, LLBLGen Pro always performs the updates inside a transaction, whether you specify one or not. Had we not created our own, the framework would have performed two separate transactions, one for each collection of employees. Adding the collections to our transaction means that everything happens inside one transaction, where we control the commit and rollback. After finishing the updates, we call the Commit() method to commit the changes to the database. If any errors occur, we roll back the transaction and throw an exception. In both cases, we eventually call Dispose(), which closes the connection and clears out the transaction's resources. Now let's take a look at isolation levels.
Isolation Levels

Setting an isolation level helps solve concurrency problems. With multiple users accessing the database, problems can occur if two processes attempt to change the same item at the same time. Here are some of the common concurrency problems and what they mean:

Lost Updates: This occurs when two people make changes to the same item, but the last change overwrites the first change, losing all trace of the first change.

Non-repeatable Reads: This is a little like trying to hit a moving target. You query the database, but every time you read the data, the values are different. This problem can be frustrating when you really need to know what the values are because you are making changes based on those values. It would be like asking for the price of a car at a used car lot and getting a different response every time. When you are trying to decide if you can afford the car or not, it is helpful if the price does not fluctuate!

Dirty Reads: This situation is similar to the non-repeatable read problem, but more specific. You are working with data, and the values that you read are actually uncommitted values from another transaction. This would be like having a price quoted for a car that you want to buy, but later finding out that the manager was only considering that price, not actually committed to it.

Phantom Reads: Phantom reads occur when items that you are working with are inserted or deleted by someone else, causing "phantom" rows to mysteriously appear and disappear in your list. This would be similar to a manager creating an inventory report of cars on the used car lot, but a car being sold after the list was made. When the manager goes to look up more information about the car, it is no longer in the list. Moreover, new cars might appear in the list that were not there before.

SQL Server has different isolation levels that solve different levels of concurrency problems. These are summarized in Table 9.2, where "Yes" means the problem can still occur at that level.

Isolation Level          Lost Updates  Dirty Reads  Non-repeatable Reads  Phantom Reads  Notes
Chaos                    -             -            -                     -              Not supported by SQL Server
ReadUncommitted          Yes           Yes          Yes                   Yes            Used when speed is more important than accuracy
ReadCommitted (default)  Yes           No           Yes                   Yes            Row locking only
RepeatableRead           No            No           No                    Yes            Higher isolation, but can still have phantom reads
Serializable             No            No           No                    No             Use if you need absolute accuracy and will be making changes to what is read
Snapshot                 No            No           No                    No             Requires administrative setup. Use if you need absolute accuracy, but only to read the data, not update it

Table 9.2. SQL Server isolation levels
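As a sketch of how a different level would be selected, the Transaction constructor from Example 9.1 simply takes a different System.Data.IsolationLevel value; everything else stays the same. This is a minimal outline under that assumption, not a complete method:

```csharp
// Sketch: the transaction pattern from Example 9.1 with a stricter
// isolation level. Only the IsolationLevel argument changes; the
// commit/rollback/dispose structure is identical.
Transaction strictTransaction = new Transaction(
    System.Data.IsolationLevel.Serializable, "StrictTransaction");
try
{
    // ... add collections and perform reads/writes as in Example 9.1 ...
    strictTransaction.Commit();
}
catch
{
    strictTransaction.Rollback();
    throw;
}
finally
{
    strictTransaction.Dispose();
}
```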
Normally, you will just use the default isolation level of ReadCommitted, as we did in our example. If you need more isolation, you can choose another level according to your requirements. Now that we have written our function, let's set up the form for our utility. On the design view of frmBonusVacation, set the Width to "300" and the Height to "250". Lay out the frmBonusVacation form according to Figure 9.3 and Table 9.4.
Figure 9.3. Form frmBonusVacation control layout

    Control Type     Name                Properties to Set
1   DateTimePicker   dtpHireDate
2   NumericUpDown    nudSalariedHours
3   NumericUpDown    nudUnsalariedHours
4   Button           btnAdd              Text = "Add Hours"
5   Label            lblResult           AutoSize = False; Text = ""; TextAlign = MiddleRight

Table 9.4. Form frmBonusVacation controls to configure

Double-click on btnAdd to create the Click event handler and add the code in Example 9.5. Be sure to import the AW.Data.EntityClasses namespace at the beginning of the class.

private void btnAdd_Click(object sender, EventArgs e)
{
    lblResult.Text = EmployeeEntity.AddBonusVacationHours(
        dtpHireDate.Value,
        Convert.ToInt32(nudSalariedHours.Value),
        Convert.ToInt32(nudUnsalariedHours.Value)).ToString();
}

Example 9.5. Button btnAdd Click event handler
In the Click event, we call the AddBonusVacationHours() method, using the values from our controls and assigning the result to the label. Compile and run the application. Enter a date of 1/1/2000 and values of "5" and "3" for salaried and nonsalaried employees respectively. When you press the Add Hours button, you will get results like Figure 9.6.
With this change, 73 employees had extra hours added. The changes were all performed in a transaction that would have been rolled back in the event of an error. While our generated framework already uses transactions for some operations internally, it was easy to add our objects to a single transaction to perform our changes. In the next chapter, we will take a look at how we can increase the performance of our application by optimizing our code, and how we can use multi-threading to make the GUI more responsive.
Figure 9.6. Form frmBonusVacation results
10
Tuning

"Little by little, one travels far."
J. R. R. Tolkien (1892-1973)

Chapter Goals
• Use SQL Server Profiler to observe dynamically generated queries as they are executed in the database.
• Reduce repetitive queries by prefetching objects.
• Use multi-threading to enhance the user experience.
• Create a status bar form to display to the user while accessing the database that allows the user to cancel the operation.
• Use a BackgroundWorker object to perform work on a different thread.
Enhancing Performance

One general rule of developing applications is, "First make it work. Then make it work faster." In this example application, we have gone from a schema to a functional Windows application in a very short amount of time. You already have a working prototype and a good foundation for more forms and features. But let's take a moment to look back at our code and make some adjustments to improve the responsiveness of the application and make it operate as efficiently as possible. With LLBLGen Pro, SQL statements are generated and executed for you automatically when you use your classes. While you do not necessarily have to care about the queries that execute in the background, understanding them can help you get the best performance possible. There are two ways to watch the generated SQL queries and see what the LLBLGen Pro framework is actually doing for you. The first way is to use SQL Server Profiler.
SQL Server Profiler

SQL Server Profiler is the more difficult way to watch queries as they are executed in the database, since using Profiler requires switching back and forth to the Profiler application. However, Profiler gives you the advantage of seeing every query that runs in the database (possibly including queries you were not aware of), and you can copy and paste a query from Profiler directly into SQL Server Management Studio to run it again if you wish. Let's run a Profiler trace to watch all of the queries that happen in our database while we use our application.
Figure 10.1. SQL Server Profiler new trace
Figure 10.2. Database connection screen

First, open SQL Server Profiler on a computer with the SQL Server tools installed. It should be under Start Menu > Programs > Microsoft SQL Server 2005 > Performance Tools > SQL Server Profiler (if not, the path to the program will be something like C:\Program Files\Microsoft SQL Server\90\Tools\Binn\PROFILER90.EXE). Go to File > New Trace (Figure 10.1). Enter your SQL Server information and click OK (Figure 10.2). In the Trace Properties window, select the Events Selection tab. Clear all checkboxes except for RPC:Completed and SQL:BatchCompleted, and click Run (Figure 10.3).
Figure 10.3. Trace Properties, Events Selection tab
Keep in mind that once you click Run, the trace will be actively running on your database, recording all the activity that happens there. You would generally not want to leave it running for long periods of time, as it can significantly slow down your database. You should see a window like that in Figure 10.4. Any queries that our database executes will appear here.
Figure 10.4. Trace results screen
Now we can run our application and capture the queries that execute in the database. Minimize SQL Profiler, and build and run your AW.Win application. Go to Search > Orders. Enter “Texas” as the state name and click Search.
This trace gives us enough to analyze. Now let’s stop the trace. In SQL Server Profiler, click on the stop button to stop the trace (Figure 10.5).
Figure 10.5. Stop trace button

You should now see something like Table 10.6 in your results (several columns have been dropped to save space).
Query exec sp_reset_connection 1
SELECT [Person].[CountryRegion].[CountryRegionCode], [Person].[CountryRegion].[Name], [Person].[CountryRegion].[ModifiedDate] FROM [Person].[CountryRegion] ORDER BY [Person].[CountryRegion].[Name] ASC exec sp_reset_connection
2
SELECT [Person].[StateProvince].[StateProvinceID] AS [StateProvinceId], [Person].[StateProvince].[StateProvinceCode], [Person].[StateProvince].[CountryRegionCode], [Person].[StateProvince].[IsOnlyStateProvinceFlag], [Person].[StateProvince].[Name], [Person].[StateProvince].[TerritoryID] AS [TerritoryId], [Person].[StateProvince].[rowguid] AS [Rowguid], [Person].[StateProvince].[ModifiedDate] FROM [Person].[StateProvince] ORDER BY [Person].[StateProvince].[Name] ASC exec sp_reset_connection
exec sp_executesql N'SELECT DISTINCT TOP 100 [Sales].[SalesOrderHeader].[SalesOrderID] AS [SalesOrderId], [Sales].[SalesOrderHeader].[RevisionNumber], [Sales].[SalesOrderHeader].[OrderDate], [Sales].[SalesOrderHeader].[DueDate], [Sales].[SalesOrderHeader].[ShipDate], [Sales].[SalesOrderHeader].[Status], [Sales].[SalesOrderHeader].[OnlineOrderFlag], [Sales].[SalesOrderHeader].[SalesOrderNumber], [Sales].[SalesOrderHeader].[PurchaseOrderNumber], [Sales].[SalesOrderHeader].[AccountNumber], [Sales].[SalesOrderHeader].[CustomerID] AS [CustomerId], [Sales].[SalesOrderHeader].[ContactID] AS [ContactId], [Sales].[SalesOrderHeader].[SalesPersonID] AS [SalesPersonId], [Sales].[SalesOrderHeader].[TerritoryID] AS [TerritoryId], [Sales].[SalesOrderHeader].[BillToAddressID] AS [BillToAddressId], [Sales].[SalesOrderHeader].[ShipToAddressID] AS [ShipToAddressId], [Sales].[SalesOrderHeader].[ShipMethodID] AS [ShipMethodId], 3 [Sales].[SalesOrderHeader].[CreditCardID] AS [CreditCardId], [Sales].[SalesOrderHeader].[CreditCardApprovalCode], [Sales].[SalesOrderHeader].[CurrencyRateID] AS [CurrencyRateId], [Sales].[SalesOrderHeader].[SubTotal], [Sales].[SalesOrderHeader].[TaxAmt], [Sales].[SalesOrderHeader].[Freight], [Sales].[SalesOrderHeader].[TotalDue], [Sales].[SalesOrderHeader].[Comment], [Sales].[SalesOrderHeader].[rowguid] AS [Rowguid], [Sales].[SalesOrderHeader].[ModifiedDate] FROM ( [Sales].[vIndividualCustomer] INNER JOIN [Sales].[SalesOrderHeader] ON [Sales].[vIndividualCustomer].[CustomerID]=[Sales].[SalesOrderHeader].[CustomerID]) WHERE ( ( [Sales].[vIndividualCustomer].[StateProvinceName] = @StateProvinceName1)) ORDER BY [Sales].[SalesOrderHeader].[OrderDate] ASC',N'@StateProvinceName1 nvarchar(50)',@StateProvinceName1=N'Texas' exec sp_reset_connection exec sp_executesql N'SELECT [Sales].[vIndividualCustomer].[CustomerID] AS [CustomerId], [Sales].[vIndividualCustomer].[Title], [Sales].[vIndividualCustomer].[FirstName], [Sales].[vIndividualCustomer].[MiddleName], 
[Sales].[vIndividualCustomer].[LastName], [Sales].[vIndividualCustomer].[Suffix], [Sales].[vIndividualCustomer].[Phone], [Sales].[vIndividualCustomer].[EmailAddress], [Sales].[vIndividualCustomer].[EmailPromotion], [Sales].[vIndividualCustomer].[AddressType], 4 [Sales].[vIndividualCustomer].[AddressLine1], [Sales].[vIndividualCustomer].[AddressLine2], [Sales].[vIndividualCustomer].[City], [Sales].[vIndividualCustomer].[StateProvinceName], [Sales].[vIndividualCustomer].[PostalCode], [Sales].[vIndividualCustomer].[CountryRegionName], [Sales].[vIndividualCustomer].[Demographics] FROM [Sales].[vIndividualCustomer] WHERE ( [Sales].[vIndividualCustomer].[CustomerID] = @CustomerId1)',N'@CustomerId1 int',@CustomerId1=15338 exec sp_reset_connection exec sp_executesql N'SELECT [Sales].[vIndividualCustomer].[CustomerID] AS [CustomerId], [Sales].[vIndividualCustomer].[Title], [Sales].[vIndividualCustomer].[FirstName], [Sales].[vIndividualCustomer].[MiddleName], [Sales].[vIndividualCustomer].[LastName], [Sales].[vIndividualCustomer].[Suffix], [Sales].[vIndividualCustomer].[Phone], [Sales].[vIndividualCustomer].[EmailAddress], [Sales].[vIndividualCustomer].[EmailPromotion], [Sales].[vIndividualCustomer].[AddressType], 5 [Sales].[vIndividualCustomer].[AddressLine1], [Sales].[vIndividualCustomer].[AddressLine2], [Sales].[vIndividualCustomer].[City], [Sales].[vIndividualCustomer].[StateProvinceName], [Sales].[vIndividualCustomer].[PostalCode], [Sales].[vIndividualCustomer].[CountryRegionName], [Sales].[vIndividualCustomer].[Demographics] FROM [Sales].[vIndividualCustomer] WHERE ( [Sales].[vIndividualCustomer].[CustomerID] = @CustomerId1)',N'@CustomerId1 int',@CustomerId1=17018 exec sp_reset_connection exec sp_executesql N'SELECT [Sales].[vIndividualCustomer].[CustomerID] AS [CustomerId], [Sales].[vIndividualCustomer].[Title], [Sales].[vIndividualCustomer].[FirstName], [Sales].[vIndividualCustomer].[MiddleName], [Sales].[vIndividualCustomer].[LastName], 
[Sales].[vIndividualCustomer].[Suffix], [Sales].[vIndividualCustomer].[Phone], [Sales].[vIndividualCustomer].[EmailAddress], [Sales].[vIndividualCustomer].[EmailPromotion], [Sales].[vIndividualCustomer].[AddressType], 6 [Sales].[vIndividualCustomer].[AddressLine1], [Sales].[vIndividualCustomer].[AddressLine2], [Sales].[vIndividualCustomer].[City], [Sales].[vIndividualCustomer].[StateProvinceName], [Sales].[vIndividualCustomer].[PostalCode], [Sales].[vIndividualCustomer].[CountryRegionName], [Sales].[vIndividualCustomer].[Demographics] FROM [Sales].[vIndividualCustomer] WHERE ( [Sales].[vIndividualCustomer].[CustomerID] = @CustomerId1)',N'@CustomerId1 int',@CustomerId1=15338 exec sp_reset_connection
exec sp_executesql N'SELECT [Sales].[vIndividualCustomer].[CustomerID] AS [CustomerId], [Sales].[vIndividualCustomer].[Title], [Sales].[vIndividualCustomer].[FirstName], [Sales].[vIndividualCustomer].[MiddleName], [Sales].[vIndividualCustomer].[LastName], [Sales].[vIndividualCustomer].[Suffix], [Sales].[vIndividualCustomer].[Phone], [Sales].[vIndividualCustomer].[EmailAddress], [Sales].[vIndividualCustomer].[EmailPromotion], [Sales].[vIndividualCustomer].[AddressType], 7 [Sales].[vIndividualCustomer].[AddressLine1], [Sales].[vIndividualCustomer].[AddressLine2], [Sales].[vIndividualCustomer].[City], [Sales].[vIndividualCustomer].[StateProvinceName], [Sales].[vIndividualCustomer].[PostalCode], [Sales].[vIndividualCustomer].[CountryRegionName], [Sales].[vIndividualCustomer].[Demographics] FROM [Sales].[vIndividualCustomer] WHERE ( [Sales].[vIndividualCustomer].[CustomerID] = @CustomerId1)',N'@CustomerId1 int',@CustomerId1=13215 exec sp_reset_connection exec sp_executesql N'SELECT [Sales].[vIndividualCustomer].[CustomerID] AS [CustomerId], [Sales].[vIndividualCustomer].[Title], [Sales].[vIndividualCustomer].[FirstName], [Sales].[vIndividualCustomer].[MiddleName], [Sales].[vIndividualCustomer].[LastName], [Sales].[vIndividualCustomer].[Suffix], [Sales].[vIndividualCustomer].[Phone], [Sales].[vIndividualCustomer].[EmailAddress], [Sales].[vIndividualCustomer].[EmailPromotion], [Sales].[vIndividualCustomer].[AddressType], 8 [Sales].[vIndividualCustomer].[AddressLine1], [Sales].[vIndividualCustomer].[AddressLine2], [Sales].[vIndividualCustomer].[City], [Sales].[vIndividualCustomer].[StateProvinceName], [Sales].[vIndividualCustomer].[PostalCode], [Sales].[vIndividualCustomer].[CountryRegionName], [Sales].[vIndividualCustomer].[Demographics] FROM [Sales].[vIndividualCustomer] WHERE ( [Sales].[vIndividualCustomer].[CustomerID] = @CustomerId1)',N'@CustomerId1 int',@CustomerId1=17018
Table 10.6. Trace results
By looking at the trace, you can see the exact SQL statements that LLBLGen Pro generates. The sp_reset_connection stored procedure is called by the .NET SQL client between database calls to clear out temporary items and perform other maintenance functions.11 For our purposes, we will focus on the dynamic SQL queries, which are numbered. The net result of our clicking produced a total of 8 queries. Items 1 and 2 are where we populate the state and country drop-down lists. Item 3 is where the search is actually performed based on the criteria we entered, which was only that we wanted orders from the state of Texas. But what are Items 4-8? They appear to be almost exactly the same query with slightly different criteria. These queries are individual lookups of the corresponding customer view row for each order returned from query #3. As the items are bound to the DataGridView, the CustomerDisplayName property looks up the rows in the related object, causing the information to be pulled in one row at a time due to lazy loading.
LLBLGen Pro Tracing

The other way to see the dynamic queries that LLBLGen Pro generates is to use LLBLGen Pro’s built-in tracing, which lets you watch the queries as they are created without leaving Visual Studio. To turn on tracing, add the system.diagnostics section from Example 10.7 to the app.config file of the AW.Win32 project.
1  <?xml version="1.0"?>
2  <configuration>
3    <appSettings>
4      <add key="Main.ConnectionString"
5        value="data source=jchancellor2-nb;initial catalog=AdventureWorks;
6        integrated security=SSPI;persist security info=False"/>
7    </appSettings>
8    <system.diagnostics>
9      <switches>
10       <add name="SqlServerDQE" value="4" />
11     </switches>
12   </system.diagnostics>
13 </configuration>

Example 10.7. AW.Win32 app.config file with LLBLGen Pro’s built-in tracing enabled

11) Find out more about the sp_reset_connection stored procedure and exactly what it does at the SQL Server Developers Network: http://www.sqldev.net/misc/sp_reset_connection.htm.
Chapter 10
When tracing is enabled in the app.config file, Visual Studio’s Output window will display each dynamic query as it is created. The Output window gives you more trace information than you might actually care to see, and you cannot cut-and-paste the output directly into SQL Server Management Studio, but the built-in tracing is much easier to use than an external trace. If you do use the built-in tracing, remember to turn it off before you release the application. We’ll be using SQL Server Profiler data for the rest of this chapter. Now that we have some information about the queries that are being generated for us behind the scenes, we can use prefetching to consolidate these lookups into a single query instead of one for each order.
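Incidentally, the SqlServerDQE entry that Example 10.7 adds is an ordinary .NET trace switch, so you can also inspect it from code. The following is a supplementary sketch, not from the chapter; it uses only standard System.Diagnostics types, and a value of 4 in the config file corresponds to TraceLevel.Verbose:

```csharp
using System.Diagnostics;

// Construct a TraceSwitch with the same name as the app.config entry;
// the runtime initializes its Level from the <switches> configuration.
TraceSwitch dqeSwitch = new TraceSwitch(
    "SqlServerDQE", "LLBLGen Pro Dynamic Query Engine tracing");

if (dqeSwitch.Level == TraceLevel.Verbose)
{
    // Tracing is on; remember to remove the switch before release.
    Debug.WriteLine("DQE tracing is enabled.");
}
```

A check like this can be handy during development for confirming that a build headed for release does not still have tracing enabled.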
Simple Prefetching

Prefetching means grabbing data that you know you are going to need before you access it. Normally, as we discussed in Chapter 2, LLBLGen Pro uses lazy loading to pull data from the database only when it is needed. Lazy loading is handy as you write your application because you can focus on making the application work, not grabbing exactly the right data from the database. The disadvantage of lazy loading, however, is that depending on how we use the object, we can cause queries to execute in an inefficient manner. Let’s fix that right now. Modify the GetSalesOrderHeaderCollection() method in the SalesOrderHeaderEntity.cs file in the AW.Data project according to Example 10.8. Lines 6-10 are new.

1  ...
2  ISortExpression Sort = new SortExpression();
3  Sort.Add(SalesOrderHeaderFields.OrderDate | SortOperator.Ascending);
4  SalesOrderHeaderCollection Orders = new SalesOrderHeaderCollection();
5  Orders.SupportsSorting = true;
6  IPrefetchPath Prefetch =
7    new PrefetchPath(
8      (int)EntityType.SalesOrderHeaderEntity);
9  Prefetch.Add(
10   SalesOrderHeaderEntity.PrefetchPathCustomerViewRelated);
11 Orders.GetMulti(Filter, 100, Sort, Relations, Prefetch);
12 return Orders;
   }

Example 10.8. Class SalesOrderHeaderEntity modified public method GetSalesOrderHeaderCollection() with prefetching
In Lines 6-8, we create a PrefetchPath object. This object tells the data access layer which related entities to load when the main entity is loaded. Notice that we have to pass the type of entity we will be using to the constructor; we use the AW.Data.EntityType enumeration to do this. Next, in Lines 9-10, we add one of the PrefetchPathElement objects that are built into each entity class, which in this example is the PrefetchPathCustomerViewRelated object from SalesOrderHeaderEntity. When we call the GetMulti() method, we now pass the PrefetchPath object as the final parameter. Now when the SalesOrderHeader rows are pulled from the database, all the related customer rows will be pulled in together in one additional, consolidated query. Build and run the application, and perform a trace on the same steps. (To clear the trace window, press CTRL + SHIFT + DELETE.) You should now get results like those in Table 10.9.

Query 1
SELECT [Person].[CountryRegion].[CountryRegionCode], [Person].[CountryRegion].[Name], [Person].[CountryRegion].[ModifiedDate] FROM [Person].[CountryRegion] ORDER BY [Person].[CountryRegion].[Name] ASC
exec sp_reset_connection SELECT [Person].[StateProvince].[StateProvinceID] AS [StateProvinceId], [Person].[StateProvince].[StateProvinceCode], [Person].[StateProvince].[CountryRegionCode], [Person].[StateProvince].[IsOnlyStateProvinceFlag], [Person].[StateProvince].[Name], 2 [Person].[StateProvince].[TerritoryID] AS [TerritoryId], [Person].[StateProvince].[rowguid] AS [Rowguid], [Person].[StateProvince].[ModifiedDate] FROM [Person].[StateProvince] ORDER BY [Person].[StateProvince].[Name] ASC exec sp_reset_connection exec sp_executesql N'SELECT DISTINCT TOP 100 [Sales].[SalesOrderHeader].[SalesOrderID] AS [SalesOrderId], [Sales].[SalesOrderHeader].[RevisionNumber], [Sales].[SalesOrderHeader].[OrderDate], [Sales].[SalesOrderHeader].[DueDate], [Sales].[SalesOrderHeader].[ShipDate], [Sales].[SalesOrderHeader].[Status], [Sales].[SalesOrderHeader].[OnlineOrderFlag], [Sales].[SalesOrderHeader].[SalesOrderNumber], [Sales].[SalesOrderHeader].[PurchaseOrderNumber], [Sales].[SalesOrderHeader].[AccountNumber], [Sales].[SalesOrderHeader].[CustomerID] AS [CustomerId], [Sales].[SalesOrderHeader].[ContactID] AS [ContactId], [Sales].[SalesOrderHeader].[SalesPersonID] AS [SalesPersonId], [Sales].[SalesOrderHeader].[TerritoryID] AS [TerritoryId], [Sales].[SalesOrderHeader].[BillToAddressID] AS [BillToAddressId], [Sales].[SalesOrderHeader].[ShipToAddressID] AS [ShipToAddressId], [Sales].[SalesOrderHeader].[ShipMethodID] AS [ShipMethodId], 3 [Sales].[SalesOrderHeader].[CreditCardID] AS [CreditCardId], [Sales].[SalesOrderHeader].[CreditCardApprovalCode], [Sales].[SalesOrderHeader].[CurrencyRateID] AS [CurrencyRateId], [Sales].[SalesOrderHeader].[SubTotal], [Sales].[SalesOrderHeader].[TaxAmt], [Sales].[SalesOrderHeader].[Freight], [Sales].[SalesOrderHeader].[TotalDue], [Sales].[SalesOrderHeader].[Comment], [Sales].[SalesOrderHeader].[rowguid] AS [Rowguid], [Sales].[SalesOrderHeader].[ModifiedDate] FROM ( [Sales].[vIndividualCustomer] INNER JOIN [Sales].[SalesOrderHeader] ON 
[Sales].[vIndividualCustomer].[CustomerID]=[Sales].[SalesOrderHeader].[CustomerID]) WHERE ( ( [Sales].[vIndividualCustomer].[StateProvinceName] = @StateProvinceName1)) ORDER BY [Sales].[SalesOrderHeader].[OrderDate] ASC',N'@StateProvinceName1 nvarchar(50)',@StateProvinceName1=N'Texas' exec sp_reset_connection exec sp_executesql N'SELECT [Sales].[vIndividualCustomer].[CustomerID] AS [CustomerId], [Sales].[vIndividualCustomer].[Title], [Sales].[vIndividualCustomer].[FirstName], [Sales].[vIndividualCustomer].[MiddleName], [Sales].[vIndividualCustomer].[LastName], [Sales].[vIndividualCustomer].[Suffix], [Sales].[vIndividualCustomer].[Phone], [Sales].[vIndividualCustomer].[EmailAddress], [Sales].[vIndividualCustomer].[EmailPromotion], [Sales].[vIndividualCustomer].[AddressType], [Sales].[vIndividualCustomer].[AddressLine1], [Sales].[vIndividualCustomer].[AddressLine2], [Sales].[vIndividualCustomer].[City], 4 [Sales].[vIndividualCustomer].[StateProvinceName], [Sales].[vIndividualCustomer].[PostalCode], [Sales].[vIndividualCustomer].[CountryRegionName], [Sales].[vIndividualCustomer].[Demographics] FROM [Sales].[vIndividualCustomer] WHERE ( ( [Sales].[vIndividualCustomer].[CustomerID] IN (@CustomerId1, @CustomerId2, @CustomerId3, @CustomerId4, @CustomerId5)))',N'@CustomerId1 int,@CustomerId2 int,@CustomerId3 int,@CustomerId4 int,@CustomerId5 int',@CustomerId1=15338,@CustomerId2=17018,@CustomerId3=15338,@CustomerId4=13215,@CustomerId5=17018
Table 10.9. Trace results with prefetching
Now when we perform the same steps, only 4 queries execute in the database. The 4th query now returns the same information as queries 4-8 of the previous example (notice the multiple parameters in the WHERE clause). With prefetching, we will always have exactly 4 queries, whereas without it, we would have at least 4 queries, and possibly many more, depending on how many results were found. The key point to remember is that if you find yourself looping through a collection of any kind, make sure that you have all the contents in memory already. Otherwise, lazy loading will retrieve the items individually.
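As a sketch of that rule, compare the two loops below. The entity and property names are inferred from this chapter’s generated code; treat the fragment as an illustration of the pattern rather than code to paste in:

```csharp
// Lazy loading: the first touch of the related view row inside the loop
// triggers one SELECT per order.
foreach (SalesOrderHeaderEntity order in orders)
{
    string lastName = order.CustomerViewRelated.LastName; // query per row
}

// Prefetching: the related rows arrive in one consolidated query,
// so the same loop reads only in-memory data.
IPrefetchPath prefetch =
    new PrefetchPath((int)EntityType.SalesOrderHeaderEntity);
prefetch.Add(SalesOrderHeaderEntity.PrefetchPathCustomerViewRelated);
orders.GetMulti(filter, 0, sort, relations, prefetch);

foreach (SalesOrderHeaderEntity order in orders)
{
    string lastName = order.CustomerViewRelated.LastName; // no extra query
}
```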
Complex Prefetching

Let’s look at another, more complex example of prefetching. If you run a trace on the use of the frmOrganization form we created in Chapter 5 that retrieves the hierarchy of managers, you will be in for a little surprise. Doing a lookup for Sanchez, Ken will result in 1,162 rows in our SQL profiler! (Make sure you are only counting the queries issued when the Search button is pressed and not the queries necessary to set up the form, which at the moment are also inefficient.) That is about 582 queries just for the search (discounting the sp_reset_connection calls). This number of queries obviously poses a performance problem. Because our query works recursively, overlooking prefetching here can cause queries to pile up exponentially. Let’s see what we can do to reduce the number of queries. In the frmOrganization.cs file, alter the btnSearch Click event handler to match Example 10.10 (lines that have changed are highlighted).
1  private void btnSearch_Click(object sender, EventArgs e)
2  {
3    tvOrganization.Nodes.Clear();
4    TreeNode MasterNode;
5    IPrefetchPath prefetch =
6      new PrefetchPath((int)EntityType.EmployeeEntity);
7    prefetch.Add(EmployeeEntity.PrefetchPathContact);
8    prefetch.Add(EmployeeEntity.PrefetchPathManages)
9      .SubPath.Add(EmployeeEntity.PrefetchPathManages)
10     .SubPath.Add(EmployeeEntity.PrefetchPathManages)
11     .SubPath.Add(EmployeeEntity.PrefetchPathManages)
12     .SubPath.Add(EmployeeEntity.PrefetchPathManages);
13   prefetch[1].SubPath.Add(EmployeeEntity.PrefetchPathContact);
14   prefetch[1].SubPath[0].SubPath.Add(
15     EmployeeEntity.PrefetchPathContact);
16   prefetch[1].SubPath[0].SubPath[0].SubPath.Add(
17     EmployeeEntity.PrefetchPathContact);
18   prefetch[1].SubPath[0].SubPath[0].SubPath[0].SubPath.Add(
19     EmployeeEntity.PrefetchPathContact);
20   prefetch[1].SubPath[0].SubPath[0].SubPath[0].SubPath[0].SubPath.Add(
21     EmployeeEntity.PrefetchPathContact);
22   EmployeeEntity Employee = new EmployeeEntity(
23     Convert.ToInt32(cbEmployee.SelectedValue), prefetch);
24   TreeNode EmployeeNode = MakeNode(Employee);
     ...

Example 10.10. Button btnSearch modified Click event with prefetching
This prefetching example is quite complex, but not too difficult to understand. We first create the PrefetchPath object in Line 5. Then we begin to add paths. For every row in the employee table, we have to add a join to the corresponding contact row AND the related rows in the employee table. In this way, every join has two branches. We go deep enough to exhaust the levels of management in this organization, which happens to be 6. By building the application and running another trace, you will see that the queries for the search are now down to 11. If you created a view that merges rows from the employee and contact tables, you could probably get the total number of queries down to 6. We would still need to write a more efficient query to populate the drop-down list on this form as well, but for now this example helpfully illustrates the difference between efficient queries and inefficient queries. Prefetching is one of the most important concepts to understand when you use lazy loading, since subtle changes can make the difference between a single query and a hundred queries.
Multi-threading

Now we are going to make our application more responsive by adding multithreading capability. Using multiple execution threads will allow our GUI to continue to respond to the user while processes are executing in the background. In order to implement multithreading, we first need to create a form that we can display while the background process is executing. This form will simply show a moving bar and raise an event if the Cancel button is pressed. We can reuse this form anywhere we would like to implement multithreading. Right-click on AW.Win, and go to Add > New Item. Select Windows Form and name the item frmStatusBar.cs. Change the properties of frmStatusBar according to Table 10.11.
Property           Value
ControlBox         False
FormBorderStyle    FixedDialog
Height             100
ShowInTaskbar      False
StartPosition      CenterScreen
Text               Working ...
Width              300

Table 10.11. Form frmStatusBar properties

Lay out frmStatusBar.cs according to Figure 10.12 and Table 10.13.
Figure 10.12. Form frmStatusBar control layout

   Control Type   Name           Properties to Set
1  ProgressBar    progressBar1   Value = 50, Style = Marquee
2  Button         btnCancel      Text = Cancel

Table 10.13. Form frmStatusBar controls to configure

Switch to the code view of frmStatusBar and add the necessary code according to Example 10.14.
1  namespace AW.Win
2  {
3    public delegate void CancelButtonEventHandler(
4      object sender,
5      CancelEventArgs e);
6    public partial class frmStatusBar : Form
7    {
8      public event CancelButtonEventHandler CancelButtonClicked;
9      public frmStatusBar()
10     {
11       InitializeComponent();
12     }
13     private void btnCancel_Click(object sender, EventArgs e)
14     {
15       btnCancel.Enabled = false;
16       CancelEventArgs eventarg = new CancelEventArgs(true);
17       CancelButtonClicked(sender, eventarg);
18       this.Close();
19     }
20   }
21 }

Example 10.14. Form frmStatusBar delegate, public event, and button btnCancel Click event handler
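One caution about the handler in Example 10.14 (our observation; the chapter does not address it): in C#, invoking an event that has no subscribers throws a NullReferenceException. If frmStatusBar could ever be shown before a form subscribes to CancelButtonClicked, a defensive version of the Click handler would test the event first:

```csharp
private void btnCancel_Click(object sender, EventArgs e)
{
    btnCancel.Enabled = false;
    // Raise the event only if at least one handler is attached.
    if (CancelButtonClicked != null)
    {
        CancelButtonClicked(sender, new CancelEventArgs(true));
    }
    this.Close();
}
```

In this application the subscription happens before the form is shown, so the guard is a precaution rather than a required fix.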
In Lines 3-5, we declare a public delegate which we will use with our CancelButtonClicked event. We declare the event in Line 8, indicating that we will use the delegate with the event. When btnCancel is clicked, in our Click event handler in Lines 13-19 we disable the button, create event arguments, raise the event, and then close the form. In each form where we use multithreading, we will need to catch the CancelButtonClicked event and perform the necessary cancellation of the thread. The purpose of the status bar form is only to raise the event. Now we can begin modifying frmOrderSearch. Drag a BackgroundWorker object from the Toolbox onto frmOrderSearch in design view. Name it searchWorker and change the WorkerSupportsCancellation property to true. Switch to code view, and add the member variable definitions from Example 10.15 to the form just under the
class definition.

1  namespace AW.Win
2  {
3    public partial class frmOrderSearch : Form
4    {
5      frmStatusBar _frmStatusBar;
6      DateTime _fromDate;
7      DateTime _toDate;
8      string _firstName;
9      string _lastName;
10     int _orderID;
11     string _orderName;
12     string _cityName;
13     string _state;
14     string _country;
15     string _zip;
16     SalesOrderHeaderCollection _results;
17     public frmOrderSearch()
18     {
19       InitializeComponent();
20     }

Example 10.15. Form frmOrderSearch new member variables

Now modify the btnSearch Click event to use the code in Example 10.16. Also add the new event handler in Lines 35-39.

1  private void btnSearch_Click(object sender, EventArgs e)
2  {
3    _fromDate = DateTime.MinValue;
4    if (dtpDateFrom.Checked)
5      _fromDate = dtpDateFrom.Value;
6    _toDate = DateTime.MinValue;
7    if (dtpDateTo.Checked)
8      _toDate = dtpDateTo.Value;
9    _orderID = 0;
10   _orderName = "";
11   if (tbOrderID.Text != "")
12   {
13     try
14     {
15       _orderID = Convert.ToInt32(tbOrderID.Text);
16     }
17     catch
18     {
19       _orderName = tbOrderID.Text;
20     }
21   }
22   _firstName = tbFirstName.Text;
23   _lastName = tbLastName.Text;
24   _cityName = tbCity.Text;
25   _state = cbState.Text;
26   _country = cbCountry.Text;
27   _zip = tbZip.Text;
28   btnSearch.Enabled = false;
29   _frmStatusBar = new frmStatusBar();
30   _frmStatusBar.Show();
31   _frmStatusBar.CancelButtonClicked +=
32     new CancelButtonEventHandler(_frmStatusBar_CancelButtonClicked);
33   searchWorker.RunWorkerAsync();
34 }
35 void _frmStatusBar_CancelButtonClicked(object sender, CancelEventArgs e)
36 {
37   searchWorker.CancelAsync();
38   btnSearch.Enabled = true;
39 }

Example 10.16. Form frmOrderSearch button btnSearch modified Click event
In Lines 3-27, we assign all values from the controls to the private member variables we created in Example 10.15. We take this step so that we can access these values from another thread. In .NET it is not possible to access any GUI controls from a thread other than the main application thread. Consequently, our new thread can do work; it just cannot access or change any controls as it does so. Because we need to read values from the controls and put them in a place where the new thread can safely access them, we added the member variables. In Line 28, we disable btnSearch. Then we create a new instance of frmStatusBar and display it to the user in Line 30. In Lines 31-32, we wire up the handler for the CancelButtonClicked event; the handler is the code in Lines 35-39. Then we call the RunWorkerAsync() method of the BackgroundWorker object, which actually launches the new thread. We have not yet defined what that thread will do, but we will add that code in a moment. Also note that in Line 37, we call the CancelAsync() method to terminate the thread when the CancelButtonClicked event is raised by frmStatusBar, indicating that the user clicked the Cancel button. Finally, we reactivate btnSearch in Line 38. Now we can add the code that will actually execute in the new thread. Create an event handler for the DoWork event for the searchWorker object. This can be done by double-clicking
the DoWork event in the Properties window of the searchWorker object in design mode. Add code to the handler according to Example 10.17.

1  private void searchWorker_DoWork(object sender, DoWorkEventArgs e)
2  {
3    _results = SalesOrderHeaderEntity.
4      GetSalesOrderHeaderCollection(
5        _fromDate,
6        _toDate,
7        _firstName,
8        _lastName,
9        _orderID,
10       _orderName,
11       _cityName,
12       _state,
13       _country,
14       _zip);
15 }
Example 10.17. BackgroundWorker searchWorker DoWork event handler
Note the warning that is added automatically by Visual Studio: do not put any code that reads or manipulates controls in this subroutine! In Lines 3-14, we call the same GetSalesOrderHeaderCollection() method that we were using before we added multithreading. This time, however, we pull the parameters from the private member variables instead of directly from the controls. Now we need to add a little code that will execute when the searchWorker has finished its work. Create a RunWorkerCompleted event handler for searchWorker and add the code from Example 10.18.
1  private void searchWorker_RunWorkerCompleted(object sender,
2    RunWorkerCompletedEventArgs e)
3  {
4    if (e.Error != null)
5    {
6      MessageBox.Show(e.Error.Message);
7    }
8    if (_frmStatusBar != null)
9      _frmStatusBar.Close();
10   btnSearch.Enabled = true;
11   dgvResults.DataSource = _results;
12 }

Example 10.18. BackgroundWorker searchWorker RunWorkerCompleted event handler
In Lines 4-7, we display to the user any errors that occurred inside the thread. In Lines 8-9, we close frmStatusBar if it has not already been closed. Then we re-enable btnSearch and bind the results to the DataGridView in order to display them to the user. Now when you perform a search, you should see a status bar appear on the screen like the one in Figure 10.19. The new BackgroundWorker object in .NET 2.0 makes multithreading easy to implement, but you still need to be careful when writing your code. As long as you remember not to put any code that accesses controls in the DoWork event handler of the BackgroundWorker object, you will not get any exceptions. As simple as it seems, a moving status bar significantly improves what a user experiences from your application. When possible, avoid making a user sit through a long process without the ability to cancel it. Status bars and cancel buttons make the user experience much friendlier by making the application feel more responsive to the user’s direction.
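One subtlety worth noting (our observation, not from the chapter): CancelAsync() only sets a flag on the BackgroundWorker; it cannot interrupt the single database call in Example 10.17 once it is executing. Background work that runs in a loop can honor the request by polling CancellationPending, roughly as follows (the work items and helper method are hypothetical):

```csharp
private void searchWorker_DoWork(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = (BackgroundWorker)sender;
    foreach (WorkItem item in _workItems) // hypothetical batch of work
    {
        if (worker.CancellationPending)
        {
            e.Cancel = true; // RunWorkerCompleted will see e.Cancelled
            return;
        }
        ProcessItem(item); // hypothetical helper; no control access here
    }
}
```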
Code Cleaning

Before we move on to the next chapter, let’s do another search through our code to look for lurking strings of property names in DataGridView columns that the compiler cannot check. You will remember that we did this before in Chapter 5 (Figure 5.26).
Figure 10.19. Form frmOrderSearch with multithreading
Go to Edit > Find and Replace > Find in Files (Ctrl + Shift + F). Enter criteria as shown in Figure 5.26. Search for ... .DataPropertyName = “ ... in the Entire Solution, with a Regular expression, in files of type “*.*”. By now, you will have strings from the DataGridView controls in frmOrderEdit and frmOrderSearch to change. For each match, substitute the corresponding field index (like we did before) instead of the string name. This code cleanup is a vital step before the schema changes and debugging that we will do in Chapter 12.
11
Deployment

“Art is never finished, only abandoned.” Leonardo da Vinci (1452-1519)

Chapter Goals
• Use ClickOnce to publish the application.
• Create Strong Name keys to sign our assemblies.
• Use a certificate to sign the ClickOnce manifest.
• Successfully publish the application.

Now we will work on the deployment of our application. We may not be completely finished yet, but we can go ahead and configure our project so it can be deployed easily. As we build newer versions, the deployment process will become as easy as clicking a few buttons.
Measure Twice, ClickOnce

We will be taking advantage of the new ClickOnce deployment feature of Visual Studio .NET 2005. ClickOnce is a breakthrough for Windows applications that makes them surprisingly easy to install and update, and it addresses one of the major advantages that web applications have held over Windows applications. Although you still need a client computer running a Windows operating system, ClickOnce allows you to benefit from the richness of a client application without the normal headaches associated with deploying and updating it. In order to get a true one-click installation experience, you need to configure some security settings in Visual Studio so that your users will not get several prompts and security warnings. But before we start making configuration changes inside Visual Studio, we will need to get a certificate.
Figure 11.1. Application installation publisher warning
Chapter 11
Figure 11.2. Strong name key creation screen
Certificates

Certificates are encryption keys that are associated with an identity issued from a specific entity. Anyone can be a certificate authority (CA), which is simply an entity that issues certificates. You will need a certificate from a CA to sign the ClickOnce manifest in Visual Studio. This certificate will give information about you, the publisher, to users who install your software. If your users already trust the CA that issued your certificate, the application will install seamlessly. If there is no certificate, or if the certificate is from an untrusted CA, users will get a message like Figure 11.1 when they install the software. We all know that users are prompted with security warnings all the time and will probably install the software anyway; hopefully, though, you will deploy your application in the “proper” manner, neither frightening your users needlessly nor contributing to their security callousness. In a Windows environment where your users are a member of your domain, you can get a certificate from your domain’s CA. This domain CA will be trusted by all the members of the domain automatically and, best of all, if you already have a Windows server running certificate services, you do not have to pay anything for the privilege of getting a certificate. You can find more information about configuring certificate services in a Windows environment at http://technet.microsoft.com. On the other hand, if you will be distributing this software to a larger audience, you will definitely want to get a certificate from a trusted third party. You can get a free certificate from http://www.cacert.org. Once we have a certificate, we can do the rest of our configuration inside Visual Studio.
Figure 11.3. Project AW.Data signing settings 116
Strong Name Key

The second item you will need is a pair of keys that you can use to give your assemblies a strong name. A strong name is like a globally unique identifier that will distinguish your assembly from every other. Visual Studio includes a utility that you can use to create a new set of keys. Let’s do that now. Right-click on AW.Data, and select Properties. Click on Signing in the left bar. Check Sign the assembly. Under the Choose a strong name key file drop-down list, select <New...>. Enter a name for the file, and optionally, create a password for the key. Press OK. (Figure 11.2) The key you just created should now be selected. Press CTRL+S to save these settings for your project. Right-click on AW.Win, and select Properties. Click on Signing, check Sign the assembly and select Browse to find
the key you just created. (Figure 11.3) If you have a certificate, select Sign the ClickOnce manifests and select your certificate. Optionally, you can also
create a test certificate here as well. (Figure 11.4) Press CTRL + S to save these settings.
ClickOnce Configuration

Now we are ready to configure the ClickOnce settings. While still on the Properties of AW.Win, select Publish on the left bar. (Figure 11.5) Enter an FTP site, HTTP site,
or Universal Naming Convention (UNC) path for a file share on your network. Usually, an HTTP site is the best choice. Make sure you have read/write permission in place before you attempt to publish or the process will fail. If the URL where you will post the files is different from where you are currently publishing, enter a different location in the Installation URL field.
Figure 11.4. Project AW.Win signing settings
Figure 11.5. Project AW.Win publish settings

Click Application Files. (Figure 11.6)
Here you will see all of the files that your application contains when compiled. If you select the Show all files option, you will see a few more that are normally hidden. You should not need to change anything here, but as your application grows more complex, you may find yourself changing these settings. Click OK to close the Application Files window. Click Prerequisites. (Figure 11.7)
Figure 11.6. ClickOnce Application Files settings

Figure 11.7. ClickOnce prerequisites settings

Here you specify which packages need to be present in order to install your application. The .NET Framework 2.0 is listed here by default, with the option to download the files from Microsoft’s website if needed. This is a good idea unless you have a local area network and your Internet bandwidth is limited. In that case, you might put the .NET Framework inside your LAN and add the location here. Click OK to close the Prerequisites window. Click Updates. (Figure 11.8)
The Updates menu specifies how and when your application will look for newer versions. The defaults are sufficient for most users, but if you have a specific need you can modify these settings. Click OK to close the Updates window. Click Options. (Figure 11.9)
Here you can configure miscellaneous publishing options. You can enter your company’s information. Visual Studio can optionally generate an HTML file for you that allows users to click on a button in a web page to install your application. You can specify the name of the HTML file here. Check Automatically generate deployment web page after every publish. This will ensure that the HTML file is always
present and always up-to-date.
Figure 11.8. ClickOnce update settings
Figure 11.9. ClickOnce options screen
Also from this dialog box, you can elect to have the application load immediately upon installation. This can eliminate yet another click that a user would normally have to make. Click OK to close the Options window.
Figure 11.10. Project AW.Win ClickOnce auto-generated web page
Deployment

Now that we have configured all the publishing options, click Publish Now to build and publish the AW.Win application. You will receive a notice in the status bar telling you whether the publish was successful. If you elected to have the HTML file open upon publication, you will see a web page pop up as in Figure 11.10. Clicking the Install button will install the application. Installation could not be easier! Before we install the application, though, let’s take a look at what Visual Studio generated for us behind the scenes. If you open the publish location that you specified, you will see something like Figures 11.11 and 11.12.
Figure 11.11. Project AW.Win main application deployment folder
Notice that in the main application folder, you get an HTML file, a setup.exe file, and several manifest files. Visual Studio organizes each version in its own folder by version and build number. This makes it easy to keep a history. Inside each version folder are all the necessary executables and DLL files.
Figure 11.12. Project AW.Win version-specific folder
Installation

Now let’s install the application to test the deployment settings. You can install the application on your development machine or test it on another client machine. Click on the Install button in the HTML file to install AW.Win. If you handled certificates and code-signing
correctly, you should not be prompted with any other questions. You will see a status window open as in Figure 11.13.
Figure 11.13. ClickOnce installation status bar
If there are no unexpected errors, the application should install and execute (if you configured it that way). If the connection between your machine and the publish location is fast (like over a local area network), the process is quite speedy.
Figure 11.14. Project AW.Win installation location
The application will appear inside the Add/Remove Programs control panel, but where do the files actually reside? Strangely enough, the files are not in C:\Program Files as you might expect. In Figure 11.14, you will see where the AW.Win application installed on a test machine. It goes into the profile folder of the currently logged-on user in C:\Documents and Settings, inside the Local Settings folder (which is hidden), in a folder called Apps. Now you have successfully installed the AW.Win application. If you also checked the option to make it available offline, you will have a shortcut in your Start menu to launch the application at a later time. From now on, you can go back to the Properties of your AW.Win project (Figure 11.5) and click Publish Now to build and deploy the latest changes to the same location. When a new version of the software is available, your users will be prompted to install it automatically when the application opens or closes (depending on the configuration). In the next chapter, we will look at how to make changes to the database and migrate them into your application.
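In addition to the automatic prompts, a ClickOnce application can check for updates itself through the System.Deployment.Application namespace. The sketch below is supplementary (not part of the AW.Win code as written); the API shown is the standard .NET 2.0 deployment API:

```csharp
using System.Deployment.Application;
using System.Windows.Forms;

// Only a network-deployed (ClickOnce-installed) application can use this
// API; when running from the debugger, IsNetworkDeployed is false.
if (ApplicationDeployment.IsNetworkDeployed)
{
    ApplicationDeployment deployment =
        ApplicationDeployment.CurrentDeployment;
    if (deployment.CheckForUpdate())
    {
        deployment.Update();   // download the newer version
        Application.Restart(); // relaunch so the update takes effect
    }
}
```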
12

Regenerating Code

“It is a bad plan that admits of no modification.”
Publilius Syrus (1st Century B.C.)

Chapter Goals

• Learn the cycle of regenerating LLBLGen Pro code without losing custom changes.
• Make minor changes to database field names or relationships and propagate the changes to your application.
• Make major changes to table names and propagate the changes to your application.
Inevitable Changes

Throughout the development of our application, we have focused on speed. We want to get a working application as quickly as possible while creating a good framework for developing the application further, writing as much reusable code as possible. Now we are going to show you how to make drastic changes to the schema of your database and propagate those changes to your application.

You may be asking, “Why didn’t we just plan better and make those changes at the beginning, before we started?” Ideally, the kinds of changes we are about to discuss would be made at the beginning of a project. But since ideal situations are rare, we are trying to simulate the “real world” as much as possible. Many developers inherit projects that were started without good decisions having been made at the outset, and nearly all developers build applications whose requirements change after development has started. In any case, we will almost certainly be in a position where we need to make drastic changes to the database and update our application with all the changes. LLBLGen Pro and Visual Studio make this process as painless as it can possibly be. Let’s look at the process for making changes and regenerating your code.
Minor Changes

To begin, we will start with some fairly minor changes. What are minor changes? Adding new entities, typed lists, and typed views are minor changes. Changes to column names in the database, column data types, and the names of relationships are other examples. The process of making these less involved changes is easy and efficient. First, we need to make some changes to the AdventureWorks schema in SQL Server.

• Open SQL Server Management Studio. Make a full backup of your database by right-clicking on the database and selecting Tasks > Backup. Save a backup file in a safe location.
• In SQL Server Management Studio, make the column name changes according to Table 12.1 in the AdventureWorks database.
Table | Old Column Name | New Column Name | Database Dependencies
SalesOrderDetail | OrderQty | Quantity | LineTotal calculated formula; CK_SalesOrderDetail_OrderQty constraint; the trigger associated with the table must also be removed or altered
SalesOrderHeader | TaxAmt | TaxCost | TotalDue calculated formula; CK_SalesOrderHeader_TaxAmt constraint
SalesOrderHeader | Freight | ShipCost | TotalDue calculated formula; CK_SalesOrderHeader_Freight constraint
SalesOrderHeader | TotalDue | TotalCost | (none)

Table 12.1. AdventureWorks database column name changes
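If you prefer a script to clicking through Management Studio, the renames can also be done with sp_rename. This is only a sketch: it assumes these tables live in AdventureWorks’ Sales schema, and it will only succeed after the dependent computed columns, check constraints, and trigger listed in Table 12.1 have been dropped or altered.

```sql
-- Sketch only: assumes the Sales schema and that the dependent objects
-- from Table 12.1 have already been removed or altered.
EXEC sp_rename 'Sales.SalesOrderDetail.OrderQty', 'Quantity', 'COLUMN';
EXEC sp_rename 'Sales.SalesOrderHeader.TaxAmt',   'TaxCost',  'COLUMN';
EXEC sp_rename 'Sales.SalesOrderHeader.Freight',  'ShipCost', 'COLUMN';
EXEC sp_rename 'Sales.SalesOrderHeader.TotalDue', 'TotalCost','COLUMN';
```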
Now we can open the LLBLGen Pro project that we created in Chapter 2 and update it with these schema changes.

• If Visual Studio is open, close it now.
• Open LLBLGen Pro and open your AW.Data LLBLGen Pro project file.
• Select File > Preferences. Ensure that the SyncMappedElementNamesAfterRefresh option is set to false. (Figure 12.2)
Figure 12.2. LLBLGen Pro preferences screen
If this option happened to be set to true, a change in the name of a field in the database would also change the corresponding entity property name. If you would rather not create breaking changes in your code, you can leave this option set to false, which is the default. Generally, keeping the column names and entity properties the same will avoid unnecessary confusion. We will make our breaking changes manually here in order to have more control over them.

Another preference you can set is the CleanUpVsNetProjects option. When selected, this option will update your Visual Studio project file to keep it current with any file names that might change in the regeneration process. With our minor changes, no code file names will change, but as we make major changes, this option will be useful to enable. Keep in mind that LLBLGen Pro cleans the project file by removing all files in the project and re-adding the files that were just generated. This cleaning-up process will remove old files from the project that are no longer needed, but if you have added extra files manually (as we did with the SalesOrderHeaderEntityValidator.cs and Utility.cs files that we created in Chapter 8) you will have to add them to your project again after regenerating your code.

• Set the CleanUpVsNetProjects option to true. Click Save to close the Preferences screen.
• Go to Project > Refresh All Catalogs ...
You will be prompted with an ominous but necessary warning. (Figure 12.3) Click Yes and proceed confidently.
Figure 12.3. Schema update warning
Next, you will be prompted with the same database selection screen you saw when you first set up your project. (Figure 12.4)
Figure 12.4. Database selection screen
Choose the same database server, username, password, and database catalog as you did in the beginning, and select Retrieve schema(s). (Optionally, you can choose Project > Unattended Refresh of All Catalogs instead of Refresh All Catalogs to bypass this screen if the settings used to connect to your SQL database are always the same.) LLBLGen Pro will scan the schema again and make the appropriate changes to your project. After the process has finished, you will get a screen summarizing the changes that were made to your project. (Figure 12.5)
Figure 12.5. Catalog refresh results screen
Notice that all the changes we made were found. As of this moment, we have not introduced any breaking changes into our Windows application: our data layer has adjusted to the change in the schema and isolated the upper application layers from it. Next, we will make the field name changes in the SalesOrderDetail and SalesOrderHeader entities, so that the field names will match the database column names.

• Right-click on the SalesOrderDetail entity and select Edit / Properties. Change the name of the OrderQty field to Quantity.
• Right-click on the SalesOrderHeader entity and select Edit / Properties. Change the name of the TotalDue field to TotalCost, the Freight field to ShipCost, and the TaxAmt field to TaxCost.

We will also make a few other changes while we have this screen open. Remember in Chapter 6, when we added all the properties to the SalesOrderHeaderEntity class which exposed rows from the CustomerViewRelated entity (Example 6.1)? Let’s add those properties to our LLBLGen Pro project, so LLBLGen Pro will take care of exposing them for us.

• In the SalesOrderHeader entity edit screen, click on the Fields on related fields tab. Add 6 new related fields from the CustomerViewRelated entity for FirstName, LastName, City, StateProvinceName, CountryRegionName, and PostalCode (Figure 12.6).

These breaking changes should be enough for now. Let’s regenerate the AW.Data project with the settings in Figure 12.7.

• Save the project by going to File > Save Project.
• Go to Project > Generate. Select Self-Servicing, Two Class Scenario (Base Classes Only).
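For comparison, the hand-written pass-through properties from Chapter 6 that these generated related fields replace looked roughly like this. This is a sketch: the member and type names are assumptions based on the entity names used earlier, and the code LLBLGen Pro generates may differ in detail.

```csharp
// Sketch of a hand-written pass-through property on the SalesOrderHeader
// entity, the kind of code the "fields on related fields" feature now
// generates for us. CustomerViewRelated is an assumed member name.
public partial class SalesOrderHeaderEntity
{
    public string CustomerFirstName
    {
        get
        {
            // Self-servicing lazy-loads the related entity on first access.
            CustomerViewRelatedEntity related = this.CustomerViewRelated;
            return (related == null) ? string.Empty : related.FirstName;
        }
    }
}
```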
Figure 12.6. Entity SalesOrderHeader fields on related fields screen
Selecting the Self-Servicing, Two Class Scenario (Base Classes Only) configuration ensures that our custom code is not overwritten. All of the other settings should be the same as they were when we first generated our code. By regenerating only the base classes, which contain all the field definitions, the entities we use will be updated while everything we customized remains untouched. Before generating, if you want to ensure that any particular file is not overwritten, you can set the read-only flag of the file to true (right-click on the file in Windows Explorer and select Properties).

• Click Start Generator.
Figure 12.7. Generator settings screen
Figure 12.8. Generation process results screen
The process will begin, and we will receive a notification when it is finished. (Figure 12.8) You can browse these results to see which items have been regenerated and which have been preserved. Notice that the entity classes have been preserved, as well as the validator classes.

• Now open your AW solution again in Visual Studio.
• Enable the Show All Files option in Solution Explorer (Figure 12.9). This option will help us to see items that are in the same folders as the project but not part of it.

Remember that LLBLGen Pro cleaned up our project file, so we need to add the SalesOrderHeaderEntityValidator.cs file and the Utility.cs file back to our project.

• Right-click on both files and select Include in Project.
• Without making any other changes, compile the AW.Data project.

The AW.Data project compiles without a problem; we have no custom code in the entity classes that needs to be updated. We do, however, now have two sets of properties in the SalesOrderHeader entity: the ones we defined manually and the ones generated from our LLBLGen Pro project. Both expose the same information from the related entity under two different names, so it now makes sense to remove the old manual ones. Although there are no breaking changes in the AW.Data project, we will find some in our AW.Win project.

• Try to compile the AW.Win project.

There are 7 errors that we will need to correct before we can compile (Figure 12.10). Notice that 2 of these errors are in DataGridView columns. If we had not set the DataPropertyName property using the field index, we might have missed these changes!

• Update all the references in the AW.Win project until it compiles.
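As a reminder, binding grid columns through the generated field-index enum rather than a string literal is what surfaces these renames at compile time. A sketch, with hypothetical grid and column names:

```csharp
// Binding by enum: after OrderQty is renamed to Quantity, the old enum
// member no longer exists, so a stale binding becomes a compile-time error.
dgvOrderDetails.Columns["colQuantity"].DataPropertyName =
    SalesOrderDetailFieldIndex.Quantity.ToString();

// A hard-coded string would still compile after the rename and simply
// display an empty column at run time:
// dgvOrderDetails.Columns["colQuantity"].DataPropertyName = "OrderQty";
```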
As you build and run the application, you should notice that it now operates the way it did before the schema changes. This result means there are no runtime errors. We changed the schema of the database and, in a few minutes, updated our data access layer, made the necessary changes in our custom code, and brought our application back to the fully working state it was in before we started. THAT is one of the best advantages of using the techniques in this book. This kind of development may not be glamorous, but it will definitely save you hours and days of code maintenance and run-time bug hunting.

Figure 12.9. Solution Explorer Show All Files toggle
Figure 12.10. Project AW.Win compilation errors

Major Changes

Now we will make some major changes to the database. Unless you enjoy painful, tedious processes, you probably will not do this very often, but it is a useful topic to understand. Major changes include changing table names or drastically altering the schema (adding, removing, or redesigning tables). In the case of table name changes, the names of the underlying class files themselves will change, as they are based on the names of the tables. If you find yourself completely overhauling your schema, you might want to start from a brand new generated data and business logic layer and then migrate your custom code over from your old project. For this example, we will just be making a few changes to table names in order to show you how to handle them without starting over from a new Visual Studio project.

In the AdventureWorks database, the schema is well organized and normalized: it is not screaming for changes like the old Northwind example database that used to be included with SQL Server. We will, however, change some of the table names anyway to practice doing so. A complaint one might make about the AdventureWorks schema is the long names of the order-related tables. It seems logical that the name OrderHeader would work as well as SalesOrderHeader and save unnecessary typing.

• Close Visual Studio, open up SQL Server Management Studio again, and change the table names according to Table 12.11.

Old Table Name | New Table Name
SalesOrderHeader | OrderHeader
SalesOrderDetail | OrderDetail
SalesOrderHeaderSalesReason | OrderHeaderSalesReason

Table 12.11. AdventureWorks table name changes
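Scripted with sp_rename, the table renames look like this. A sketch, assuming the Sales schema; note that sp_rename does not update dependent stored procedures or views, which is why they must be fixed by hand afterwards.

```sql
-- Sketch only: assumes the AdventureWorks Sales schema.
EXEC sp_rename 'Sales.SalesOrderHeader', 'OrderHeader';
EXEC sp_rename 'Sales.SalesOrderDetail', 'OrderDetail';
EXEC sp_rename 'Sales.SalesOrderHeaderSalesReason', 'OrderHeaderSalesReason';
```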
Changing the table and field names may break stored procedures and other dependent database objects. If you want the affected objects to continue to operate, you will need to fix them in SQL Server. Now we can update our LLBLGen Pro project.

• Open LLBLGen Pro and select Project > Unattended Refresh of All Catalogs.
The program should notify you that 2 of the tables in our project were renamed (the OrderHeaderSalesReason table is not in our project). We need to change the names of the entities manually if we want them to match the new table names in our database. The relationships between the entities will also need to be updated, since their names are based on the entity names. Because the entity and relationship objects require so many changes, it is easier to delete the entities from the project and re-add them.

• Delete the SalesOrderHeader and SalesOrderDetail entities from the project.
• Re-add both entities to the project.
We also need to restore the relationship we created in Chapter 2 between the OrderHeader entity and the customer view we added as an entity in Figure 2.16. This relationship exists only in our LLBLGen Pro project and not in the database.

• Regenerate your code with the Self-Servicing, Two Class Scenario (Full/Safe) Visual Studio 2005 configuration to the same folder you used before.

This option will try to regenerate all the files (base classes, entity classes, and Visual Studio project files), but it will not overwrite any files that already exist. For our situation, this option is exactly what we need. Since the OrderHeader and OrderDetail tables are “new”, we need the base class files and the business logic classes, but we do not want to lose the ones we have already customized. Once your regeneration is complete:

• Open Visual Studio and your AW solution.
• Enable the Show All Files option in Solution Explorer. Remember that LLBLGen Pro cleaned up our project file, so we need to add the SalesOrderHeaderEntityValidator.cs file and the Utility.cs file back to our project.
• Copy and paste your custom code from the old SalesOrderHeaderEntity class to the new OrderHeaderEntity class. Also move code from the SalesOrderDetailEntity class to the new OrderDetailEntity class. You will need to change class names in your code as necessary.
• Continue migrating and updating code in the AW.Data and AW.Win projects until you can get the solution to compile again.
You can use the Find and Replace function to save time, but be careful: you might end up changing more than you intend! If you find that you have made unexpected changes, you can always close the tabs that open automatically without saving the changes. You will notice that there are a lot of breaking changes in these projects! This process of updating code should take about 30 minutes. Considering the number of changes to our database, the time is well spent. Also, remember that we are catching these breaking changes at compile time and not run time. It would take considerably longer to test all the features in our application if the errors were all run-time errors.

• Compile and run the application.
The application should now operate just like it did before the schema changes! We just made major changes to the schema of our database, and, in less than an hour, propagated those changes to the application. If we had many hardcoded strings in our code that referenced table names and column names, we would still need to take more time to fully purge the old names from the system. Having a reliable process for maintaining your code ensures that your application will be flexible enough to evolve as the needs of your organization change, which they almost certainly will. A wise developer prepares not only for the expected, but also for the unexpected!
Appendix 1

Free Icons

Microsoft has included free icons with Visual Studio that can add a professional look to your applications. The graphics are located in a ZIP file inside the Visual Studio installation folder (C:\Program Files\Microsoft Visual Studio 8\Common7\VS2005ImageLibrary\VS2005ImageLibrary.zip). Once you unzip the files, you can begin using them in your projects. To add an icon to a ToolStripButton:

• Select the ToolStripButton and click the button in the Image property. Select Import and browse to the location where you unzipped the files.
• Select the picture you wish to import. Select Open, and then OK to close the Select Resource window.
• You will need to select the color that should be transparent in order for the control to render properly without a background. Change the ImageTransparentColor property of the ToolStripButton to Black (for most icons in the library).
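The same two settings can also be applied in code rather than through the designer. A minimal sketch, assuming a ToolStripButton named tsbSave and a hypothetical path to one of the unzipped bitmaps:

```csharp
// Load an icon from the unzipped image library and make its black
// background transparent so it blends into the toolbar.
// The button name and file path below are placeholders.
tsbSave.Image = Image.FromFile(@"C:\Icons\Save.bmp");
tsbSave.ImageTransparentColor = Color.Black;
```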
Here are some useful icons you might want to use in your project:
Afterword

“The most important development in your life is not the programs you write, but the kind of person you are becoming.”
The Author

We have finished the life cycle of this example application, but your journey has just begun. As you have followed these examples, you have seen why the methods presented here are a powerful way to build an application quickly without sacrificing maintainability. While the application at this point has only a few features, it is a powerful framework to build on. Hopefully, you have come to appreciate how the combination of Visual Studio .NET 2005, LLBLGen Pro, and SQL Server 2005 automates repetitive tasks more efficiently than you could on your own, while leaving you the flexibility to extend the framework to meet your specific needs and implement your business logic.

You have also been exposed to the advantages of using dynamic SQL and seen how your application becomes more maintainable by working with strongly-typed objects. You have been able to take advantage of the newer Visual Studio features to work with code visually, and you have seen how easy it is to deploy a Windows application. Each of these techniques applies to web applications as well. Because so much of our logic is moved to the bottom tiers of our application, porting the Adventure Works application to the web would be quite easy.

It is my hope that this book has been helpful to you as a developer. Please drop me an email with your comments and suggestions, or just give me your email address so that I can notify you as the book is updated. Also, if you would like to see more titles like this one, let me know.

Joseph Chancellor
josephchancellor@gmail.com
www.lulu.com/josephchancellor