Functional Programming, Testing Tools, SQL, CRUD in HTML, Legal Notes
JAN FEB 2016 - codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $5.95 / Can $8.95
Swift 2.0
Introducing Apple’s Latest Programming Language
Maintain JavaScript Sanity with JSLint, AngularJS, and TDD
Integrate YouTube into Your iOS Applications
Host Desktop Apps in Azure
TABLE OF CONTENTS
Features 8 Legal Notes: Code of Conduct If you’ve ever been to a conference, you’ve seen a code of conduct. Are they binding? What do they really mean, anyway? John tells us what’s wrong with most of them and how to create one that’s inclusive and legal.
John V. Petersen
12 CRUD in HTML, JavaScript, and jQuery Using the Web API In this second installment of his new series on working within HTML and the Web API, Paul looks at the four standard HTTP verbs GET, POST, PUT, and DELETE. By creating a product information page with mock data, you’ll get a good idea of the power of these tools.
Paul Sheriff
22 JSLint, AngularJS, and TDD Sahil explores the three cardinal rules of working on any JavaScript project and introduces some cool new tools.
Sahil Malik
30 The Baker's Dozen: 13 SQL Server Interview Questions Kevin uses his experience on both sides of the interview table to help you wow at your next interview. Not only that, but you'll probably pick up a few pointers, too!
Kevin Goff
44 Introduction to Swift 2.0 Unless you’ve been living under a rock, you’ve heard about the new Swift language that’s taken the iOS/OSX community by storm. Learn some of Swift’s basic features as Mohammad explores what’s new in this second release.
Mohammad Azam
48 How Functional Reactive Programming (FRP) is Changing the Face of Web Development Joe helps you understand the emerging front-end framework technology that's showing up everywhere these days.
Joe Eames
52 Integrating YouTube into Your iOS Applications You've made your website pretty spiffy, but the one thing it's missing is the one thing that makes social media platforms so hard to compete with. Add sound and video to your content by making it YouTube-capable. Jason shows you how!
Jason Bender
60 From MSTest to xUnit, Visual Studio, MSBuild, and TFS Integration Punit explores the necessary detail of testing and a useful collection of tools that you can employ. His advice ensures not only that your code runs as designed, but that the testing process is as painless as possible.
Punit Ganshani
66 Azure Skyline: Remote App — Hosting Desktop Apps in Azure In this next installment of his exploration of Microsoft Azure, Mike explores the benefits of remote desktop apps.
Mike Yeager
Columns
74 Managed Coder: On Motivating
Ted Neward
Departments
6 Editorial
23 Advertisers Index
73 Code Compilers
US subscriptions are US $29.99 for one year. Subscriptions outside the US pay US $44.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. Bill me option is available only for US subscriptions. Back issues are available. For subscription information, send e-mail to subscriptions@codemag.com or contact customer service at 832-717-4445 ext 028. Subscribe online at codemag.com CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 300, Spring, TX 77379 U.S.A. POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 300, Spring, TX 77379 U.S.A. Canadian Subscriptions: Canada Post Agreement Number 7178957. Send change address information and blocks of undeliverable copies to IBC, 7485 Bath Road, Mississauga, ON L4T 4C1, Canada.
ONLINE QUICK ID 00 EDITORIAL
The Curious Coder

I've been thinking a lot lately about what makes me tick. What is it that I really like about this job? Having been a software engineer for 25+ years, I still wonder what it is that makes me wake up every morning excited to continue my dominion over the silicon drones at my fingertips.
One of the things that makes me tick is an insatiable technological curiosity. Every day, I work on unique problems for my customers, my employees, and myself. I think it’s the curiosity aspect that keeps this field of work interesting. It boils down to the types of problems I’m trying to solve and the questions I’m asking.
I want to emphasize one real benefit to satisfying that curiosity: it's fun and, arguably, the best part of this job. Try remembering the first time you made a computer do something—anything. It doesn't matter how long you've been doing this; spending time opening your mind to new ideas is something magical!
Here are a few questions I've been asking lately:

• What's the best solution to processing a million or more records in memory?
• How does Google query billions of documents every nanosecond?
• What the heck is Data Science and why do I care?
• Are we going to look back on JavaScript as a huge mistake?
• What's the best way to teach a new coder programming concepts?
• What is this Go, Swift, Rust, insert-your-language-here all about?

I constantly grapple with questions like those. What's great is that I work in an industry where I can just try to solve them myself. Let's take the first question from the list: "What's the best solution to processing a million or more records in memory?" A client of mine receives hundreds of thousands of responses to advertisements on a daily basis. It took them hours and hours to process these records, and it was becoming a real burden. I decided to investigate whether it was possible to process these records faster.

During some spare cycles one weekend, I spent a few hours playing around with some C# code and .NET Framework structures to see if we could process these records more efficiently. During this investigation, I discovered the Dictionary<> class provided by the .NET Framework. By carefully structuring our hashes/keys, we were able to process these records in minutes versus hours, resulting in some very happy users. As an additional benefit, we've rolled this concept into multiple processes around the company, resulting in many more hours of total savings.

So what was the underlying cause of this outcome? I believe it was innate curiosity.
That curiosity made me attempt to solve this problem. Was I sure I could speed up the process? Absolutely not! I had a little confidence, but it wasn't 100% guaranteed. It took curiosity and willpower to work through multiple iterations, techniques, failures, and, ultimately, successes to come to a resolution. Luckily for me, this one paid off.

It is fun being curious. Your day can never be boring if you're curious and if you strive to satisfy that curiosity. Take the time to read an article on a subject you know nothing about but would like to, install that new compiler, try out that beta, go to a session on a technology you'll never use but would like to know more about. If you want to "level up" as a developer, take the time to satisfy that curiosity.

Figure 1: A great book about curiosity

One option is to broaden your horizons with a great book on curiosity. You can't go wrong with "A Curious Mind: The Secret to a Bigger Life" by Brian Grazer (Simon & Schuster, 2015, ISBN: 147673075X). I read this book over the summer and found Brian's stories amazing and enriching. It's one of the rare books that has had a powerful impact on my life.

Rod Paddock
ONLINE QUICK ID 1601021
Legal Notes: Codes of Conduct

It seems that every few months or so, the Twitter and blog-o-spheres burn up over the topic of Codes of Conduct. Specifically, the issue is whether or not a conference should have a formal and publicly posted code of conduct. I've written and commented extensively on this very topic. As a general matter, I believe codes of conduct for any event, whether it's a software conference, convention, or sporting event, is a good idea. At the same time, I don't believe an event's worthwhileness turns on the presence or absence of a code of conduct. If you're going to implement a code of conduct, it needs to be reasonable, clear in its intent, and, above all, enforceable. In addition, before you decide to implement a code of conduct, you should be clear about the potential liability that can be incurred as a result of implementing such a code. In this article, I'll review the text of what is likely the most pervasive code of conduct today: http://confcodeofconduct.com/, which is a repost of the one found at http://geekfeminism.wikia.com/wiki/Conference_anti-harassment/Policy.

DISCLAIMER: This and future columns should not be construed as specific legal advice. Although I'm a lawyer, I'm not your lawyer. The column presented here is for informational purposes only. Whenever you're seeking legal advice, your best course of action is always to seek advice from an experienced attorney licensed in your jurisdiction.

John V. Petersen
johnvpetersen@gmail.com
codebetter.com/johnpetersen
@johnvpetersen
John is a graduate of the Rutgers University School of Law in Camden, NJ and has been admitted to the courts of the Commonwealth of Pennsylvania and the state of New Jersey. For over 20 years, John has developed, architected, and designed software for a variety of companies and industries.
The Typical Conference Code Of Conduct

What follows comes from http://www.confcodeofconduct.com (licensed under the Creative Commons Attribution 3.0 Unported License). Many conferences today adopt some form of this code. After each section, there's an annotation of the issues associated with that section.

All attendees, speakers, sponsors, and volunteers at our conference are required to agree with the following code of conduct. Organizers will enforce this code throughout the event. We are expecting cooperation from all participants to help ensuring a safe environment for everybody.

The first issue with this opening paragraph is the requirement that attendees, speakers, sponsors, and volunteers agree with the following code of conduct. The problem is that there's usually never a mechanism to manifest that agreement. The second issue is the expectation of cooperation from all participants. This is an example of inconsistent language and a lack of defined terms. At the outset, a specific list of persons (attendees, speakers, sponsors, and volunteers) is enumerated. In the next sentence, all of that appears to be collapsed into a single term – participants. Also, while there is an expectation of cooperation, there is no tacit requirement. Language matters in documents that are expected to be legally binding. Right off the bat, this model code of conduct is off to a bad start.
The Quick Version

Our conference is dedicated to providing a harassment-free conference experience for everyone, regardless of gender, gender identity and expression, age, sexual orientation, disability, physical appearance, body size, race, ethnicity, religion (or lack thereof), or technology choices. We do not tolerate harassment of conference participants in any form. Sexual language and imagery is not appropriate for any conference venue, including talks, workshops, parties, Twitter, and other online media. Conference participants violating these rules may be sanctioned or expelled from the conference without a refund at the discretion of the conference organizers.

This section is meant to be an abbreviated summary of the next, longer section. I'll defer most of the analysis for the longer paragraph on which this summary depends. There are, however, three things that need to be pointed out. First, there is a pattern of having enumerated lists to define entities. In the first paragraph, it was the list of persons covered. In this paragraph, it's a list of protected traits. Whenever there are lists such as these, they become words of limitation, meaning that if something is not contained in the list and there was an opportunity to have a more inclusive list, there's a strong argument that the omitted term was not meant to be covered. This is why you often see the phrase including, but not limited to. With that magic phrase, the list becomes one of inclusion, with examples to help illustrate what could be covered.
The second issue relates to the mention of the venue. Conferences never own the venues. Rather, conferences sign agreements with the host convention center, hotel, etc. Those agreements require that conferences abide by the facility's rules, which may include a code of conduct. The irony is that a conference may not be allowed to substitute its own code for the one that already exists for the property.

The third issue relates to the last sentence, where a violating participant may be sanctioned or expelled. Given the climate around these codes, you'd think that any violation would result in immediate expulsion. A good alternative to review is a code from Lincoln Financial Field, the home of the Philadelphia Eagles: http://www.lincolnfinancialfield.com/fan-code-of-conduct/. In the fan code of conduct, the language is much more direct. Also, take note of the last bullet item that reads: Any behavior which otherwise interferes with other fans' enjoyment of the game. This is equivalent to and better, in my opinion, than the phrase including, but not limited to.
It's important to note that "Conference Organizers" are not, themselves, a legal entity. Rather, they are simply a group of people putting on an event. It may very well be that the entity that signed an agreement with the venue is, in fact, a legal entity. In such a case, it's the entity, not the individual organizers, that should be referenced in such a code. Also, the action is not absolute. In other words, the possibility exists that even if somebody engages in harassing behavior as defined under the policy, no action would be taken against that individual.
The Less Quick Version
If you are being harassed, notice that someone else is being harassed, or have any other concerns, please contact a member of conference staff immediately. Conference staff can be identified as they'll be wearing branded t-shirts. You have our contact details in the emails we've sent.

This may or may not be true. For any legal document to succeed, the specific terms need to be contained within its four corners. Alternatively, there need to be links so that you can easily access the supplemental material. Assuming that certain information has been received and comprehended in emails that may or may not have been delivered is shaky ground at best.
Harassment includes offensive verbal comments related to gender, gender identity and expression, age, sexual orientation, disability, physical appearance, body size, race, ethnicity, religion, technology choices, sexual images in public spaces, deliberate intimidation, stalking, following, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact, and unwelcome sexual attention.

What if a joke is made about someone's political affiliation or stance on a social issue like gun control? Those items are not contained in the list that defines harassment. The goal of what this paragraph seeks to achieve is laudable. The bottom line, however, is that there is an intent to make this a legally binding and enforceable document. The downfall of any legal document results from ambiguities, which normally result from undefined and inconsistent terms.

Participants asked to stop any harassing behavior are expected to comply immediately.

I mentioned earlier the issue with the undefined phrase participants. The other problem is the phrase expected. A better, more appropriate word would be required.

Sponsors are also subject to the anti-harassment policy. In particular, sponsors should not use sexualized images, activities, or other material. Booth staff (including volunteers) should not use sexualized clothing/uniforms/costumes, or otherwise create a sexualized environment.

Three paragraphs in, sponsors are now included. This issue is related to the undefined term participants. There needs to be a section or a defined term early in the text that clearly spells out who is covered under the policy. There's a liberal use of the word should as opposed to the word shall. If the goal is to have a policy with teeth in it as to enforceability, then there can't be any question as to discretion on the part of individuals expected to be covered under the policy. There's also a distinction between participants and sponsors. Why are there two separate terms? Do we need different terms? Is the intent that there should be a difference in how the two are treated under the policy? These are the sorts of questions, ambiguities, and unintended consequences that arise when non-lawyers draft legal documents.

If a participant engages in harassing behavior, the conference organizers may take any action they deem appropriate, including warning the offender or expulsion from the conference with no refund.

This section implies that the conference is not a corporate entity, but rather, a group of affiliated individuals.
Conference staff will be happy to help participants contact hotel/venue security or local law enforcement, provide escorts, or otherwise assist those experiencing harassment to feel safe for the duration of the conference. We value your attendance. We expect participants to follow these rules at conference and workshop venues and conference-related social events.

The goal and intent of these last sections appear to be clear: to make people feel safe. The problem is that the language is largely unworkable for several reasons. First, it's extremely broad and ambiguous. Second, as mentioned previously, there are no definitive requirements for what constitutes harassment; rather, there are simply expectations. Third, there's an attempt to make conference participants part of the policing apparatus. Although there's the generic requirement that we all stay within the generally accepted boundaries of socially and legally acceptable behavior, the health, safety, and welfare of conference attendees rests squarely with the venue and the conference entity. There's also the issue of training. Conference volunteers rarely have the training necessary to deal with the situations contemplated in the code of conduct. Going back to the agreements between the venue and conference, there's normally an affirmative requirement on the part of the conference to report any incidents to the venue directly. The irony is that the possibility exists that the mere presence and attempted enforcement of a non-venue code can result in breach of the agreement between the venue and the conference.
An Alternative Code of Conduct

The idea of a code, or, as I like to put it, a standard of conduct is a good thing. If nothing else, it provides an objective yardstick that everyone can reference. It puts everyone on notice and under the same standard. For that to happen, however, there must be consistent and unambiguous language. I've created such a standard that can be found here: https://gist.github.com/johnvpetersen/8873654. The text is as follows:
Who is Covered?

Anyone who is affiliated with this Conference (The "Participant") is expected to conduct themself in a civil manner and treat any other Participant with respect and civility. (The "Standard of Conduct"). A Participant includes, but is not limited to any Conference attendee, guest, sponsor, or staff.
Anybody who could be associated with a conference is covered. There’s a clear definition of what a participant is. Most importantly, there are no words of exclusion such that a group could fall through the cracks and not be covered.
What is Covered?

The Standard of Conduct is defined by what is deemed to be generally accepted by the Conference; the conference location (the "Venue"); the Venue's own standards of conduct, rules, and regulations; or any legal authority to which the Venue or Participant is subject. Any other conduct by a Participant that otherwise disrupts another Participant's Conference experience shall be covered as well.
Other Codes of Conduct

In the past few years, the debate and discussions around conference codes of conduct have made me hyper-aware of codes that I see in everyday life. Whenever I see one, I always take time to read them. In addition to the fan code of conduct for Lincoln Financial Field, one of the biggest shopping mall owners, Simon Property Group, has a code: http://www.simon.com/legal/code-of-conduct. For Disney, there's a code of conduct for manufacturers: https://thewaltdisneycompany.com/citizenship/respectfulworkplaces/ethical-sourcing/ils/code-conduct-manufacturers. Finally, in 2007, the International Federation of Accountants (www.ifac.com) drafted guidance on Defining and Developing an Effective Code of Conduct for Organizations. In that report, the following working definition was postulated: "Principles, values, standards, or rules of behavior that guide the decisions, procedures and systems of an organization in a way that (a) contributes to the welfare of its key stakeholders, and (b) respects the rights of all constituents affected by its operations."
One of the biggest problems I see with the typical conference code of conduct is that it attempts to cover all forms of harassment and offensive behavior through specific enumerated lists. It's impossible to capture, in specific words, all of the possible forms of offensive conduct and protected classes of people. Broken down most simply, anybody who attends the conference, whether that person is an attendee, guest, staff member, or sponsor, is covered, and each owes a duty of respect and civility to each other. That's the ethical part of the standard. Respect and civility encompass everything and, therefore, there's no requirement to list specific examples. I also hook in the legal requirements. Between the ethical and legal requirements, every possible behavior is covered. With fewer and the right words, you end up with something that covers much more than the typical conference code of conduct that we see today.
How is this enforced?

Only timely and direct reports of violations with sufficient factual details to the Conference organizers can be investigated. Upon investigation, allegations may result in sanctions including, but not limited to, expulsion from the Conference and Venue without recourse. Any report deemed to have not been made in good faith or with a reasonable factual basis may be treated as a violation. Investigations and sanctions imposed shall be conducted and determined at the sole discretion of the Conference. Nothing in this Standard of Conduct interferes with or discourages a Participant from exercising his or her right to contact the Venue and/or law enforcement directly and, in such a case, the Conference shall fully cooperate with the Venue and law enforcement.

The enforcement section is probably the biggest deviation from the typical code of conduct that we see. Such a standard is a shield, not a sword. That said, if somebody lodges a complaint in bad faith, there should be accountability for that as well. That may be an issue for some who contend that such a phrase may be a bar to those who have a legitimate issue. As I see it, there needs to be at least a modicum of due process. Above all, any reported violation must be timely and must contain sufficient facts for an investigation to reasonably conclude that there was a violation. That's a pretty low bar to meet. At a minimum, such facts have to include who the alleged parties are and what transpired. As the above standard makes clear, if the complaining party doesn't feel they are getting an adequate remedy, he or she is free to contact the venue or law enforcement directly.

At 235 words, this code of conduct is about 140 words less than the typical conference's version. The gaps and ambiguities that exist in the typical code of conduct are resolved. In my standard of conduct, there are no gaps as to who and what is covered and how the standard is enforced. There are defined terms where necessary and, as a result, there are no ambiguities because there's no deviation from those defined terms. Above all, it puts everyone under the same standard.

No document, no matter how well drafted, can absolutely guarantee that everyone will be free of a negative experience. Somebody who insists on being a jerk and ruining an experience for others will always turn a deaf ear when it comes to respecting boundaries. The remedy? Eject them with no recourse. Conference attendees are best classified as licensees. Like with any other license, there come duties and responsibilities. It's a privilege, and under some circumstances, that privilege can be revoked. The key is that when dealing with such a situation, you, as the conference organizer, need to be able to point to a document that is clear and unambiguous, as those are the hallmarks of an enforceable document.

John V. Petersen
ONLINE QUICK ID 1601031
CRUD in HTML, JavaScript, and jQuery Using the Web API

In my last article (CODE Magazine, November/December 2015), I showed you how to manipulate data in an HTML table using only JavaScript and jQuery. There were no post-backs, so the data didn't go anywhere. In this article, you'll use the same HTML and jQuery, but add calls to a Web API to retrieve and modify product data. It isn't necessary to go back and read the previous article; this article presents all of the HTML and the client-side calls, and adds the server-side code as well. I'll be using Visual Studio and .NET to build the Web API service, but the client-side coding is generic and can call a Web API built in any language or platform.
Paul D. Sheriff
PSheriff@pdsa.com
Paul D. Sheriff is the President of PDSA, Inc. (http://www.PDSAServices.com). PDSA develops custom business applications specializing in Web and mobile technologies. PDSA, founded in 1991, has successfully delivered advanced custom application software to a wide range of customers and diverse industries. With a team of dedicated experts, PDSA delivers cost-effective solutions, on-time and on-budget, using innovative tools and processes to better manage today's complex and competitive environment. Paul is also a Pluralsight author. Check out his videos at http://www.pluralsight.com/author/paul-sheriff.
This article focuses on the four standard HTTP verbs that you use to work with the Web API: GET, POST, PUT, and DELETE. The GET verb retrieves a list of data, or a single item of data. POST sends new data to the server. The PUT verb updates an existing row of data. DELETE sends a request to remove a row of data. These verbs map to methods you write in your Web API controller class. It's up to you to perform the retrieving, adding, updating, and deleting of rows of data. Let's see how all of this works by building a project step-by-step.
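For orientation, here is roughly how the verbs line up with the ProductController methods you'll build, assuming the default api/{controller}/{id} route registered later in this article (the ID value 3 is just an illustrative example, not something you need to type in):

GET    /api/Product      maps to Get()             // retrieve all products
GET    /api/Product/3    maps to Get(3)            // retrieve one product
POST   /api/Product      maps to Post(product)     // add a new product
PUT    /api/Product/3    maps to Put(3, product)   // update an existing product
DELETE /api/Product/3    maps to Delete(3)         // remove a product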
Create a Product Information Page

If you're using Visual Studio, create a new ASP.NET Web Application project. Select "Empty" for the project template as you don't want any MVC, Web Forms, or even the Web API at this point. Add a new HTML page and name it Default.html. Open the Manage NuGet Packages dialog to add Bootstrap to your project. Bootstrap isn't necessary for the demo, but it does make your page look nicer. Open up Default.html and drag the bootstrap.min.css file, the jQuery-1.9.1.min.js file, and the bootstrap.min.js files into the <head> area of the page, as shown in the following code snippet.

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title></title>
  <link href="Content/bootstrap.min.css"
        rel="stylesheet" />
  <script src="Scripts/jquery-1.9.1.min.js">
  </script>
  <script src="Scripts/bootstrap.min.js">
  </script>
</head>
<body>
</body>
</html>
In the <body> tag of this page, build the Web page that looks like Figure 1. Add a Bootstrap container, and a row and column within the <body> element. Add an <h2> element with the words Paul’s Training Company (or substitute your name).
<body> <div class="container"> <div class="row"> <div class="col-sm-6"> <h2>Paul's Training Company</h2> </div> </div> </div> </body>
Immediately below the row you just added, create another row, and within that row, build the skeleton of an HTML table. Just build the header for the table, as you’ll build the body dynamically in JavaScript using data retrieved from your Web API. To learn more about building a table dynamically in JavaScript, please read my last article entitled “CRUD in HTML, JavaScript, and jQuery”. <div class="row"> <div class="col-sm-6"> <table id="productTable" class="table table-bordered table-condensed table-striped"> <thead> <tr> <th>Product Name</th> <th>Introduction Date</th> <th>URL</th> </tr> </thead> </table> </div> </div>
In Figure 1, you can see an “Add Product” button immediately below the table. This button is used to clear the input fields of any previous data so that the user can add new product data. After entering data into the input fields, the user clicks on the “Add” button to send the data to the Web API. Build this “Add Product” button by adding another Bootstrap row and column below the previous row. In the onClick event for this button, call a function named addClick. You haven’t created this function yet, but you will later in this article. <div class="row"> <div class="col-sm-6"> <button type="button" id="addButton" class="btn btn-primary" onclick="addClick();"> Add Product </button>
</div> </div>
To create the product input area you see in Figure 1, the individual fields are placed inside a Bootstrap panel class. The panel classes are ideal to make the input area stand out on the screen separate from all other buttons and tables. A Bootstrap panel consists of the panel wrapper, a heading, a body, and a footer area made out of <div> tags. <div class="row"> <div class="col-sm-6"> <div class="panel panel-primary"> <div class="panel-heading"> Product Information </div> <div class="panel-body"> </div> <div class="panel-footer"> </div> </div> </div> </div>
All label and input fields are placed within the "panel-body" <div> tag. To achieve the input form look, use the Bootstrap <div class="form-group"> around the label and input fields. Use the "for" attribute on the <label> and include class="form-control" on each of your input types to achieve the correct styling.

<div class="form-group">
  <label for="productname">Product Name</label>
  <input type="text" id="productname"
         class="form-control" />
</div>
<div class="form-group">
  <label for="introdate">Introduction Date</label>
  <input type="date" id="introdate"
         class="form-control" />
</div>
<div class="form-group">
  <label for="url">URL</label>
  <input type="url" id="url"
         class="form-control" />
</div>

The final piece of the input form is the Add button that you place into the panel-footer <div> tag area. This input button's text changes based on whether or not you're doing an add or edit of the data in the input fields. The JavaScript code you write checks the text value of this button to determine whether to POST the data to the Web API or to PUT the data. A POST is used for adding data and the PUT is used to update data. Again, there's a function name in the onClick event that you haven't written yet, but you will.

<div class="row">
  <div class="col-xs-12">
    <button type="button" id="updateButton"
            class="btn btn-primary"
            onclick="updateClick();">
      Add
    </button>
  </div>
</div>

Figure 1: Use a product information page to list, add, edit, and delete data.

If you run the Default.html page now, it should look similar to Figure 1, except there will be no listing of product data.

Add the Web API to Your Project

Figure 2: Use the Manage NuGet Packages screen to add the Web API to your project.

Now that you have your HTML page built, it's time to add on the appropriate components within Visual Studio so you can build a Web API to retrieve and modify product
data. In Visual Studio, this is easily accomplished using the Manage NuGet Packages dialog, as shown in Figure 2. Search for the “Microsoft ASP.NET Web API 2.2” package and click the Install button to add the appropriate components to your Web project. To create an endpoint for your Web API, create a controller class. Create a new folder called \Controllers in your project. Right-click on the Controllers folder and select Add | Web API Controller Class (v2.1). Set the name of your new controller to ProductController. Next, you need to specify the routing for your Web API. Create a new folder called \App_Start in your project. Add a new class and name it WebApiConfig. Add a Using statement at the top of this file to bring in the namespace System.Web.Http. Add a Register method in your WebApiConfig class. public static class WebApiConfig { public static void Register( HttpConfiguration config) { config.Routes.Clear(); config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); } }
Clear the Routes collection in case there are any default routes created by .NET. The route, api/{controller}/{id}, that is specified in the MapHttpRoute method is very standard for Web API projects. However, feel free to change this to whatever you want. For example, you could just use {controller}/{id} if you don’t want to have to specify api in front of all your calls. This is just a matter of preference. However, as most developers have an existing Web project that they’re adding API calls to, it makes sense to have a different route for your API calls. This keeps them separate from the route of your Web pages, which is typically “views.” The last thing you need to do before you can start calling your Web API is to register this route so that your Web application running on your Web server will recognize any call made to your API. Right-click on your project and select Add | New Item… | Global Application Class. This adds a Global.asax file to your project. At the top of the global application class, add the following using statement. using System.Web.Http;
Within the Application_Start event, add a call to the Register method of the WebApiConfig class that you created earlier. ASP.NET creates an instance of an HttpConfiguration class and attaches it as a static property of the GlobalConfiguration class. It is to this configuration object that you’ll add to the routes you wish to support in your Web application. protected void Application_Start(object sender, EventArgs e) { WebApiConfig.Register(
GlobalConfiguration.Configuration); }
Create Controller and Mock Data
You now have a Web page and all the plumbing for the Web API ready to go. Let’s start building your first API call to return a list of product objects to display in the table on the HTML page. Create a class called Product in your project and add the following properties. public class Product { public int ProductId { get; set; } public string ProductName { get; set; } public DateTime IntroductionDate { get; set; } public string Url { get; set; } }
Instead of worrying about any database stuff, you can just create some mock data to learn how to work with the Web API. Open the ProductController class and add a private method to create mock data, as shown in Listing 1.
Return Values from API Calls
If you look at the ProductController, you'll see methods that look like Listing 2. The problem with these methods is that each one returns a different type of value or no value at all. This means that if you want to return HTTP status codes like a 200, 201, 404, etc., you have to write extra code. If you want to return error messages back to the client, you have to change the return value on each of these. Introduced in the Web API 2 is a new interface called IHttpActionResult. This interface is built into the ApiController class (from which your ProductController class inherits) and defines helper methods to return the most common HTTP status codes such as a 200, 201, 400, 404, etc. The methods you'll use in this article are Ok, Created<T>, and NotFound. These methods return 200, 201, and 404, respectively. The Ok and Created methods allow you to pass data back so you can include things like collections of products or a new product object.
Get (GET) All Products Let’s start by modifying the GET method to return all products created in the mock data collection. Locate the GET method that has no parameters in the ProductController class, and modify the code to look like the following. [HttpGet()] public IHttpActionResult Get() { IHttpActionResult ret = null; List<Product> list = new List<Product>(); list = CreateMockData(); ret = Ok(list); return ret; }
Modify the return value of the GET method to use the new IHttpActionResult interface. Although it’s not necessary, I like adding the attribute [HttpGet()] in front of the method to be very explicit about which HTTP verb
this method supports. Declare a variable named ret, of the type IHttpActionResult. Declare a variable named list, to hold a collection of product objects. Build the list of data by calling the CreateMockData method that you defined earlier. Set the ret variable to the Ok method built into the ApiController class, passing in the list of product objects. The Ok method does a couple of things; it sets the HTTP status code to 200, and it includes the list of products in the HttpResponseMessage sent back from this API call.
Call the GET Method
With the GET method created to return a list of products, you can now call it from your HTML page. Open the Default.html page and add a <script> tag at the bottom of the page just above the </body> tag. You know that you have to create at least two functions right away because they were the ones you called from the buttons you defined in the HTML. Add these two function stubs now. <script> // Handle click event on Update button function updateClick() { } // Handle click event on Add button function addClick() { } </script>
Add a new function called productList to make the Ajax call to the GET method that you created. function productList() { // Call Web API to get a list of Product $.ajax({ url: '/api/Product/', type: 'GET', dataType: 'json', success: function (products) { productListSuccess(products); }, error: function (request, message, error) { handleException(request, message, error); } }); }
In this Ajax call, there are two additional functions that you need to write. The productListSuccess function processes the collection of products returned when you successfully retrieve the data. The handleException function takes the error information and does something with it. The productListSuccess function is very simple and uses the jQuery $.each() iterator to loop over the collection of product objects. function productListSuccess(products) { // Iterate over the collection of data $.each(products, function (index, product) { // Add a row to the Product table
productAddRow(product);
    });
}

The productAddRow function called from within the iterator is responsible for building a new row to add to the HTML table.

function productAddRow(product) {
    // Check if <tbody> tag exists, add one if not
    if ($("#productTable tbody").length == 0) {
        $("#productTable").append("<tbody></tbody>");
    }
    // Append row to <table>
    $("#productTable tbody").append(
        productBuildTableRow(product));
}

Notice that you first check to ensure that the <tbody> tag exists on the table. This ensures that any <tr> elements you add go into the correct location in the DOM for the table. The function to build the actual <tr> is in a function called productBuildTableRow. This is in a separate function because you'll use this later in this article to build a row for editing a row in a table.

function productBuildTableRow(product) {
    var ret = "<tr>" +
        "<td>" + product.ProductName + "</td>" +
        "<td>" + product.IntroductionDate + "</td>" +
        "<td>" + product.Url + "</td>" +
        "</tr>";
    return ret;
}

The last function to add is handleException. If an error occurs, display the error message information in an alert dialog. You can figure out how you want to display error messages later, but for now, you only want to see the error details.

function handleException(request, message, error) {
    var msg = "";
    msg += "Code: " + request.status + "\n";
    msg += "Text: " + request.statusText + "\n";
    if (request.responseJSON != null) {
        msg += "Message" + request.responseJSON.Message + "\n";
    }
    alert(msg);
}

Call the productList function after the Web page loads using the jQuery $(document).ready() function. Add the following code within your <script> tag.

$(document).ready(function () {
    productList();
});

Run the HTML page, and if you've done everything correctly, you should see your mock product data displayed in the HTML table. What's really neat about this is that you didn't have to use MVC, Web Forms, PHP, or any other Web development system. You simply use an HTML page to call a Web API service.

Listing 1: Create some mock data for your Web API

private List<Product> CreateMockData()
{
    List<Product> ret = new List<Product>();
    ret.Add(new Product()
    {
        ProductId = 1,
        ProductName = "Extending Bootstrap with CSS, JavaScript and jQuery",
        IntroductionDate = Convert.ToDateTime("6/11/2015"),
        Url = "http://bit.ly/1SNzc0i"
    });
    ret.Add(new Product()
    {
        ProductId = 2,
        ProductName = "Build your own Bootstrap Business Application Template in MVC",
        IntroductionDate = Convert.ToDateTime("1/29/2015"),
        Url = "http://bit.ly/1I8ZqZg"
    });
    ret.Add(new Product()
    {
        ProductId = 3,
        ProductName = "Building Mobile Web Sites Using Web Forms, Bootstrap, and HTML5",
        IntroductionDate = Convert.ToDateTime("8/28/2014"),
        Url = "http://bit.ly/1J2dcrj"
    });
    return ret;
}

Listing 2: The default methods in the controller need to be modified.

// GET api/<controller>
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}

// GET api/<controller>/5
public string Get(int id)
{
    return "value";
}

// POST api/<controller>
public void Post([FromBody]string value)
{
}

// PUT api/<controller>/5
public void Put(int id, [FromBody]string value)
{
}

// DELETE api/<controller>/5
public void Delete(int id)
{
}

Modify the GET Method

The GET method you wrote earlier assumes that you successfully retrieved a collection of data from your data store. However, when you retrieve data from a database table, you may have an empty table. In that case, the list variable would be an empty list, and you need to respond back to the front-end client that no data was found by sending back a 404 status using the NotFound method. Modify the GET method to look like the following.

[HttpGet()]
public IHttpActionResult Get()
{
    IHttpActionResult ret = null;
    List<Product> list = new List<Product>();
    list = CreateMockData();
    if (list.Count > 0) {
        ret = Ok(list);
    } else {
        ret = NotFound();
    }
    return ret;
}

Listing 3: Get a single product

[HttpGet()]
public IHttpActionResult Get(int id)
{
    IHttpActionResult ret;
    List<Product> list = new List<Product>();
    Product prod = new Product();
    list = CreateMockData();
    prod = list.Find(p => p.ProductId == id);
    if (prod == null) {
        ret = NotFound();
    } else {
        ret = Ok(prod);
    }
    return ret;
}
Get a Single Product

When you wish to edit a product, you call the Web API to retrieve a single product object to ensure that you have the latest data from the database, and then display that data in input fields to the user. The user then modifies that data and posts it back. You'll learn how to update data later in this article, but for now, let's see how to get a single product into the HTML input fields. Open the ProductController and locate the second Get method, the one that accepts a single parameter named
id. Modify that function to look like Listing 3. Because you’re using mock data, go ahead and build the complete collection of products. Locate the product id using LINQ to search for the ID passed into the method. If the product is found, return an Ok and pass the product object to the Ok method. If the product is not found, return a NotFound.
Listing 4: Get the product ID and use Ajax to get a single product object

function productGet(ctl) {
    // Get product id from data- attribute
    var id = $(ctl).data("id");
    // Store product id in hidden field
    $("#productid").val(id);
    // Call Web API to get a list of Products
    $.ajax({
        url: "/api/Product/" + id,
        type: 'GET',
        dataType: 'json',
        success: function (product) {
            productToFields(product);
            // Change Update Button Text
            $("#updateButton").text("Update");
        },
        error: function (request, message, error) {
            handleException(request, message, error);
        }
    });
}

Add an Edit Button to Each Row of the Table

Each row of data in the HTML table should have an edit button, as shown in Figure 1. The raw HTML of the button looks like the following, but of course you have to build this code dynamically so you can get the data- attribute assigned to the correct product id in each row.

<button class="btn btn-default"
        onclick="productGet(this);"
        type="button" data-id="1">
    <span class="glyphicon glyphicon-edit"></span>
</button>

To add this button into each row, add a new <th> element in the <thead> of the table.

<th>Edit</th>
Modify the productBuildTableRow function and insert the code below before the other <td> elements.

"<td>" +
    "<button type='button' " +
    "onclick='productGet(this);' " +
    "class='btn btn-default' " +
    "data-id='" + product.ProductId + "'>" +
    "<span class='glyphicon glyphicon-edit' />" +
    "</button>" +
"</td>" +
Within the <td>, build a button control. Add an onClick to call a function named productGet. Pass in this to the productGet function so that you have a reference to the button itself. You’re going to need the reference so you can retrieve the value of the product ID you stored into data-id attribute. To simplify the code for this article, I concatenated the HTML values together with the data. You could also use a template library, such as Underscore or Handlebars, to separate the HTML markup from the data.
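If you do go the template route, a minimal sketch using Underscore (assuming underscore.js is already referenced on the page, and leaving out the edit and delete button columns for brevity) might look something like this; it isn't the version used in the rest of this article:

// Compile the row markup once; <%- %> interpolates and HTML-escapes the value
var rowTemplate = _.template(
    "<tr>" +
    "<td><%- ProductName %></td>" +
    "<td><%- IntroductionDate %></td>" +
    "<td><%- Url %></td>" +
    "</tr>");

function productBuildTableRow(product) {
    // The product object supplies the values; the markup lives in one place
    return rowTemplate(product);
}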
Make an Ajax Call to Get a Single Product
In Listing 4, you can see the productGet function that you need to add to your <script> on your HTML page. In this function, you retrieve the product ID from the data-id attribute you stored in the edit button. This value needs to be passed to the Get method in your controller and it needs to be kept around for when you submit the data back to update. The best way to do this is to create a hidden field in your HTML body. <input type="hidden" id="productid" value="0" />
You’re now ready to call the Get(id) method in your ProductController. Add the function productGet within your <script> tags, as shown in Listing 4. The Ajax call is very
similar to the previous call you made, but the product ID is included on the URL line. This extra ID is what maps this call to the Get(id) method in your controller. If the Get method succeeds in returning a product object, call a function named productToFields and pass in the product object. Change the update button's text from "Add" to "Update." This text value will be used later when you're ready to add or update data. The productToFields function uses jQuery to set the value of each input field with the appropriate property of the product object retrieved from the API call.

function productToFields(product) {
    $("#productname").val(product.ProductName);
    $("#introdate").val(product.IntroductionDate);
    $("#url").val(product.Url);
}
Run this sample and ensure that you can retrieve a specific product from your API.
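One thing to watch for: an <input type="date"> element only accepts a value in yyyy-MM-dd format, while the Web API typically serializes a DateTime as a longer ISO 8601 string (for example, "2015-06-11T00:00:00"). If the introduction date doesn't show up in the date picker when you run the sample, a hedged variation of productToFields that trims the value first may help; adjust it to match however your serializer actually formats dates.

function productToFields(product) {
    $("#productname").val(product.ProductName);
    // Keep only the yyyy-MM-dd portion of the serialized date
    $("#introdate").val(String(product.IntroductionDate).substring(0, 10));
    $("#url").val(product.Url);
}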
Add (POST) a New Product

You used the GET verb to retrieve product data, so let's now learn to use the POST verb to add a new product. In the ProductController class, locate the Post method. Modify the Post method by adding an [HttpPost()] attribute and changing the return value from void to IHttpActionResult. Change the parameter to the Post method to accept a Product object. You don't need the [FromBody] attribute, so go ahead and delete that. The [FromBody] attribute is only needed for simple data types, such as string and int.

[HttpPost()]
public IHttpActionResult Post(Product product)
{
    IHttpActionResult ret = null;
    if (Add(product)) {
        ret = Created<Product>(Request.RequestUri +
            product.ProductId.ToString(), product);
    } else {
        ret = NotFound();
    }
    return ret;
}
The Add method is a mock method to simulate adding a new product. This method calculates the next product ID for the new product and returns a Product object back to the Web page with the new ID set in the ProductId property. private bool Add(Product product) { int newId = 0; List<Product> list = new List<Product>(); list = CreateMockData(); newId = list.Max(p => p.ProductId); newId++; product.ProductId = newId; list.Add(product); // TODO: Change to 'false' to test NotFound() return true; }
Add a New Product in HTML
With the Post Web API method written, you can write the necessary JavaScript in Default.html to call this method. Define a new JavaScript object called Product with the following name/value pairs. var Product = { ProductId: 0, ProductName: "", IntroductionDate: "", Url: "" }
Each of the names in this JavaScript object needs to be spelled exactly the same as the properties in the Product class you created in C#. This allows the Web API engine to map the values from the JavaScript object into the corresponding properties in the C# object.
The user fills in the blank fields on the screen, then clicks the Add button to call the updateClick function. Modify this function to create a new Product object, retrieve the values from each input field on the page, and set the appropriate values in the Product object. Call a function named productAdd to submit this JavaScript object to the Post method in your API.

function updateClick() {
    // Build product object from inputs
    Product = new Object();
    Product.ProductName = $("#productname").val();
    Product.IntroductionDate = $("#introdate").val();
    Product.Url = $("#url").val();

    if ($("#updateButton").text().trim() == "Add") {
        productAdd(Product);
    }
}
The productAdd function uses Ajax to call the Post method in the Web API. There are a couple of changes to this Ajax call compared to the GET calls you made earlier. First, the type property is set to POST. Second, you add the contentType property to specify that you’re passing JSON to the API. The last change adds a data property where you take the JavaScript Product object you created and serialize it using the JSON.stringify method. function productAdd(product) { $.ajax({ url: "/api/Product", type: 'POST', contentType: "application/json;charset=utf-8", data: JSON.stringify(product), success: function (product) { productAddSuccess(product); }, error: function (request, message, error) { handleException(request, message, error); } }); }
Add a New Product to the HTML Table
If the call to the Post method is successful, a new product object is returned from the Web API. In the success part of the Ajax call, pass this object to a function called productAddSuccess. Add the newly created product data to the HTML table by calling the productAddRow function you created earlier. Finally, clear the input fields so that they’re ready to add a new product. function productAddSuccess(product) { productAddRow(product); formClear(); }
The formClear function uses jQuery to clear each input field. You should also change the addClick function to clear the fields when the user clicks on the Add button. function formClear() { $("#productname").val(""); $("#introdate").val(""); $("#url").val(""); } function addClick() { formClear(); }
Update (PUT) a Product
At some point, a user is going to want to change the information about a product. Earlier in this article, you added an Edit button to retrieve product data and display that data in the input fields. The function also changes the updateButton's text property to Update. When the user clicks on the Update button, take the data from the input fields and use the HTTP verb PUT to call the Put method in your Web API. Modify the Put method in your ProductController to look like the following.

[HttpPut()]
public IHttpActionResult Put(int id, Product product)
{
    IHttpActionResult ret = null;
    if (Update(product)) {
        ret = Ok(product);
    } else {
        ret = NotFound();
    }
    return ret;
}

The call to the Update method from the Put method is a mock to simulate modifying data in a product table. To see the NotFound returned from this method, change the true value to a false value in the Update method shown below.

private bool Update(Product product)
{
    return true;
}

Edit and Submit the Product Data

With your Put method in place, it's now time to modify the updateClick() function in your JavaScript. Locate the updateClick() function and add an else condition to call a function named productUpdate.

function updateClick() {
    ...
    ...
    if ($("#updateButton").text().trim() == "Add") {
        productAdd(Product);
    } else {
        productUpdate(Product);
    }
}

The productUpdate function is passed the product object, and you'll send that via an Ajax call using the verb PUT. This maps to the Put method you created in the controller. Just like you did in the POST Ajax call, change the type property to PUT, set the contentType to use JSON, and serialize the Product object using JSON.stringify.

function productUpdate(product) {
    $.ajax({
        url: "/api/Product",
        type: 'PUT',
        contentType: "application/json;charset=utf-8",
        data: JSON.stringify(product),
        success: function (product) {
            productUpdateSuccess(product);
        },
        error: function (request, message, error) {
            handleException(request, message, error);
        }
    });
}

If the update is successful, a function called productUpdateSuccess is called and passed the updated product object. The productUpdateSuccess function passes the product object on to another function named productUpdateInTable. I know that you could just call the productUpdateInTable function directly from the Ajax call, but I like to keep my pattern of naming functions consistent. Besides, in the future, you may want to do some additional coding within the productUpdateSuccess function.

function productUpdateSuccess(product) {
    productUpdateInTable(product);
}

Update the Modified Data in the HTML Table

The productUpdateInTable function locates the row in the HTML table just updated. Once the row is located, a new table row is built using the productBuildTableRow function that you created earlier. This newly created row is inserted immediately after the row of the original data. The original row is then removed from the table. All of this happens so fast that the user doesn't even realize when this add and delete occur. Another option is to clear the whole table and reload all of the data by calling the productList function.

function productUpdateInTable(product) {
    // Find Product in <table>
    var row = $("#productTable button[data-id='" +
        product.ProductId + "']").parents("tr")[0];
    // Add changed product to table
    $(row).after(productBuildTableRow(product));
    // Remove original product
    $(row).remove();
    formClear();  // Clear form fields
    // Change Update Button Text
    $("#updateButton").text("Add");
}

Delete (DELETE) a Product

The last HTTP verb you need to learn is DELETE. Once again, you need to modify the ProductController. Locate the Delete method and modify it by adding the [HttpDelete] attribute, and changing the return value to IHttpActionResult.

[HttpDelete()]
public IHttpActionResult Delete(int id)
{
    IHttpActionResult ret = null;
    if (DeleteProduct(id)) {
        ret = Ok(true);
    } else {
        ret = NotFound();
    }
    return ret;
}
The DeleteProduct method is a mock to simulate deleting a product from a table. Just create a dummy method that returns true for now. You can switch this method to return false to test what happens if the delete fails.
Summary In this article, you learned the various HTTP verbs GET, POST, PUT, and DELETE and how to use them to create add, edit, delete, and list Web pages. What’s nice about combining the Web API with pure HTML is that you’re not performing full postbacks to the server, rebuilding the whole page, and then resending the whole page back down to the client browser. This comes in very handy on mobile devices where your user may be on a limited data connection through a service provider. The less data you send across, the fewer minutes and gigs of data you use on mobile phone plans. In the next article, you’ll see some methods for handling validation and error messages. Paul D. Sheriff
private bool DeleteProduct(int id)
{
    return true;
}
Add a Delete Button
Each row in your table should have a Delete button (Figure 1). Add that button now by adding a new <th> within the <thead> tag. <th>Delete</th>
Modify the productBuildTableRow function and add the following code immediately before the closing table row tag (“</tr>”). "<td>" + "<button type='button' " + "onclick='productDelete(this);' " + "class='btn btn-default' " + "data-id='" + product.ProductId + "'>" + "<span class='glyphicon glyphicon-remove' />" + "</button>" + "</td>" +
Delete a Product Using Ajax
In the code that builds the delete button, the onclick event calls a function named productDelete. Pass this to the productDelete function so that it can use this reference to retrieve the value of the product ID contained in the data-id attribute. The Ajax call sets the URL to include the ID of the product to delete and sets the type property to DELETE. Upon successfully deleting a product, remove the complete row from the HTML by finding the <tr> tag that contains the current delete button and calling the remove() method on that table row.

function productDelete(ctl) {
    var id = $(ctl).data("id");

    $.ajax({
        url: "/api/Product/" + id,
        type: 'DELETE',
        success: function (product) {
            $(ctl).parents("tr").remove();
        },
        error: function (request, message, error) {
            handleException(request, message, error);
        }
    });
}

Summary

In this article, you learned the various HTTP verbs GET, POST, PUT, and DELETE and how to use them to create add, edit, delete, and list Web pages. What's nice about combining the Web API with pure HTML is that you're not performing full postbacks to the server, rebuilding the whole page, and then resending the whole page back down to the client browser. This comes in very handy on mobile devices, where your user may be on a limited data connection through a service provider. The less data you send across, the fewer minutes and gigs of data you use on mobile phone plans. In the next article, you'll see some methods for handling validation and error messages.

Paul D. Sheriff

Sample Code

You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select PDSA Articles, then select "Code Magazine - CRUD in HTML using the Web API" from the drop-down list.
ONLINE QUICK ID 1601041
JSLint, AngularJS, and TDD

In my last two CODE Magazine articles, I outlined numerous interesting challenges that we deal with when writing JavaScript, and how TypeScript solves most of them. Certainly, a good programming language helps, but it's not a replacement for good development habits. Historically, JavaScript has been intertwined with HTML. Yes, I know that these days we write enterprise-class applications in JavaScript that have nothing to do with HTML, but the Web is where the roots of JavaScript are. As a result, unlike other languages, JavaScript lends itself to abuse. Nothing prevents someone from creating a JavaScript code snippet using string concatenation and then running it on the fly, which is incredibly unmaintainable.
Sahil Malik
www.winsmarts.com
@sahilmalik
Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. You can find out more about his trainings at http://www.winsmarts.com/training.aspx. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.
In this article, I share three cardinal rules of working on any JavaScript project:
1. Use tools, such as JSLint, along with good coding practices.
2. Use design patterns, or frameworks that encourage MVC, such as AngularJS.
3. Use TDD.
JSLint

JSLint is a static code analysis tool used in software development for checking whether JavaScript source code complies with coding rules. JSLint takes a JavaScript code file or JSON data, and scans it. If it finds a problem, it highlights the problem area for you. What's cool about JSLint is that it goes well beyond the basic syntax of the language and scans your source code using established JavaScript best practices. Although JSLint isn't always 100% right, and neither will it catch 100% of your issues, it should be looked at as "another set of eyes," helping you write consistent, bug-free, maintainable, and understandable JavaScript. JSLint is very customizable. If there are things in your code that JSLint views as issues but you don't, you can put hints in your code as comments that JSLint will understand and stop nagging you about those specific issues. By default, JSLint checks for the most common errors that JavaScript developers are prone to making. If you're writing plain JavaScript, which in some cases you still need to, you should definitely use JSLint to keep your code clean. You can read more about JSLint and the sorts of checks it does at http://www.jslint.com/help.html.
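To make the "hints in comments" idea concrete, here's a minimal sketch of what those directives can look like. The directive names come from the JSLint help page referenced above; the demoApp module name and the assumption that angular and $ are your globals are mine, purely for illustration:

/*jslint browser: true, devel: true */
/*global angular, $ */

// Without the global directive above, JSLint would flag 'angular' and '$'
// as undeclared names; with it, they're treated as expected globals.
var app = angular.module("demoApp", []);
console.log("Module registered: " + app.name);  // devel: true permits console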
MVC and AngularJS It helps to be well organized. You could achieve the same functionality by putting all of your code in the global namespace, or in a single function. It wouldn’t be very reliable and it wouldn’t be very maintainable. MVC is an industry standard design pattern that allows you to structure your code nicely. And, since it’s an industry standard design pattern, chances are that your friends who’re developers will readily understand it as well.
Platforms such as iOS development insist on MVC so much that they've built all their foundation classes to use MVC. In fact, not to use MVC in Mac development is an effort, so you might as well use it. On the Microsoft side of things, we've had well-established design patterns all the way back to MFC (remember document/view?). Even in the modern universal apps, we use a design pattern similar to MVC called MVVM. Design patterns are good. Design patterns should be used. Unfortunately, JavaScript is incredibly flexible and doesn't enforce the usage of design patterns. Contrary to other enterprise dev platforms, using MVC in JavaScript requires upfront thought and effort.
It's therefore no surprise that, over time, frameworks have emerged that require you to think in MVC while coding JavaScript. One such framework is AngularJS. Okay, let's be honest. AngularJS has pretty much been the de facto framework for the last year or so, which is a squirrel's lifetime in JavaScript, where things tend to be shorter-lived. MVC can be described like this:

• A controller can send commands to the model to update the model's state (e.g., editing a document). It can also send commands to its associated view to change the view's presentation of the model (e.g., by scrolling through a document).
• A model stores data that's retrieved by the controller and displayed in the view. Whenever there's a change to the data, it's updated by the controller.
• A view requests information from the controller. The controller fetches it from the model and passes it to the view, which generates an output representation for the user.

This article is not a tutorial on AngularJS, although a basic understanding of AngularJS is necessary to understand the concepts presented below. In AngularJS, you begin by defining a module, which can be thought of as the entry point to your application. Inside the module, you have controllers, which are exactly what you'd think; they're the controllers in an MVC design pattern. The "M," or Model, is JavaScript data that you hang as properties on the controller. The controller usually has the logic to fetch and massage the model as necessary,
and present whatever the view demands. The best part about AngularJS is that the view is merely embellished HTML. You can databind the model held by the controller to the view, which is HTML. The specific rendering details are left to view-specific concepts such as directives.

Additionally, not everything fits into the overhead of MVC. Sometimes you want code to be reusable across modules. Or sometimes you need a library of helper functions, etc. Those candidates are perfect for another Angular concept called services. (If you want a bare-bones refresher of these moving parts, see the short sketch after the list below.)

Now you have some high-level concepts in AngularJS that require testability:

• Controllers, or rather specific controller methods, including:
    • Simple controller methods
    • Controller methods that depend on deferreds returned from services
    • Controller methods that depend on external dependencies that you didn't write but that are part of the framework, such as $http
• Directives, because testing UI is incredibly difficult, yet Angular makes it easy for us
• Service methods
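As a quick refresher (and only as a refresher; the article's real application appears later in Listing 11), here's roughly what a module, a controller with its model, and a service look like in AngularJS 1.x. The names demoApp, demoCtrl, and greetingService are placeholders of mine, not part of the sample application:

// The module is the entry point to the application
var app = angular.module('demoApp', []);

// A service: reusable logic that doesn't belong in any one controller
app.factory('greetingService', function () {
    return {
        greet: function (name) { return 'Hello, ' + name; }
    };
});

// A controller: the model is plain data hung off $scope for the view to bind to
app.controller('demoCtrl', ['$scope', 'greetingService',
    function ($scope, greetingService) {
        $scope.message = greetingService.greet('CODE reader');
    }]);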
TDD

Because AngularJS is a fairly mature framework by now, it lends itself very well to TDD for two reasons:

• AngularJS was designed with TDD in mind. Even the framework objects can be easily mocked. This means that you can test services such as $http and $q without complex server-side infrastructure. Invariably, testing is hampered by the complexity of the setup: the applications we're all writing these days have too many dependencies, and if you get bogged down in setting up all those dependencies, you'll invariably end up not testing enough. A good test is one that can be run in a developer's IDE or a continuous integration setting with almost no setup.
• You might be asking why I'm not talking about ReactJS, the new kid on the block. Simply because it hasn't had enough street time yet. If it catches on, I'm sure that equivalent TDD patterns will emerge for it, as well. Angular is a bit more mature, and has a very rich testing framework available in Jasmine.

The essence of TDD is writing tests for your code before or while you write your code. Although it may seem like a lot of extra work, the real payoff is when your project starts to scale. The problem is that the code you write is extremely interconnected. Yes, I know that through better architecture, you try to keep these interconnections clean, but as the project scales, the number of interconnections can increase exponentially. Not to mention that requirements change, which renders an architecture based on initial assumptions invalid.

You need confidence to change the code or refactor. If you had written tests for every bit of your code, as long as the
tests produce the same output (i.e., the tests continue to pass), you can be confident that all those interdependent parts will continue to function as intended.
I think that one day in the very near future, we’ll be able to write these tests in TypeScript too, but for now, I went with JavaScript.
Sample Application
I need a sample guinea pig application to demonstrate my point. This simple app has:

• One Angular module
• One controller
• One service
• One HTML view
The service has one method that calls an AJAX service using $http and returns a deferred. You need to write a test for this method. The controller has two methods: one that returns a string and the other that calls the service method. This very simple application allows me to demonstrate the various scenarios you’ll need to write tests for.
Dev Environment Setup
The aim of a good development environment in this scenario is something that allows you to run your tests easily with a single keystroke. I'm going to use Visual Studio Code, and I'm going to write Gulp tasks to enable me to run tests easily. My testing framework of choice is Jasmine and my IDE of choice is Visual Studio Code. Although the code is written in TypeScript, I choose to write my tests using JavaScript. There are two reasons for this:

• The .d.ts files for Jasmine aren't quite there yet.
• When I wish to mock objects, all the support for that is in JavaScript.
Anyway, the goal here is to have a setup so that:

• I can use Visual Studio Code.
• I can author code in TypeScript and JavaScript, and TypeScript code continually compiles to JavaScript using the watch method.
• As I write tests and code, I want the tests to run continuously.
• I want all of this to work in Visual Studio Code.

Step #1 is to install Visual Studio Code from code.visualstudio.com. Step #2 is to install Node.js from nodejs.org because I'll make use of some node packages to facilitate the Gulp tasks. Once Node is installed, open a terminal (on Mac) or command window (on Windows) and run the following commands to install various node packages. You may need to run these in administrator mode (Windows), or using sudo (Mac).

npm i -g typescript
npm install -g gulp
npm install gulp-typescript
npm install -g karma
Great! Next, set up a folder structure, as shown in Figure 1. You may have already guessed that:

• Index.html is the entry point for the app.
• Lib contains the standard angular.js and angular-mocks.js libraries that you downloaded from the Internet.
Figure 1: App folder structure, the lib folder
Figure 2: The dts packages and the scripts folder
• I also created a tsconfig.json file for inputs to the TypeScript compiler.
• I also created a karma.conf.js to provide inputs to Karma for the tests.
• The gulpfile.js defines the Gulp tasks.

Now let's peek inside the scripts folder, which can be seen in Figure 2. As you can see, the code is kept inside two files:

• app.ts, which is the TypeScript file with the application code
• tests.js, which will contain the tests

Listing 1: The tasks.json

{
    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": ["--no-color"],
    "tasks": [
        { "taskName": "build", "args": [], "isBuildCommand": true },
        { "taskName": "scripts", "args": [], "isBuildCommand": true },
        { "taskName": "test", "args": [], "isBuildCommand": true }
    ]
}
Listing 2: The scripts task

var gulp = require('gulp');
var ts = require('gulp-typescript');
var tsProject = ts.createProject('tsconfig.json');

gulp.task('scripts', function () {
    var tsResult = tsProject.src().pipe(ts(tsProject));
    // The printed listing omitted the destination; '.' (the project root,
    // preserving relative paths) is an assumption on my part
    return tsResult.js.pipe(gulp.dest('.'));
});
Listing 3: The tsconfig.json

{
    "compilerOptions": {
        "module": "commonjs",
        "noImplicitAny": false,
        "removeComments": true,
        "preserveConstEnums": true,
        "declaration": true,
        "sourceMap": true,
        "watch": true
    }
}
You'll also find a dts folder where I've placed DefinitelyTyped definition files for Angular and jQuery. Now, in a real application, you probably want to follow some best practices, and separate out your Angular code into multiple files based on responsibility and folders based on views. However, since my application is so simple, I'm avoiding that overhead.
Next, open your project folder in Visual Studio Code, and add a .settings folder with a tasks.json file in it that identifies the tasks you'd like. Put the content shown in Listing 1 into the tasks.json file.

Great! Now, to complement this tasks.json, first let's flesh out the scripts task. The purpose of this task is to continually watch the TypeScript code and compile it into JavaScript. This can be seen in Listing 2. Note that the watch functionality depends on using the tsc.exe compiler from the command line. If you want watch to work in Gulp, write a Gulp task to do the watch for you (a sketch of such a task appears after the default task below).

This scripts task depends on tsconfig.json. In the tsconfig.json file, I've specified that I want commonjs JavaScript generated, and I want the TypeScript compiler to continuously watch and compile the code. This tsconfig.json can be seen in Listing 3.

Also, enter a task called test that will be responsible for running the tests. This can be seen in Listing 4. To complement the Gulp task shown in Listing 4, edit your karma.conf.js file as shown in Listing 5. What you're doing in Listing 5 is identifying the test framework as
Jasmine, and I've set up the tests to run in the Chrome browser on port 9876. Also, the tests will run continuously, so as I edit the files (either TypeScript or test), the tests will continually run and alert me to any inadvertent bugs that I might introduce.

Finally, Visual Studio Code doesn't come with a built-in Web server. Thankfully, it's very easy to persuade Node.js to provide one for you, using a Gulp task. This can be seen as the final build task, as shown in Listing 6. Now you might be thinking, what build task? Build is supposed to run tests and compile the scripts. Well, yes! You finally add a default task with dependencies on the build, scripts, and test tasks, as shown here:

gulp.task('default', ['build', 'scripts', 'test']);
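As noted earlier, the scripts task in Listing 2 compiles the TypeScript once per run; watching is left to the tsc compiler or to a Gulp task you write yourself. Here's a minimal sketch of what such a task might look like with gulp 3.x. The watch-ts name and the scripts/**/*.ts glob are my assumptions, not something the article prescribes:

// Re-run the 'scripts' task from Listing 2 whenever a TypeScript file changes
gulp.task('watch-ts', function () {
    gulp.watch('scripts/**/*.ts', ['scripts']);
});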
Figure 3: Generated TypeScript/ JavaScript files
At this point, your IDE is set up! Now if you write some code in TypeScript in the app.ts file, you should see app.js, app.d.ts, and app.js.map files created, as shown in Figure 3.
Listing 4: The test task

var karma = require('gulp-karma');

gulp.task('test', function () {
    var testFiles = [
        'lib/angular/angular.js',
        'lib/angular/angular-mocks.js',
        'scripts/app.js',
        'scripts/tests.js'
    ];
    return gulp.src(testFiles)
        .pipe(karma({
            configFile: 'karma.conf.js',
            action: 'watch'
        }))
        .on('error', function (err) {
            throw err;
        });
});
Listing 5: karma.conf.js

module.exports = function (config) {
    config.set({
        basePath: '',
        frameworks: ['jasmine'],
        files: [
            'lib/angular/angular.js',
            'lib/angular/angular-mocks.js',
            'scripts/app.js',
            'scripts/tests.js'
        ],
        reporters: ['progress'],
        port: 9876,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: true,
        browsers: ['Chrome'],
        singleRun: false
    })
}
Listing 6: The build task, providing a Web server

gulp.task('build', function () {
    var http = require("http"),
        url = require("url"),
        path = require("path"),
        fs = require("fs"),
        port = 8888;

    http.createServer(function (request, response) {
        var uri = url.parse(request.url).pathname,
            filename = path.join(process.cwd(), uri);

        fs.exists(filename, function (exists) {
            if (!exists) {
                response.writeHead(404, { "Content-Type": "text/plain" });
                response.write("404 Not Found\n");
                response.end();
                return;
            }

            if (fs.statSync(filename).isDirectory())
                filename += '/index.html';

            fs.readFile(filename, "binary", function (err, file) {
                if (err) {
                    response.writeHead(500, { "Content-Type": "text/plain" });
                    response.write(err + "\n");
                    response.end();
                    return;
                }
                response.writeHead(200);
                response.write(file, "binary");
                response.end();
            });
        });
    }).listen(parseInt(port, 10));

    console.log(
        "Static file server running at\n => http://localhost:" + port);
});
Writing the Application
I'm going to assume that you know AngularJS. And, as described above, the application consists of one module (myApp), with one controller (myCtrl), and with one service (myService). These three are described in the interfaces shown in Listing 7. The value returned by staticMethod is displayed on the view. This method is incredibly simple and returns a basic string. The asyncValue starts with "Not initialized" and is shown on the view. The view, using the ng-init directive, calls asynchMethod, which then calls myService.methodThatReturnsDeferred. The called myService.methodThatReturnsDeferred encapsulates a $http call and returns a deferred. The $http call returns a string called "Initialized," which is
then passed back to asynchMethod, which sets it to asyncValue and is then displayed on the view. I have a simple case, and I have a complex case. This means that to get good test coverage, I need to author three tests:

• A test that tests myCtrl.staticMethod
• A test that tests myCtrl.asynchMethod, ensuring that it calls myService.methodThatReturnsDeferred
• A test that tests myService.methodThatReturnsDeferred, ensuring that it makes a POST request with appropriate inputs, and returns a response with value "Initialized"

Before you write the application, let's write these tests.
Setting the Controller’s Test Harness
Before you can write the tests for the various methods in the controller, you need to set up the basic harness of the controller. This means that you need to have mock objects for:

• The scope service as it applies to this controller
• An instance of the controller itself
• A mock object representing the myService service
Once you're able to create mock objects for both scope and myService, or, for that matter, any framework services, you can then create a controller instance and store it within the function scope. All of this can be seen in action in Listing 8. As is evident from Listing 8, you have three variables:

• myController, which represents an instance of the controller, initialized with the mock objects
• scope, which has been created using the injected $rootScope service
• myServiceMock, a mock object that's set up to spy on the methodThatReturnsDeferred method

Now, writing tests becomes rather easy.
Testing a Simple Controller Method
The simple controller method is perhaps the easiest to test. It can be seen in the code snippet below.

it("Should return static value", function () {
    var returnValue = scope.staticMethod();
    expect(returnValue).toBe('static value');
});
You can pretty much guess what the above code snippet is doing. It's calling staticMethod on the controller's scope, and its return value should be 'static value'. You can imagine that if this static method had more logic in it, or if it depended upon multiple property values, you could easily enhance this logic to handle more complex scenarios as well.

Testing a Controller that Depends on a Deferred Service

This is where things get more interesting. The idea here is that I have no interest in checking whether the $http call fails or succeeds, since that's the responsibility of the service, not the controller. The only thing the controller is responsible for is making sure that the service method is called.

You'll write a test for the service method separately. That will ensure 100% coverage using the tests. Here you just need to make sure that the service method is called. That's all the controller method needs to do; it just needs to call the service method, and you need to test to make sure that the service method is called.

In order to know that the service method is called, you need to "spy" on that method. Jasmine allows you to create a mock object representing the myService service. When you create this mock object, you request to spy on the methodThatReturnsDeferred method.

As you may recall from Listing 8, you had set up a myServiceMock object, where you were spying on the methodThatReturnsDeferred method. In other words, you can now know whether or not that method was called. This means that you can now write a test around it, which can be seen in the snippet below:

it("Should attempt to call the service deferred method",
    function () {
        scope.asynchMethod();
        expect(myServiceMock.methodThatReturnsDeferred)
            .toHaveBeenCalled();
    });
Listing 7: The application's structure

interface Window { myApp: angular.IModule };
interface myServiceType {
    methodThatReturnsDeferred(): angular.IPromise<string>
};
interface myControllerScope extends angular.IScope {
    staticMethod(): string;
    asyncValue: string;
    asynchMethod(): void;
};
Listing 8: The controller test harness

describe("myCtrl", function () {
    var scope;
    var myController;
    var myServiceMock;

    beforeEach(module('myApp'));

    beforeEach(inject(function ($q) {
        myServiceMock = jasmine.createSpyObj('myServiceMock',
            ['methodThatReturnsDeferred']);
        myServiceMock.methodThatReturnsDeferred.and.returnValue($q.when());
    }));

    beforeEach(inject(function ($rootScope, $controller) {
        scope = $rootScope.$new();
        myController = function () {
            return $controller('myCtrl', {
                '$scope': scope,
                myService: myServiceMock
            });
        };
    }));

    // actual tests come here
});
Testing a Service Method that Returns a Deferred
Now this is where things get interesting! You need to write tests for the service. Not only that, you need to write a test for a service method that makes an asynch call using a standard AngularJS service called $http. Just like the controller, first you need to author a test harness for the service, which can be seen in Listing 9.

Listing 9: The service test harness

describe("myService", function () {
    var myService;
    var httpBackend;

    beforeEach(module('myApp'));

    beforeEach(inject(function (_myService_, $httpBackend) {
        myService = _myService_;
        httpBackend = $httpBackend;
    }));

    // tests come here
});

As you can see from Listing 9, you're creating an instance of myService, and you're creating an instance of $httpBackend. These instances are being passed in by the testing framework. The convention is that for standard framework-level services, there are mock objects already available that are simply injected into the method, such as $httpBackend. For custom services, you simply write the service name with preceding and succeeding underscores, such as _myService_, and the testing framework sends an instance of the service.

The $httpBackend is a service provided by angular-mocks, and it allows you to set up a fake back end. This means that, in order to run tests, you don't need a server. You can write logic such as "for such a POST payload, return the following return value," and then let the service method act upon the return value. Once the service method has acted upon it, you can check to see if it produces the appropriate output, which pretty much defines the test. The test can be seen in Listing 10.
Listing 10: Testing the service method

it("should return Initialized",
    inject(function (myService, $httpBackend) {
        httpBackend.expectPOST('/').respond(200, 'Initialized');
        myService.methodThatReturnsDeferred().then(
            function (response) {
                expect(response).toEqual("Initialized");
            });
        httpBackend.flush();
    }));
As you can see from Listing 10, before you even run the test, you have set up the httpBackend object to expect a POST request to the root URL, with a blank payload. If you get such a request, it responds with an HTTP 200, with text value “Initialized.” Assuming that the methodThatReturnsDeferred sends such a request, it receives a
Listing 11: The final application code

/// <reference path="dts/angularjs/angular.d.ts" />

interface Window { myApp: angular.IModule };
interface myServiceType {
    methodThatReturnsDeferred(): angular.IPromise<string>
};
interface myControllerScope extends angular.IScope {
    staticMethod(): string;
    asyncValue: string;
    asynchMethod(): void;
};

(function () {
    var myApp = angular.module('myApp', []);

    myApp.controller('myCtrl', ['$scope', 'myService',
        function ($scope: myControllerScope, myService: myServiceType) {
            $scope.staticMethod = function () {
                return "static value";
            };
            $scope.asyncValue = "Not initialized";
            $scope.asynchMethod = function () {
                myService.methodThatReturnsDeferred().then(
                    function (response) {
                        $scope.asyncValue = response;
                    });
            };
        }]);

    myApp.factory('myService', ['$http', '$q',
        function ($http: angular.IHttpService, $q: angular.IQService) {
            return {
                methodThatReturnsDeferred: function () {
                    var deferred = $q.defer();
                    $http.post('/', "samplepost").success(
                        function (response) {
                            deferred.resolve("Initialized");
                        });
                    return deferred.promise;
                }
            }
        }]);

    window.myApp = myApp;
})();
Figure 4: Test output
response called “Initialized.” You can easily write this test without having to set up complicated test servers and environments behind the scenes.
Writing the Application
Once the tests are written and the TypeScript interfaces have been defined, writing the application becomes really simple. The final application code can be seen in Listing 11. Note that I'm making full use of TypeScript in Listing 11. TypeScript helped me define the structure of the application and therefore define the structure of the tests. Once the tests are written, I can easily leverage the same interfaces and classes to write the rest of the application. The last thing to do now is to make sure that the tests run, and that Visual Studio Code gives a convenient shortcut key to run the tests. On the Mac, press Cmd+Shift+T, and on Windows, press Ctrl+Shift+T to run the tests. You might note that a Chrome browser window pops open and the output window shows the tests' output, as shown in Figure 4. Also note that as you modify the .ts file or the tests.js file, the tests continue to run automatically. You can, of course, turn this behavior off by editing the karma.conf.js file. But now you have the basic harness ready to test your code very easily as developers make changes to it. This can be very easily integrated into continuous test integration suites, such as Karma.
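For a continuous integration server, you'd typically run each suite once and exit with a pass/fail result rather than keep watching. Here's a sketch of how the karma.conf.js from Listing 5 could be adjusted for that scenario; the choice of PhantomJS (via the karma-phantomjs-launcher package) is my assumption, not something the article requires:

module.exports = function (config) {
    config.set({
        basePath: '',
        frameworks: ['jasmine'],
        files: [
            'lib/angular/angular.js',
            'lib/angular/angular-mocks.js',
            'scripts/app.js',
            'scripts/tests.js'
        ],
        reporters: ['progress'],
        autoWatch: false,          // don't keep watching on a build server
        singleRun: true,           // run the suite once and exit with pass/fail
        browsers: ['PhantomJS']    // headless; assumes karma-phantomjs-launcher
    });
};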
Summary

Writing good JavaScript is hard, and requires a lot of self-discipline. In this article and the previous two articles, I tried to demonstrate the most common mistakes that JavaScript developers make, and how to avoid them. I recommended using TypeScript in my previous article, and in this article, I showed how I use tests and TDD, along with TypeScript. Yes, you may argue that going through that route for the simple application I showed in this article was a bit of overkill, and it most certainly was! But the real benefits start showing up when the application gets more complex, more interconnected, and most of all, developed by more than one set of hands. I hope you found these articles helpful. Until the next article, happy coding.

Sahil Malik
ONLINE QUICK ID 1601051
The Baker’s Dozen: 13 SQL Server Interview Questions Throughout my career, I’ve been on both sides of the interview table. I’ve interviewed other developers, either as a hiring manager or as part of an interview team. Of course, I’ve also sweated through interviews as the applicant. I also taught SQL Server programming for years, coached people preparing for technical interviews, and authored certification and test questions. I’m going to borrow from those experiences in this installment of Baker’s Dozen, by giving you a chance to take a test on SQL Server and Transact-SQL topics. In the last issue of CODE Magazine, I started using the Baker’s Dozen format to present 13 technical test questions on SQL Server topics, and I think that format will work well for this subject.
Kevin S. Goff
kgoff@kevinsgoff.net
http://www.KevinSGoff.net
@KevinSGoff
Kevin S. Goff is a Microsoft SQL Server MVP. He is a database architect/developer/speaker/author, and has been writing for CODE Magazine since 2004. He is a frequent speaker at community events in the Mid-Atlantic region and also speaks regularly for the VS Live/Live 360 Conference brand. He creates custom webcasts on SQL/BI topics on his website.
Life in Detail as a Technical Mentor

One of my favorite musical artists is Robert Palmer, and one of my favorite Palmer tracks is called "Life in Detail." The song contains rapid-fire lyrical messages about reality checks, false conclusions, perspectives, and (painful) discoveries. Those occur in software development as well! In my consulting business, I mainly serve as a technical database/applications mentor. Some days I write code, some days I lead in design efforts, and often I'm laying out work for others and going over some of the technical hurdles they might face. I have a million shortcomings in life (as friends and co-workers will attest), but one of my strengths is a photographic memory. Nearly every week, I work with developers on database technology topics that remind me of prior blog entries, interview and certification questions, classroom discussions, and webcast talking points that go back many years. In my opinion, one of the keys to success is recalling past issues and even prior mistakes, and how you solved/learned from them. Often, discussions with other developers help to bridge technology gaps, address misunderstandings about how a specific database feature works, and explain why a feature might work acceptably in one scenario but not others. In other words, the dirty details, and their role in reality checks, are precisely what Robert Palmer was talking about. A few readers might be going on database technical interviews in the near future. I can't guarantee that any topic I cover in these articles will appear on a technical interview screening (though I suspect a few will). Regardless, the more a person can speak in terms of scenarios on a technical topic, the more likely they'll impress the interviewer.
What’s on the Menu?
Figure 1: Row Count for each table (note that some tables have high row counts)
I'd categorize these 13 questions as ones for intermediate developers. (These are the types of interview questions I'd expect a mid-level database developer to be able to answer.) In the first half of this article, I'll cover the test questions, and then in the second half I'll cover the answers. Although this article isn't an interactive software product that requires you to answer before seeing the results, you'll at least have a chance to read the questions and try to answer, if you can avoid peeking at the second half of the article to see the answers!

1. Knowing the differences between Materialized Views and Standard Views
2. Obtaining the row count of every table in a database
3. Determining when subqueries are necessary
4. Understanding the use of NOLOCK
5. Understanding the SQL Server Isolation Levels
6. Understanding the SQL Server Snapshot Isolation levels
7. Baker's Dozen Spotlight: Dealing with a long list of columns in a MERGE statement
8. Knowing what DISTINCT and GROUP BY accomplish
9. Understanding how to use PIVOT
10. Generating multiple subtotals
11. Capturing queries that other applications are executing
12. Handling a dynamic GROUP BY scenario
13. Diagnosing performance issues on Linked Servers and Parameterized Queries

Before I begin, I want to mention that references I make to specific SQL Server language features and product versions are for historical context. Some developers might have to support older versions of SQL Server. In this article, I refer to the MERGE and GROUPING SETS enhancements, which Microsoft added in SQL Server 2008. I also talk about the SQL Server snapshot isolation levels, which Microsoft added in SQL Server 2005. Unless I indicate otherwise, you can assume that any feature I reference from an older release still applies today.
Question 1: Knowing the Differences Between Materialized Views and Standard Views

SQL Server allows developers to create two types of views: materialized and standard. Name as many differences as you can between them, including when you might use materialized views, and which one (generally) performs better.
Question 2: Obtaining the Row Count of Every Table in a Database

Suppose you need to generate a result set that contains the row count for every table in a database (see Figure 1 for an example). Some of the tables contain at least hundreds of millions of rows. Your manager told you that your query will run frequently throughout the day. Describe how you'd write a query to accomplish this.
Question 3: Determining When Subqueries are Necessary
Using the AdventureWorks table Purchasing.PurchaseOrderHeader (which contains purchase order rows with columns for the EmployeeID, TotalDue, and ShipDate), you want to produce a result set that shows the highest total due dollar amount for each employee, along with the ship date(s). In the example in Figure 2, note that Employee ID 250 (who could have had thousands of orders) has two orders for a dollar amount of $100,685, one in 2008 and one in 2007. Employee ID 251's highest dollar amount is $609,422 for a purchase order in 2008. You want to produce a result set that shows each employee, the highest dollar amount for that employee (across all orders for that employee), and the associated ship date(s) and Purchase Order ID(s). Here's the question: Can you write a query with a single SELECT statement to produce this result set, or would you need to write a subquery (or two queries)?
Question 4: Understanding the Use of NOLOCK
Suppose you execute the following line of code in SQL Management Studio against a SQL Server table. SELECT * FROM Purchasing.PurchaseOrderHeader WITH (NOLOCK)
What does the WITH (NOLOCK) accomplish? Are there any pitfalls to using this? (Hint: Even for those who know the basics of this topic, there are some additional nuances of NOLOCK that people often don’t talk about).
Question 5: Understanding the SQL Server Isolation Levels
I'll ask two questions relating to SQL Server Isolation Levels in questions 5 and 6. First, suppose you have a procedure where you want to read a row (or a set of rows), and then lock the rows (so that no one else can update them) and perform multiple operations on the rows in the procedure. What is the least restrictive isolation level that allows you to read rows, such that the read itself locks those rows (and only those rows) for a period of time until you finish your transaction?
Question 6: Understanding the SQL Server Snapshot Isolation Levels

Here's another question on SQL Server Isolation Levels. In SQL Server 2005, Microsoft implemented the Snapshot Isolation Level. SQL Server specifically offers two forms of the SNAPSHOT isolation level. What are the names of these two, how do they differ from each other, and how do they differ from other SQL Server transaction isolation levels?
Question 7: Baker’s Dozen Spotlight: Dealing with a Long List of Columns in a MERGE Statement Readers of prior Baker’s Dozen articles know that I frequently discuss the T-SQL MERGE statement that allows developers to combine multiple DML operations into one statement. I recommend the MERGE statement in data warehouse scenarios where developers read in a source table that might contain a certain number of rows to insert and update into a target table. In a moment, you’ll look at an example of a MERGE that performs the following: • It compares the Source table with the Target Table based on a composite key lookup (BzKey1 and BzKey2).
Figure 2: Each Employee ID, the single highest order dollar amount, and the corresponding ship date and Order ID

• For any rows in the Source table that don't exist in the target table (based on the composite key), it inserts them into the target table.
• For any rows in the Source table that exist in the target table (based on the composite key), it updates them in the target table when at least one non-key column is different between the two tables. Stated another way, if there's a lookup match between the source table and the target table based on the composite key, and all of the non-key columns between the two tables are the same, there's no need to perform an update.
• It outputs the insertions and updates into a temp table (recall that the OUTPUT statement allows you to tap into the INSERTED and DELETED system tables that hold the state of insertions and the before/after state of updates).

MERGE TargetTable Targ
    USING SourceTable Src
        ON Targ.BzKey1 = Src.BzKey1
       AND Targ.BzKey2 = Src.BzKey2
WHEN NOT MATCHED THEN
    INSERT (NonKeyCol1, NonKeyCol2)
    VALUES (Src.NonKeyCol1, Src.NonKeyCol2)
WHEN MATCHED AND
        (Targ.NonKeyCol1 <> Src.NonKeyCol1) OR
        (Targ.NonKeyCol2 <> Src.NonKeyCol2) THEN
    UPDATE SET NonKeyCol1 = Src.NonKeyCol1,
               NonKeyCol2 = Src.NonKeyCol2
OUTPUT $ACTION, INSERTED.*, DELETED.*
    INTO #SomeTempTable
OK, now for the question. Suppose you have many tables where you’d like to implement the MERGE. Do you need to re-write a MERGE statement/procedure for each table and spell out each combination of columns? Does the MERGE statement have any type of wildcard feature in any of the statement sections, so that you don’t need to list out each column? Is there any way to automate this?
There's no one magic answer to this question. As an interviewer, I'm simply looking to evaluate a person's thought process, how the person might tackle this question, and whether the person has done it in the past.

Question 8: Knowing What DISTINCT and GROUP BY Accomplish

Suppose you want to summarize the Order dollars in AdventureWorks by VendorID. You can safely assume that NULL values do not exist for the VendorID and TotalDue columns. You want the result set to contain one row per Vendor ID with the summarized order dollars. I have two questions. First, which of the three queries (A, B, and C) produces the correct results? There could be more than one correct answer. Second, will queries A and B produce the same results, regardless of whether or not they are the correct answers?

-- Query A
SELECT VendorID, SUM(TotalDue) as VendorTotal
FROM Purchasing.PurchaseOrderHeader
GROUP BY VendorID

-- Query B
SELECT DISTINCT VendorID, SUM(TotalDue) as VendorTotal
FROM Purchasing.PurchaseOrderHeader
GROUP BY VendorID

-- Query C
SELECT DISTINCT VendorID, SUM(TotalDue) as VendorTotal
FROM Purchasing.PurchaseOrderHeader

Question 9: Understanding How to Use PIVOT

Suppose you have the data in Figure 3. It includes a table of sales data (on the left side of the figure) that includes sales amounts by client on specific days, and another table that lists sales returns by date, client, and a return reason (on the right side of the figure). I realize that most production systems would have a more elaborate table structure; I'm simply using the core necessary elements for this example.

Figure 3: The source sales data and the source returns data for the query

In the end, you'd like to produce a result set that shows the sales by day and all possible Return Reasons as columns, knowing that the Return Reasons are dynamic (as seen in Figure 4). Note that there is nothing particularly "wrong" with the large number of NULL values. In any matrix-like result set of all possible dates and (in this case) return reason combinations, you might see a certain number of NULL data points. Analytically, days without any sales or returns might be just as important as days where they occur. (This could hypothetically represent sales for one person.) How would you write a query to produce these results, in such a way that the query automatically accounts for new Return Reason codes?

Figure 4: The results you want to generate (both the Sales and one column for each Reason Code)

Question 10: Generating Multiple Subtotals

Suppose you need to produce a result set with multiple levels of subtotals. Look at the results in Figure 5. The results show the following:

• One row per Shipper Name and Order Year, with a summary of Freight and Total Due Dollars
• One row per Shipper Name, with a summary of Freight and Total Due Dollars across all Order Years
• One row for the grand summary total of Freight and Total Due Dollars

Figure 5: Generating multiple subtotals

You might ask, "Why should we worry about writing a query to produce all of the subtotals? Isn't that what
report writers are for?” In the majority of situations, a developer could use a report writer (such as SQL Server Reporting Services). However, a developer might work in an environment where no one has configured SSRS. Or perhaps the application must output the results to a format that SSRS doesn’t support. Or maybe the application requires a smaller footprint/lower overhead than SSRS. Assume that for this situation, you’re not using a report writer or other tool available to summarize results, so you must generate a result set with the necessary totals.
Question 11: Capturing Queries Run by Other Applications Suppose you have a third party application that accesses your SQL Server databases. The application appears to be performing poorly and is taking a long time to lock and query your data. You don’t have the source code, and the vendor isn’t being cooperative. You want to be able to prove that the vendor’s code needs optimizations. How can you determine the queries that the application is generating?
Question 12: Handling a Variable GROUP BY Scenario
For this question, let's look at the orders in the AdventureWorks Purchasing.PurchaseOrderHeader table. Suppose you want to retrieve orders and summarize the TotalDue dollar amount by year. Because the table contains both a ship date and an order date, you want to pass a parameter to the query to summarize either by Ship Year or by Order Year. The results look like Figure 6 if you summarize by Order Year and Figure 7 if you summarize by Ship Year. You can generically refer to the appropriate year column as "Order Year". The query should summarize based on the parameter. Here's the question: Based on the parameter you pass to the query (for which date to use), do you need to use dynamic SQL to generate the SELECT and the GROUP BY?
Question 13: Diagnosing Performance Issues on Linked Servers and Parameterized Queries

I'll admit that this last question might seem esoteric to some developers, but those who access other database systems (such as Oracle, IBM DB2, etc.) using Linked Servers might appreciate its value. Suppose you use a Linked Server in SQL Server to access Oracle tables. In this example, the Linked Server is OracleLinkedServer, the table you want to query is OracleDB.OrderTable, and the business key is the OrderNumber. You use each of the following three queries (A, B, and C) to return the row for OrderNumber 12345:

-- Query A (uses OpenQuery):
SELECT * FROM OPENQUERY(OracleLinkedServer,
    'SELECT * FROM OracleDB.OrderTable
     WHERE OrderNumber = ''12345''')

-- Query B (uses four-part notation):
SELECT * FROM
    OracleLinkedServer..OracleDB.OrderTable
WHERE OrderNumber = '12345'

-- Query C (uses four-part notation
-- with parameter)
DECLARE @OrderNumber varchar(5) = '12345'
SELECT * FROM
    OracleLinkedServer..OracleDB.OrderTable
WHERE OrderNumber = @OrderNumber
Figure 6: Use a query to summarize/ group by Order Date Year.
Query A uses the OpenQuery function in SQL Server to access the Linked Server and pass a query string to retrieve the order row. Query B avoids the OpenQuery call by using the four-part naming convention to access the Linked Server. Query C is similar to Query B, except that Query C uses a parameter for the Order Number, instead of hardwiring the order number value. In this example, assume that the OrderTable contains a very large number of rows.
Figure 7: Use a query to summarize/ group by Ship Date Year.
You execute all three queries. Query A runs instantly, whereas Queries B and C are much slower. Why do you think Query A (and OPENQUERY) is so much faster? If your answer is "Because OPENQUERY runs faster," then the question is: "Why?"
The Answer Sheet

OK, here are the answers. Let's see how you did.
Answer 1: Knowing the Differences Between Materialized Views and Standard Views

Here are the main points:

• A regular view is essentially a stored query. If you run a view 10 times, SQL Server executes the query that you defined in the view. SQL Server doesn't permanently store the results of the view. Subsequent executions might benefit from caching, but SQL Server must still query the source tables every time.
• A materialized view is also a query, but one where SQL Server stores the results. Usually this means a materialized view performs better than a regular view.
• You can create indexes on materialized views to further improve performance.

Let's take a look at an example. I'll use the Microsoft ContosoRetailDW test database, which contains tables with millions of rows. Suppose I want to create a simple view that joins two tables and summarizes sales by promotion. I can do it like this:

CREATE VIEW dbo.RegularView as
SELECT PromotionLabel, PromotionName,
       SUM(TotalCost) as TotCost
FROM FactSales
    JOIN DimPromotion
        ON FactSales.PromotionKey = DimPromotion.PromotionKey
GROUP BY PromotionLabel, PromotionName
Now I'll execute that view, and return the promotions that have a cumulative aggregated sale amount of more than $100 million. I'll also look at the Time and IO Statistics as well as the execution plan (Figure 8).
Figure 8: Execution plan for regular view against FactSales and DimPromotion
Figure 9: Execution plan for a Materialized/Indexed View
SET STATISTICS TIME ON
SET STATISTICS IO ON

SELECT * FROM dbo.RegularView
WHERE TotCost > 100000000

Table 'DimPromotion'. Scan count 0, logical reads 56
Table 'FactSales'. Scan count 5, logical reads 37,230

SQL Server Execution Times:
   CPU time = 2266 ms, elapsed time = 641 ms.

Note the execution times in the statistics, as well as the execution plan. Although this view doesn't contain a large number of JOIN statements and the statistics don't represent a five-alarm emergency, let's now take a look at what you can accomplish with a materialized view.

create view dbo.Materializedview
    (PromotionLabel, PromotionName, NumRows, TotCost)
with schemabinding
as
select PromotionLabel, PromotionName,
       COUNT_BIG(*) AS NumRows,
       SUM(TotalCost) as TotCost
FROM dbo.FactSales
    JOIN dbo.DimPromotion
        ON FactSales.PromotionKey = DimPromotion.PromotionKey
GROUP BY PromotionLabel, PromotionName
GO

create unique clustered index IX_Temp
    on dbo.Materializedview (PromotionLabel)

Note three things I've done here in the materialized view:

• First, although I've created a second view here with a different name, I referenced all the columns at the top, and added the keywords WITH SCHEMABINDING. This tells SQL Server to "materialize" the storage for this structure.
• Second, at the end, I've created a clustered index on the key field of the view (the Promotion Label). SQL Server permits us to create clustered indexes on materialized views for performance.
• Third, I've added a column using the aggregate function COUNT_BIG(). SQL Server indexed views with aggregations (the sum of TotalCost in the query) require COUNT_BIG. This helps SQL Server internally optimize certain operations on the original table.

Now let's test the view:

SELECT * FROM dbo.Materializedview
-- NOEXPAND forces SQL to use the materialized view
WITH (NOEXPAND)
WHERE TotCost > 100000000

Table 'Materializedview'. Scan count 1, logical reads 2

SQL Server Execution Times:
   CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.
Because SQL Server materialized (generated and stored) the results, you can see in Figure 9 that the cost is nothing more than a clustered index scan against the materialized view. SQL Server automatically keeps the materialized/indexed view in sync with the underlying physical tables, and automatically "pre-computes" any expensive joins and aggregations that a regular view needs to handle on the fly every time. For reporting and data warehouse/business intelligence applications, this increase in performance is significant. There's one more item to discuss. Note that I used WITH (NOEXPAND) after I specified the materialized view in the query. This is a query hint to force SQL Server to use the indexed view. This is necessary because SQL Server, by default, expands view definitions until the database engine reaches the base tables. You want SQL Server to treat the view like a standard table with the clustered index you created. Using the NOEXPAND query hint gives you that guarantee. Microsoft covers this topic in an outstanding MSDN article: https://msdn.microsoft.com/en-us/library/dd171921(SQL.100).aspx.
Answer 2: Obtaining the Row Count of Every Table in a Database

There's one answer that's not optimal in large databases, and that's using the COUNT() function. COUNT() is expensive on tables with large numbers of rows. Fortunately, there's a better way. SQL Server provides a series of dynamic management views (DMVs) that can provide us with row count information. The query below uses the SQL DMV called sys.dm_db_partition_stats to return the row count for each table partition. Regardless of whether the primary partition on a table contains one row or a billion rows, the DMVs return the information almost instantly.

SELECT SysObjects.name as TableName,
       PartStats.row_count
FROM sys.indexes AS SysIndexes
    JOIN sys.objects AS SysObjects
        ON SysIndexes.OBJECT_ID = SysObjects.OBJECT_ID
    JOIN sys.dm_db_partition_stats AS PartStats
        ON SysIndexes.OBJECT_ID = PartStats.OBJECT_ID
       AND SysIndexes.index_id = PartStats.index_id
WHERE SysIndexes.index_id < 2
  AND SysObjects.is_ms_shipped = 0
ORDER BY row_count DESC
Knowing to use DMVs instead of the COUNT() function to determine row count is one of several ways to determine whether a developer has worked with large databases.
There is one caveat here: Because of how the SQL Server DMVs determine row counts, the results might represent an approximation. Stated differently, at any one single point in time, the DMVs might return a slightly different count than the COUNT() function. But if you’re trying to determine row counts on very large tables where SELECT COUNT(*) introduces performance issues, the tradeoff of DMV performance is probably well worth it.
Answer 3: Determining When Subqueries are Necessary
I’ve written in the past about identifying patterns where subqueries are necessary. One such pattern is when you’re aggregating across multiple one-to-many relationships. Another pattern is the one I’ve introduced in the question. You want to produce a result set that shows each employee, the highest dollar amount for that employee (across all orders for that employee), and the associated ship date(s) and Purchase Order ID(s). At the risk of turning this into a sentence-diagramming exercise, let’s parse out some key points here. You want the highest dollar amount for each employee. You can easily do that with a simple SELECT, a MAX on the dollar amount for any one order, and a GROUP BY on employee. You also want to show other non-key fields associated with the order that represented the highest dollar amount. In other words, show each employee, the highest dollar amount for an order, and “oh yeah, by the way,
show some other columns related to that highest order, like the ship date." It's that last bit for which you need a subquery. In layman's terms, you need to aggregate (find the maximum) the dollar amount by employee ID, and define that as a temporary table, or subquery, or common table expression. Then you need to take that temporary result and join it "back" into the original table, based on a match of the employee ID and the maximum dollar amount. Once you join back to the original table, you can pull any other columns associated with that highest order by employee. You simply cannot do this as one query:

;WITH MaxShipCTE as
    (select EmployeeID, MAX(Totaldue) as MaxTotalDue
     FROM Purchasing.PurchaseOrderHeader
     GROUP BY EmployeeID)
select MaxShipCTE.*, ShipDate, PurchaseOrderID
from MaxShipCTE
    JOIN Purchasing.PurchaseOrderHeader POH
        ON POH.EmployeeID = MaxShipCTE.EmployeeID
       AND POH.Totaldue = MaxShipCTE.MaxTotalDue
ORDER BY EmployeeID, ShipDate desc
I've seen code that tries to do this as one query. Of course, SQL Server generates an error because you must include any non-aggregate columns in the GROUP BY. Then, if someone decides to include the ShipDate and PurchaseOrderID columns in the output, the level of granularity in the aggregate is no longer only by Employee, but also by the Employee, Date, and PurchaseOrder ID. You must use a subquery!

-- Incorrect, because we need to GROUP BY
-- on all non-aggregated columns
select EmployeeID, ShipDate, PurchaseOrderID,
       MAX(Totaldue) as MaxTotalDue
FROM Purchasing.PurchaseOrderHeader
GROUP BY EmployeeID
Answer 4: Understanding the Use of NOLOCK
Suppose you need to run a query that joins two tables (an order header and an order detail table). The query must read across a substantial number of rows and will take several seconds or longer. During that time, when the query executes, by default, SQL Server places a shared lock on the rows. As a result, you can potentially run into two issues. First, no SQL Server process can physically update those rows while the query is reading the rows. This is because the shared lock prevents an update (write) lock from executing until the query finishes reading the rows. Depending on organizational workflow, this can cause problems if users need to frequently update rows where other processes are performing lengthy queries. Second, if a lengthy UPDATE first occurs, SQL Server places a write lock on those rows. As a result, any lengthy SELECT that places a shared lock can’t execute while the UPDATE is running. In other words, if the shared lock occurs first, the write lock can’t execute until the shared lock releases. Conversely, if the write lock occurs
first, the shared lock can't execute until the write lock releases. Now the question becomes: Can you somehow run the query in such a way that SQL Server doesn't place any shared read locks on the rows? The answer is yes, by using NOLOCK. You can place the syntax WITH (NOLOCK) after each table, and SQL Server won't place a shared lock on the rows. Is the problem solved then? Well, yes and no. Although NOLOCK eliminates the need for users to wait for locks to clear, it introduces a new problem. If other processes execute multiple-table UPDATE transactions, that lengthy SELECT WITH (NOLOCK) might wind up reading data in the middle of a transaction. In other words, it's a "dirty read." This is precisely why DBAs call NOLOCK a dirty read: because you're reading UNCOMMITTED data.
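To make that concrete, here's roughly what the hint looks like on the kind of order header/detail join described above. This sketch assumes the AdventureWorks purchase order tables used elsewhere in this article:

SELECT POH.PurchaseOrderID, POH.OrderDate, POD.ProductID, POD.LineTotal
  FROM Purchasing.PurchaseOrderHeader POH WITH (NOLOCK)
  JOIN Purchasing.PurchaseOrderDetail POD WITH (NOLOCK)
    ON POD.PurchaseOrderID = POH.PurchaseOrderID

With the hints in place, the SELECT takes no shared locks, so writers aren't blocked, but the query is subject to the dirty-read behavior described in this answer.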
"We're Talking About Practice" Some athletes hold back on effort during practice because it's not a game situation. If you truly care about your craft, then practice at full tilt. Talk about software tools with other developers. There's a residual benefit to speaking in the community: you can often get back more than you realize.
Some environments discourage the use of NOLOCK, because reading uncommitted data can potentially generate an incomplete set of results, especially if a dirty read occurred while transactions were still executing. Other environments recognize this but still use NOLOCK when they only expect result sets to reflect an approximation of the data. Although developers sometimes use NOLOCK for expediency, they need to be aware that the results might not be 100% accurate. There is one other downside about NOLOCK, beyond the generally recognized problem of picking up uncommitted data or incomplete multiple-table transactions. If you execute a query WITH NOLOCK while other simultaneous INSERT/UPDATE operations cause database page splits, it's conceivable that the query will either miss/skip rows entirely, or it might count rows twice. If you want to read more about this, do a Web search on "SQL Server," "NOLOCK," and "PAGE SPLIT," and also the keywords "INDEX ORDER SCAN" and "ALLOCATION ORDER SCAN." You'll find several online articles that cover this situation. There are multiple scenarios where WITH NOLOCK's benefit of avoiding lockout situations comes at a price of inaccuracy. Again, developers looking for a quick solution and who are only concerned with an approximation of results might still be satisfied with the results of NOLOCK. But as you'll see in Question 6, Microsoft implemented a new transaction isolation level in SQL Server 2005 (snapshots) that reduces the need for NOLOCK.
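As an aside (not something the original question asked about), setting the session's isolation level to READ UNCOMMITTED has essentially the same effect as placing NOLOCK on every table in the session's SELECT statements; this sketch reuses the same AdventureWorks tables:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT POH.PurchaseOrderID, POD.ProductID
  FROM Purchasing.PurchaseOrderHeader POH
  JOIN Purchasing.PurchaseOrderDetail POD
    ON POD.PurchaseOrderID = POH.PurchaseOrderID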
Answer 5: Understanding the SQL Server Isolation Levels
The isolation level that allows you to read rows and keep them locked until your transaction finishes is REPEATABLE READ. For an explanation of REPEATABLE READ, imagine that I need to read the value of a single row and keep it locked so that no other process will modify the row. After I obtain this lock, I might need to run a few processes and then update the row, knowing that no one touched it in between.

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION

SELECT * FROM Purchasing.Vendor
 WHERE BusinessEntityID = 1464
-- With a REPEATABLE READ LOCK, no one
-- can UPDATE this row until I finish
-- the transaction

-- Perform other processes

UPDATE Purchasing.Vendor
   SET SomeColumn = 'NEW VALUE'
 WHERE BusinessEntityID = 1464

COMMIT TRANSACTION -- lock is released
Note: Some might have responded with SERIALIZABLE. Although both REPEATABLE READ and SERIALIZABLE address the situation, the question specifically asked for the least restrictive isolation level that addresses the scenario, and REPEATABLE READ is the less restrictive of the two.
Answer 6: Understanding the SQL Server Snapshot Isolation Levels

Microsoft introduced the Snapshot Isolation Level in SQL Server 2005. The Snapshot Isolation Level comes in two flavors: a regular (or as some people would say, "static") snapshot, and a more active snapshot that essentially turns every query that uses the default READ COMMITTED isolation level into a dynamic snapshot. The Snapshot Isolation Level addresses the problem described back in Question 4. The problem in Question 4 is that NOLOCK prevents lockout conditions but raises the possibility of reading dirty, uncommitted data. Avoiding NOLOCK (i.e., using the default READ COMMITTED isolation level or even any higher level) means the possibility of delays or lockout conditions. The Snapshot Isolation Level avoids both problems by reading the last "good" (i.e., committed) row version from TempDB. You can turn on the Snapshot Isolation Level by configuring the database to manage snapshots (i.e., row versions in TempDB):
ALTER DATABASE AdventureWorks2014
    SET ALLOW_SNAPSHOT_ISOLATION ON

Once you've configured a database to use the regular Snapshot Isolation Level, you can use the isolation level as part of any stored procedure. If you need to read data that another process is updating, SQL Server neither forces you to wait, nor returns uncommitted data. Instead, SQL Server returns the last good committed version. This is major functionality that continues (to this day) to reduce the headaches of many database developers.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN

SELECT * FROM someTable
-- will return rows from the last good committed version,
-- without locking out and without returning uncommitted data
Using this posed two challenges. First, it meant that developers needed to add references to the Snapshot Isolation Level in every procedure. Second, and more specific to the behavior, any "snapshot" was basically static. In other words, in the example above, if someone updated/committed to the table after you read from the table, any subsequent read that you perform in the transaction still uses the "old" snapshot. Some might view this as good, and some might want any subsequent read to reflect any committed changes from the last few seconds.
Microsoft added a second flavor of snapshots; in my opinion, this is one of the most powerfully elegant features of SQL Server. You can enable READ_COMMITTED_SNAPSHOT in the database, and it automatically turns EVERY SINGLE Read Committed operation (whether explicit or implicit) into a dynamic snapshot!

ALTER DATABASE AdventureWorks2014
    set read_committed_snapshot on
GO
Answer 7: Baker's Dozen Spotlight: Dealing with a Long List of Columns in a MERGE Statement

Unfortunately, there's no way to specify all columns in an INSERT or UPDATE clause with any kind of asterisk or other wild-card character. However, some developers create a utility or function that generates a long string of MERGE syntax by reading from a data dictionary or even the sys.tables and sys.columns system tables. Imagine that you had a Target Product table and an Incoming Product table from some ETL load operation. Now imagine that you had twenty different tables following that pattern: Each has a target table and a mirror incoming table. You could call a function and pass the names of the tables, and the function could generate some or all of the MERGE syntax for you. It might not be perfect, but it's still far better than writing out all the MERGE code by hand! For instance, take a look at the following:

CREATE TABLE dbo.TargetProduct
   (ProductPK int identity,
    ProductBzKey varchar(20),
    ProductName varchar(20),
    ProductPrice money)

CREATE TABLE dbo.IncomingProduct
   (ProductBzKey varchar(20),
    ProductName varchar(20),
    ProductPrice money)

SELECT [dbo].[tvfGenerateSQLMerge]
   ('dbo.TargetProduct', 'dbo.IncomingProduct', 'ProductBzKey')
The function dbo.tvfGenerateSQLMerge takes the two input table definitions and the definition for the business key and generates the entire string. The function generates the syntax for the MERGE that you'd otherwise have to write by hand! Listing 1 shows the entire function and Listing 2 shows an example of the results.
Answer 8: Knowing What DISTINCT and GROUP BY Accomplish

The answer is both A and B. Both queries generate the correct results. Query C fails because the SELECT contains an aggregation and a non-aggregated column, but no GROUP BY on the non-aggregated column.

Note that Query B contains a DISTINCT. As it turns out, the DISTINCT is not necessary because the GROUP BY accomplishes the same thing and more, since the GROUP BY also allows you to aggregate. Some developers use DISTINCT in addition to a GROUP BY. In almost all cases, the DISTINCT isn't needed.
Answer 9: Understanding How to Use PIVOT
Before I get into the SQL end, some might reply that you could use the dynamic pivot/matrix capability in SSRS. You simply combine the two result sets by one column and then feed the results to the SSRS matrix control, which spreads the return reasons across the column axis of the report. However, not everyone uses SSRS (although most people should!). Even then, sometimes developers need to consume result sets in something other than a reporting tool. For this example, let's assume that you want to generate the result set for a Web grid page. You need to generate the output directly from a stored procedure. As an added twist, next week there could be Return Reasons X and Y and Z. You don't know how many return reasons there could be at any point in time. You simply want the query to pivot on the possible distinct values for Return Reason. Here is where the T-SQL PIVOT has a restriction: You need to provide the possible values. Because you won't know that until run-time, you need to generate the query string dynamically using the dynamic SQL pattern. The dynamic SQL pattern involves generating the syntax, piece by piece, storing it in a string, and then executing the string at the end using a SQL Server system procedure. Dynamic SQL can be tricky, as you have to embed syntax inside a string. In this case, it's your only true option if you want to handle a variable number of return reasons.
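Stripped down to its bare bones, the pattern looks like the following sketch. It reuses the ReturnData table from the listings that follow, purely for illustration; the real work in this answer is building a much larger string (the full PIVOT query) before that final EXEC:

DECLARE @SQL nvarchar(4000)

SET @SQL = 'SELECT ReturnReason, COUNT(*) AS NumReturns ' +
           'FROM dbo.ReturnData GROUP BY ReturnReason'

EXEC sp_executesql @SQL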
Powerfully Elegant Read Committed Snapshots in SQL Server 2005 might be one of the most powerfully elegant features in the history of the product (and there have been many powerful features). If you run into many locking/contention issues, take a close look at Read Committed Snapshot.
I’ve always found that the best way to create a dynamic SQL solution is to determine what the “ideal” query would be at the end (in this case, given the Return reasons you know about). After you establish the ideal query, you can then reverse-engineer it and build it using Dynamic SQL. When implementing a dynamic SQL solution, always write out the model for the query you want to programmatically construct.
And so, here is the SQL you need, if you knew those Return Reasons (A through D) were static and wouldn't change. You can find the query in Listing 3, which does the following: • Combines the data from SalesData with the data from ReturnData, where you hard-wire the word Sales as an Action Type from the Sales Table, and then place the Return Reason from the Return Data into the same ActionType column. That gives you a
Listing 1: A reusable function to generate a MERGE

CREATE function [dbo].[tvfGenerateSQLMerge]
   ( @TargetTable varchar(max), @InputTable varchar(max), @BzKeyColumn varchar(max) )
RETURNS varchar(max)
as
begin

DECLARE @SchemaName varchar(100) =
   (select substring(@TargetTable, 1, charindex('.', @TargetTable) - 1))
DECLARE @CoreTableName varchar(100) =
   (SELECT REPLACE(@TargetTable, @SchemaName + '.', ''))

declare @crlf varchar(max) = char(13) + char(10)

DECLARE @ColumnList TABLE (ColumnName varchar(100))
INSERT INTO @ColumnList
   SELECT Columns.name from sys.columns columns
     join sys.types types on columns.user_type_id = types.user_type_id
    where is_identity = 0
      AND OBJECT_ID = (SELECT tables.OBJECT_ID FROM sys.Tables tables
                         join sys.schemas schemas on tables.schema_id = schemas.schema_id
                        where tables.Name = @CoreTableName and schemas.name = @SchemaName)

declare @UpdateSyntax varchar(max) =
   (select (stuff ((SELECT ', ' + ColumnName + ' = Src.' + ColumnName
                      FROM @ColumnList WHERE ColumnName <> @BzKeyColumn
                       for xml path ('')), 1, 1, '')))

declare @SourceFieldList varchar(max) = '(' +
   (select (stuff ((SELECT ',' + ColumnName FROM @ColumnList
                       for xml path ('')), 1, 1, ''))) + ')'

declare @CheckForUpdateSyntax varchar(max) =
   (select replace( replace(
      (select ((SELECT 'Src.' + ColumnName + ' <> Targ.' + ColumnName + ' OR '
                  FROM @ColumnList WHERE ColumnName <> @BzKeyColumn
                   for xml path ('')))), '&lt;', '<'), '&gt;', '>'))

set @CheckForUpdateSyntax = '(' +
   substring(@CheckForUpdateSyntax, 1, len(@CheckForUpdateSyntax) - 3) + ')'

DECLARE @MergeSyntax varchar(max) = ' MERGE ' + @TargetTable + ' Targ' + @crlf
SET @MergeSyntax = @MergeSyntax + ' USING ' + @InputTable +
   ' Src ON Targ.' + @BzKeyColumn + ' = Src.' + @BzKeyColumn + @crlf
SET @MergeSyntax = @MergeSyntax + ' WHEN NOT MATCHED ' + @crlf +
   ' THEN INSERT ' + @crlf + '   ' + @SourceFieldList + @crlf
SET @MergeSyntax = @MergeSyntax + ' VALUES ' + @SourceFieldList + @crlf
SET @MergeSyntax = @MergeSyntax + ' WHEN MATCHED ' + @crlf +
   ' AND ' + @CheckForUpdateSyntax + @crlf
SET @MergeSyntax = @MergeSyntax + ' THEN UPDATE SET ' + @UpdateSyntax + @crlf
SET @MergeSyntax = @MergeSyntax + @crlf + ' OUTPUT $Action; '

return @MergeSyntax
end
Listing 2: Result for a MERGE

MERGE dbo.TargetProduct Targ
USING dbo.IncomingProduct Src ON Targ.ProductBzKey = Src.ProductBzKey
WHEN NOT MATCHED
  THEN INSERT (ProductBzKey, ProductName, ProductPrice)
       VALUES (ProductBzKey, ProductName, ProductPrice)
WHEN MATCHED
  AND (Src.ProductName <> Targ.ProductName OR Src.ProductPrice <> Targ.ProductPrice)
  THEN UPDATE SET ProductName = Src.ProductName, ProductPrice = Src.ProductPrice
OUTPUT $Action;
clean ActionType column on which to pivot. You’re combining the two SELECT statements into a common table expression (CTE), which is basically a derived table subquery that you subsequently use in the next statement (to PIVOT).
• Uses a PIVOT statement against the CTE that sums the dollars for each of the possible ActionType values. If you knew with certainty that you'd never have more return reason codes, Listing 3 is the solution. However, you need to account for other reason codes. You need to generate that entire query in Listing 3 dynamically as one big string, and that's where you construct the possible return reasons as one comma-separated list. Listing 4 shows the entire T-SQL code necessary to generate (and execute) the desired query. There are three major steps in constructing the query dynamically. The first step is that you need to generate a string for the columns in the SELECT statement (for example, [SalesAmount], [Reason A], [Reason B], [Reason C], [Reason D]). You can build a temporary common table expression that combines the hard-wired "Sales Amount" column
Listing 3: Example of PIVOT that we want to generate (final result)

;WITH TempResultCTE AS
   (SELECT SalesDate AS ActivityDate, Client,
           'Sales' AS ActionType, SalesAmount AS Dollars
      FROM dbo.SalesData
    UNION ALL
    SELECT ReturnData.ReturnDate AS ActivityDate, Client,
           ReturnReason AS ActionType, ReturnAmount AS Dollars
      from dbo.ReturnData)

SELECT ActivityDate, Client,
       [Sales], [Reason A], [Reason B], [Reason C], [Reason D]
  FROM TempResultCTE
 PIVOT ( SUM(Dollars)
         FOR ActionType IN ( [Sales], [Reason A], [Reason B], [Reason C], [Reason D]) ) TEMPLIST
Listing 4: SQL procedure to generate the PIVOT dynamically

DECLARE @ActionSelectString nvarchar(4000), @SQLPivotQuery nvarchar(4000)
DECLARE @CRLF VARCHAR(10) = CHAR(13) + CHAR(10)

-- Step 1, Generate the list of columns by doing a SELECT DISTINCT
-- on the reason codes, and then using the STUFF and FOR XML PATH
-- technique to build a comma separated list
;WITH ListTempCTE as
   (SELECT 'SalesAmount' AS ActionType, 1 as OrderNum
    UNION
    SELECT DISTINCT ReturnReason as Actiontype, 2 as OrderNum
      FROM ReturnData)
SELECT @ActionSelectString =
   stuff ( ( select ',[' + cast(ActionType as varchar(100)) + ']'
               from ListTempCTE
              ORDER BY OrderNum, ActionType
                for xml path('') ), 1, 1, '')

-- End of step 1 -- this dynamically yields [SalesAmount],[Reason A],
-- [Reason B],[Reason C],[Reason D] in @ActionSelectString

-- Step 2, Now that we've built the necessary select strings
-- as variables, we'll start generating the core query. Here's
-- where the original model is so critical.
-- Start by generating the original CTE
SET @SQLPivotQuery = ';WITH TempResultCTE as
   (select SalesDate as ActivityDate, Client,
           ''Sales'' as ActionType, SalesAmount as Dollars
      FROM dbo.SalesData ' + @CRLF +
   'UNION ' + @CRLF +
   ' SELECT ReturnDate as ActivityDate, Client,
            ReturnReason as ActionType, ReturnAmount
       from dbo.ReturnData) ' + @CRLF + @CRLF

-- Step 3, continue by concatenating with the action string
set @SQLPivotQuery = @SQLPivotQuery +
   ' SELECT ActivityDate, Client, ' + @ActionSelectString +
   ' FROM TempResultCTE ' + @CRLF +
   ' PIVOT ( SUM(Dollars) for ActionType in (' + @ActionSelectString + ')) TEMPLIST '
   + @CRLF + @CRLF

EXEC sp_executesql @SQLPivotQuery
with the unique list of possible reason codes (using a SELECT DISTINCT). Once you have that in a CTE, you can use the nice little trick of FOR XML PATH('') to collapse those rows into a single string, put a comma in front of each row that the query reads, and then use the STUFF function to replace the first instance of a comma with an empty string. This is a trick that you can find in hundreds of SQL blogs. This first part builds a string called @ActionSelectString that you can use further down in the SELECT portion of the query. The second step is where you begin to construct the core SELECT. Using that original query as a model, you want to generate the original query (starting with the UNION of the two tables), but by replacing any references to pivoted columns with the string (@ActionSelectString) that you dynamically generated above. Also, although not absolutely required, I've also created a variable to supply any carriage return/line feed combinations that you want to embed into the generated query (for readability). You'll construct the entire query into a variable called @SQLPivotQuery. In the third step, you continue constructing the query by adding the syntax for PIVOT. Again, concatenating the
syntax, you concatenate the hard-wired PIVOT syntax with the @ActionSelectString (that you generated dynamically to hold all the possible return reason values). After you generate the full query string, you can execute it using the system stored procedure sp_executesql. To recap, the general process for this type of effort is:

1. Determine the final query based on your current set of data and values (i.e., build a query model).
2. Write the necessary T-SQL code to generate that query model as a string.
3. Determine the unique set of values on which you'll PIVOT, and then collapse them into one string using the STUFF function and the FOR XML PATH('') trick. This is arguably the most important part.
Answer 10: Generating Multiple Subtotals
Those who have read prior Baker's Dozen articles know that I've stressed the value of GROUPING SETS, which Microsoft introduced in SQL Server 2008. GROUPING SETS allows developers to define multiple sets of GROUP BY definitions (thus the name).
Listing 5: Example of GROUPING SETS

SELECT CASE WHEN GROUPING(SM.Name) = 1 THEN 'Total' ELSE SM.Name END as ShipName,
       CASE WHEN GROUPING(Year(OrderDate)) = 1 THEN 'Total'
            ELSE CAST(YEAR(OrderDate) AS VARCHAR(4)) END as OrderYear,
       SUM(Freight) as TotFreight,
       SUM(TotalDue) as TotalDue
  FROM Purchasing.ShipMethod SM
  JOIN Purchasing.PurchaseOrderHeader POH
    ON SM.ShipMethodID = POH.ShipMethodID
 WHERE SM.ShipMethodID in (1, 2)
 GROUP BY GROUPING SETS ( (SM.Name, YEAR(OrderDate)), (SM.Name), () )

Check Your Technical Soft Underbelly
Are there any features in SQL Server (or any product you regularly use) where you might say, "I basically understand how feature XYZ works, but I can't explain it in words"? One of my mentors told me years ago, "If you can't explain it in words, then maybe you don't understand it as well as you should." It was tough-love advice, but there's one remedy: research and practice! You might wind up learning more than you ever expected.
Although report writers reduce the need to generate multiple levels of subtotals in a result set, not all queries are consumed by report writers. Listing 5 shows an example of GROUPING SETS.
Answer 11: Capturing Queries Run by Other Applications I’ve asked this question many times in technical interviews, and I’ve always looked for a two-word answer: SQL Profiler. SQL Profiler is a management tool that comes with SQL Server. It allows you to define a trace against a database, and to view all queries and stored procedure execution references that hit a particular database. I’ve found it to be very helpful on many occasions!
SQL Profiler is extremely valuable when trying to determine specific query syntax that’s hitting the database.
Answer 12: Handling a Variable GROUP BY Scenario
Some developers might conclude that dynamic SQL is necessary to account for the variable nature of the GROUP BY. As it turns out, you can specify a CASE statement in the GROUP BY definition! Take a look at the following code snippet, which uses a CASE statement in the GROUP BY to aggregate by the selected option.

DECLARE @DateOption int = 1

SELECT CASE WHEN @DateOption = 1
            THEN YEAR(OrderDate)
            ELSE YEAR(ShipDate) END as ReportYear,
       SUM(Totaldue) as SumTotaldue
  FROM Purchasing.PurchaseOrderHeader
 GROUP BY CASE WHEN @DateOption = 1
               THEN YEAR(OrderDate)
               ELSE YEAR(ShipDate) END
 ORDER BY ReportYear

The query also uses a CASE statement in the SELECT clause to match the column in the GROUP BY definition. You don't need to construct the query dynamically just because you have a variable GROUP BY definition. Sometimes you can't avoid dynamic SQL, but first check to see if there's an alternative approach that's practical and won't affect performance.
Answer 13: Diagnosing Performance Issues on Linked Servers and Parameterized Queries As I stated in the question, this is definitely an esoteric topic. I encountered the subject about two years ago, when I noticed dramatically different performance on what seemed to be the same linked server query.
Sometimes you can’t avoid dynamic SQL. Just make sure that there isn’t an alternative approach that’s practical and won’t affect performance.
The reason that A can be much faster than B and C boils down to this: Although B and C use the four-part naming and essentially bypass the OPENQUERY call, they sometimes force the linked server driver to pull down ALL the rows from the source table, and then perform the filtering/WHERE clause locally in SQL Server. If the source table is large, this dramatically harms performance. Only by using OPENQUERY and passing the entire query as a string can you have a near-guarantee that the entire query (and the WHERE clause) will be executed at the remote server end. Although the four-part convention might seem cleaner (especially if you try to parameterize the query), you risk SQL Server pulling down all rows.
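As a sketch of the difference (the linked server name REMOTESRV and the remote table here are hypothetical), the two styles look like this:

-- Four-part naming: SQL Server may pull all rows across
-- and apply the WHERE clause locally
SELECT * FROM REMOTESRV.SalesDB.dbo.OrderHistory
 WHERE OrderDate >= '20150101'

-- OPENQUERY: the entire query string, including the WHERE clause,
-- executes on the remote server
SELECT * FROM OPENQUERY(REMOTESRV,
  'SELECT * FROM dbo.OrderHistory WHERE OrderDate >= ''20150101''')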
Final Thoughts:
As I stated at the beginning of the article, I’m going to use this format for some of my Baker’s Dozen articles in the future. In my next column, I’ll present a 13-question test on topics in SQL Server Reporting Services. Stay tuned! Kevin S. Goff
ONLINE QUICK ID 1601061
Introduction to Swift 2.0

The Swift language has taken the iOS/OSX community by storm. In a matter of only a year, Swift has managed to climb from nineteenth to fifteenth position in the list of the most popular programming languages. Apart from the major performance improvements, Swift 2.0 also added tons of new language features including error handling, protocol extensions, availability APIs, and much more. The new features enable developers to write safer code and stay productive at the same time.
Mohammad Azam azamsharp@gmail.com www.azamsharp.com @azamsharp Mohammad Azam works as a Senior Mobile Developer at Blinds.com. He's been developing iOS applications since 2010 and has worked on more applications than he can remember. His current app, Vegetable Tree - Gardening Guide, is considered the best vegetable gardening app in the app store and was featured by Apple in 2013. He's a frequent speaker at Houston Tech Fest and Houston iPhone Meetup. When not programming, Mohammad likes to spend his time with his beautiful family and planning for his next trip to the unknown corners of the world.
In this article, I'll take a look at the new features of the Swift language. I'll cover the most important features and explore the practical aspects of each. At the end of this article, you'll be equipped with the information you need to take advantage of the cool new features of the Swift programming language.
Error Handling

Error handling is the process of responding to and recovering from error conditions in your program. Swift 2.0 introduces new language features for handling exceptions. In order to better understand Swift 2.0 exception handling, you must take a step back and look at the history of exception handling in the Cocoa framework. The earlier releases of Objective-C didn't even have native exception handling. Exception handling was performed by using the NS_DURING, NS_HANDLER, and NS_ENDHANDLER macros, like this:

NS_DURING
    [self callSomeMethodThatMightThrowException];
NS_HANDLER
    NSLog(@"Exception has occurred");
NS_ENDHANDLER
The NS_DURING macro marks the beginning of the block containing the code that might throw an exception. The NS_HANDLER macro handles the actual exception and NS_ENDHANDLER marks the end of the exception-handling block. The callSomeMethodThatMightThrowException is implemented in this bit of code.
-(void) callSomeMethodThatMightThrowException {
    [NSException raise:@"Exception"
                format:@"Something bad happened"];
}

As you can imagine, exception handling in Objective-C wasn't very intuitive, and the majority of developers didn't bother to implement any kind of exception handling in their Cocoa applications. The complex nature of exception handling in the earlier releases of Objective-C also contributed to a mindset among Cocoa developers where exceptions were used to mark unrecoverable errors. A common theme was to kill the application instead of recovering from the errors, which might result in corrupted data. This is the main reason that you can still witness a lot of Cocoa applications that provide no exception handling at all. Apple addressed this issue by adding native exception handling with OS X 10.3. Native exception handling in Objective-C allowed developers to use a modern approach to handling exceptions within their Objective-C code, as shown here:

@try {
    [self callSomeMethodThatMightThrowException];
}
@catch(NSException *exception) {
    // handle the error
}
@finally {
    // clean up code
    // code that is executed whether the
    // exception is thrown or not
}

The @try keyword marks the beginning of the block where an exception can occur. The @catch is where the exception is handled. You can have multiple catch blocks to refine the type of exception you're handling. The @finally block is triggered regardless of whether an exception has occurred or not. The @finally block is ideal for cleaning up resources or closing connections.
Despite the progress in handling exceptions in Objective-C using native exception handling, developers still associate exceptions with catastrophic events. The recoverable errors were represented by the NSError class that predates exception handling. The NSError pattern was also inherited by Swift 1.x, as shown in the next snippet. The error instance is returned as part of the callback method. If the error isn't nil, take an appropriate action to notify the user; otherwise, continue with the normal execution of the application, like this:

let session = NSURLSession()

session.dataTaskWithURL(NSURL(string: "")!) {
    (data :NSData?, response :NSURLResponse?, error :NSError?) -> Void in

    if error != nil {
        // do something with the error
    }

    print("sss")
}.resume()

Swift 2.0 introduces modern error handling to the Swift language that allows you to write "safe code" (pointers to error-free code). In Swift, errors are represented by values of types conforming to the ErrorType protocol, as shown in the next snippet:

enum DebitCardError : ErrorType {
    case IncorrectPin
    case InsufficientFunds
}

In Swift, the error-handling block is referred to as a do-catch block. The do block wraps the try keyword and serves as a container to call the potentially risky code. Methods that can throw an exception must contain the keyword throws right before their return type parameter and must be called using the try keyword. The next snippet shows the implementation of the Swipe method that's capable of throwing an exception.

struct Card {
    var number :Int
    var pin :Int
}

func swipe(card :Card) throws {
    do {
        try authenticate(card.pin)
    } catch DebitCardError.IncorrectPin {
        print("Incorrect Pin")
    } catch DebitCardError.InsufficientFunds {
        print("Insufficient Funds")
    }
}

In the Authenticate method, the pin number is hard-coded and an exception is thrown if the pin number doesn't match, as shown here:

func authenticate(pin :Int) throws {
    if pin != 1234 {
        throw DebitCardError.InsufficientFunds
    }
}

Finally, create an instance of the Card class and call the Swipe method using the try keyword.

var card = Card(number: 9090, pin: 5555)
try swipe(card)

The try keyword is required when calling the Swipe method because the Swipe method is decorated with the throws keyword, indicating that it can throw an exception. There are some scenarios where a method throws an exception but instead of catching the exception, the best solution is to crash the app. Under those circumstances, you call the method using the syntax shown here:

try! swipe(card)

If you're calling a third-party library that doesn't have support for error handling, you might want to call those methods with the try! syntax.

Swift 2.0 Migration Tool
The Swift 2.0 migration tool in Xcode 7 allows you to migrate your Swift 1.0 code to the current version, 2.0. I've heard mixed stories about the migration tool. For some projects, it migrated the code flawlessly without any compile errors. On other projects, the migration tool has resulted in code that's filled with errors. Make sure that you take a snapshot of your code before you run the migration tool! For larger code bases, you might also benefit from migrating your code in small batches instead of migrating in one single shot.

An acute observer might have noticed that there's no finally block in the Swift language. In other modern languages, including C# and Java, the finally block is an optional part of the try-catch handler that executes whether the exception is thrown or not. The finally block is an ideal location to clean up resources, including closing database connections. Swift uses the new defer keyword to serve the same purpose as the finally keyword. One big difference between defer and finally is that the defer block is executed at the end of its calling scope. This next snippet shows the execution flow for the defer keyword.

func doStuff() throws {
    print("This is before defer is called") // called first

    defer {
        print("Defer is called") // called third
    }

    print("This is also before defer is called") // called second
}

try doStuff()
As you can see, even though the defer block appears before the last print statement, it will be last to get executed. Defer is a perfect place to perform resource cleanups, close database connections etc. The modern syntax of Swift exception handling allows you to write safer code for applications that potentially provide a much richer user experience to the users and also provide you with relevant information to improve your existing applications.
Guard
In addition to the modern error handling introduced in Swift 2.0, the Swift team also added a new language keyword: guard. Using the guard keyword ensures that the conditions are met or else the execution won't continue. From afar, you might feel that the guard statement provides the same logic flow as a classic if-else conditional statement, but it's really quite different. The next snippet shows a very simple function that checks the username length and takes appropriate action based on the validity of the username string.

var userName :String?

func loginButtonTapped() {
    if userName!.characters.count > 0 {
        print("do something")
    } else {
        print("invalid username")
    }
}

The function shown in the previous snippet can be implemented using the new guard keyword, as shown here:

func loginButtonTapped() {
    guard userName?.characters.count > 0 else {
        return
    }
}

The guard statement makes sure that the execution of the application only continues if the conditions are met. Another great use of the guard keyword is to make your code more readable and beautiful. Swift is a leap forward when compared to the Objective-C language, but it also has dark corners. One of the smelly code issues in the Swift language is the use of the force-unwrap operator "!" on optionals. Using the guard keyword, you can unwrap an optional instance at the same time you check it, as shown here:

guard let url = NSURL(string: "http://www.code-magazine.com") else {
    fatalError("Error creating URL with the supplied parameters!")
}

Using the guard keyword, you can also unwrap multiple optional instances.
Guard not only simplifies the flow of the application but also enhances the readability of the code. No longer do you have to use the ugly "!" syntax to unwrap optional instances.
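For instance, a single guard can bind several optionals in one statement. Here's a minimal sketch; the userName variable comes from the earlier snippet, while the urlString variable is made up purely for illustration:

var urlString :String? = "http://www.code-magazine.com"

func loadProfile() {
    guard let name = userName, text = urlString, url = NSURL(string: text) else {
        return
    }
    print("Loading \(url) for \(name)")
}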
Protocol Extensions

Swift 2.0 protocols can be extended to provide method and property implementations to the conforming types. This means that you can define method and property implementations for your protocols that can be used as default implementations for the conforming classes. Consider the ManagedObjectType protocol shown in the next snippet; it's used by NSManagedObject entity classes that provide sorting behaviors.

public protocol ManagedObjectType {
    static var entityName :String { get }
    static var defaultSortDescriptors :[NSSortDescriptor] { get }
}

In Swift 2.0, you can extend the ManagedObjectType protocol to include a default implementation of the protocol properties, like this:

extension ManagedObjectType {
    public static var defaultSortDescriptors :[NSSortDescriptor] {
        return []
    }
}
This means that every class conforming to the ManagedObjectType protocol will also benefit from the default implementation of defaultSortDescriptors. The concrete class can also provide a custom implementation that overrides the extension's implementation, as shown here:

extension Customer : ManagedObjectType {
    public static var entityName :String {
        return "Customer"
    }
    public static var defaultSortDescriptors :[NSSortDescriptor] {
        return [NSSortDescriptor(key: "dateCreated", ascending: false)]
    }
}
Protocol extensions in Swift 2.0 bridge a big gap in protocol adaptation and provide a default implementation of protocol methods and properties that can be used in concrete implementations.
Availability

Apple constantly improves its existing APIs and frameworks by adding new features. One of the challenges developers faced was to make sure that they only invoked the new APIs on devices running compatible versions of the iOS framework. This was accomplished by first making sure that the APIs existed before calling them. This next snippet shows the implementation using the respondsToSelector approach to make sure that the registerUserNotificationSettings method exists before you call it.
func application(application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {

    if UIApplication.instancesRespondToSelector(
        "registerUserNotificationSettings:") {

        let types = UIUserNotificationType.Alert |
                    UIUserNotificationType.Sound |
                    UIUserNotificationType.Badge

        let settings = UIUserNotificationSettings(forTypes: types, categories: nil)
        application.registerUserNotificationSettings(settings)
    }

    return true
}
Apart from being unclear, the previous implementation doesn't give any indication as to which iOS version you're checking against. Swift 2.0 solves this problem by providing the available API, making things much simpler and clearer. Using the available API, you can write the previous code like this:

func registerNotifications() {
    let types = UIUserNotificationType.Alert |
                UIUserNotificationType.Sound |
                UIUserNotificationType.Badge

    let settings = UIUserNotificationSettings(forTypes: types, categories: nil)
    application.registerUserNotificationSettings(settings)
}
if #available(iOS 8.0, *) { registerNotifications() }
The asterisk (in the “if #available” line) is required and indicates that the conditional check executes on the minimum deployment target for any platform that isn’t included in the list of platforms. Of course, the available API isn’t only limited to the iOS platform but can easily be extended to WatchOS, OSX, and tvOS as shown in this next snippet.
if #available(iOS 8.0, OSX 10.10, watchOS 2, *) {
    // do something
}

You can even decorate your methods with the @available attribute, which ensures that the methods are only called on the selected platforms, as shown here:

@available(iOS 8.0, *)
func someMethod() {
}

The available API provides an easy way to manage platform dependency in code, removing the cumbersome and elusive calls to the respondsToSelector method and providing a much cleaner and simpler implementation.

Objective-C Interoperability

The Swift language is fully interoperable with the Objective-C language, meaning that Swift can easily take advantage of the rich libraries implemented in the Objective-C language. This is great news because you can easily reuse your existing Objective-C code in your new Swift applications. In addition to compatibility with Objective-C, Xcode 7 also includes a Swift migration tool that can be used to migrate Swift 1.0 code to Swift 2.0. The migration tool allows you to preview the new changes, which include exception handling, guard, repeat, and verifying the availability of APIs, before committing them.

Open Source

During a WWDC 2015 announcement, Apple announced that Swift will become open source. This is great news, as it brings us one step closer to being able to contribute updates to the Swift language, provided that the Swift team is open to pulling changes from developers.

SPONSORED SIDEBAR: HTML Mentoring
Need help with an HTML project? CODE Consulting's special introductory offer gives you full access to a CODE team member for 20 hours to guide and give assistance on your project needs. Mentoring and project roadblock resolutions with CODE's guidance are a perfect partnership. CODE Consulting's Introductory Package is only $3,000. Email info@codemag.com to set up your time with a CODE specialist, today!

Conclusion

Swift 2.0 is a huge improvement over its predecessor, Swift 1.0. Swift 2.0 provides improved performance, fast compilation, error handling, availability APIs, and protocol extensions, making it a more powerful and modern high-level language. With the release of Swift 2.0, things also seem to be calming down, which means that your existing code won't break as often with new versions of the Swift language. As always, this is a great time to be an iOS developer and an even better time to be an iOS Swift developer. Mohammad Azam
ONLINE QUICK ID 1601071
How Functional Reactive Programming is Changing the Face of Web Development

Like when watching a sleight-of-hand magician, Web developers have been distracted by the recent popularity wars between the various front-end frameworks, and we've missed a fundamental shift in the nature of these frameworks. Functional Reactive Programming (FRP) has been quietly and steadily showing up in many aspects of the main frameworks of today.
Joe Eames joeeames@gmail.com joeeames.me @josepheames Joe began his love of programming on an Apple III in BASIC. Although his preferred language is JavaScript, he has worked professionally with just about every major Microsoft language. He's currently a consultant and full-time author for Pluralsight. Joe has always had a strong interest in education, and has worked both full- and part-time as a technical teacher since 1999. He's a frequent blogger and speaker, organizer of ng-conf, the AngularJS conference (www.ng-conf.org), and a panelist on the JavaScript Jabber and Adventures in Angular podcasts.
You see this same change happening elsewhere, but as frameworks are the center of the front-end Web world, this is where all these influences come together. Why is this? Before I can answer that, you need to understand a few things about FRP.
What is FRP?

The simple answer to this question is that there is no simple answer. To the more academic side of the industry, FRP is all about values that change over time, sometimes called signals. But to the rest of the industry, FRP is more of an umbrella term that refers less to this esoteric definition, and more to the ancillary constructs and ideas that are generally part of FRP. These ideas and technologies include:

• Immutable data
• Observables
• Pure functions
• Static typing
• One-way state transitions

Let's take a quick look at each one of these items.

Immutable Data

Immutable data is a data type that can't change any value once it is created. Imagine an array that's created with two integers, 2 and 5. After you create it, you can't remove or add any elements, or change the values of either of those two elements. At first, this may seem to be unnecessarily limiting. When you need to turn that array into an array with three elements, 2, 5, and 8, you create a new array with those three elements. This may sound extremely nonperformant, but in practice, it's not nearly as expensive as you would think, and when used in the change detection algorithms that exist in just about every front-end framework, it can lead to amazing performance gains. The most popular library that implements this is immutable.js. Here's a simple example of using immutable.js to create a data structure, and then make a modification.

var map1 = Immutable.Map({a:1, b:2, c:3});
var map2 = map1.set('b', 50);
map1.get('b'); // 2
map2.get('b'); // 50

Notice that in the preceding code example, a new object is created when you change just one value. First, a map is created and then one of the values in that map is changed to a new value. This creates an entirely new object. The old object, map1, still has 2 for the value of b, and map2 has 50 for the value of b.

Observables

Observables have existed in the JavaScript world for a long time, although the FRP version of observables usually comes with more than the simplest form of an observable. These observables often have many more features than a typical observable, and you can see this in action with libraries such as RxJS and Bacon.js. Like immutable data, observables give significant performance gains in change detection strategies.

Here's an example from the RxJS library that shows how to subscribe to an async data source, filter and map it, and then print out the results as they become available. This works not only with one piece of return data, but with a stream of data arriving intermittently as stock data does.

var source = getAsyncStockData();

var subscription = source
  .filter(function (quote) {
    return quote.price > 30;
  })
  .map(function (quote) {
    return quote.price;
  })
  .subscribe(
    function (price) {
      console.log('Prices higher than $30: $' + price);
    },
    function (err) {
      console.log('Something went wrong');
    });

subscription.dispose();
Pure Functions
Pure functions are perhaps the most vague of these items because “pure” is a very common word. In this case, a pure function has no side effects. You can run the function as many times as you want and the only effect is that it computes its return value. Pure functions are significantly easier to both test and maintain. Using them lowers maintenance costs in applications written in frameworks like React and Mithril.
Here's a classic example of a non-pure function with side effects:

function getCustomerPrice(customer, item) {
  // start from the item's price so the discount below has a value to work with
  var price = item.price;
  if (customer.isUnapproved) {
    // side effect: the customer object passed in is modified
    customer.unapprovedAttempts.push({itemId: item.id});
  } else if (customer.isPreferred) {
    price = price * .9;
  }
  return price;
}
Notice how the customer object is modified if the customer is unapproved? This is an example of a side effect. It's a byproduct of the main job of the function, which is simply to get the customer price, and it's the source of many bugs in programming. That's why you want pure functions with no side effects. A pure function version of this same code would leave logging the unapproved attempt to another piece of the program and simply return the customer price, as in the following:

function getCustomerPrice(customer, item) {
  var price = item.price;
  if (!customer.isUnapproved && customer.isPreferred) {
    price = price * .9;
  }
  return price;
}
In any realistic application, you’ll need state, and you’ll need to modify that state at certain points, like logging the unapproved attempt shown in the first example here. But by drawing a line between areas that modify state, which are non-pure, and the parts that don’t, which are implemented with pure functions, you create a lot of code that’s simpler to build, test, and maintain. Pure functions rely on their inputs and not on any context. That means that they can be moved around and refactored easily. And, as shown in this example, pure functions tend to help you follow the single responsibility principle and keep things simple.
Static Typing
Static typing is the system whereby types of variables are established at compile time. This allows a program to avoid a host of typical run time errors and simple bugs. Many statically typed languages require a fair amount of ceremony to document the types, and many programmers find this onerous. Some statically typed languages can infer types based on context, which means that most types don't need to be declared. You see this in Flow and TypeScript. Let's look at an example of using an instance of a class in vanilla JavaScript, and again with TypeScript:

class TodoModel {
  // class properties
  complete() {
    // implementation
  }
}

// vanilla JS
var todoModel1 = new TodoModel();
todoModel1.finish(); // throws a runtime error

// TypeScript
var todoModel2:TodoModel = new TodoModel();
todoModel2.finish(); // compile time error

The only difference between these two is that you've told the compiler that todoModel2 is of type TodoModel, which is a class that you declared. This lets the compiler know that if, later on, someone tries to call the finish method as shown in the snippet, there's no finish method and the error can be thrown at compile time. This makes the bug much cheaper to fix than waiting until unit tests are written, or even worse, having the error thrown at run time. Also, notice that an inferred type system can give the same benefits by inferring that the type is TodoModel. That means that you don't have to declare the type, and the "vanilla JS" version can still act as if typed. Errors like the one discussed can still be caught at compile time. Both Flow and TypeScript support this.

One-Way State Transitions

One-way state transitions are perhaps the most significant of these various technologies. One-way state transitions are an architecture whereby changes to the model all go through a common type of dispatcher; then, when changes to the model are made, these changes are rendered to the screen. Figure 1 illustrates this idea.

Figure 1: One-Way State Transitions.

This is a way of thinking about your program as a simple cycle. Your model passes through a rendering function and becomes viewable elements. Changes (usually represented by user events) update the model, which then triggers a re-rendering of the view. The rendering functionality is ideally implemented as a pure function. This concept deserves its own article to sufficiently explain these ideas, so I'll just touch on it. This architecture is significantly simpler and easier to learn than typical development because all actions that can change the state of a program are clearly and centrally defined. This makes it easier to look at a program and understand how it behaves. Imagine learning to operate a car without having first spent years in one, and having no guidance. As you explored it,
you’d discover that you could lock and unlock the doors, open the hood and if you sat in it, you could turn the steering wheel (which doesn’t seem to do anything useful), but that doesn’t get you anywhere near to driving. You still need to learn that the key goes in the column to start the car, and how the radio and air conditioning work, and how turning the wheel really only matters when you’re in motion, and how the pedals control the movement of the vehicle. Now instead, imagine a manual that listed every action the car could take, and sent you to a page where you could read exactly what effect that action had. This is the difference between common programming architectures and one-way state transitions.
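To make the cycle concrete, here's a minimal sketch of the idea in plain JavaScript. It isn't tied to Flux, Redux, or any particular library, and the action type and state shape are invented for illustration:

var state = { todos: [] };

// Every change to the model goes through this single dispatch function.
function dispatch(action) {
  state = update(state, action); // pure function: old state + action -> new state
  render(state);                 // re-render the view from the new model
}

// Pure update function: no side effects, easy to test.
function update(state, action) {
  if (action.type === 'ADD_TODO') {
    return { todos: state.todos.concat([action.text]) };
  }
  return state;
}

// Ideally a pure rendering function; console.log stands in for real DOM output.
function render(state) {
  console.log('Rendering ' + state.todos.length + ' todo(s)');
}

dispatch({ type: 'ADD_TODO', text: 'Read about FRP' });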
SPONSORED SIDEBAR: Need Help? Are you looking to convert your application to a new language or do you need help optimizing your application? The experts at CODE Consulting can help you with your project needs! With experience in everything from desktop to mobile applications, CODE developers can be a great resource for your team! For more information visit www.codemag.com/consulting or email us at info@codemag.com.
Using this methodology, it’s cheaper to bring new developers onto a project because they can come up to speed quicker, with less direction, maintenance costs are down because problems are easier to track down, and brittleness is reduced overall. The largest payoff may be that as project sizes grow, velocity doesn’t suffer the typical slow-down. Although the functional programming crowd (Haskell and Erlang programmers, among others) have been touting these very benefits for a long time, mainstream development as a whole hasn’t been listening until now. And many people don’t even realize where all of these ideas came from.
Deja Vu
It’s important to note that most of these ideas are not particularly new, and many of them existed as early as the 80s or even earlier. Each of them can bring benefits to most development projects, but when used together, the total is greater than the sum of its parts. What’s particularly interesting is the fact that these ideas aren’t just suitable for a subset of application types. These same ideas have been used in games, animation, operating systems, and everything in between. For an interesting read on the parallels between React and Windows 1.0, check out this article: http://bitquabit.com/post/the-more-things-change/.
FRP Becomes Mainstream at Last

We are starting to see each of these previously mentioned ideas making their way into the mainstream of the Web development world. This is best recognized by their effect on the two dominant frameworks, React and Angular 2. Although it hasn't been released yet, due to the dominant market share of Angular 1, Angular 2 is definitely one of the frameworks occupying the attention of most front-end developers. I'd be remiss if I didn't mention a third framework that you've probably never heard of: Elm. Elm is another front-end framework that makes the audacious move of being written in an entirely new language. Built by Evan Czaplicki while he attended Harvard, it's an attempt to make FRP more digestible to mainstream programmers. The reason that this is so remarkable is that it's had a heavy influence on both React and Angular 2. The main reason that we're seeing these influences make their way into Web development now is centered on the never-ending quest for more maintainable code. Evan Czaplicki gave a great talk on this topic titled "Let's be Mainstream" (https://www.youtube.com/watch?v=oYk8CKH7OhE). As front-end applications grow ever larger, the need to make them easier to maintain has become more important.
Because of this, we're seeing new techniques being introduced in an attempt to achieve this goal. You can see this if you take a brief look at the overall history of front-end Web development. At each stage, the need for maintenance has driven a continual quest for methods to manage the complexity. The first attempt was simpler APIs through libraries like jQuery. Then frameworks came on the scene as a way to organize code into whole applications while still being modular and separating out various concerns. Automated testing quickly became a hot topic at this point. Then, increasingly fully featured frameworks appeared to help developers build bigger and bigger apps. All the while, everyone tried new methods to deal with constantly-expanding code bases. Another reason we're seeing these influences now is that the high demand in front-end Web development continues to attract developers from other areas. These developers are bringing with them ideas that worked for them in other environments, and existing developers are looking to other areas for ideas that can help improve code built for the browser. Changes are popping up all over. Both React and Angular 2 support using observable and immutable data. Both of them use one-way state transitions internally, and React even goes so far as to codify it for developers to follow with libraries such as Flux and the more recent Redux. Redux was directly inspired by Elm. Angular 2 supports using Redux or something similar, although they don't directly encourage it at this point. Both of them, meanwhile, encourage developers to use static typing with both Flow for React, and TypeScript in the Angular world.
What's Next

It's difficult to predict the future in any reasonable way, but there are a few places that you might see these ideas in action. As FRP becomes mainstream, that could open up the door to widespread adoption of FRP languages like Elm. Combined with the features of Web Assembly, which gives the browser a more viable compile target than JavaScript, FRP languages such as Elm or Haskell (or new ones we haven't seen yet) could possibly make their way into popular Web development. Another thing we'll likely see is a near-standardization on architectures using Flux-like one-way data state transitions. The React world seems to be standardizing on Redux, a simpler implementation than Flux, and adaptations of Redux for other frameworks are starting to appear. Obviously, reading an article is useless unless it gives you actionable knowledge. So here's my advice to take advantage of the winds of change that are blowing in our industry:

1. Be Aware. Just knowing what's going on will put you ahead of the curve.
2. Learn. Go check out RxJS or Immutable.js, or learn React and Angular 2.
3. Open your mind. Even if these ideas sound strange to you, don't resist change. Open yourself up to new ideas and new paradigms. This is perhaps the best piece of advice I can give you.

Joe Eames
ONLINE QUICK ID 1601081
Integrating YouTube into Your iOS Applications

In mobile development, developers often get tasked with integrating third-party social platforms like Facebook, Twitter, or YouTube into the applications they build. Clients usually want ways to aggregate their social content into whatever app they build (pulling content in) or to encourage users to share their content within their respective social circles (pushing content out).
Jason Bender Jason.bender@rocksaucestudios.com www.jasonbender.me www.twitter.com/TheCodeBender Jason Bender is the Director of Development for Rocksauce Studios, an Austin, Texas-based mobile design and development company. He has 11 years of coding experience in multiple languages including Objective-C, Java, PHP, HTML, CSS, and JavaScript. With a primary focus on iOS, Jason has developed dozens of applications including a #1 ranking reference app in the US. Jason was also called upon by Rainn Wilson to build an iOS application for his wildly popular book and YouTube channel: SoulPancake.
Most of these social platforms provide convenient SDKs (Software Development Kits) that allow you to integrate with their services more easily. These SDKs are platform-specific tools and components that help expedite the development process by eliminating the need to build out particular features from scratch. Additionally, they have platform-independent APIs that can be queried using the proper application authentication. These APIs can return a multitude of data from within the system, about which you’ll discover more in this article. Throughout the course of this article, you’ll focus on one of these platforms in particular: YouTube. You’ll learn how to create and register an application and application key, how to pull content from YouTube via their public API, and how to go through the process of integrating their native video player component for playing back YouTube content from within your application.
The YouTube API

YouTube currently has over 1 billion users and serves up millions of hours of content on a daily basis. They reach more 18-to-49 year-olds than any cable network in the U.S. (youtube.com/yt/press/statistics.html). As a developer, you can take advantage of this content and integrate many of the site’s features into your own applications using the provided developer tools and APIs that the service offers. For the purposes of this demonstration, you’ll work with the core platform API to learn how to retrieve a subset of videos from YouTube matching a search query that you define. You’ll also learn about various ways to filter the results returned via the query parameters. Lastly, you’ll implement the YouTube-provided native iOS video player to play the content that you retrieve. You can access the documentation for the various API endpoints offered, including ones not covered in this article, at https://developers.google.com/youtube/.
API Usage
The YouTube API is free to use for most applications. Larger applications with bigger user bases may need to
apply for a higher quota to satisfy their needs, but in most cases, the free tier should suffice. The free tier has the following parameters:

• 50,000,000 units per day
• 3,000 requests per user, per second
• Daily quotas reset at midnight Pacific Time

It’s important to note that “units” are a proprietary metric used by Google and are not a one-to-one relationship to the number of API calls that your application makes. Each API endpoint costs a specific number of units per call. For instance, the YouTube search API demonstrated in this article costs 100 units per call.
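To put those units in perspective, at 100 units per search call, the free tier’s 50,000,000 units per day works out to roughly 500,000 search requests per day, which is why the default quota is more than enough for most applications.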
Before you can begin using the API, you must register your application and enable access to the YouTube API from the Google Developer Console. To do this, first navigate to https://console.developers.google.com. Once there, sign in with your Google account. After a successful sign-in, you’ll arrive at the console dashboard. From the dashboard, complete the following steps:

• To the left of your profile avatar at the top right of the page, find the drop-down menu with the “Create a Project” option. Click that.
• A dialogue box appears with a prompt for the project name. Fill in the name of the project and click “Create”. The dashboard automatically switches to the new project that you just created.
• You should now see a blue dialogue box that says “Use Google APIs” with an option that says, “Enable and manage APIs”. You need to go here to enable YouTube. Click on that option.
• Now you can see a list of several different services and APIs that Google offers. Find the one on the far right entitled “YouTube Data API”. Click that option and, on the next screen, press “Enable API”.

At this point, you’ve enabled YouTube Data API access for your application, but your application still needs a valid API key so that Google can verify your app’s identity when it makes an API call. The level of authentication that your application needs to access the API can vary based on the functionality that it wants to take advantage of. Three levels of access exist that your application can generate credentials for (also shown in Figure 1):

1. API Key: Identifies your project using a simple API key to check quota and access. This is the level of access you’ll use later in this article.
2. OAuth 2.0 Client ID: Requests user consent so that your app can access the user’s data. It’s needed to access account-specific details and APIs.
3. Service Account: Enables server-to-server, app-level authentication using robot accounts. It’s mainly for use with Google Cloud APIs.

Figure 1: Google API Manager console. Select “API Key” for the purposes of this demonstration.

On the screen where you enabled YouTube API access, there’s a menu on the left with an option titled “Credentials”. Go here to generate the API Key that you need. Once there, drop down the “Add Credentials” option, as shown in Figure 1, and select “API Key”. Once selected, you’ll receive a modal that asks you to identify your application’s platform. Select “iOS Key” from the available options to continue. Next, name the key however you see fit and then input the application’s Bundle ID: I used com.codemagazine.YTDemo for the purposes of this demonstration. Lastly, a dialogue should appear with your API key listed in it. Copy this ID because you’ll need it when you begin working through the code samples later in this article.
The Endpoint
For this demonstration, you’ll work with the YouTube search API to find videos that relate to a search query that you define. The base API request URL to search YouTube videos looks like the following: https://www.googleapis.com/youtube/v3/search
In order to call that endpoint and retrieve the relevant data, you need to append several parameters to the request. The required parameters for this call include the search query, the API key, and a “part” parameter that indicates the search properties that you’ll get back in the response. The “part” parameter for a query to the search endpoint should be set to “snippet.” The following code sample demonstrates what you would append to the above URL in order to search for videos using the keyword “programming”.

?q=programming&part=snippet&key={INSERT YOUR API KEY HERE}
Additionally, there’s a host of optional parameters that you can append to the URL to filter or organize the results to your liking. The following list describes some of the more useful ones (full documentation on parameters can
be found at https://developers.google.com/youtube/v3/docs/search/list).

• channelId: Search only returns matched content created by the specified channel.
• location & locationRadius: Together, these two parameters let you limit search results to a specific geographical area.
• maxResults: The maximum total items returned from the query. Acceptable values range between 0-50 (inclusive).
• type: Set to “channel,” “playlist,” or “video.” Defines the type of content returned by the search.
• regionCode: Limits results to a specific country.
• relevanceLanguage: Returns results most relevant to the indicated language. The parameter is typically an ISO 639-1 two-letter language code (en for “English,” for instance).
• order: Defines the order of the search results. Set to “date,” “rating,” “relevance,” “title,” “videoCount,” or “viewCount.”
For the purposes of this article, request videos on “programming” in English from the search API. Using the parameters described above, that full request looks like this:

https://www.googleapis.com/youtube/v3/search?part=snippet&regionCode=US&relevanceLanguage=en&type=video&order=relevance&maxResults=20&q=programming&key={YOUR API KEY}
The Data
After sending a search request to the API, the service responds with the resulting matches. The response format is JSON, and the following snippet demonstrates the structure of the returned data.

{
  "kind": "youtube#searchListResponse",
  "etag": etag,
  "nextPageToken": string,
  "prevPageToken": string,
  "pageInfo": {
    "totalResults": integer,
    "resultsPerPage": integer
  },
  "items": [
    search Resource
  ]
}

Figure 2: Initial project settings when creating the sample project in Xcode.

Listing 1: JSON format of video items returned from the YouTube search API

{
  "kind": "youtube#searchResult",
  "etag": etag,
  "id": {
    "kind": string,
    "videoId": string,
    "channelId": string,
    "playlistId": string
  },
  "snippet": {
    "publishedAt": datetime,
    "channelId": string,
    "title": string,
    "description": string,
    "thumbnails": {
      (key): {
        "url": string,
        "width": unsigned integer,
        "height": unsigned integer
      }
    },
    "channelTitle": string,
    "liveBroadcastContent": string
  }
}
The “items” array contains the matched results. Each individual item in that array has a set of data that describes it. Listing 1 demonstrates the format of each item. Additionally, notice the inclusion of next and previous page tokens. The maxResults parameter described earlier determines the number of items initially returned (if not specified, the default is 5). However, in most cases, the total number of matching items exceeds the maxResults parameter. In those cases, you can use the next and previous page tokens to paginate through the entirety of the results. Referring again to Listing 1, notice that the information returned for each video includes channel information, video title, video description, and video thumbnails among other things. In this demonstration, however, you’ll only use the “videoId” property, which gets used to play the video.
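To page beyond that first batch of matches, the search endpoint also accepts a pageToken parameter: repeat the same request and append the nextPageToken (or prevPageToken) value returned with the previous response, in the same style as the other query parameters.

&pageToken={nextPageToken FROM THE PREVIOUS RESPONSE}

The sample application in this article doesn’t paginate, but this is the mechanism to use if your app needs more than the first page of results.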
YouTube iOS Player

Currently, you can’t play a YouTube video directly from an iOS application using an AVPlayer object. The only method currently supported by YouTube involves embedding an iFrame container inside your application to load the video. The user then plays the video from this embedded iFrame and, as is the case with any Web-based video, the operating system’s standard video player takes over once the video begins.

YouTube released an iOS library to simplify this process called YTPlayer (https://github.com/youtube/youtube-ios-player-helper). Rather than going through the hassle of implementing a UIWebView and then adding the HTML iFrame code to it directly, you can use the YTPlayer library’s provided player view to handle this implementation for you. Simply initialize an instance of YTPlayerView and tell it to load the respective video by passing it a video ID. The sample application you’ll build in the next section demonstrates how to add this library to your application and take advantage of it.

Let’s Code It

In this code exercise, you’ll write a VideoManager class that can fetch videos from YouTube using the search API. The VideoManager class uses callbacks to return an NSArray of videos. You’ll also write a Video object that models the video data returned. This portable library can get dropped into any project that needs to fetch and play videos from YouTube.

Getting Started

The first thing you’ll want to do is create a new Xcode project for this exercise. To do this, launch Xcode and click “Create a new Xcode project”. Select “Single View Application” and press Next. Finally, fill in a project name and match the other settings to the screenshot from Figure 2.

Using CocoaPods
The official CocoaPods website can be found at http://cocoapods.org/. A comprehensive tutorial on how to use CocoaPods can be found at http://www.raywenderlich.com/64546/introduction-to-cocoapods-2.

Figure 3: Settings when creating the video object model to store video data.

Listing 2: Video.h – variable parameters

@interface Video : NSObject

/// title of the video as shown in YouTube
@property (nonatomic, strong) NSString* videoTitle;

/// snippet of the video description from YouTube
@property (nonatomic, strong) NSString* videoDescription;

/// id of the video used to load the video in the iFrame
@property (nonatomic, strong) NSString* videoID;

/// url to the cover thumbnail of the video
@property (nonatomic, strong) NSString* thumbnailURL;

/// the name of the channel the video is posted in (author name)
@property (nonatomic, strong) NSString* channelTitle;

/// date the video was posted on YouTube
@property (nonatomic, strong) NSDate* pubDate;

/// id that can be used to load further content from the channel
@property (nonatomic, strong) NSString* channelID;

@end

Once your settings match Figure 2, click Next. You should now have a blank Xcode project. The only other thing you need to set up before you get coding is YTPlayer. Add the YTPlayer library to the project using CocoaPods, one of the simplest methods to add third-party content to any iOS project. For the sake of this walkthrough, I’ll assume that you have a working understanding of how to install a library using CocoaPods. (If not, please refer to the sidebar for additional information on how to use CocoaPods or where you can locate additional installation instructions.) Once you’ve created a Podfile, add the following line to the file and save.

pod "youtube-ios-player-helper", "~> 0.1.4"
Once saved, launch Terminal and navigate to the root folder for your project. Make sure that the project is closed in Xcode and run the following command in Terminal.

pod install
This automatically downloads the YTPlayer library and creates an .xcworkspace file. Re-open your project by double-clicking that new project file.
Video Model
The next thing you want to add to the project is a video object model that holds the information you receive from the call to the YouTube search API. Click File -> New -> File and then choose “Cocoa Touch Class”. Next, choose NSObject and name the object “Video”, as shown in Figure 3.
Once that has been created, you’ll need to add various properties to correspond to the video data you want to collect from the API response. Although you’ll only use the video ID property in this example, the sample code captures additional details to further demonstrate how to interact with the API. Additionally, you can follow the same structure to store any of the remaining data points that don’t get picked up in this sample. Open the Video.h file that you just generated. Add the properties for the data you’ll track, as shown in Listing 2. These properties map to the various data points that get returned by the API and the comments from Listing 2 offer more detail on what each property represents. The next step involves creating the video manager class that gathers the information needed to populate this video object. That manager class retrieves the video data from the search query as a JSON object. Once you cover that in the next section, you’ll loop back and create an initializer method on this video model to take the JSON data and parse it into the appropriate properties you just created.
Video Manager
Just like you previously did with the video model, go to File -> New -> File and select NSObject to create a new class. Name it VideoManager and click Next and then Save. This VideoManager class directly integrates with the YouTube search API endpoint described earlier to get the desired video information. NSURLSession handles the call for data, so you need to write a function that uses a session to make a request to the service. Listing 3 demonstrates this function implementation in VideoManager.m and how to use NSURLRequest and NSURLSession to formulate the query. Notice that the function definition takes an optional “channelID” parameter. If included, the search results only contain videos from the specified ID. Also notice that there is a callback block that passes an NSMutableArray to the calling function. That array contains the video objects returned by the search query.
Figure 4: Sample of what a YTPlayerView looks like.
You’ve now enabled the video manager class to communicate with the API. Next, you need to add the “getVideoList” function to the Video model object that you previously created. Notice that this function gets called in the Listing 3 code you just wrote. It creates an array of video objects using the JSON response from YouTube. Before doing that, switch over to VideoManager.h and add the function definition for what you just wrote in Listing 3.

@interface VideoManager : NSObject

- (void)getVideos:(NSString*)channelID
    completionBlock:(void (^)(NSMutableArray *))completionBlock;

@end

Lastly, you want to import the Video object or else the call to “getVideoList” in VideoManager.m triggers a build error. Add the import to the top of VideoManager.h.

#import "Video.h"

Now you can move back to the model and add the functions needed to parse the JSON information into manageable video objects. The next section covers this.

Back to the Video Model

You’ve almost completed the circle. You started by creating a video model object, wrote a manager class to get the data to populate that object, and now you need the final logic that maps that data from the API results into the model. Move back to your Video.h file and declare the following function header.

/**
 * create array of video objects
 * using the JSON data passed in the
 * dictionary parameter
 *
 * @param videoData JSON data from API
 * @param completionBlock array of video objects
 */
- (void)getVideoList:(NSDictionary*)videoData
     completionBlock:(void (^)(NSMutableArray *))completionBlock;

This is the function that the VideoManager class passes the returned JSON data to as described earlier. Listing 4 demonstrates the implementation of this function that you add to Video.m. That function depends on an initialization function that also gets added to the Video.m file. Listing 5 shows that initialization function, which takes in a single video item as JSON data in dictionary format. It parses out the needed properties and then returns an instance of self. This gets called for each video in the “videoData” JSON provided to “getVideoList”. All of the created instances of Video get returned to VideoManager.m in the completion block as an array.

Notice that Listing 5 has a call to “dateWithJSONString”. This helper function takes a date string from the JSON response and converts it into an NSDate object, a more manageable format for your application. That function should get created within an NSString category and its implementation is demonstrated in Listing 6.

You have now completed both the VideoManager and Video classes. Simply initialize a VideoManager object
Working with YTPlayer
YouTube provides an in-depth walkthrough of how to work with the YTPlayer iOS library. You can check it out as an additional reference at https://developers.google.com/youtube/v3/guides/ios_youtube_helper.
Listing 3: VideoManager.m – send API request to specified URL

@implementation VideoManager

/**
 * Search programming videos on YouTube
 *
 * @param channelID ID of channel to only search on
 * @param completionBlock Returns array of search results
 */
- (void)getVideos:(NSString*)channelID completionBlock:
    (void (^)(NSMutableArray *))completionBlock {

    // The API Key you registered for earlier
    NSString* apiKey = @"YOUR API KEY";

    // only used if we want to limit search by channel
    NSString* optionalParams = @"";

    // if channel ID provided, create the parameter
    if(channelID){
        optionalParams = [NSString stringWithFormat:@"&channelId=%@", channelID];
    }

    // format the URL request to the YouTube API:
    // max results set to 20, language set to English,
    // order by most relevant
    NSString* URL = [NSString stringWithFormat:
        @"https://www.googleapis.com/youtube/v3/search?"
        @"part=snippet&regionCode=US&relevanceLanguage=en"
        @"&type=video&order=relevance&maxResults=20"
        @"&q=%@&key=%@&alt=json%@",
        @"programming", apiKey, optionalParams];

    // initialize the request with the encoded URL
    NSURLRequest *request = [[NSURLRequest alloc] initWithURL:
        [NSURL URLWithString:[URL stringByAddingPercentEncodingWithAllowedCharacters:
            [NSCharacterSet URLQueryAllowedCharacterSet]]]];

    // create a session and start the request
    NSURLSession *session = [NSURLSession sharedSession];
    [[session dataTaskWithRequest:request
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            if (!error){
                Video* vid = [[Video alloc] init];
                // create an array of Video objects from
                // the JSON received
                [vid getVideoList:[NSJSONSerialization JSONObjectWithData:data
                                                                  options:0
                                                                    error:nil]
                   completionBlock:^(NSMutableArray * videoList) {
                       // return the final list
                       completionBlock(videoList);
                   }];
            } else {
                // TODO: better error handling
                NSLog(@"error = %@", error);
            }
        }] resume];
}

@end
Listing 4: Video.m – parse search query JSON into video objects

/**
 * create array of video objects
 * using the JSON data passed in the
 * dictionary parameter
 *
 * @param videoData JSON data from API
 * @param completionBlock array of video objects
 */
- (void)getVideoList:(NSDictionary*)videoData
     completionBlock:(void (^)(NSMutableArray *))completionBlock {
    // get the array of videos from the dictionary containing
    // the JSON data returned from the call to the YouTube API
    NSArray *videos = (NSArray*)[videoData objectForKey:@"items"];

    NSMutableArray* videoList = [[NSMutableArray alloc] init];

    // loop through each video in the array, if it has an ID
    // initialize an instance of self with the relative
    // properties
    for (NSDictionary *videoDetail in videos) {
        if (videoDetail[@"id"][@"videoId"]){
            [videoList addObject:[[Video alloc]
                initWithDictionary:videoDetail]];
        }
    }

    // pass the array of video objects back to VideoManager.m
    completionBlock(videoList);
}
Listing 5: Video.m – initialization function

- (instancetype)initWithDictionary:(NSDictionary *)dictionary {
    self = [super init];
    if (self) {
        _videoTitle = dictionary[@"snippet"][@"title"];
        _videoID = dictionary[@"id"][@"videoId"];
        _channelID = dictionary[@"snippet"][@"channelId"];
        _channelTitle = dictionary[@"snippet"][@"channelTitle"];
        _videoDescription = dictionary[@"snippet"][@"description"];
        _pubDate = [dictionary[@"snippet"][@"publishedAt"]
            dateWithJSONString];
        _thumbnailURL = dictionary[@"snippet"][@"thumbnails"]
            [@"high"][@"url"];
    }
    return self;
}
from anywhere in your application and call getVideos to retrieve the desired video information.
The Last Piece to the Puzzle
The final piece of the puzzle involves the YTPlayer class that you added to the project earlier. Using the code you already wrote in Listings 1-6, you can get YouTube videos by adding the following code to any controller in your application.

// __block lets the completion handler assign to this local variable
__block NSMutableArray *ytVideos = [[NSMutableArray alloc] init];

VideoManager* vidManager = [[VideoManager alloc] init];
[vidManager getVideos:nil
      completionBlock:^(NSMutableArray *videoList) {
    // sort the returned videos by publication date, newest first
    ytVideos = [[videoList sortedArrayUsingDescriptors:
        [NSArray arrayWithObject:[[NSSortDescriptor alloc]
            initWithKey:@"pubDate" ascending:NO]]] mutableCopy];
}];
Figure 5: Sample application using the YouTube library from this article.

The previous code initializes an instance of the VideoManager class and makes a request to get videos. Once the videos return, they get sorted by publication date and stored in a local array object where you can do with it what you please. Now you have the video results and you want to display them so your users can play them. To do this, you first want to import YTPlayer as follows.
#import "YTPlayerView.h"
Listing 6: NSString utility function that converts a string to an NSDate

/*
 * takes the current string and converts it into a manipulatable
 * NSDate object and then returns that object
 */
- (NSDate*)dateWithJSONString {
    [NSDateFormatter setDefaultFormatterBehavior:
        NSDateFormatterBehavior10_4];

    NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
    [dateFormatter setDateFormat:@"yyyy-MM-dd'T'HH:mm:ss.SSSz"];
    [dateFormatter setTimeZone:
        [NSTimeZone timeZoneForSecondsFromGMT:0]];
    [dateFormatter setCalendar:[[NSCalendar alloc]
        initWithCalendarIdentifier:NSCalendarIdentifierGregorian]];

    NSDate *date = [dateFormatter dateFromString:self];
    return date;
}
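One performance note on Listing 6 (an observation, not something from the article’s sample code): creating an NSDateFormatter is relatively expensive, so if you parse a lot of dates you may want to build the formatter once and reuse it. A minimal sketch of that variation looks like this:

- (NSDate*)dateWithJSONString {
    // cache the formatter; building one per call is costly
    static NSDateFormatter *dateFormatter;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        dateFormatter = [[NSDateFormatter alloc] init];
        [dateFormatter setDateFormat:@"yyyy-MM-dd'T'HH:mm:ss.SSSz"];
        [dateFormatter setTimeZone:
            [NSTimeZone timeZoneForSecondsFromGMT:0]];
    });
    return [dateFormatter dateFromString:self];
}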
Next, you create an instance of the player and load the video using the video ID. The next code snippet demonstrates how this is done by taking the first video object from the previous code’s result array, ytVideos, and creating a YTPlayerView to display it.

Video *video = [ytVideos objectAtIndex:0];

// initialize a player with the frame that you
// want the video preview to be initially
// displayed in.
// Can also set the view up in Interface Builder
// using auto-layout and use an IBOutlet to
// connect to your instance of YTPlayerView
YTPlayerView *videoPlayerView = [[YTPlayerView alloc]
    initWithFrame:CGRectMake(0, 0, 375, 320)];
[videoPlayerView loadWithVideoId:video.videoID];
[self.view addSubview:videoPlayerView];
Figure 6: Sample application using the YouTube library from this article.

In the above example, the video preview frame is manually set to 375px wide by 320px high. Figure 4 demonstrates an example of what that looks like once it’s added to the view. When the user taps the red play button in the middle of that preview, the standard iOS video player goes to full screen and begins to play the corresponding video.
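If you need more control over playback, the same youtube-ios-player-helper library also exposes a loadWithVideoId:playerVars: method whose dictionary maps to the standard YouTube IFrame player parameters. The following is a minimal sketch, not code from this article’s sample project; check the player parameter documentation for the keys that matter to your app:

// load the same video, but keep playback inline instead of
// jumping straight to the full-screen player
NSDictionary *playerVars = @{ @"playsinline" : @1,
                              @"controls"    : @1 };
[videoPlayerView loadWithVideoId:video.videoID
                      playerVars:playerVars];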
Practical Application
The code you just wrote is a portable library that can be dropped into any application; just add the VideoManager and Video classes to the project and import the YTPlayer library. To demonstrate the practicality of the library, I’ve used it to create a demo application that uses the additional video properties that you captured in the Video model to visually display the search results in a UITableView. Figure 5 and Figure 6 depict the application UI. The table lists the search results using the video title and thumbnail properties. Thumbnails are loaded asynchronously to prevent any lagging in the scroll. The code included alongside this article not only encompasses the YouTube library covered previously, but it also includes all code for this sample application as well. This should help you understand exactly how to call the library and how to apply the results.
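The downloadable sample contains the full table view implementation; the sketch below only illustrates the general pattern for loading a thumbnail asynchronously in a cell (the cell handling and variable names here are hypothetical, not copied from the sample code):

// inside tableView:cellForRowAtIndexPath: -- a sketch of async thumbnail loading
Video *video = ytVideos[indexPath.row];
cell.textLabel.text = video.videoTitle;
cell.imageView.image = nil; // clear any image left over from a recycled cell

NSURL *thumbURL = [NSURL URLWithString:video.thumbnailURL];
[[[NSURLSession sharedSession] dataTaskWithURL:thumbURL
    completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (!data || error) { return; }
        UIImage *thumb = [UIImage imageWithData:data];
        // NSURLSession calls back on a background queue, so hop to the main queue
        dispatch_async(dispatch_get_main_queue(), ^{
            // make sure the row is still visible before touching its cell
            UITableViewCell *visibleCell =
                [tableView cellForRowAtIndexPath:indexPath];
            visibleCell.imageView.image = thumb;
            [visibleCell setNeedsLayout];
        });
    }] resume];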
Wrapping Up

Hopefully, over the course of this article, you gained a basic understanding of how to use the search service offered by YouTube’s API and how to use that service to pull video content into your iOS application. If you’d like a more detailed breakdown of the search API or additional YouTube APIs, be sure to visit developers.google.com/youtube for full API documentation. The sidebars for this article have links to additional resources that may also be of use.

Jason Bender

SPONSORED SIDEBAR: Does Your Cloud App Leave You Feeling Under the Weather?
The developers at CODE have worked on everything from cloud applications to mobile projects. If you’re having problems with your cloud application and need guidance, the developers at CODE Consulting can help you. For more information, visit www.codemag.com/consulting or email us at info@codemag.com to set up your time with a developer today.
ONLINE QUICK ID 1601091
From MSTest to xUnit, Visual Studio, MSBuild, and TFS Integration

In the ever-evolving technology world, DevOps has become an essential part of the application delivery process. Be it requirements gathering and traceability, version control management, test case management, or deployment, over the last few years the focus has shifted from doing it correctly to doing it efficiently with the right technology stack. The quality of, and adherence to, the processes being followed now defines the quality of your product. In the pursuit of achieving this excellence, unit testing plays a vital role.
Punit Ganshani Punit.Ganshani@gmail.com www.ganshani.com @ganshani Punit Ganshani is a Solution Architect and Microsoft .NET MVP, and the author of more than 20 articles published in several magazines like DeveloperIQ, MSDN Press Blog, and DNC Magazine. He’s also the author of “Journey to C” on C programming, published in 2006 by Mahajan Publishers India. He’s an avid reader, loves development, and often contributes to several OSSs on GitHub. He organizes .NET sessions in Singapore and has spoken in various international forums on DevOps, Application Design, and Architecture, Azure and IoT.
When it comes to enterprise application delivery, the code not only has to be fully unit tested in your favorite IDE but should also be unit tested as a part of continuous builds. You can choose toolsets like TFS that provide a complete ALM solution, turn to build engines like TeamCity and Jenkins, or adopt an open-source solution such as Stash+Git+Jenkins, but the essence lies in executing the unit tests in an integrated and isolated environment. If you’re developing a .NET-based application, Microsoft provides the MSTest framework, which has excellent integration with Visual Studio. When it’s time to execute these unit tests on a build engine, the MSTest execution engine requires an installation of Visual Studio (Express works as well) to be available on the build server. You can choose any build server, but Visual Studio must be installed on it. You won’t always have complete access to the build server to install the tools that you need. Here, xUnit comes in handy! The xUnit unit-testing tool is a free, open-source, community-focused tool for the .NET Framework and provides excellent integration with Visual Studio, ReSharper, CodeRush, TestDriven.NET, and Xamarin. The xUnit tool has gained popularity over MSTest for the following reasons:

• It provides support for parameterized tests using the Theory attribute, whereas MSTest doesn’t provide such a feature out of the box.
• Tests in xUnit don’t require a separate Visual Studio Test Project as mandated by MSTest.
• Prioritizing or sequencing xUnit tests is simple (with the TestPriority attribute), unlike in the MSTest framework, which requires a unit test settings file.
• It supports fluent assertions like Assert.Throws<> instead of the ExpectedException attribute in the MSTest framework.
• It can be referenced in any Visual Studio project as a NuGet package.
• Unlike MSTest, xUnit doesn’t require any additional tools to be installed on the build server!

There are many articles that compare the MSTest framework with xUnit and other unit-testing frameworks in detail, so that’s not the focus of this article. Rather, this article is focused on converting MSTest unit tests (assuming that everyone’s done some unit testing in the MSTest framework) to xUnit, and then on running them in Visual Studio and integrating them with TFS builds.
Seamless Conversion from MSTest to xUnit Tests

If you’ve already written unit tests with xUnit, you can skip this step. When you create an ASP.NET MVC 5 website (let’s name it WebApp) using Visual Studio 2015, the project wizard provides an option to add Unit Tests for your controller. By default, it generates unit tests for HomeController using the MSTest framework when you select this option. So let’s take this as an example to demonstrate the conversion of MSTest to xUnit. You can manually convert these unit tests written in MSTest to xUnit by replacing the namespaces and Assert statements. As far as this ASP.NET MVC application is concerned, it’s an easy task. When you need to do this for an application with 1000+ unit tests, it’s a task that’s repetitive and needs motivation and persistence beyond imagination. So you need an automated toolset that can do this magically for you.
Unlike other unit-testing frameworks, xUnit ships as a NuGet package that can be referenced in any .NET project. You can now embed unit tests in the same project where your code resides and have them build with your project. What’s more, you can upgrade xUnit to the latest version just like you upgrade versions of any other NuGet reference!
Perhaps this was the same question that cropped up in the innovative minds of the .NET engineering team at Microsoft, and thus they created the fantastic tool xUnitConverter using Roslyn, which automates the task of converting MSTest to xUnit. As a prerequisite to running this tool on your desktop, you need Visual Studio 2015 or Microsoft Build Tools 2015. I really love the CodeFormatter tool (for more about it, see the sidebar) and strongly recommend that you include its use as part of your daily to-do checklist. However, it doesn’t convert the unit tests from MSTest to xUnit. You need to use xUnitConverter to automate many of the repetitive tasks, like changing to [Fact] attributes, using the correct methods on Assert, updating namespaces, etc.

Figure 1: Current State with MSTest framework

Figure 2: Before (with MSTest) and After (with xUnit)

Once you’ve downloaded the tool from GitHub, you can run it on the VS command prompt with the syntax shown here:

xunitconverter <test project file path>
The solution you created earlier has a test project, called, say, WebApp.Tests, that has a reference to the MSTest library (Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll) and has MSTest unit tests defined in the HomeControllerTest class, as shown in Figure 1. As you’ll notice in Figure 1, the current solution uses MSTest attributes like TestClass and TestMethod, and the project WebApp.Tests also has a reference to Microsoft.VisualStudio.QualityTools.UnitTestFramework. To convert all of the MSTest unit tests to xUnit unit tests in the WebApp.Tests project, you can execute the following command in the command prompt.
XUnitConverter F:\WebApp.Tests\WebApp.Tests.csproj
Figure 2 depicts the difference in the unit tests after a successful conversion. The namespaces get replaced appropriately, the TestClass attribute is removed from all unit test classes, the TestMethod attribute is replaced by the Fact attribute, and method names in the Assert classes have also been changed. You can now delete the MSTest assembly Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll and add a reference to the NuGet package xUnit by executing the command shown here on the Package Manager Console.

Install-Package xunit
This adds a reference to the latest xUnit framework assemblies to the test project, and your project references appear as shown in Figure 3. All of the compilation errors should be automatically resolved once the xUnit assemblies are added to the test project.
Figure 3: Test Project with xUnit references
Executing xUnit Tests with Visual Studio 2015 and MSBuild

In Test Driven Development (TDD), you start with writing unit tests first, with all of them failing/not passing at the beginning. Gradually, as the product builds, all of the unit tests pass. To ease this process, you need excellent IDE support that allows you to run and debug unit tests. Just like the integration of Test Explorer for the MSTest framework, you can integrate xUnit with Test Explorer in Visual Studio.

Executing xUnit Tests in Test Explorer in Visual Studio

To enable this integration, you need to add a reference to the NuGet package xunit.runner.visualstudio using the Package Manager Console command shown in the next code snippet. When you click on the Run All link in the Test Explorer window, it discovers all of the test cases in the solution and runs them, as shown in Figure 4.

Install-Package xunit.runner.visualstudio

Figure 4: The xUnit tests in the Test Explorer window

Listing 1: xunit.MSBuild section in csproj file

<Import Project="..\packages\xunit.MSBuild.2.0.0.0\build\xunit.MSBuild.targets"
        Condition="Exists('..\packages\xunit.MSBuild.2.0.0.0\build\xunit.MSBuild.targets')"/>

This integration of xUnit in Visual Studio Test Explorer still requires you to take extra effort to execute the test cases in the Test Explorer window, which still doesn’t guarantee 100% tested code. There’s a need to make execution of test cases mandatory.
Executing xUnit Tests as Part of MSBuild
Integration of xUnit with MSBuild ensures that the solution builds only when all test cases pass. Although this appears brutally difficult to achieve initially, its merits take you a step closer to a quality product. Having test cases run as part of MSBuild gives you the ability to run xUnit tests on any computer (development or build server) without any dependency on Visual Studio. Fortunately, Philipp Dolder has made your task easier with his NuGet package xUnit.MSBuild. To add this package as a reference to the test project, you can execute the command on the Package Manager Console, as shown here:

Install-Package xunit.MSBuild
The installation of this package does more than just adding a reference to a DLL. This package adds a new xUnit
MSBuild Task to your .csproj file. The additional section added to the project file appears in Listing 1. During the build process, this xUnit MSBuild task scans for all of the tests in your test assembly (.DLL) and runs the xUnit runner to execute these tests, but only if you choose the Release configuration. A typical MSBuild output for these xUnit tests is shown in Listing 2. If any unit test fails, your solution build will also fail.
What If Your Visual Studio Solution Doesn’t Have Release Configuration?

The NuGet package xunit.MSBuild that enables execution of xUnit tests as part of MSBuild, by default, allows you to run xUnit tests on the build only for the Release configuration. Often, your projects have different configurations, like QualityCheck, Development, UserTesting, and Production, and you may want to run these unit tests only for certain build configurations. In such scenarios, you need to alter your test project configuration in any XML editor. Assuming that you want the unit tests to run on the QualityCheck configuration, you’ll add an XML element RunXunitTests in the PropertyGroup for the QualityCheck configuration, as shown in Figure 5. When you trigger an MSBuild with the QualityCheck configuration, it evaluates xUnit tests just like they’re done for the Release configuration. This integration, however, only ensures that the code you write is tested on your computer. In projects involving large teams, this code,
CodeFormatter

The xUnitConverter is part of the project called CodeFormatter, which is again a tool that the .NET engineering team at Microsoft uses internally to format their code before they check it in to their GitHub repository. CodeFormatter auto-formats the code using standard guidelines (as defined by Microsoft) and produces improved, standardized code.

Figure 5: Test Project configuration
Figure 6: Visual Studio Build in Visual Studio Team Services
Figure 7: The xUnit tests in VSO/TFS builds

Listing 2: MSBuild output with xUnit Task

1>------ Build started: Project: WebApp, Configuration: Release Any CPU ------
1>  WebApp -> F:\WebApp\bin\WebApp.dll
2>------ Build started: Project: WebApp.Tests, Configuration: Release Any CPU ------
2>  WebApp.Tests -> F:\WebApp.Tests\bin\Release\WebApp.Tests.dll
2>  xUnit.net MSBuild runner (32-bit .NET 4.0.30319.42000)
2>  Discovering: WebApp.Tests
2>  Discovered:  WebApp.Tests
2>  Starting:    WebApp.Tests
2>  Finished:    WebApp.Tests
2>  === TEST EXECUTION SUMMARY ===
2>    WebApp.Tests  Total: 3, Errors: 0, Failed: 0, Skipped: 0, Time: 0.788s
========== Build: 2 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
for which all tests were passed, may not be compliant with someone else’s code. So even if the test cases pass on your computer, there’s no guarantee that they’ll pass when they’re run against code written by a group of developers. To take this one step further to perfection, you need to integrate this with a build engine like TFS Builds, TeamCity, or Jenkins.
Visual Studio Team Services (or TFS 2015) Integration

Setting up a Visual Studio Team Services project is free and easy. You can get started with an intuitive project creation wizard at http://www.visualstudio.com after you’ve signed in. Once you’ve created the project, you can connect to Visual Studio Team Services using Team Explorer in Visual Studio and check in your source code.
Through Visual Studio Team Services (the website), you can create a Visual Studio Build Definition, as shown in Figure 6. You can delete Build Steps like Visual Studio Test, Index Sources and Publish Symbols, and Publish Build Artifacts. When this build is triggered with the debug configuration (i.e., the value of $(BuildConfiguration)), unit tests won’t be discovered. When you trigger the build with the QualityCheck configuration, the unit tests are discovered and executed as part of the build process, as shown in Figure 7. In the Triggers tab, if you set this build trigger to be Continuous Integration (CI), the unit tests are executed at each check-in on a remote build server (and not on your local computer).
Summary

The process outlined above to execute xUnit tests on VSO/TFS builds works on other build engines like Jenkins, TeamCity, and Cruise Control as well. With no dependency on Visual Studio on the build server and with modern unit-testing capabilities (like parameterized tests, fluent assertions, and NuGet-based packaging), xUnit is the obvious choice for unit-testing in any .NET application, and xUnitConverter makes this transition from MSTest to xUnit easier and faster.

Punit Ganshani
ONLINE QUICK ID 1601101
Azure Skyline: RemoteApp — Hosting Desktop Apps in Azure

Azure’s RemoteApp lets desktop software run as a hosted service in the cloud. This is useful in many scenarios. It allows software companies to extend the life of mature desktop systems by offering them as pay-as-you-go services, and it removes the installation and maintenance burden from customers. It also lets training companies pre-configure desktop environments with everything the user needs to get started and lets companies configure, secure, and maintain desktops for remote workers without installing apps remotely onto the end user’s equipment.
Mike Yeager

Mike is the CEO of EPS’ Houston office and a skilled .NET developer. Mike excels at evaluating business requirements and turning them into results from development teams. He’s been the Project Lead on many projects at EPS and promotes the use of modern best practices such as the Agile development paradigm, use of design patterns, and test-driven and test-first development. Before coming to EPS, Mike was a business owner developing a high-profile software business in the leisure industry. He grew the business from 2 employees to over 30 before selling the company and looking for new challenges. Implementation experience includes: .NET, SQL Server, Windows Azure, Microsoft Surface, and Visual FoxPro.
The idea of remote desktops isn’t new. The Remote Desktop client has been part of Windows since Windows XP Pro SP2, and Remote Desktop Services has been around since Windows NT, when it was known as Terminal Services. Many companies already host Remote Desktop Services in their server rooms and Data Centers or on virtual machines (VM) in Azure. Users can run Windows and Windows applications from a variety of clients including Macs, iPhones and iPads, Android devices, and Windows phones. That’s right, users can even run old VB6 and FoxPro apps from the cloud on their Macs and iPads. Talk about extending the life of an old app! So what’s so cool about RemoteApp? RemoteApp is remote desktop as-a-service. Instead of sharing an entire
desktop session, it shares individual applications and the server itself is completely managed and updated automatically. This is somewhat analogous to how Azure Web Sites relieves us of having to create, configure, scale, and maintain Windows Servers and IIS in order to deploy Web sites. A RemoteApp server starts off as a Windows 2012 R2 Server with Remote Desktop Services installed. This can be a VM you create in Azure or create locally as a Hyper-V image. You’ll install and configure the desktop software you want to make available and make sure everything works correctly on the VM. Then you’ll prepare the machine image to be loaded into your RemoteApp environment. At that point, Azure takes over your VM and manages it for you. You can no longer connect to the VM via a remote desktop session. Azure applies OS patches, monitors the image and the hardware, and collects dev-ops information. If you need to update the machine image, you’ll have to upload a new one.
Figure 1: Create a .VHD Virtual Hard Disk.
Figure 2: Create a Generation 1 Hyper-V VM.
Creating a RemoteApp Server Image
If you just want to play around with the technology, Microsoft has provided a Server 2012 R2 image with no additional software installed and a second image that includes Office pre-installed. You can create a new RemoteApp Collection from the portal at any time using one of these images with no additional prep. But images without any of your software on them aren’t all that interesting for most of us. If you want to customize your server with your software, you’ll need to create a custom server image instead. You can choose to start with a copy of the images provided by Microsoft. Create a new VM in Azure and choose the Windows Server Remote Desktop Session Host image (or the version with Office installed) from the Virtual Machine Image Gallery. This image has Remote Desktop Services installed and configured for RemoteApp. I find it much more convenient to work with a local Hyper-V image than an Azure VM image. To create a Hyper-V image locally, there’s a fantastic article (https://azure.microsoft.com/en-us/documentation/articles/remoteapp-create-custom-image/) that takes you through each step. I’ll highlight the important steps here. Create a new .vhd (not a .vhdx) drive in Hyper-V Manager.
Make the disk a dynamically sizing disk with a maximum size of 127GB and name it RemoteApp.vhd, as shown in Figure 1. Next, create a new VM named RemoteApp, as shown in Figure 2. It doesn’t matter how many CPUs or how much RAM you assign to your VM since you’ll configure all of that again when you put the image into RemoteApp. In fact, only the .vhd will be uploaded to Azure. I’ve chosen to give my local VM 2GB of RAM and 2 CPU cores as well as a network adapter so I can install Windows Updates. Make sure that your VM is configured as Generation 1, not Generation 2, or it won’t work in RemoteApp. Next, install Windows Server 2012 R2 and start the Windows Update process to make sure that the VM is fully patched. After a couple of re-starts to complete the patching cycle, configure Windows and install the Remote Desktop Session Host (RDSH) role and the Desktop Experience feature. These are required for RemoteApp. Once the components are installed, run Windows Update AGAIN and install any patches it finds. Although it’s not a required step, I highly recommend that you shut down the VM and make a copy of your .vhd in case you want to create another RemoteApp server image later. That way, you won’t have to start this process from scratch every time you want to make a new RemoteApp image. Name this copy RemoteApp.vhd.v00. Once you have a backup, re-start the VM and log in. Install whatever software you want to share and test your applications. Make sure you install the software for all users, not just the current user. There’s no need to install printer drivers, as they’ll be of no use in Azure and are removed when the image is prepared for upload.

Figure 3: Upload a customized RemoteApp system image.

Figure 4: Upload an image created locally in Hyper-V Manager.

It’s important to mention that RemoteApp servers can be re-created in Azure at any time, so anything you install or configure after you upload the image can be wiped out. In order to install software updates, you’ll need to prepare and upload a new image. Self-updating ClickOnce applications are a good way to get around this inconvenience; however, at the time I’m writing this, full-trust ClickOnce applications won’t run as RemoteApps because the users who’ll be running the apps won’t have sufficient permissions. The work-around for this is to install the ClickOnce application on the computer, then later, configure RemoteApp to run Internet Explorer, passing it an argument of the URL of the ClickOnce .application file to start the app. It’s not elegant and the user will see IE’s icon in place of the application’s icon when launching the app, but it works well. Fixing this is a long-requested feature that we hope to see soon.

Even though the image can be re-set, it doesn’t mean that everything gets wiped clean. The user’s Documents folders are persisted within the RemoteApp Collection, even if the machine is re-imaged and even when you update the machine image. It’s only deleted for good if you delete the entire RemoteApp Collection, so you can store user-specific configuration here and you could even copy a .NET app to here and run it from this folder. Although that works, it’s not very secure.

Log in as a non-administrative user to make sure that the applications function as expected. This is the last time you’ll have a chance to log into your VM. Again, it’s not a required step, but I highly, highly, highly recommend that you shut down the VM and make a copy of your .vhd at this point. Name this copy RemoteApp.vhd.v01. Restart your VM and log in as an administrative user. Open a command prompt As Administrator and run the following commands:

Fsutil behavior set disableencryption 1
C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

The first command disables the encrypting file system. The second command removes all user accounts and hardware drivers in preparation for moving the VM to new hardware. That’s one reason you’ll want to make sure you have that backup with a version number in place before you run this command. Once SysPrep has completed its work, the VM shuts itself down. Your image is ready to be uploaded. If you created your VM image in Azure from one of Microsoft’s images, you’ll still need to run the SYSPREP step. As with a local VM, I highly recommend you back up your VM image before running SYSPREP on a VM in Azure.

Figure 5: The custom image successfully uploaded to Azure.

Figure 6: Create a RemoteApp Collection from your image.

Figure 7: Publish applications for your users.
Uploading a RemoteApp Server Image to Azure
In order to upload an image, you’ll need Azure PowerShell (it should say “Azure PowerShell” in the title bar, not “Windows PowerShell”). If you haven’t installed it yet, you can install it with the Web Platform Installer. There’s a link in the upload Wizard that starts the installation for you.
Figure 8: Select your applications.
Open the Azure Web portal and sign in. If you don’t yet have an Azure account, go to portal.azure.com and click on the Free Trial link to create one now. You’ll have to enter your credit card information, although Microsoft won’t charge your card without your express consent. They’ll cut off the account when the trial period expires and/or your trial credit is used up. As I write this, RemoteApp can only be configured in the full portal, not the Preview portal. If you’re logged into the preview portal, click on your login name in the upper right corner and click Azure portal.
Select REMOTEAPP from the list of Azure services on the left, and choose the “Template Images” tab, as shown in Figure 3. If you created your VM image in Azure, make sure you’ve run SYSPREP on it and choose Import an image from your Virtual Machine library and skip to Creating a RemoteApp Collection. If you created a local HYPER-V image, choose Upload a new template image as shown in Figure 4 and continue here. Name the new upload MyFirstRemoteApp and choose your Region. The portal creates a PowerShell script and downloads it to your computer. Save that .ps1 file to a convenient spot. Then, open the Azure PowerShell command prompt As Administrator. Copy the command line shown in the portal to the clipboard and paste it into the PowerShell window. Use the CD (change directory) command to switch to the folder where you downloaded the .ps1 file and then paste the command into the PowerShell window and execute it to run the script. The script scrolls down your screen for a while, then prompts you to locate your RemoteApp.vhd file. Make sure you’ve run SYSPREP and shut down the VM, or the upload won’t work. After you select the .vhd, the PowerShell window shows it creating an MD5 hash (for security) and then shows the file upload progress. Because you created a dynamically expanding disk, it should only be as big as the space actually taken up on the disk, not the size of the full capacity of the disk. Still, it may take a while to upload 18GB or so, depending on your connection, but once it’s done, the image should show up in the portal as shown in Figure 5. If your upload fails for any reason, I’ve found that the best way to re-try is to delete any image that shows up in the portal, and then get a new script for your clipboard and download a new PowerShell file before trying again. Otherwise, I tend to run into new errors caused by the failed upload.
Creating a RemoteApp Collection
Provisioning a RemoteApp Collection from your image takes a very long time; it takes about an hour as I write this. Choose the RemoteApp Collections tab and click the Add a RemoteApp Collection link, as shown in Figure 6. Enter a name for your Collection. Choose your Region and either the Basic or Standard plan. The Standard plan is a bit more expensive, but provides more powerful hardware. Then choose your template image file. Make sure to select the region you uploaded your image to first or it won’t show up in the list. While you’re waiting for your server to spin up, you can spend a few minutes adding some users to Azure Active Directory (AAD) if you wish. Your Microsoft account is already in your AAD, so you’ll be able to use RemoteApp, but that’s not usually the point of running RemoteApp, is it? If you haven’t used AAD before, to the end user, it acts almost exactly like your Microsoft account or an Office365 account (if you have one). If you know of a user with a Microsoft account who can help you test your app, you can enter their Microsoft account’s email address into your AAD. Otherwise, click on AAD and add a new user with a name, email address, and password. By default, AAD gives new users who don’t have an existing account an @azureonline.net address, but you can configure AAD to use a custom domain name that you own, if you wish. That’s all there is to it. Create as many accounts as you like. AAD is free unless you want the premium features.

Figure 9: RemoteApp Client
Publishing Apps for Users
Now that your server’s up and running and you’ve configured accounts for your users, let’s publish some applications so that end users can use them. Select your server from the RemoteApp Collections tab and click on publish remoteapp programs, as shown in Figure 7, and look for your application(s) in the list of programs, as shown in Figure 8. If you didn’t install any programs, you can use NotePad.exe or some other app installed on the server. If you’re using ClickOnce applications, remember to publish Internet Explorer as the application and provide the URL of the .application file as a parameter. In this example, I’ve installed an old copy of Microsoft FoxPro on my image, so I’ll publish that.
Test Your Published Applications
Now it’s time to see how you’ve done. Install the RemoteApp Client on your computer or one of your supported devices and log in with an email address that you configured in Azure Active Directory. Remember that the email address from your Azure account was automatically set up
for you. You’ll see the applications that were published for you, as shown in Figure 9. Click on an app and it runs on your device as though it were a locally installed application, even though it’s actually running in Azure. Figure 10 shows FoxPro appearing to run on my Windows 10 desktop, but I could just as easily run it from a Mac or an Android slate or even my iPhone. Printing works too, if you have a printer installed locally on your device.
Figure 9: RemoteApp Client
Figure 10: The RemoteApp-published application runs on my desktop.
Costs
The Basic plan costs $10 per month per user for up to 40 hours of use per month, and is capped at $17 per month per user for unlimited hours. The Standard plan costs $15 per month per user and is capped at $23 per month per user. There’s no limit on the number of applications you can publish for this price. The Windows Server 2012 R2 license cost is included in these fees, as is the license fee for Office if you used Microsoft’s image as a starting point. It does not, however, cover any Office 365 costs for connecting to an Office 365 mail server.
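As a back-of-the-envelope check, here’s a tiny sketch that turns those per-user caps into a worst-case monthly estimate; the team size is invented, and a real bill depends on the hours your users actually consume.

    # Worst-case monthly cost for a hypothetical team of 10 users who all hit
    # the unlimited-hours cap (figures from the plan pricing quoted above).
    $users = 10
    "Basic plan cap:    {0:C}" -f ($users * 17)   # roughly $170.00
    "Standard plan cap: {0:C}" -f ($users * 23)   # roughly $230.00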
Summary
RemoteApp is definitely a niche product, and it highlights just how diverse Azure is and how powerful it can be. This article fleshed out some interesting scenarios, such as extending the life of older Windows desktop apps, enabling companies to offer Windows apps as a service, and making them available on multiple platforms. RemoteApp is also commonly used to make infrequently used apps easy to deploy and affordable, as in the example of publishing Office apps. There’s no upfront cost or installation hassle for end users. Instead, you pay monthly for RemoteApp, which includes the cost of the server and Office licenses. In future articles, I’ll continue to introduce you to both the mainstream and niche offerings available in Azure.
Mike Yeager
Jan/Feb 2016, Volume 17, Issue 1
Group Publisher: Markus Egger
Associate Publisher: Rick Strahl
Editor-in-Chief: Rod Paddock
Managing Editor: Ellen Whitney
Content Editor: Melanie Spiller
Writers In This Issue: Mohammad Azam, Jason Bender, Joe Eames, Punit Ganshani, Kevin S. Goff, Sahil Malik, Ted Neward, John Petersen, Paul Sheriff, Mike Yeager
Technical Reviewers: Markus Egger, Rod Paddock
Art & Layout: King Laurin GmbH, info@raffeiner.bz.it
Production: Franz Wimmer, King Laurin GmbH, 39057 St. Michael/Eppan, Italy
Printing: Fry Communications, Inc., 800 West Church Rd., Mechanicsburg, PA 17055
Advertising Sales: Tammy Ferguson, 832-717-4445 ext 026, tammy@codemag.com
Circulation & Distribution: General Circulation: EPS Software Corp.; International Bonded Couriers (IBC); Newsstand: Ingram Periodicals, Inc.; Media Solutions
Subscriptions: Subscription Manager Colleen Cade, 832-717-4445 ext 028, ccade@codemag.com
US subscriptions are US $29.99 for one year. Subscriptions outside the US are US $44.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards accepted. Bill me option is available only for US subscriptions. Back issues are available. For subscription information, email subscriptions@codemag.com or contact customer service at 832-717-4445 ext 028. Subscribe online at www.codemag.com
CODE Developer Magazine, 6605 Cypresswood Drive, Ste 300, Spring, Texas 77379
Phone: 832-717-4445, Fax: 832-717-4460
MANAGED CODER
On Motivating
For an industry that prides itself on its analytical ability and abstract mental processing, we often don’t do a great job applying that mental skill to the most important element of the programmer’s tool chest—that is, ourselves.
In my last column, “On Motivation,” I brought up Susan Fowler’s book Why Motivating People Doesn’t Work—And What Does, and in particular mentioned that: “An important truth emerges when we explore the nature of motivation. People are always motivated. The question is not if, but why they are motivated. … One of the primary reasons motivating people doesn’t work is our naïve assumption that motivation is something a person has or doesn’t have. This leads to the erroneous conclusion that the more motivation a person has, the more likely she will achieve her goals and be successful. … As with friends, it isn’t how many friends you have; it is the quality and type of friendships that matter.” This, then, raises the question: What are the quality and types of motivation, and which ones are better than the others?
Optimal and Suboptimal Motivation
In her book, Fowler introduces six different kinds of motivation, and describes three as “optimal” (yielding the best benefits) and three as “suboptimal” (yielding lesser benefits). She aligns these along a “Spectrum of Motivation” and, in order of suboptimal-to-optimal, describes them as “Disinterested,” “External,” “Imposed,” “Aligned,” “Integrated,” and “Inherent.” Assume that you and your team are invited to a meeting. Again, the question isn’t “Are you motivated to attend the meeting,” but “Why are you motivated to attend the meeting?”
• Disinterested. You just couldn’t care less. That meeting you went to—you just can’t find any value in it whatsoever. It felt like a waste of time, and as a result, it adds to your sense of feeling overwhelmed.
• External. The meeting provided an opportunity for you to exert your position or power. It enabled you to take advantage of a promise for more budget, or it allowed you to show off your skills to your peers, and possibly raise your status with your boss in the room.
• Imposed. You went to that meeting because you had to. Everybody else was going, and if you were going to avoid the guilt or the implicit “What makes you so much better than the rest of us that you can blow off the meeting like that?” gazes of your coworkers, you had to go.
• Aligned. You found that the meeting had some value to you—you discovered something about the project/process/client that you didn’t know beforehand, or had the chance to explain to somebody else why a particular obstacle was blocking you and they (finally!) understood.
• Integrated. This meeting helped you further a work or life purpose—you warned people in the room about an upcoming danger that nobody else seemed to recognize, or you got a tremendous “A-HA” moment out of the conversation about something that had been bothering you about your job or career for a while now.
• Inherent. Strange as it may seem, you just enjoy meetings and thought it would be fun.
The first three are the “suboptimal” motivational outlooks, and offering cash bonuses or prizes falls into the “external” category, the second-worst of the lot. Motivating somebody with peer pressure (the imposed approach) rates only slightly above that. Fowler suggests that using these approaches (prizes, rewards, threats of punishment, pressure, or guilt/shame/emotional blackmail) is the “junk food” of motivation; they may feel good for a very short time, but inevitably they do far more damage to the system as a whole. Instead of offering up “motivational junk food,” Fowler suggests that we need to offer employees things that actually help them: the optimal motivational outlooks. Or, similarly, as employees, we need to approach management and tell them that these “bug hunts for money” don’t work and that we’d prefer to opt out of the exercise. (True story: While working as a contractor very early in my career, the company offered a “bug bounty” of $50 for every bug squashed. As a contractor, I wasn’t eligible. But as it turned out, I started drawing all the really hard bugs—the ones that weren’t easily fixed in an hour or two—and ended up being the only one sleeping under my desk the night before a big release. Did I make any money? Nope.
But I earned the respect of my peers—and my boss—and it felt really good to have tackled the hardest of the bugs in the list and squashed them all. It felt a lot better than the $750 that most of the rest of the team earned, in fact.)
In essence, we need to understand a new model of psychological need, one that discards Maslow’s Pyramid of Needs (described in my November/December 2015 CODE Magazine column) in favor of a simpler and more nuanced model: autonomy, relatedness, and competence, or ARC.
• Autonomy. Our human need to perceive we have choices. We need to feel that what we’re doing is of our own volition. It’s our perception that we are the source of our actions. Those of you who’ve ever been around small babies and toddlers, think about feeding them—when we bring the spoon close to their mouths, they instinctively reach for the spoon or fork in order to do it themselves. They may or may not have the skill yet, but they need to be the source of the action—they need to be in control. “Autonomy doesn’t mean that managers are permissive or hands-off but rather that employees feel they have influence in the workplace. Empowerment may often be considered a cliché, but if people don’t have a sense of empowerment, their sense of autonomy suffers and so do their productivity and performance.”
• Relatedness. Our need to care about and be cared about by others. We need to feel connected to others without concerns about ulterior motives. It’s our need to feel that we’re contributing to something greater than ourselves. For developers, in particular, this seems odd—isn’t software development a “heads-down” kind of job, where we are hunched over the keyboard for hours on end, slamming out line of code after line of code? And yet, years ago, when I interviewed Ward Cunningham (of agile and patterns fame) and asked why pair programming was such a hot topic among programmers, he looked me square in the eye and said, “Think about it—you go home, and your family barely has any idea of what you do all day. Here’s an opportunity to sit in close proximity with someone who actually understands you, and is now deeply interested in whatever you have to say about the code you’re about to write. For the first time, somebody is actively listening to you.” That’s a pretty powerful thing. As Fowler puts it, “One of the great opportunities you have as a leader is to help your people find meaning, contribute to a social purpose, and experience healthy interpersonal relationships at work. … The role you play as a leader is helping people experience relatedness at work.”
• Competence. Our need to feel effective at meeting everyday challenges and opportunities; demonstrating skill over time. It’s feeling a sense of growth and flourishing. This one, surely, developers can relate to; after all, what feels better than suddenly “getting it” about some technology or pattern or system? Squashing the hard bug that nobody else could fix, putting in the feature that makes all the users go “ooooh,” shipping the revision on time when everybody else said it would be late…. In a lot of ways, developers seem to live for those moments. Ironically, popular thinking seems to be acting in opposition to this—the “everybody gets a trophy, just for participating!” model common in many schools and sports teams offers a false (and clearly manipulative) sense of competence that most recipients (including just about every kid who ever got a participation trophy) can see through from a mile away. Competence feels best when it is earned, not awarded. It comes from actual growth and learning—not a trophy.
Resonating yet? My guess is that unless you’re one of those very few who’ve achieved complete self-actualization (in which case, you should probably start a religion around what you’ve learned), a position that offers, and a manager who encourages, autonomy, relatedness, and competence sounds pretty damn good.
Summary
From time to time, I run across people who describe themselves as “driven.” “I’m driven to success” or “I’m driven to do good in the world” or similar statements. Fowler nails this best when she asks, “What’s driving you?” The term “driven” implies that something external to the individual is whipping them along, and it’s often highly enlightening to find out what is doing that driving—is it a deep desire for parental acceptance, a desire to prove to somebody else that yes, we can actually “make it” as a programmer, or perhaps a deep desire to prove to ourselves that getting that CS degree wasn’t a mistake after all?
It turns out that we seek self-regulation: “mindfully managing feelings, thoughts, values, and purpose for immediate and sustained positive effort.” It’s the mechanism that counters the emotional triggers and distractions that tend to undermine our psychological needs. But a deeper understanding of that will have to wait until next time. (Yeah, this is the literary equivalent of clickbait, ending an article on a cliffhanger like that—but space limits are a hard thing to work with sometimes. You’ll manage, I’m sure. *grin*)
Ted Neward